
Anonymous
Not applicable

Hello,

   

I am just starting with the CY8CKIT-050 Development Kit.

   

I am using two SAR ADCs (sampling frequency 100 kHz) and doing some time-critical calculation in a high-priority interrupt service routine (priority 0). Everything works fine until I add an additional interrupt service routine (rising edge @ 10 Hz, priority 7) that sets a flag for further processing. This further processing is done in the main loop and sends some data to the display.

   

When the low-priority interrupt is enabled, I measure a jitter of about 1.3 us in the signal processing in the high-priority interrupt service routine.

   

I measure this with a toggle bit on a digital output at the end of the routine. The service routine is triggered by the EOC of one ADC and needs about 4 us for the signal processing (without code optimization).

   

The reason for this jitter is the code in the main loop (LCD functions):

   

CY_ISR(DISP_ISR_LOC)
{
    Disp_Flag = 1;
}

int main(void) .....

        if (Disp_Flag == 1)
        {
            // Doing nothing for testing!
            phi_disp = (phi_LUT_int * (uint64)360000) >> 16; // for P = 2

            sprintf(displayStr, "%4d %4d ", (int)voltCount_sin, (int)voltCount_cos);
            LCD_Position(0, 0);
            LCD_PrintString(displayStr);
            result = sprintf(displayStr, "%6d mGrad", (int)phi_disp);
            LCD_Position(1, 0);
            LCD_PrintString(displayStr);
            Disp_Flag = 0;
        }

   

There is also jitter in the signal processing that depends on the input signal values and on the code-optimization settings. But with constant input values (ignoring the ADC data) this jitter does not exist.

   

Could you give me some hints for solving this problem? 

   

with regards

   

steve_c

Esteemed Contributor II

Welcome to the forum, Steve_c!

   

The only idea I've got concerning the jitter has to do with the CPU's internal handling of interrupts: when a handler finishes and returns from interrupt, a check is made for any pending interrupt. If there is one, the call of the next handler is shortened (ARM tries to save cycles). This may cause some jitter depending on whether the isr_disp interrupt occurs during execution of your ADC_COS_ISR_LOC handler or outside of that time.

   

 

   

Bob

   

PS: I am located near Bremen, where do you live?

Anonymous
Not applicable

Hello Bob,

   

Thank you for the fast answer.

   

The high-priority interrupt handler in most cases interrupts the loop in main(), and only rarely the very short low-priority interrupt handler. I measure the jitter by triggering an oscilloscope with the sample clock (SOC of the ADC) and looking at the timing behaviour of the toggled digital output. If I toggle the digital output directly at the beginning of the high-priority interrupt handler, the jitter is much smaller. So there is additional jitter within the processing of the high-priority interrupt handler.

   

This looks like an interruption of this handler, which should not occur, because there is no interrupt source with a higher priority.

   

The context switch for the interrupts should be done in about 12 cycles (at 64 MHz, less than 200 ns), shouldn't it?

   

 

   

Steve_c

   

PS: I live in Wetzlar, 50 km north of Frankfurt.

Esteemed Contributor

What does the code you are using in the ISR look like?

   

 

   

Regards, Dana.

Anonymous
Not applicable

Hello Dana,

   

I attached the code in the first posting.

   

But here is the summary for the high priority handler:

   

CY_ISR(ADC_COS_ISR_LOC)
{
    int16 x, y, z; // and some other local variables

    // read the ADC samples
    x = CY_GET_REG16(ADC_COS_SAR_WRK0_PTR) - 2048;
    y = CY_GET_REG16(ADC_SIN_SAR_WRK0_PTR) - 2048;

    // do some calculation:
    // filtering, division, table look-up for atan interpolation

    // write the result to a control register
    phi_mod_Control = result;
    // write the toggle bit directly
    Pin_Period_DR = (Pin_Period_DR & 0xBF) | period;

    // toggle the bit for the next cycle
    if (period == 0)
        period = 64;  // directly at the right bit position
    else
        period = 0;
}

   

I placed most of the signal processing in the ISR so that only the non-time-critical processing remains in main().

   

regards

   

steve_c

Honored Contributor II

I would say what you are seeing is a caching effect. The PSoC 5LP has a 128-byte instruction cache. If only your ISR runs, it is probably fully cached and can run at full speed at any time. If the main loop does something (every time Disp_Flag == 1), that code gets loaded into the cache, and the ISR runs a little slower because it sees cache misses. That also explains why the jitter is smaller when you toggle the flag at the beginning of the ISR: at that point there are fewer cache misses to handle, so the delay is shorter.

   

(From the TRM, table 5-1: the instruction fetch time from memory is up to 4 clock cycles.)

Contributor II

From a very high-level point of view, there are a few things you can do to avoid the jitter. Pick one or none, your choice!

   

1) Enable interrupts at the beginning of your low-priority ISR. This will not remove the jitter, but it should make it much less than a microsecond.

   

2) Process the flag test at the end of the high-priority ISR as an afterthought, if the timing of the event allows.

   

3) Use your low-priority input to set a set-reset flip-flop that feeds a control register. Read the control register in your main loop, and reset the FF from software in main() via a control register. If your FF is set, process it the same way you would have done with the flag set in software.

   

4) Depending upon what you need to do in software, design some hardware to generate the output you need for the high-priority event, and process the event's information in software a bit later, in an ISR that can stand some jitter without upsetting the apple cart.

   

5) If this is a periodic event that can stand one event's delay, consider allowing the information to go out one event later. The ISR can jitter, but it sets up the next output event, which occurs on a clock (i.e. your input signal that triggers the ISR).

Contributor II

One more thing: if the output bit can always be offset by a couple of microseconds without issues, you could feed a 2 us clock into a synchronizing flip-flop pair and set or reset the D line. The output would then set/reset 2 us later with no "jitter" visible on a scope.

Esteemed Contributor II

Excerpt from the ARM Cortex-M0 Generic User Guide:

   

"Tail-chaining: This mechanism speeds up exception servicing. On completion of an exception handler, if there is a pending exception that meets the requirements for exception entry, the stack pop is skipped and control transfers to the new exception handler.

Late-arriving: This mechanism speeds up preemption. If a higher priority exception occurs during state saving for a previous exception, the processor switches to handle the higher priority exception and initiates the vector fetch for that exception. State saving is not affected by late arrival because the state saved would be the same for both exceptions. On return from the exception handler of the late-arriving exception, the normal tail-chaining rules apply."

   

 

   

Nothing to do with the jitter, but the variable Disp_Flag should be declared "volatile", or you will get a bad surprise when compiling with the build option "Release".
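A minimal sketch of what that looks like; the setter/poll split mirrors the code from the first posting, and the `poll_once()` helper is only for illustration:

```c
#include <stdint.h>

/* Shared between the ISR and main(): "volatile" forces the compiler to
   re-read the flag from memory on every access instead of caching it in
   a register, which matters once the Release build optimizes. */
volatile uint8_t Disp_Flag = 0;

/* stand-in for CY_ISR(DISP_ISR_LOC) from the first posting */
void DISP_ISR_LOC(void)
{
    Disp_Flag = 1;
}

/* hypothetical helper modeling one pass of the main loop's flag test */
int poll_once(void)
{
    if (Disp_Flag == 1)
    {
        /* ... display update would go here ... */
        Disp_Flag = 0;
        return 1;
    }
    return 0;
}
```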

   

 

   

Bob

Anonymous
Not applicable

The output of the high-priority interrupt handler is written to a control register which is in sync mode with the sample clock, for the further signal processing in hardware.

   

So the jitter is not visible at the output. The problem is that the jitter reduces the maximum possible sample frequency, or the time available for additional signal processing in the ISR.

   

That's the reason I want to understand the cause of the jitter.

   

The next step is to measure cache hits and misses. Does anybody have some sample code for this topic?

   

...and to declare Disp_Flag as volatile ...

   

Is there an influence on the timing from debug mode instead of release mode?

Honored Contributor II

One question: why are you concerned about the jitter? It seems there is low (or even non-existent) jitter at the start of the ISR, but the execution time differs. And I don't see anything in the ISR that is timing-critical.

   

It might be better, in the long run, to use DMA with two buffers per ADC, ping-pong between them, and then do mass data processing whenever one of the buffers fills up. That way you don't need to do any long-running calculation in an ISR (which is bad practice anyway).
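A rough software model of that ping-pong scheme; the buffer length and the fill/process split here are illustrative assumptions, and on the real chip the fill side would be a DMA TD chain writing one buffer while main() processes the other:

```c
#include <stdint.h>

#define BUF_LEN 4

static int16_t buf[2][BUF_LEN];          /* two buffers per ADC */
static volatile uint8_t full_buf = 0;    /* index of the buffer just filled */
static volatile uint8_t data_ready = 0;  /* set by the "DMA done" interrupt */

/* models the DMA-complete interrupt after one buffer of samples */
void dma_done(uint8_t which)
{
    full_buf = which;
    data_ready = 1;
}

/* mass processing in main(): runs over a whole buffer at once, so the
   processing code stays hot in the instruction cache while it works */
int32_t process_ready_buffer(void)
{
    int32_t sum = 0;
    if (data_ready)
    {
        for (int i = 0; i < BUF_LEN; i++)
            sum += buf[full_buf][i];
        data_ready = 0;
    }
    return sum;
}
```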

Honored Contributor II

(Oh, you answered my question while I was writing it 🙂)

   

I'm not sure how easy it is to measure cache misses; I'm not aware of anything like performance counters in the Cortex-M3 core. Looks like you will need to read up on the available ARM documentation...

   

Release mode is faster than debug mode, since the compiler is able to do many more optimizations.

Esteemed Contributor

You have jitter in the code due to the "period" test/assign code, so possibly move this outside the ISR if you can. Or write it in asm and equalize the used cycles with NOPs. One would have to be concerned with compiler optimization of the asm (if there is any, of this I am not sure).

You might confirm this by looking at the .lst file and counting instruction cycles.

Regards, Dana.
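One way to equalize the cycles without dropping to asm is to make the toggle branch-free. This sketch rewrites the if/else from Steve's ISR as an XOR of bit 6 (value 64), which executes the same instructions in both states (a suggestion, not tested on the target):

```c
#include <stdint.h>

/* Same effect as:  if (period == 0) period = 64; else period = 0;
   but with a fixed instruction count regardless of the current state. */
uint8_t toggle_period(uint8_t period)
{
    return period ^ 64;   /* flip bit 6 only */
}
```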

Honored Contributor II

I don't think so. a) The port toggle that Steve is using to determine the jitter comes before the test condition, and b) the jitter only happens when the main loop does something meaningful (updating the LCD).

Anonymous
Not applicable

Yes, changing only one code line in the main loop results in a jitter of about 1 us!

I have done some measurements (.pdf attached).

The code is now compiled with optimization set to speed, but this changes nearly nothing in the amount of jitter.

The optimization itself works fine: the time for the signal processing is reduced to about 1.8 us due to better utilization of the CPU registers.

   

I increased the sampling rate to 200 kHz.

   

There is no change between the debug and release versions.


With only one code line in the main loop, there is no jitter at all:

if (Disp_Flag == 1)
{
    phi_disp = (phi_LUT_int * (uint64)360000) >> 16; // for P = 2
    Disp_Flag = 0;
}

Adding one code line results in a jitter of about 1 us:

if (Disp_Flag == 1)
{
    phi_disp = (phi_LUT_int * (uint64)360000) >> 16; // for P = 2
    sprintf(displayStr, "%4d %4d ", (int)voltCount_sin, (int)voltCount_cos);
    Disp_Flag = 0;
}

Anonymous
Not applicable

second try to add the attachment

Anonymous
Not applicable

third try with a .zip attachment

Honored Contributor II

I think this makes my cache-miss theory more likely. The small loop is one that easily fits into the instruction cache together with your ISR (how many bytes is it?), whereas sprintf is a large function that will evict the ISR from the instruction cache, making it slower.

   

Still: why are you concerned about the jitter? Is it because you fear the ISR as such might be too slow for your high sample rate? In that case you should use DMA to store the ADC results and do mass processing; then the cache will be well utilized.

Anonymous
Not applicable

First, about the cache-miss theory: if a miss adds 4 additional clock cycles, then for 1 us of jitter there would have to be 16 misses (at 64 MHz bus clock) in one call of the high-priority handler. Is this possible?

   

The high-priority interrupt handler has about 71 instructions, looking at the disassembly window. This would fit your theory. So, when the interrupt handler starts, the cached instructions don't fit, and it takes up to about 16 misses to fill the cache again with the needed instructions?

   

Your suggestion to do block signal processing is a good thing in many cases. Unfortunately, in this case I have to keep the latency of the signal processing at a minimum. This is not possible when accumulating data into a block.

   

The jitter reduces the maximum possible sampling rate, or the possible amount of signal processing in that interrupt routine.

   

For example, at the moment the signal processing needs only about 1.8 us, so a jitter of 1 us is significant.

   

Another possibility is to shift some of the signal processing to UDB blocks or the DFB block. The DFB block is already reserved for some processing, and I already use some UDB blocks. I will check this in the ongoing development.

Honored Contributor II

It's interesting. The TRM talks about cache lines (and that there are multiple ones), but doesn't state how large they are.

   

So it seems that each cache miss fetches an entire cache line, so it spans multiple instructions.

   

If you really want to test: the instruction cache can be disabled, see chapter 5 in the TRM. If I'm right, there should be no jitter then, but the ISR will be slower.

Honored Contributor II

OK, if you need low latency then DMA is problematic. And yes, you need to use the slowest ISR execution time for your calculation. And for just 71 instructions the cache effect is non-negligible. (To quote: "almost all programming can be viewed as an exercise in caching", by Terje Mathisen via Michael Abrash.)

   

The PSoC 5 has 24 UDBs, each with one datapath in it; that's quite a large amount of processing power. I think using the DFB for multiple tasks will be difficult, so the UDBs seem to be the best option.

Anonymous
Not applicable

Yes, indeed. Without the cache (CYDEV_INSTRUCT_CACHE_ENABLED 0 in cyfitter.h):

the signal processing time increases from 1.8 us to 4 us,

the jitter decreases from 1 us to about 90 ns.

   

That's the proof: now it is not a theory, it is a fact 🙂

   

Thank you for the support.

   

I assume there is no way to convince the cache controller to keep some code in the cache and other code not...

Honored Contributor II

It's nice when I get something right without consulting my crystal ball 😉

   

One thing to try: I remember some discussions about being able to execute code from SRAM instead of Flash. Maybe AN89610 might help; Google otherwise yields http://www.cypress.com/?rID=61932, but I lack the time right now to look further. Doing so might make the code faster even without the cache. (I should have thought of that earlier.)
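A hedged sketch of what running a routine from SRAM can look like with the GCC toolchain; the section name ".ram_code" and the matching linker-script placement are assumptions that depend on the project's linker script (AN89610 covers the PSoC-specific details):

```c
#include <stdint.h>

/* Placing the function in a RAM section avoids Flash wait states and
   instruction-cache misses. ".ram_code" is a hypothetical section name
   that the linker script must map to SRAM (and the startup code must
   copy there); "noinline" keeps it a real, relocatable function. */
__attribute__((section(".ram_code"), noinline))
int16_t filter_step(int16_t x)
{
    /* stand-in for the time-critical filtering work in the ISR */
    return (int16_t)((x * 3) >> 2);
}
```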