How does jitter affect the task tick count?

Hi. The FreeRTOS demo uses a high frequency timer to measure jitter. If a timer implements its clock_time() function by returning xTaskGetTickCount(), how does jitter affect the task tick count, and how should the task tick count be adjusted for jitter? I would appreciate any ideas on this question. Thanks, Bill Yang
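
For reference, a minimal sketch of the setup described above, assuming clock_time() is meant to report milliseconds; portTICK_PERIOD_MS (called portTICK_RATE_MS in older FreeRTOS versions) is the standard conversion from one tick to milliseconds:

    #include "FreeRTOS.h"
    #include "task.h"

    /* Sketch: a clock_time() built on the RTOS tick, as described above.
       Its resolution is limited to one tick period. */
    unsigned long clock_time( void )
    {
        /* xTaskGetTickCount() returns elapsed ticks; portTICK_PERIOD_MS
           converts one tick to milliseconds. */
        return ( unsigned long ) xTaskGetTickCount() * portTICK_PERIOD_MS;
    }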

How does jitter affect the task tick count?

I’m not sure what your question is. The xTaskGetTickCount() function can only return a tick count with the resolution of the tick frequency. This tick count will not suffer from an accumulating error over time – unless you are leaving interrupts disabled for a period greater than one tick period. If you want a higher resolution timer then you can use a separate peripheral timer in the same way that the demo does. Regards.
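
To illustrate the resolution point, a small sketch: with configTICK_RATE_HZ set to 1000 the tick count advances in 1 ms steps, so any interval measured with xTaskGetTickCount() is quantised to whole ticks. vTaskDelay() and pdMS_TO_TICKS() are standard FreeRTOS APIs, though pdMS_TO_TICKS() only appeared in later versions; it is equivalent to ( ms * configTICK_RATE_HZ ) / 1000:

    #include "FreeRTOS.h"
    #include "task.h"

    void vMeasureInTicks( void )
    {
        TickType_t xStart, xElapsed;

        xStart = xTaskGetTickCount();

        /* ... the work being timed; vTaskDelay() stands in for it here ... */
        vTaskDelay( pdMS_TO_TICKS( 10 ) );

        /* The elapsed time is quantised to whole ticks: any sub-tick part
           of the real interval is lost, but the count does not drift. */
        xElapsed = xTaskGetTickCount() - xStart;
        ( void ) xElapsed;
    }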

How does jitter affect the task tick count?

I am sorry, Richard, I did not state my question clearly. The reason I ask is that I am reusing a Linux program in my project. The Linux program has a timer and uses jiffies to adjust the timer when it resets. My project also needs a timer, but it uses the task tick count returned by xTaskGetTickCount(). Since the FreeRTOS demo can measure jitter from a high frequency timer:

Question 1: Do I have to use the jitter to adjust my timer?

Question 2: If the timer does need to be adjusted for jitter, how do I convert the jitter to a task tick count?

Note: jitter is counted in ns (1/1000000000 s) and task ticks are counted in ms, so I computed the time represented by the jitter as:

time = (jitter * 1/1000000000) / ((1/configTICK_RATE_HZ) * 1000000)

If I am wrong, what is your suggestion? Regards, Bill
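
For what it’s worth, a sketch of the unit conversion as I understand it: one tick lasts 1/configTICK_RATE_HZ seconds, i.e. 1000000000/configTICK_RATE_HZ nanoseconds, so ticks = jitter_ns * configTICK_RATE_HZ / 1000000000 (the division by a microsecond figure in the formula above leaves the units uncancelled):

    #include <stdint.h>
    #include "FreeRTOS.h"

    /* Sketch: convert a jitter figure in nanoseconds into RTOS ticks.
       With a 1KHz tick, a 20000 ns (20 us) jitter is only 0.00002 ticks -
       far below the tick resolution. */
    static TickType_t xNanosecondsToTicks( uint64_t ullNanoseconds )
    {
        return ( TickType_t ) ( ( ullNanoseconds * configTICK_RATE_HZ ) / 1000000000ULL );
    }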

How does jitter affect the task tick count?

Can the FreeRTOS demo measure jitter from a high frequency timer? I’d be very surprised if it can. As far as I understand, jitter does not accumulate and does not affect the overall average frequency. The only factor affecting clock frequency is the crystal’s accuracy, which is expressed in PPM and which you want to be as low as possible.

How does jitter affect the task tick count?

Some of the Cortex-M3 and, I think, PIC32 demos include a 20KHz interrupt that is assigned a priority above any interrupt priority used by the kernel. The priority is also above the priority that gets masked within critical sections. The idea is to demonstrate that interrupt service routines can be defined to perform functions that require very accurate timing. Because its priority is above both the priority masked by critical sections and the kernel interrupt priority, the 20KHz interrupt should never get delayed or otherwise affected by anything the kernel is doing. To prove this, the 20KHz ISR just measures the time between its invocations to calculate the jitter – it uses a timer peripheral that is running at a fast rate to take a measurement in nanoseconds.

I have probably managed to make that sound much more complex than it actually is, but the important thing to note is that the jitter being measured is the jitter in the 20KHz interrupt, NOT the jitter in the RTOS tick interrupt. No adjustment to the RTOS tick is needed unless the application is doing something that prevents the tick interrupt from executing for a period greater than one tick. Again, I think I made that second bit sound more complex than it actually is – it’s been a long day! Regards.
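
A sketch of the measurement described above, under stated assumptions: ulReadFastTimerNs() is a hypothetical stand-in for whatever fast peripheral counter the real demo reads (it is not a FreeRTOS API), and the 50000 ns nominal period is simply 1 / 20KHz:

    #include <stdint.h>

    /* Hypothetical read of a free-running peripheral timer, scaled to ns. */
    extern uint32_t ulReadFastTimerNs( void );

    #define NOMINAL_PERIOD_NS    50000UL    /* 1 / 20KHz = 50 us */

    static uint32_t ulLastReadingNs = 0;
    volatile uint32_t ulMaxJitterNs = 0;

    /* 20KHz ISR: its priority sits above the kernel interrupt priority and
       above the priority masked by critical sections, so the kernel should
       never delay it.  It measures the time between its own invocations. */
    void vHighFrequencyTimerISR( void )
    {
        uint32_t ulNow = ulReadFastTimerNs();
        uint32_t ulPeriod = ulNow - ulLastReadingNs;  /* unsigned maths handles wrap */
        ulLastReadingNs = ulNow;

        /* Jitter is the deviation of the measured period from nominal. */
        if( ulPeriod > NOMINAL_PERIOD_NS )
        {
            if( ( ulPeriod - NOMINAL_PERIOD_NS ) > ulMaxJitterNs )
            {
                ulMaxJitterNs = ulPeriod - NOMINAL_PERIOD_NS;
            }
        }
    }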

How does jitter affect the task tick count?

Thanks Richard, I got it. Bill