I think I have found a source of potential obscure and infrequent bugs in the task and queue API calls.
In a few places in the code, in the functions xTaskResumeAll, xQueueGenericReceive and xQueueGenericSend, the portYIELD_WITHIN_API macro is called before taskEXIT_CRITICAL; the exit macro only runs after portYIELD_WITHIN_API has executed.
Here is an example of one possible scenario that reveals what may happen.
Suppose an application has created 3 independent tasks, task1, task2 and task3, and two binary semaphores: m2 for IPC between task1 and task2, and m3 for IPC between task2 and task3. Let's list the priorities in ascending order:
task1 - 1
task2 - 2
task3 - 3
The scheduler is started. Both semaphores are taken during initialization.
Task1 is running. Task2 is blocked on m2 and task3 is blocked on m3 (waiting for the semaphore to be given).
At some point task1's code invokes xSemaphoreGive, which is implemented by xQueueGenericSend. Inside the xQueueGenericSend function, task1 enters a critical section (macro taskENTER_CRITICAL).
In the vast majority of existing FreeRTOS ports, a nesting counter is provided to keep track of how many times portENTER_CRITICAL has been called. It is incremented every time the program executes portENTER_CRITICAL and decremented on every portEXIT_CRITICAL. At this point the nesting counter is incremented to 1.
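Here is a minimal sketch of the counting scheme I mean (the names are illustrative, not taken from any particular port.c; portDISABLE_INTERRUPTS and portENABLE_INTERRUPTS stand for the port's own interrupt masking macros):

static volatile unsigned long ulCriticalNesting = 0UL;

void vPortEnterCritical( void )
{
	/* Mask interrupts first, then record one more level of nesting. */
	portDISABLE_INTERRUPTS();
	ulCriticalNesting++;
}

void vPortExitCritical( void )
{
	/* Unwind one level; only the outermost exit re-enables interrupts. */
	ulCriticalNesting--;
	if( ulCriticalNesting == 0UL )
	{
		portENABLE_INTERRUPTS();
	}
}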
The task then copies data to the queue (lines 465-468, file queue.c) and finally checks whether any task is blocked waiting for data on this queue (lines 472-473, file queue.c). According to the scenario's preconditions, task2 is currently blocked on m2. Since task1 gives the semaphore and task2 is a higher priority task, task1 will call the portYIELD_WITHIN_API macro (line 480). This macro consequently performs a context switch.
At this point task1 is inside the critical section and tick interrupts are disabled.
Let's assume that in the next step task2 does a few things, like sending some information via UART for argument's sake. After that, within a 1 ms period, it gives semaphore m3, which task3 is waiting for.
Now task2 invokes xQueueGenericSend, where the first thing it does is enter the critical section, and the nesting counter for the critical section is incremented again; its value is now 2. Then it checks whether any tasks are already waiting for this semaphore. According to our scenario task3 is waiting. Now task3 is going to run, where tick interrupts are enabled.
Let's assume that at some point a context switch inside the tick interrupt handler switches back to task1.
Now task1 is at the point where portYIELD_WITHIN_API was called. The next instruction is portEXIT_CRITICAL. Most implementations of this macro (for the vast majority of existing ports) decrement the nesting counter and compare it with 0; if its value is 0, the tick interrupt is enabled again, otherwise only the nesting counter is decremented. In our scenario the nesting counter currently holds 2. So task1 will decrement the nesting counter by one and return in a state where tick interrupts are disabled. Normally it should return in a state where tick interrupts are enabled, but here it does not!
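To make the bookkeeping explicit, here is how I believe a single, global nesting counter evolves in this scenario:

/* task1: taskENTER_CRITICAL()   -> counter = 1, interrupts disabled
   task1: portYIELD_WITHIN_API() -> switch to task2, counter still 1
   task2: taskENTER_CRITICAL()   -> counter = 2
   task2: portYIELD_WITHIN_API() -> switch to task3, counter still 2
   ... task1 is eventually switched back in after its yield ...
   task1: taskEXIT_CRITICAL()    -> counter = 1, interrupts stay disabled! */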
That is not normal, is it? Am I correct?
I suppose that portYIELD_WITHIN_API (line 480, queue.c) must be preceded by taskEXIT_CRITICAL if taskENTER_CRITICAL was called above, and the taskEXIT_CRITICAL that is called below should be excluded from the code. See the example below. The same thing may be done in the function xTaskResumeAll (file tasks.c) and the function xQueueGenericReceive (file queue.c).
If my assumptions are not correct, please provide your feedback. Thank you for your attention in this matter.
possible bug in freertos. portYIELD_WITHIN_AP
Sorry guys, I have to correct the following.
Instead of: "I suppose that portYIELD_WITHIN_API (line 480, queue.c) must be preceded by taskEXIT_CRITICAL if taskENTER_CRITICAL was called above, and the taskEXIT_CRITICAL that is called below should be excluded from the code. See the example below. The same thing may be done in the function xTaskResumeAll (file tasks.c) and the function xQueueGenericReceive (file queue.c)."

I propose this: "I suppose that portYIELD_WITHIN_API (line 480, queue.c) must be preceded by taskEXIT_CRITICAL if taskENTER_CRITICAL was called above, and the taskEXIT_CRITICAL that is called below should be avoided if it was already called before portYIELD_WITHIN_API. See the example below. The same thing may be done in the function xTaskResumeAll (file tasks.c) and the function xQueueGenericReceive (file queue.c)."

signed portBASE_TYPE xQueueGenericSend( xQueueHandle pxQueue, const void * const pvItemToQueue, portTickType xTicksToWait, portBASE_TYPE xCopyPosition )
{
signed portBASE_TYPE xEntryTimeSet = pdFALSE;
xTimeOutType xTimeOut;
portBASE_TYPE enterCriticalCall = 0;

	/* This function relaxes the coding standard somewhat to allow return
	statements within the function itself.  This is done in the interest
	of execution time efficiency. */

	for( ;; )
	{
		taskENTER_CRITICAL();
		{
			/* Is there room on the queue now?  To be running we must be
			the highest priority task wanting to access the queue. */
			if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
			{
				traceQUEUE_SEND( pxQueue );
				prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );

				/* If there was a task waiting for data to arrive on the
				queue then unblock it now. */
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE )
					{
						/* The unblocked task has a priority higher than
						our own so yield immediately.  Exit the critical
						section first so the nesting count is balanced
						before the context switch. */
						enterCriticalCall = 1;
						taskEXIT_CRITICAL();
						portYIELD_WITHIN_API();
					}
				}

				/* Only exit the critical section here if it was not
				already exited before the yield above. */
				if( !enterCriticalCall )
				{
					taskEXIT_CRITICAL();
				}

				/* Return to the original privilege level before exiting
				the function. */
				return pdPASS;
			}
			else
			{
				if( xTicksToWait == ( portTickType ) 0 )
				{
					/* The queue was full and no block time is specified (or
					the block time has expired) so leave now. */
					taskEXIT_CRITICAL();

					/* Return to the original privilege level before exiting
					the function. */
					traceQUEUE_SEND_FAILED( pxQueue );
					return errQUEUE_FULL;
				}
				else if( xEntryTimeSet == pdFALSE )
				{
					/* The queue was full and a block time was specified so
					configure the timeout structure. */
					vTaskSetTimeOutState( &xTimeOut );
					xEntryTimeSet = pdTRUE;
				}
			}
		}
		taskEXIT_CRITICAL();

		/* Interrupts and other tasks can send to and receive from the queue
		now the critical section has been exited. */

		vTaskSuspendAll();
		prvLockQueue( pxQueue );

		/* Update the timeout state to see if it has expired yet. */
		if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
		{
			if( prvIsQueueFull( pxQueue ) )
			{
				traceBLOCKING_ON_QUEUE_SEND( pxQueue );
				vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait );

				/* Unlocking the queue means queue events can effect the
				event list.  It is possible that interrupts occurring now
				remove this task from the event list again - but as the
				scheduler is suspended the task will go onto the pending
				ready list instead of the actual ready list. */
				prvUnlockQueue( pxQueue );

				/* Resuming the scheduler will move tasks from the pending
				ready list into the ready list - so it is feasible that this
				task is already in a ready list before it yields - in which
				case the yield will not cause a context switch unless there
				is also a higher priority task in the pending ready list. */
				if( !xTaskResumeAll() )
				{
					portYIELD_WITHIN_API();
				}
			}
			else
			{
				/* Try again. */
				prvUnlockQueue( pxQueue );
				( void ) xTaskResumeAll();
			}
		}
		else
		{
			/* The timeout has expired. */
			prvUnlockQueue( pxQueue );
			( void ) xTaskResumeAll();

			/* Return to the original privilege level before exiting the
			function. */
			traceQUEUE_SEND_FAILED( pxQueue );
			return errQUEUE_FULL;
		}
	}
}
possible bug in freertos. portYIELD_WITHIN_AP
I believe that a task within a critical section is allowed to call yield. The way it is handled is that the critical section count is actually per task, not global; as part of the task switch it is saved and restored, and the corresponding interrupt enable state is also restored.
This means that a critical section doesn’t mean that no task switches can occur, but rather that they can only occur at explicit yields, or yields as part of accessing a queue etc., and will NOT happen asynchronously.
possible bug in freertos. portYIELD_WITHIN_AP
Yes, I agree that the critical section counter must be separate for every task. But looking at the code of most ports, the critical section counter seems to be a single one for the whole application. For example, in the code for the LPC2000 and coldfire_v1 ports it is defined globally at the beginning of the port.c file as follows.
I think that this counter should be an attribute of the task; it is not enough to increment one global counter on every critical section entry and decrement it on every exit.
/* ulCriticalNesting will get set to zero when the first task starts.  It
cannot be initialised to 0 as this will cause interrupts to be enabled
during the kernel initialisation process. */
unsigned long ulCriticalNesting = ( unsigned long ) 9999;

and

/* Used to keep track of the number of nested calls to taskENTER_CRITICAL().  This
will be set to 0 prior to the first task being started. */
static unsigned long ulCriticalNesting = 0x9999UL;
possible bug in freertos. portYIELD_WITHIN_AP
In the current implementation we can't hold a critical section counter for every task, because the enter critical macro doesn't take any arguments.
possible bug in freertos. portYIELD_WITHIN_AP
The task switch routine swaps out the counter in the task context (or the TCB) as part of the task switch. That way, the counter for the current task is easily available, but there is still a copy kept per task.
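As a sketch of what I mean (made-up names, not the actual kernel code; portENABLE_INTERRUPTS stands for the port's own macro):

typedef struct SketchTCB
{
	unsigned long ulSavedCriticalNesting; /* per-task copy of the nesting count */
	/* ... stack pointer, registers, etc. ... */
} SketchTCB;

static unsigned long ulCriticalNesting; /* count for whichever task is running */

static void prvSketchSwitchContext( SketchTCB *pxOut, SketchTCB *pxIn )
{
	/* Save the outgoing task's count and load the incoming task's count. */
	pxOut->ulSavedCriticalNesting = ulCriticalNesting;
	ulCriticalNesting = pxIn->ulSavedCriticalNesting;

	/* If the incoming task was not in a critical section, interrupts can
	be re-enabled before it runs. */
	if( ulCriticalNesting == 0UL )
	{
		portENABLE_INTERRUPTS();
	}
}

This is also why taskENTER_CRITICAL() needs no arguments: it always operates on the live counter, which always belongs to the currently running task.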
possible bug in freertos. portYIELD_WITHIN_AP
Thank you for your immediate response.
Yes, I agree, but according to the source code this only happens when portCRITICAL_NESTING_IN_TCB is enabled. What if this option is not enabled?
I can see that if portCRITICAL_NESTING_IN_TCB is not defined, then the critical nesting count is global to all tasks.
How will it be handled then? And I don't like the way it is done:

#if ( portCRITICAL_NESTING_IN_TCB == 1 )

	void vTaskEnterCritical( void )
	{
		portDISABLE_INTERRUPTS();

		if( xSchedulerRunning != pdFALSE )
		{
			( pxCurrentTCB->uxCriticalNesting )++;
		}
	}

#endif

By disabling interrupts we will disable not only the tick interrupt but all other interrupts as well. That contradicts the main idea of a critical section.
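For completeness, the matching exit function in tasks.c looks roughly like this (quoting from memory, so treat it as approximate):

#if ( portCRITICAL_NESTING_IN_TCB == 1 )

	void vTaskExitCritical( void )
	{
		if( xSchedulerRunning != pdFALSE )
		{
			if( pxCurrentTCB->uxCriticalNesting > 0U )
			{
				( pxCurrentTCB->uxCriticalNesting )--;

				/* Interrupts are only re-enabled when the outermost
				critical section is exited. */
				if( pxCurrentTCB->uxCriticalNesting == 0U )
				{
					portENABLE_INTERRUPTS();
				}
			}
		}
	}

#endif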
possible bug in freertos. portYIELD_WITHIN_AP
Sorry – only time for a very quick answer.
The kernel code is designed specifically to yield in a critical section – but application code should never do this.
richard_damon is right that the critical section count is per task. The critical section variable is saved as part of the task context, either in the TCB or on the stack, depending on the port.
A task that yields in a critical section yields with interrupts disabled, but if it yields to a task that has a critical section count of zero, interrupts will be re-enabled prior to the new task running. When the original task starts executing again, as it is restored it is noted that its critical section count is not zero (either from the stack or from the TCB), and it is restarted with interrupts disabled.
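In other words, assuming the count is saved and restored per task, the timeline looks like this:

/* taskA: taskENTER_CRITICAL()      -> taskA count = 1, interrupts disabled
   taskA: yields inside the kernel  -> count of 1 saved with taskA's context
   taskB: restored with count = 0   -> interrupts re-enabled before taskB runs
   taskB: runs normally and is interrupted as usual
   taskA: restored later, count = 1 -> interrupts disabled again
   taskA: taskEXIT_CRITICAL()       -> count = 0, interrupts re-enabled */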
It is actually more complex than that because the behaviour depends on the port too, but in essence that is how the code is designed to work, and it is not a bug (unless you can show otherwise). This really needs to be an FAQ because I spend a lot of my life explaining it.
Regards.
possible bug in freertos. portYIELD_WITHIN_AP
"richard_damon is right that the critical section count is per task. The critical section variable is saved as part of the task context, either in the TCB or on the stack depending on the port."

Please show me the code where the critical counter is saved in xQueueGenericSend and xQueueGenericReceive. Because all I can see is one function, void vTaskEnterCritical( void ), dealing with the critical counter, and it is only enabled when portCRITICAL_NESTING_IN_TCB == 1. By the way, it never gets called in the FreeRTOS files. If the counter is specific to every task, then it should be tracked somehow. But we call taskENTER_CRITICAL() with no arguments, and I can see that a few ports just increment the nesting counter inside this macro and it is a global. Just have a look at the ports for LPC2000 and coldfire_v1.
possible bug in freertos. portYIELD_WITHIN_AP
For LPC2000, in the file FreeRTOS\WorkingCopy\Source\portable\GCC\ARM7_LPC2000\portmacro.h, in the macro portSAVE_CONTEXT() you will see:
"LDR R0, =ulCriticalNesting nt"
"LDR R0, [R0] nt"
"STMDB LR!, {R0} nt"
where the critical nesting count is saved to the stack.
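The corresponding restore is in portRESTORE_CONTEXT() in the same file, where the count is popped off the stack back into the variable; from memory it looks roughly like:

"LDR R0, =ulCriticalNesting	\n\t"
"LDMFD LR!, {R1}			\n\t"
"STR R1, [R0]				\n\t"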
Not sure about coldfire without looking it up. It might be that it does not need to save it (some ports don’t and the core code has evolved to allow this).
Regards.
possible bug in freertos. portYIELD_WITHIN_AP
Oh, thanks mate. Now I see why the application using my port had some issues. Every time a context switch happens, one of the first things we should do is save the nesting counter, and restore it again later.
I am taking back my words about bugs. Sorry for my premature conclusions.
But my question still remains: why do you not exit from the critical section before portYIELD? I ask only out of curiosity.
I see that in some places you exit from the critical section before portYIELD is called, but in some you don't. Thank you for your guidance.