The application I am converting to FreeRTOS used to have the traditional main loop that calls various subsystems in turn. I want to replace some of those functions by high-priority tasks that are blocked most of the time, and the remaining ones by tasks of equal low priority. These low priority tasks share a lot of data structures, so if I enable preemption then I need to introduce a lot of mutexes and critical sections. However, if I run the low-priority tasks cooperatively then I won’t need to do that. But I still want the high priority tasks to preempt the low priority tasks.
Is there a way to configure FreeRTOS so that an interrupt will only cause a task switch if it unblocks a task with higher priority than the currently-executing one; and when the high priority task becomes blocked again, it resumes execution of the original low-priority task and not a different one with equal priority? My understanding of configUSE_PREEMPTION==0 is that it would prevent the high priority task preempting the low priority one too, which is not what I want. Thanks – David
Can you not just create all of the low priority tasks at the same priority, and disable time slicing? Then a higher priority task will always preempt (from an interrupt, or from a synchronous API call), but then when it’s done, the scheduler will resume running the lower priority task that had been preempted.
If the low priority tasks that share data are all at the same priority, and time slicing is disabled, then amongst themselves, they are essentially “non preemptive”. To them, a high priority task (without data sharing) is like an interrupt – they don’t know it’s happening, and (again, assuming no shared data) there is no hazard.
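If I’ve understood the options correctly, that arrangement corresponds to something like the following FreeRTOSConfig.h settings (a fragment, not a complete config; check the values against your port’s existing file):

```c
/* FreeRTOSConfig.h fragment (sketch): keep preemption enabled so a high
 * priority task runs the moment it unblocks, but turn time slicing off so
 * the tick interrupt alone never rotates between equal-priority tasks. */
#define configUSE_PREEMPTION    1
#define configUSE_TIME_SLICING  0
```

With configUSE_TIME_SLICING set to 0, equal-priority tasks only hand over the CPU when they block or yield voluntarily, which is the cooperative behaviour wanted between the low priority tasks.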
Sorry if I’m missing something from your description.
I’m not sure that is the case. When the high priority task blocks again the scheduler will run, and choose another task using the normal scheduling algorithm. Turning time slicing off will only prevent that happening just because there was a tick interrupt.
Thanks Richard, I didn’t verify the behavior of FreeRTOS before posting that response.
Other RTOSes I’ve used (many) only switch to a task of similar priority when (a) their time slice expires, (b) the task voluntarily blocks (through a synchronous API call), or (c) a task of higher priority becomes ready.
I just assumed if none of those were true in David’s case, he’d get exactly the behavior he needed in those low priority tasks.
Earlier there was another discussion that I followed with interest about changes to task switching of equal priority (don’t remember the circumstances… I should just go find the thread!), so if my assumptions weren’t correct, my suggestion won’t quite work for David.
Don’t want to stir things up, but switching away from a task to another equal priority task without one of (a), (b) or (c) seems to go a little against the grain (conventional wisdom) of preemptive scheduling, from my own experience and education, but as long as the behavior is documented and consistent, no one (including myself) can complain.
Thanks for the thoughtful & helpful reply.
If pre-emption by a high priority task can indirectly cause a task switch between low-priority tasks, this is probably undesirable. About 35 years ago when I was responsible for maintaining an RTOS, we started getting reports from customers that when an application-level task did serial output, it ran very slowly, about 50 characters per second even though the baud rate was 38400. This only happened when another application-level task was also active. What was happening was this:
- A tick interrupt occurred, causing application-level task A to be resumed
- Task A wrote a character to the serial output system
- That caused an interrupt to be received from the serial I/O port
- As a result of the interrupt and subsequent processing, when the scheduler resumed running application-level tasks, it picked task B
- Task B ran until the next tick interrupt occurred; then the whole cycle started again.
So task A ended up being limited to sending one character per tick interrupt, and task B got almost all the CPU time.
The lesson learned was: only change the order of equal-priority tasks in the ready-list when the one nearest the front makes a yield call or a tick interrupt occurs. That ensures that they get equal time.
Yes, getting really equal execution of equal priority tasks can be difficult. My understanding of the scheduling algorithm is that the scheduler runs the first task on the ready list of the highest priority that has a ready task. I believe that when tasks wake up from being blocked they are put at the front of the ready list for their priority, so they will run the next time that priority is the highest with a ready task; and when a tick interrupt occurs, if time slicing is enabled, the current task is moved to the back of its ready list.
Thus, a higher priority task running for a moment will only cause a switch to a different task at the same priority level if that task became unblocked recently (moving it to the front); otherwise the task that was running at that level will continue to run.
This still doesn’t give all tasks at the same priority exactly equal time, as a task could ‘cheat’ by yielding just before the tick interrupt, letting another task get short-changed with the runt remainder of the tick as ‘its’ slot, and then getting cycled back to the end of the list on the tick interrupt. This is one reason the idle task can be configured to yield or not, so this behaviour can be controlled. Yielding probably gives more time to the other idle-priority tasks, but the task in the ready list right after it will often get only a small time slot.
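The idle-task behaviour mentioned here is controlled by a single option in FreeRTOSConfig.h (a fragment; my reading of the option, so verify against your kernel version):

```c
/* FreeRTOSConfig.h fragment: with configIDLE_SHOULD_YIELD set to 1 the
 * idle task yields on each pass through its loop, handing the rest of
 * its time slice to other tasks at the idle priority; set to 0 the idle
 * task keeps its full slice, so those tasks are never handed the runt
 * end of a slice. */
#define configIDLE_SHOULD_YIELD  1
```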
getting really equal execution of equal priority tasks can be difficult.
Especially if interrupts are executing – even if interrupts never cause task switches they do eat into time slices. Also, that would assume a task never blocks, and most multi-threaded applications attempt to ensure tasks do block when there is no useful work for them to be doing.
I believe tasks when they wake up from being unblocked, will be put at the front of the ready list of their priority
I’m not sure (without checking) if they actually get put onto the front or the back – it will be whichever is most efficient to implement – but logically they are at the back in the sense that if there are multiple tasks at the same priority then the task that is selected is the one that has been waiting the longest.