Priority inversion, optimization with peek

Hi all, I have something on my mind and would like your opinion. In a given application I have one SPI resource which is used to communicate with both a radio chip and a graphic display by means of multiple chip selects.
Task A drives the radio-chip, task B displays texts and images on the display.
Task A obviously has a higher priority than task B.
The SPI is guarded by a mutex. The application works, and everyone is happy.
Whilst verifying the software using the trace facility I detected a priority inversion.
The image below shows the capture. Task A (number 4) attempts to take the mutex (blue lines), which is held by task B, so task A has to wait until task B gives the mutex back, which it does only after it has written a whole character to the display.
Task B gives the mutex back after every character before taking it again, to make sure task A gets the SPI resource within a useful time. To minimize task A's delay I could of course give and take the mutex between every single pixel I send to the display, but I think this would have a huge impact on performance (confirmed by a short test). So my idea was to check, between pixels, whether the mutex is wanted by another task.
I would check whether the waiting queue of the mutex is empty or not; if it is not empty, I would give and take the mutex. Although I think this would be much more efficient, I don't like accessing kernel objects directly (without an OS function).
I think it might be possible to detect whether another task is waiting by using the xQueuePeek function with the mutex handle as the xQueue argument. Thanks for your opinion.

Priority inversion, optimization with peek

The task holding the mutex could check its own priority using uxTaskPriorityGet( NULL ). If the priority is higher than it was when the task obtained the mutex then priority inheritance must be in force, because another, higher-priority task is waiting for the mutex. Which trace tool are you using?

Priority inversion, optimization with peek

I quickly tested the following:
/* Test whether the mutex is wanted by another task. */
if( listLIST_IS_EMPTY( &( ( ( xQUEUE * ) xSPIMutex )->xTasksWaitingToReceive ) ) == pdFALSE )
{
    /* Share the SPI with the radio-tasks. */
    xQueueGiveMutexRecursive( xSPIMutex );
    xQueueTakeMutexRecursive( xSPIMutex, portMAX_DELAY );
}
This actually works quite well, but I had to manually import the xQUEUE definition. I also tested the same with the priority check.
/* Test whether the mutex is wanted by another task. */
if( uxTaskPriorityGet( NULL ) > ( tskIDLE_PRIORITY + 1 ) )
{
    /* Share the SPI with the radio-tasks. */
    xQueueGiveMutexRecursive( xSPIMutex );
    xQueueTakeMutexRecursive( xSPIMutex, portMAX_DELAY );
}
The result is about the same (the priority check is a tiny bit slower, but much more satisfying because I don't have to mess around with the kernel objects). Thanks for the tip with the priorities. As for the trace tool: I was a bit disappointed by some features of the Percepio tool, so I started to create my own two weeks ago. Thanks again.

Priority inversion, optimization with peek

listLIST_IS_EMPTY( &( ( ( xQUEUE * ) xSPIMutex )->xTasksWaitingToReceive ) )
This does not tell you whether the waiting task has a lower, equal or higher priority than the task currently holding the mutex.
uxTaskPriorityGet( NULL ) > ( tskIDLE_PRIORITY + 1 )
…neither will this, as it stands. As you then give the mutex back and re-take it, the kernel scheduler will sort out which task should be given the semaphore next anyway, so maybe it does not matter in your case. However, if you check against the calling task's original priority (the priority it had when it took the mutex) instead of tskIDLE_PRIORITY + 1, then you need only give the mutex back and re-take it if the calling task's priority has been raised, if that makes sense.
For the trace tool: I was a bit disappointed by some features of the Percepio tool, so I started to create my own two weeks ago.
Any feedback on additional features you would like to see would be appreciated. Regards.

Priority inversion, optimization with peek

This does not tell you if the task waiting has lower, equal or higher priority than the task currently holding the mutex.
Of course not, but the performance is much better than giving the mutex back after every pixel write. Also, in this particular application only a higher-priority task can be waiting for the mutex, because there are only such tasks. I discarded this “solution” anyway. The ( tskIDLE_PRIORITY + 1 ) is actually the normal priority of the task.
Its priority is never changed unless it is inherited, so it will only give and re-take the mutex exactly as you described.
This was just a quick and dirty implementation to test the mechanism itself. The actual implementation now saves the priority when the task is created and compares against that value.
Any feedback on additional features you would like to see would be appreciated.
I’m checking out a few functions and features just for fun. If I have anything serious I will get back to you or Percepio.