Bug in the Win32 Port

Hi Richard, I'm opening a new topic to emphasize the problem. The vPortExitCritical function in the Win32 port has a major bug:

~~~
void vPortExitCritical( void )
{
int32_t lMutexNeedsReleasing;

    /* The interrupt event mutex should already be held by this thread as it
    was obtained on entry to the critical section. */
    lMutexNeedsReleasing = pdTRUE;

    if( ulCriticalNesting > portNO_CRITICAL_NESTING )
    {
        if( ulCriticalNesting == ( portNO_CRITICAL_NESTING + 1 ) )
        {
            ulCriticalNesting--;

            /* Were any interrupts set to pending while interrupts were
            (simulated) disabled? */
            if( ulPendingInterrupts != 0UL )
            {
                configASSERT( xPortRunning );
                SetEvent( pvInterruptEvent );

                /* Mutex will be released now, so does not require releasing
                on function exit. */
                lMutexNeedsReleasing = pdFALSE;
                ReleaseMutex( pvInterruptEventMutex );
            }
        }
        else
        {
            /* Tick interrupts will still not be processed as the critical
            nesting depth will not be zero. */
            ulCriticalNesting--;
            lMutexNeedsReleasing = pdFALSE; /* <<< this assignment is missing! */
        }
    }

    if( pvInterruptEventMutex != NULL )
    {
        if( lMutexNeedsReleasing == pdTRUE )
        {
            configASSERT( xPortRunning );
            ReleaseMutex( pvInterruptEventMutex );
        }
    }
}
~~~

The assignment lMutexNeedsReleasing = pdFALSE; is missing when ulCriticalNesting is greater than one: the mutex is then released on exit from a nested critical section while the outer critical section is still in force, allowing other tasks to interrupt even though they should not be able to.
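To make the reported scenario concrete, here is a minimal trace of the nesting pattern in question – a sketch, assuming taskENTER/EXIT_CRITICAL map straight onto vPortEnter/ExitCritical as in this port and that no interrupts are pending; whether the extra release really leaves the critical section unprotected is picked up again further down the thread:

~~~
taskENTER_CRITICAL();   /* ulCriticalNesting 0 -> 1                            */
taskENTER_CRITICAL();   /* ulCriticalNesting 1 -> 2                            */
taskEXIT_CRITICAL();    /* else branch: nesting 2 -> 1, lMutexNeedsReleasing
                           is left at pdTRUE, so ReleaseMutex() is called here */
taskEXIT_CRITICAL();    /* nesting 1 -> 0, ReleaseMutex() is called again      */
~~~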

Bug in the Win32 Port

And here is another problem: vPortGenerateSimulatedInterrupt is used to generate a context switch (yield), possibly from within a vPortEnterCritical/vPortExitCritical pair. The original version of vPortGenerateSimulatedInterrupt released the mutex unconditionally, allowing other tasks to pre-empt while the original task was still in a critical section, so the ReleaseMutex call has to move inside the if. The change also requires checking the critical nesting before acquiring the mutex. Here is the fixed version (I've replaced zero with portNO_CRITICAL_NESTING):

~~~
void vPortGenerateSimulatedInterrupt( uint32_t ulInterruptNumber )
{
configASSERT( xPortRunning );
if( ( ulInterruptNumber < portMAX_INTERRUPTS ) && ( pvInterruptEventMutex != NULL ) )
{
    /* The critical nesting must be checked before taking the mutex. */
    if( ulCriticalNesting == portNO_CRITICAL_NESTING )
    {
        WaitForSingleObject(pvInterruptEventMutex, INFINITE);
    }
    ulPendingInterrupts |= ( 1 << ulInterruptNumber );

    /* The simulated interrupt is now held pending, but don't actually
    process it yet if this call is within a critical section.  It is
    possible for this to be in a critical section as calls to wait for
    mutexes are accumulative. */
    if( ulCriticalNesting == portNO_CRITICAL_NESTING )
    {
        SetEvent( pvInterruptEvent );
        ReleaseMutex( pvInterruptEventMutex ); /* The release must move inside this if. */
    }
}
}
~~~
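For context, a sketch of the call pattern that the nesting check guards against (portINTERRUPT_EXAMPLE is a hypothetical application-defined interrupt number):

~~~
taskENTER_CRITICAL();
{
    /* Inside the critical section the function above neither takes nor
       releases the interrupt event mutex; the interrupt is only marked as
       pending. */
    vPortGenerateSimulatedInterrupt( portINTERRUPT_EXAMPLE );
}
taskEXIT_CRITICAL();    /* The pending interrupt is processed once the
                           critical section is finally exited. */
~~~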

Bug in the Win32 Port

We have discovered another problem in the port that causes task deadlocks, particularly in the function ulTaskNotifyTake:

~~~
                /* All ports are written to allow a yield in a critical
                section (some will yield immediately, others wait until the
                critical section exits) - but it is not something that
                application code should ever do. */
                portYIELD_WITHIN_API();
            }
            else
            {
                mtCOVERAGE_TEST_MARKER();
            }
        }
        else
        {
            mtCOVERAGE_TEST_MARKER();
        }
    }
    taskEXIT_CRITICAL();

    taskENTER_CRITICAL();
    {
~~~

The original taskEXIT_CRITICAL was assuming that Windows performed a task switch, allowing the FreeRTOS interrupt management to take place and swap out the current task. Well, at least in our simulation, sometimes that didn't happen! It seems that Windows sometimes delays the context switch. The taskENTER_CRITICAL was then performed immediately, blocking the context switch and causing a deadlock of the current task.

The solution is to wait for Windows to perform the context switch. I tried Sleep(0) but it didn't work. Sleep(1) worked but it would mess up the FreeRTOS scheduler. The only solution I found was to wait for the interrupt management to occur (see also this thread: https://stackoverflow.com/questions/5848448/forcing-context-switch-in-windows). I attach our current port implementation. I've fixed all the problems mentioned above and marked the changes with #PORTCHANGE. I hope that these changes will be reviewed and, if eligible, integrated into the official FreeRTOS release.
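A hypothetical sketch of the kind of wait described above – the counter name is an assumption, not the attached port.c, and later in the thread this approach is reported to hang when critical sections are used from the tick interrupt:

~~~
/* At the end of vPortExitCritical(), when interrupts were pending: spin
   until the simulated interrupt processing has actually run, so the task
   cannot immediately re-enter a critical section and starve the context
   switch.  ulInterruptsProcessed is an assumed volatile counter incremented
   by prvProcessSimulatedInterrupts(). */
uint32_t ulSnapshot = ulInterruptsProcessed;

ReleaseMutex( pvInterruptEventMutex );

while( ulInterruptsProcessed == ulSnapshot )
{
    SwitchToThread();   /* Give up the Windows time slice while waiting. */
}
~~~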

Bug in the Win32 Port

Which version of the file did you show here? The original source of the 10.2.0 port.c file for MSVC-MingW is copied below; it is different to the code you posted. Are you going to include all the changes in the next FreeRTOS version? Or can you gather all the changes you are currently working on into a patch (or a branch from the repo)? Sorry, I've just found out that the whole port.c is in the next post 🙂

~~~
void vPortGenerateSimulatedInterrupt( uint32_t ulInterruptNumber )
{
configASSERT( xPortRunning );
if( ( ulInterruptNumber < portMAX_INTERRUPTS ) && ( pvInterruptEventMutex != NULL ) )
{
    /* Yield interrupts are processed even when critical nesting is
    non-zero. */
    WaitForSingleObject( pvInterruptEventMutex, INFINITE );
    ulPendingInterrupts |= ( 1 << ulInterruptNumber );

    /* The simulated interrupt is now held pending, but don't actually
    process it yet if this call is within a critical section.  It is
    possible for this to be in a critical section as calls to wait for
    mutexes are accumulative. */
    if( ulCriticalNesting == 0 )
    {
        SetEvent( pvInterruptEvent );
    }

    ReleaseMutex( pvInterruptEventMutex );
}
}
~~~

Bug in the Win32 Port

I have taken the port of version V10.2.0 and made my modifications. Actually we are still working with V10.0.1 but the port is compatible.

Bug in the Win32 Port

I found one bug in the port.c that davidefer posted. In vPortExitCritical( void ), pvInterruptEventMutex is now not released if ulCriticalNesting is above 1 (nesting is allowed, so interrupts are only "enabled" when it is equal to 0). It is at line number 659:

~~~
lMutexNeedsReleasing = pdFALSE; // #PORTCHANGE mutex cannot be released in critical section
~~~

BUT in vPortEnterCritical this mutex is locked more than once (in the case of nested critical sections), line number 605:

~~~
if( xPortRunning == pdTRUE )
{
    /* The interrupt event mutex is held for the entire critical section,
    effectively disabling (simulated) interrupts. */
    WaitForSingleObject( pvInterruptEventMutex, INFINITE );
    ulCriticalNesting++;
}
~~~

So whenever critical sections nest there is a deadlock. The solution is:

~~~
if( ulCriticalNesting == portNO_CRITICAL_NESTING )
{
    WaitForSingleObject( pvInterruptEventMutex, INFINITE );
}
ulCriticalNesting++;
~~~

The other question is whether the change in the exit-critical function is needed at all. The ReleaseMutex documentation (https://docs.microsoft.com/en-us/windows/desktop/api/synchapi/nf-synchapi-releasemutex) says: "A thread can specify a mutex that it already owns in a call to one of the wait functions without blocking its execution. This prevents a thread from deadlocking itself while waiting for a mutex that it already owns. However, to release its ownership, the thread must call ReleaseMutex one time for each time that it obtained ownership (either through CreateMutex or a wait function)." It looks like a nesting mechanism is already implemented inside the mutex itself, so the previous version could have worked correctly, since every call to vPortEnterCritical made a lock attempt on the mutex and every vPortExitCritical made an unlock attempt. Does anyone know if the changes will be included in the next official FreeRTOS release?
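To illustrate that last point, a small trace assuming the original (unmodified) enter/exit pair, where every vPortEnterCritical() performs one WaitForSingleObject() and every vPortExitCritical() performs one ReleaseMutex() on the recursive Win32 mutex:

~~~
vPortEnterCritical();   /* mutex ownership count 1, ulCriticalNesting 1 */
vPortEnterCritical();   /* mutex ownership count 2, ulCriticalNesting 2 */
vPortExitCritical();    /* ReleaseMutex(): ownership count 1, nesting 1 */
vPortExitCritical();    /* ReleaseMutex(): ownership count 0, nesting 0 –
                           only now does the mutex actually become free */
~~~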

Bug in the Win32 Port

So, according to what you say, the whole ulCriticalNesting management could be deleted(?) – or at least in vPortGenerateSimulatedInterrupt. Anyway, it is quite strange, because I've let my app run for hours without deadlocks.

Bug in the Win32 Port

@davidefer I think ulCriticalNesting is still needed to inform vPortGenerateSimulatedInterrupt that interrupts are disabled and that it should not wait for the mutex in that case. I think incrementing it on every vPortEnterCritical is not necessary and it could be a bool, but that needs to be tested, since it's not described clearly in the MS documentation. Look for example at https://docs.microsoft.com/en-us/windows/desktop/api/processthreadsapi/nf-processthreadsapi-resumethread – it clearly says that ResumeThread decrements the suspend count and returns the value before the change; the thread is running again when the suspend count equals 0. You didn't observe it… hmmm. Are you sure you called vPortEnterCritical at least 2 times in a row (without calling vPortExitCritical)? Could you place a breakpoint in the else branch of the if( ulCriticalNesting == ( portNO_CRITICAL_NESTING + 1 ) ) statement?

Bug in the Win32 Port

It seems you are right, I have no nesting. I'll let a test run overnight with your mod. Thank you!

Bug in the Win32 Port

It seems that most of the changes I made concerning the mutex management are unnecessary, thanks to @dukb. Attached here is my current port version; I've removed the unnecessary changes but left my bug fixes.

Bug in the Win32 Port

Thanks – there are quite a few changes to look through here.

Bug in the Win32 Port

I also noticed the problem of portYIELD not yielding immediately, causing the scheduler to lock up, but my fix uses a different approach than davidefer's. First, I made each task store its thread state in a thread-local storage slot. This allows the task to access its own thread state and lets you check whether a thread is a FreeRTOS task or just a normal Windows thread. I then added a Windows event object to xThreadState. When a task exits a critical section with interrupts pending, it waits on this event object until prvProcessSimulatedInterrupts signals that it is done processing the interrupts. If an interrupt triggers a context switch, the task will continue to wait on this event until it is scheduled to run again.

Then, I modified vPortGenerateSimulatedInterrupt so it just enters a critical section, sets the interrupt flag and event, then exits the critical section. Now, when a task calls vPortGenerateSimulatedInterrupt, it will wait for the interrupts to finish before returning from the final call to portEXIT_CRITICAL. I also added a tickless idle implementation that reduces the CPU usage of the Windows port.
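A minimal sketch of the thread-local storage part of that idea – the index variable and helper names are assumptions for illustration, not the attached patch, and xThreadState is the port's existing per-task structure:

~~~
#include <windows.h>
#include "FreeRTOS.h"

static DWORD ulThreadStateTlsIndex = TLS_OUT_OF_INDEXES;

/* Called once, e.g. when the scheduler is started. */
static void prvInitialiseThreadStateTls( void )
{
    ulThreadStateTlsIndex = TlsAlloc();
    configASSERT( ulThreadStateTlsIndex != TLS_OUT_OF_INDEXES );
}

/* Called at the top of each task's Windows thread function. */
static void prvRegisterThreadState( xThreadState *pxThreadState )
{
    TlsSetValue( ulThreadStateTlsIndex, pxThreadState );
}

/* Returns the calling thread's state, or NULL if the caller is a plain
   Windows thread rather than a thread that is running a FreeRTOS task. */
static xThreadState *prvGetThreadState( void )
{
    return ( xThreadState * ) TlsGetValue( ulThreadStateTlsIndex );
}
~~~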

Bug in the Win32 Port

Sorry, it's been a while. Looking at the code now, I'm not sure I see issues. There is, however, an issue in the port.c you posted, as the loop waiting for the interrupt counter to change in the exit-critical function causes a hang when the trace recorder is used from the tick interrupt – as that uses critical sections inside the tick. There is a call to GetThreadContext() in prvProcessSimulatedInterrupts() designed to wait until the thread that was suspended is actually in the suspended state. I have tried turning that into a loop to ensure it does indeed wait until the subject thread is suspended, but the only time I've seen GetThreadContext() fail is when the subject thread has deleted itself – so the handle passed to GetThreadContext() is NULL. I will continue looking at this. [EDIT] See https://sourceforge.net/p/freertos/discussion/382005/thread/d8a09bc895/#4231
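For readers without the port open, this is roughly the pattern being referred to (a simplified sketch, error handling omitted): SuspendThread() is asynchronous, so GetThreadContext() is used as a synchronous operation that can only complete once the target thread really is suspended.

~~~
CONTEXT xContext;

/* Ask Windows to suspend the thread being switched out.  The suspension
   does not necessarily take effect immediately. */
SuspendThread( pxThreadState->pvThread );

/* GetThreadContext() cannot obtain a valid context for a running thread,
   so a successful call here means the thread really has stopped. */
xContext.ContextFlags = CONTEXT_FULL;
( void ) GetThreadContext( pxThreadState->pvThread, &xContext );
~~~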

Bug in the Win32 Port

OK – I think I can replicate it now. Curiously, it was having the trace recorder code included in the build that was masking the issue – presumably because that has its own critical sections. I'm pretty confident that the latest update (which will be checked in soon, but at the time of writing SVN is not allowing a secure connection) fixes the issue. The update just allocates an event for each task; then, each time a task leaves the blocked state by means other than the tick interrupt, the first thing it does next is block on its own event. If the task continues running for a bit (because SuspendThread() is asynchronous) then it won't get past blocking on the event. When the task is resumed, the event is signalled, freeing the task again.
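Roughly sketched, with assumed names (the checked-in code may differ):

~~~
/* Assumed extra field in the per-task thread state: HANDLE pvYieldEvent. */

/* In a task that is leaving the blocked state by means other than the tick
   interrupt: the first thing it does is block on its own event, so that
   even if SuspendThread() takes effect late, execution cannot run on ahead
   of the scheduler. */
WaitForSingleObject( pxThisThreadState->pvYieldEvent, INFINITE );

/* In prvProcessSimulatedInterrupts(), when the task is switched back in: */
SetEvent( pxThreadStateToRun->pvYieldEvent );
~~~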

Bug in the Win32 Port

OK, thank you – we will check your solution as soon as it is checked in.

Bug in the Win32 Port

So it turns out that when I thought I had managed to replicate the issue reported here, it was actually a bug in the application code I was testing – once that was fixed I'm back to not being able to replicate the issue. Can you please tell me which version of Windows you are using, and also confirm that you are not making ANY Windows system calls from inside a FreeRTOS task (including writes to disk and writes to the console – although you can get away with these if they are infrequent and only done from one task, as per the demo app)?

However, the changes I made to the port.c file are probably valuable to keep just in case. I'm not going to check them in yet because the soak test failed last night – I'm almost certain the failure was a false positive, but nonetheless I will continue to test before checking in. I have attached the updated port.c file to this post – you can compare it with the head revision in SourceForge to see the changes. Please let me know if it solves your problem (assuming the problem is not caused by using Windows system calls).

Bug in the Win32 Port

Hi Richard, concerning the Win32 calls: as stated here, we do actually perform them, but protected by a taskENTER_CRITICAL(); <Win32 call>; taskEXIT_CRITICAL(); sequence. Your port seems good so far; it has run for some hours without problems. I'll check it with a different project and let you know the results tomorrow.
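In other words, the pattern in use looks roughly like this – a sketch with a hypothetical log file handle; whether wrapping Win32 calls like this is sufficient is debated below:

~~~
#include <windows.h>
#include "FreeRTOS.h"
#include "task.h"

/* Write a log line from a FreeRTOS task, with the Win32 call protected by a
   FreeRTOS critical section as described above. */
static void prvLogFromTask( HANDLE hLogFile, const char *pcLogLine, DWORD ulLineLength )
{
    DWORD ulBytesWritten = 0;

    taskENTER_CRITICAL();
    {
        WriteFile( hLogFile, pcLogLine, ulLineLength, &ulBytesWritten, NULL );
    }
    taskEXIT_CRITICAL();
}
~~~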

Bug in the Win32 Port

Richard, I have some concerns about the solution you posted.
  1. It only fixes the problem for the yield interrupt. If any other simulated interrupt causes a context switch, the suspended thread could continue to run for a while after the call to vPortGenerateSimulatedInterrupt.
  2. You’re using ulPendingInterrupts to check if you should wait on the yield event object AFTER releasing the mutex. If the simulated interrupt handler preempts the thread and clears the flags before you reach the check, the task will never actually wait on the event. You should either make a local copy before releasing the mutex, or create a boolean indicating that the thread should wait on the event.
  3. Why do you wait on the event in a loop with a 0 timeout? You can just use ResetEvent, right?
  4. The fix assumes that the thread calling vPortGenerateSimulatedInterrupt or vPortExitCritical is the currently scheduled FreeRTOS task. If it’s a normal Windows thread instead, it will access the yield event of the currently scheduled FreeRTOS task, which could cause it to unblock that FreeRTOS task early. This is why I used a thread-local storage slot in my solution. It allows you to access the thread state of the currently executing thread (not just the currently scheduled thread), which ensures you’re using the correct yield event object. And, in the case of a normal Windows thread, it will be NULL, so you know not to wait on an event. It also provides an easy way to check if a thread is running when it shouldn’t be (the thread state is different than pxCurrentTCB).
  5. There is no need to duplicate the yield event waiting code in vPortGenerateSimulatedInterrupt and vPortExitCritical. Just wrap the code inside vPortGenerateSimulatedInterrupt with vPortEnterCritical/vPortExitCritical instead of manually taking the mutex.
  6. You don’t call CloseHandle on the yield event when deleting the thread.
In regards to calling Windows system calls from a FreeRTOS task: I know your official position is to just not do that, but at the same time you break this rule in your own demos. From what I can tell, it's not safe to call any external function (even standard C library functions) that could make a system call without first entering a critical section. Otherwise, the FreeRTOS scheduler could try to suspend the thread during the syscall, which can cause SuspendThread to fail (or not suspend immediately). If it actually does suspend the thread successfully, it can cause a deadlock if another thread calls a system function before the first has a chance to finish (if the first had acquired a mutex before being suspended).

I suspect that SuspendThread can fail even when the program does not call system functions from FreeRTOS tasks, because the Windows port itself does so. Namely, when entering/exiting a critical section, it acquires and releases the interrupt mutex. If the scheduler tries to suspend the thread at the wrong point inside WaitForSingleObject or ReleaseMutex, the call to SuspendThread can either fail to suspend the thread immediately or, in some cases, fail to suspend the thread at all. That's why it is important to block the FreeRTOS task on an event inside vPortExitCritical if any interrupt is pending, in case the interrupt causes a context switch and SuspendThread fails.

From my testing, with my fix in place and with all system calls wrapped in a critical section, we can safely interact with standard Windows threads without relying on lock-free data structures or using the Windows thread priority as a synchronization mechanism. We can safely call taskENTER_CRITICAL/taskEXIT_CRITICAL from Windows threads to synchronize access to memory that is shared with a FreeRTOS task. We can interact with Windows threads from FreeRTOS tasks by calling Windows API functions from a critical section. And we can interact with FreeRTOS tasks from Windows threads using the FreeRTOS ISR API functions (provided we enter a critical section first and manually call portYIELD if a yield is requested).
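As a sketch of that last point (the queue and value are hypothetical; the same enter-critical/FromISR/portYIELD pattern appears in the sys_exec_func code posted further down):

~~~
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

/* Called from a plain Windows thread, not from a FreeRTOS task. */
static void prvPostFromWindowsThread( QueueHandle_t xQueue, uint32_t ulValue )
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    taskENTER_CRITICAL();
    {
        xQueueSendFromISR( xQueue, &ulValue, &xHigherPriorityTaskWoken );

        if( xHigherPriorityTaskWoken != pdFALSE )
        {
            portYIELD();
        }
    }
    taskEXIT_CRITICAL();
}
~~~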

Bug in the Win32 Port

Richard, I have some concerns about the solution you posted.
Thanks for your feedback.
  1. It only fixes the problem for the yield interrupt. If any other simulated interrupt causes a context switch, the suspended thread could continue to run for a while after the call to vPortGenerateSimulatedInterrupt.
Skipping this one for now as it will need more thought. [edit1] This is basically the same case as a tick interrupt resulting in a context switch. When making the changes posted above I decided not to consider the tick as a critical case, as a tick interrupt can occur at any time – and therefore, if the context switch (the task being suspended) occurred a little late, it would not cause a fatal error because the task being interrupted is unaware of it anyway. This is a very different scenario to a task yielding because it is entering the blocked state – if a task fails to enter the blocked state when it yields then there will be logic errors in the program that are critical. For example, if a task is blocking on a queue but continues past the block point then the queue logic won't work and anything might happen. The Win32 port is not ‘real time’ in any case, but an approximation that should behave logically the same, though not necessarily temporally the same.
  2. You’re using ulPendingInterrupts to check if you should wait on the yield event object AFTER releasing the mutex. If the simulated interrupt handler preempts the thread and clears the flags before you reach the check, the task will never actually wait on the event. You should either make a local copy before releasing the mutex, or create a boolean indicating that the thread should wait on the event.
Can you please give me a line number (from the file I posted)? [edit1] Fixed.
  3. Why do you wait on the event in a loop with a 0 timeout? You can just use ResetEvent, right?
….because I didn’t know about ResetEvent(), will change.
  4. The fix assumes that the thread calling vPortGenerateSimulatedInterrupt or vPortExitCritical is the currently scheduled FreeRTOS task. If it’s a normal Windows thread instead, it will access the yield event of the currently scheduled FreeRTOS task,
If I understand you correctly, you are describing something that it is not valid to do. Normally scheduled Windows threads cannot access FreeRTOS scheduler operations, and Windows threads that are running FreeRTOS tasks cannot make Windows system calls – the two are just not logically compatible.
which could cause it to unblock that FreeRTOS task early. This is why I used a thread-local storage slot in my solution. It allows you to access the thread state of the currently executing thread (not just the currently scheduled thread), which ensures you’re using the correct yield event object. And, in the case of a normal Windows thread, it will be NULL, so you know not to wait on an event. It also provides an easy way to check if a thread is running when it shouldn’t be (the thread state is different than pxCurrentTCB).
Maybe I'm just not following this properly, but doing this kind of thing could be the root cause of your issues, and be why I can't replicate them.
  5. There is no need to duplicate the yield event waiting code in vPortGenerateSimulatedInterrupt and vPortExitCritical. Just wrap the code inside vPortGenerateSimulatedInterrupt with vPortEnterCritical/vPortExitCritical instead of manually taking the mutex.
  6. You don’t call CloseHandle on the yield event when deleting the thread.
Will fix.
In regards to calling Windows system calls from a FreeRTOS task: I know your official position is to just not do that, but at the same time you break this rule in your own demos.
In the standard demo we get away with this because we write to stdout very infrequently and from only one task. You will note that the call to kbhit() is normally commented out, because that causes issues too. The TCP/IP examples, on the other hand, write to the console rapidly, so we don't use printf() at all, as it would soon crash if we did. Instead we send the strings to print to a Windows thread that is not under the control of the FreeRTOS kernel and print them out from there. In that case the strings are sent in a circular RAM buffer to avoid sharing any FreeRTOS primitives with the Windows thread. I think the thread running kernel code does signal the Windows thread somehow (I forget how), but in a non-blocking way.
From what I can tell, it's not safe to call any external function (even standard C library functions) that could make a system call without first entering a critical section.
Entering a critical section may work, but I don’t know enough about the inner workings of Windows to know.
Otherwise, the FreeRTOS scheduler could try to suspend the thread during the syscall, which can cause SuspendThread to fail (or not suspend immediately). If it actually does suspend the thread successfully, it can cause a deadlock if another thread calls a system function before the first has a chance to finish (if the first had acquired a mutex before being suspended). I suspect that SuspendThread can fail even when the program does not call system functions from FreeRTOS tasks, because the Windows port itself does so. Namely, when entering/exiting a critical section, it acquires and releases the interrupt mutex. If the scheduler tries to suspend the thread at the wrong point inside WaitForSingleObject or ReleaseMutex, the call to SuspendThread can either fail to suspend the thread immediately or, in some cases, fail to suspend the thread at all. That's why it is important to block the FreeRTOS task on an event inside vPortExitCritical if any interrupt is pending, in case the interrupt causes a context switch and SuspendThread fails.
Again need to consider this point more.
From my testing, with my fix in place and with all system calls wrapped in a critical section, we can safely interact with standard Windows threads without relying on lock-free data structures or using the Windows thread priority as a synchronization mechanism. We can safely call taskENTER_CRITICAL/taskEXIT_CRITICAL from Windows threads to synchronize access to memory that is shared with a FreeRTOS task. We can interact with Windows threads from FreeRTOS tasks by calling Windows API functions from a critical section. We can interact with FreeRTOS tasks from Windows threads using the FreeRTOS ISR API functions.
I only tried the code posted by davidefer – which for me locked up right away, I think because the trace recorder code was using critical sections from inside the interrupt. I need to study your code in more detail as it sounds like it has some goodness – I did diff your file with the released code over the weekend, but there were too many changes to look through. Good discussions –

Bug in the Win32 Port

I have also found a related bug in the Windows port. The scheduler can cause a deadlock if it suspends a thread before it has fully initialized (i.e. before it calls into the specified thread function). I suspect that the Windows thread initialization code acquires some mutex, which other system functions also try to acquire. If the thread is suspended after acquiring the mutex but before releasing it, and then another thread tries to acquire the same mutex, the program will deadlock. I’ll attach my solution, which is just to wait for the thread to fully initialize before returning from pxPortInitialiseStack. It also includes my previous changes and some changes to fix the tick accuracy, which you may find useful.
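One way to realise that idea, sketched under the assumption that the task's Windows thread is allowed to run up to the start of its thread function before the simulated scheduler can suspend it (the attached solution may do this differently; pvThreadInitialisedEvent is an assumed extra field):

~~~
/* At the very top of the Windows thread function that hosts the task:
   Windows-side initialisation (CRT start-up, loader locks, etc.) is behind
   us once execution reaches this point, so tell the creating thread. */
SetEvent( pxThreadState->pvThreadInitialisedEvent );

/* In pxPortInitialiseStack(), after creating the thread: do not hand the
   thread over to the simulated scheduler until it is past initialisation. */
WaitForSingleObject( pxThreadState->pvThreadInitialisedEvent, INFINITE );
~~~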

Bug in the Win32 Port

Can you please give me a line number (from the file I posted)?
Line 666
If I understand you correctly, you are describing something that it is not valid to do. Normally scheduled Windows threads cannot access FreeRTOS scheduler operations, and Windows threads that are running FreeRTOS tasks cannot make Windows system calls – the two are just not logically compatible.
We call vPortGenerateSimulatedInterrupt from Windows threads, which I guess is not the intended use case, but as I say later in my post, this works well with our fixes. Perhaps it would be less confusing about what is and isn't allowed if there were some standard way of interacting between Windows and FreeRTOS threads. We've been using the following function for that purpose, with good results:

~~~
typedef struct {
    int32_t (*func)(void* arg);
    void* arg;
    int32_t result;
    StaticSemaphore_t wait_buf;
    SemaphoreHandle_t wait;
} sys_exec_state_t;

VOID CALLBACK sys_exec_func_cb(PTP_CALLBACK_INSTANCE instance, PVOID context)
{
    sys_exec_state_t* state = (sys_exec_state_t*)context;
    BaseType_t yield = pdFALSE;

    // call the function
    state->result = state->func(state->arg);

    // tell the task that we're done
    taskENTER_CRITICAL();
    xSemaphoreGiveFromISR(state->wait, &yield);

    if (yield == pdTRUE)
    {
        portYIELD();
    }
    taskEXIT_CRITICAL();
}

int32_t sys_exec_func(int32_t (*func)(void* arg), void* arg)
{
    int32_t result = 0;
    sys_exec_state_t state;

    state.func = func;
    state.arg = arg;
    state.result = 0;
    state.wait = xSemaphoreCreateBinaryStatic(&state.wait_buf);

    BOOL submitted = FALSE;

    taskENTER_CRITICAL();
    submitted = TrySubmitThreadpoolCallback(sys_exec_func_cb, &state, NULL);
    taskEXIT_CRITICAL();

    if (submitted == TRUE)
    {
        xSemaphoreTake(state.wait, portMAX_DELAY);
        result = state.result;
    }
    else
    {
        result = -EWOULDBLOCK;
    }

    vSemaphoreDelete(state.wait);
    return result;
}
~~~

This allows you to synchronously execute a function outside of the FreeRTOS context from a FreeRTOS task. For example, we can safely do something like this:

~~~
typedef struct {
    const void* ptr;
    size_t size_of_elements;
    size_t number_of_elements;
    FILE* file;
} fwrite_args_t;

int32_t do_fwrite(void* arg)
{
    fwrite_args_t* args = (fwrite_args_t*)arg;
    return (int32_t)fwrite(args->ptr, args->size_of_elements, args->number_of_elements, args->file);
}

int32_t safe_fwrite(const void *ptr, size_t size_of_elements, size_t number_of_elements, FILE *file)
{
    fwrite_args_t args;
    args.ptr = ptr;
    args.size_of_elements = size_of_elements;
    args.number_of_elements = number_of_elements;
    args.file = file;

    return sys_exec_func(do_fwrite, &args);
}

void freertos_task(void* arg)
{
    FILE* f;
    uint8_t buffer[10];

    // ...
    safe_fwrite(buffer, 10, 1, f);
    // ...
}
~~~

Bug in the Win32 Port

From your description there are a lot of good enhancements here, I just need a little time to digest and test.

Bug in the Win32 Port

From my side, the tests I've done with Richard's version were OK. I'm posting here my modified version with the termination management and some other mods. I'll follow the discussion and, if needed, I can also do some tests. Thank you very much for the support!

Bug in the Win32 Port

I made a couple of edits to this post just to keep you up to date with progress. Only commented on the first two points so far. Search for [edit1] in the post. https://sourceforge.net/p/freertos/discussion/382005/thread/d8a09bc895/#74d1/9a09

Bug in the Win32 Port

[edit1] This is basically the same case as a tick interrupt resulting in a context switch.
The yield interrupt is synchronous, while the tick interrupt is asynchronous. I agree that the timing is not critical for asynchronous interrupts, but it definitely is critical for synchronous interrupts (like the yield interrupt). When the user calls vPortGenerateSimulatedInterrupt from a FreeRTOS task, it should behave in a synchronous manner, regardless of which interrupt it is. For example, if I register my own simulated interrupt handler and then call vPortGenerateSimulatedInterrupt from a task, I would expect that task to yield to the interrupt handler before returning from vPortGenerateSimulatedInterrupt, and, if the interrupt handler requests a context switch, I would also expect the scheduler to switch to another task before the first returns from vPortGenerateSimulatedInterrupt. The fact that it doesn't currently do that with the yield interrupt is the reason it crashes. My point is that it is possible to fix this for every interrupt, not just the yield interrupt.
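As a sketch of the expectation being described (portINTERRUPT_EXAMPLE and prvExampleIsr are hypothetical application-level names):

~~~
/* A simulated interrupt handler; returning pdTRUE requests a context
   switch. */
static uint32_t prvExampleIsr( void )
{
    /* ...handle the interrupt... */
    return pdTRUE;
}

/* During start-up: */
vPortSetInterruptHandler( portINTERRUPT_EXAMPLE, prvExampleIsr );

/* From a FreeRTOS task: */
vPortGenerateSimulatedInterrupt( portINTERRUPT_EXAMPLE );
/* The expectation is that prvExampleIsr() has already run by this point,
   and, because it requested a context switch, that another task has been
   given the processor before this line is reached. */
~~~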

Bug in the Win32 Port

In fact, the real issue is not really with vPortGenerateSimulatedInterrupt at all, but rather with vPortExitCritical. When interrupts are re-enabled by the final (outermost) call to vPortExitCritical, it should process all pending interrupts before returning. That way, if you do something like

~~~
taskENTER_CRITICAL();
vPortGenerateSimulatedInterrupt(portINTERRUPT_UART);
// ...
taskEXIT_CRITICAL();
~~~

the interrupt handlers will be processed before returning from taskEXIT_CRITICAL. Then, vPortGenerateSimulatedInterrupt can be as simple as this:

~~~
void vPortGenerateSimulatedInterrupt(uint32_t ulInterruptNumber)
{
    portENTER_CRITICAL();
    ulPendingInterrupts |= ( 1U << ulInterruptNumber );
    SetEvent( pvInterruptEvent );
    portEXIT_CRITICAL();
}
~~~

Bug in the Win32 Port

I like that simplification.