I use TMS570 Hercules Active Safety MCUs (LS0432, LS20216, LC4357) with FreeRTOS. From the FreeRTOS website I found that the minimum stack size for an MCU can be found in the demo code. Can the minimum and maximum stack sizes be calculated generically, so that I can use one rule to find the stack size for a task on any MCU?
May I also know the minimum heap size that can be configured? I am hoping the maximum heap size can be the entire RAM size.
From the following link http://www.freertos.org/FAQMem.html, the context switch time measured for one particular microcontroller is given. I would like to know the method used to find the context switch time.
Thanks in Advance !
configMINIMAL_STACK_SIZE is only used by the kernel to set the size of the stack used by the idle task, although it is used extensively in the demo applications just for convenience. How big it needs to be depends on what the idle task is doing. The value from the demo application assumes, in the majority of cases at least, that you don't have an idle hook function defined. If you are adding your own code to the idle task through a hook function then it will need to be much bigger. If you are calling functions such as sprintf(), especially if using GCC, then it will need to be much bigger still.
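As a purely illustrative sketch (the numeric value here is an assumption, not a recommendation for any particular TMS570 part), the setting lives in FreeRTOSConfig.h, and an idle hook is enabled alongside it:

```c
/* FreeRTOSConfig.h fragment -- illustrative values only.
   Note configMINIMAL_STACK_SIZE is specified in words, not bytes. */
#define configMINIMAL_STACK_SIZE    ( ( uint16_t ) 128 )
#define configUSE_IDLE_HOOK         1

/* With configUSE_IDLE_HOOK set to 1, the application must provide this
   function. Anything it does adds to the idle task's stack usage, so
   configMINIMAL_STACK_SIZE must be sized for it. */
void vApplicationIdleHook( void )
{
    /* Keep this light: no blocking API calls, minimal stack use. */
}
```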
We really need to add the "how do I know the maximum stack size" question to the FAQ as it is asked so often. The answer is: the same way you would in any C program. Each level of function call nesting, and each stack variable you use, adds to the stack requirement. You can of course calculate it by finding the path through your code that maximises it, but (other than in safety critical applications) I have never done that. It is much easier to estimate, then if the stack overflow hook gets hit, increase it; and if the stack overflow hook does not get hit, use uxTaskGetStackHighWaterMark() to see how much stack has never actually been used, and adjust accordingly (you can also get the high water mark from FreeRTOS kernel aware plug-ins).
The context switch time only makes sense in CPU cycles, not in real time. So: turn off stack overflow protection, ensure no trace macros are defined, ensure run-time stats are not being gathered, and ensure configASSERT() is not defined. Then set a breakpoint at the entry of the context switch function and another at the end of the function, and measure the difference in CPU cycles (assuming that information is given to you by your development environment - otherwise you will have to count the cycles yourself, and that will take a long time).
Thanks for the information... Great support...
Copyright (C) Amazon Web Services, Inc. or its affiliates. All rights reserved.