I have a simple application with FreeRTOS-UDP and an interactive serial console. The UDP stack currently does nothing except reply to ping requests. I ping the device from Windows and for a while I get relatively good 2-3 ms response times. But at some random point something goes wrong and the ping response time becomes terribly long (200 ms - 3 sec, but most often the pings time out). When I execute FreeRTOS_NetworkDown() from the serial console, the ping time returns to the acceptable 2-3 ms. But after a while (sometimes within a few seconds, sometimes within half an hour) the ping response time degrades again.
Has anyone got some idea about what can cause this anomaly?
As far as I can see, when the ping response time is long, the MAC receive interrupt also fires late (maybe the echo request is not handled in time).
Which architecture are you running this on? If you are using the
FreeRTOS Windows port, as per the FreeRTOS+UDP demos in the download,
then you will get some strange timing behaviour as it is at the mercy of
the Windows scheduler.
Also, could it be that you are not freeing network buffers after they
are used? In that case you may start to run out of buffers or RAM, and
see timeouts occurring in the stack's implementation as attempts to
obtain resources fail.
I am using a Cortex-M3 SmartFusion. I use statically allocated network buffers with a copying network interface. I believe the RX interrupt handler task can always get a network buffer, because it uses a non-blocking call and logs an error when it fails. xNetworkInterfaceOutput() gets its network buffer from the UDP stack. I need to find a way to debug the pxNetworkBufferGet() calls inside the UDP code.
You can define the trace macro iptraceFAILED_TO_OBTAIN_NETWORK_BUFFER()
to get notified if there is a failure to obtain a buffer.
To try it out, define the counter in one of your C files:

    volatile uint32_t ulFailureCounts = 0;

and at the bottom of FreeRTOSIPConfig.h add:

    extern volatile uint32_t ulFailureCounts;
    #define iptraceFAILED_TO_OBTAIN_NETWORK_BUFFER() ulFailureCounts++
I don't think it is a buffer allocation issue. I added some debug traces, including one for the case when pxNetworkBufferGet() blocks for too long.
    xNetworkBufferDescriptor_t *pxNetworkBufferGet( size_t xRequestedSizeBytes, TickType_t xBlockTimeTicks )
    {
        xNetworkBufferDescriptor_t *pxReturn = NULL;
        TickType_t xStart = xTaskGetTickCount();

        /*_RB_ The current implementation only has a single size memory block, so
        the requested size parameter is not used (yet). */
        ( void ) xRequestedSizeBytes;

        /* If there is a semaphore available, there is a network buffer available. */
        if( xSemaphoreTake( xNetworkBufferSemaphore, xBlockTimeTicks ) == pdPASS )
        {
            TickType_t xWait = xTaskGetTickCount() - xStart;
            if( xWait >= 20 )
            {
                errorf( "nw buf get wait=%u ms", ( unsigned ) xWait );
            }

            /* Protect the structure as it is accessed from tasks and interrupts. */
            taskENTER_CRITICAL();
            {
                pxReturn = ( xNetworkBufferDescriptor_t * ) listGET_OWNER_OF_HEAD_ENTRY( &xFreeBuffersList );
                uxListRemove( &( pxReturn->xBufferListItem ) );
            }
            taskEXIT_CRITICAL();

            iptraceNETWORK_BUFFER_OBTAINED( pxReturn );
        }
        else
        {
            error( "Failed to obtain nw buf!" );
        }

        return pxReturn;
    }
I don't see any error logs.
I've tested this in another network environment and the problem didn't reoccur. I don't know what the real difference between the networks is. The network where the issue doesn't reoccur is much more heavily loaded; it's full of broadcast packets.
Normally it is possible to set the hardware to filter out a lot of
uninteresting network traffic, and also to perform some post processing
in the MAC interrupt itself. Both of those may help.
Good observation about the difference between the two networks.
Broadcasts are often used by network protocols such as NetBIOS, UPnP, and many others. For us embedded developers they can be quite a nuisance.
If all broadcast packets travel all the way to the IP-task, they use quite a bit of resources:
1) The queue 'xNetworkEventQueue' of the IP-task
2) Entries in the ARP table
3) A network buffer descriptor
ad 1) The queue 'xNetworkEventQueue' of the IP-task
The queue has a length of ipconfigEVENT_QUEUE_LENGTH, which should be more than enough to hold all network buffers, i.e.

    ipconfigEVENT_QUEUE_LENGTH >= ipconfigNUM_NETWORK_BUFFER_DESCRIPTORS + EXTRA

where EXTRA >= 5
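As a concrete example of that rule in FreeRTOSIPConfig.h (the descriptor count is a placeholder, not a recommendation):

```c
/* Illustrative sizing only: every network buffer descriptor may be queued
   to the IP-task at the same time, plus a margin for other events. */
#define ipconfigNUM_NETWORK_BUFFER_DESCRIPTORS    10
#define ipconfigEVENT_QUEUE_LENGTH                ( ipconfigNUM_NETWORK_BUFFER_DESCRIPTORS + 5 )
```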
ad 2) Entries in the ARP table
Care has been taken in the library, for example here:

    if( pxIPHeader->ucProtocol != ipPROTOCOL_UDP )
    {
        /* Refresh the ARP cache with the IP/MAC-address of the
         * received packet.  For UDP packets, this will be done
         * later in xProcessReceivedUDPPacket(), as soon as it is
         * known that the message will be handled by someone.  This
         * prevents the ARP cache from being overwritten with the
         * IP-address of useless broadcast packets. */
        vARPRefreshCacheEntry( &( pxIPPacket->xEthernetHeader.xSourceAddress ),
                               pxIPHeader->ulSourceIPAddress );
    }

Only UDP packets that can actually be handled will create an ARP entry.
ad 3) A network buffer descriptor
The biggest problem with frequent UDP broadcasts is that they can occupy network buffers, which may prevent PING requests from being answered.
It is advisable to define:
#define ipconfigETHERNET_DRIVER_FILTERS_PACKETS 1
And test at a very early stage whether a packet can be dropped:

    if( pxIPHeader->ucProtocol == ipPROTOCOL_UDP )
    {
        uint16_t usDestinationPort = pxProtPacket->xUDPPacket.xUDPHeader.usDestinationPort;

        /* xPortHasUdpSocket() returns pdTRUE if a socket has been
        opened on a given port number. */
        if( ( xPortHasUdpSocket( usDestinationPort ) == pdFALSE )
            #if ipconfigUSE_LLMNR == 1
                && ( usDestinationPort != FreeRTOS_ntohs( ipLLMNR_PORT ) )
            #endif
            #if ipconfigUSE_NBNS == 1
                && ( usDestinationPort != FreeRTOS_ntohs( ipNBNS_PORT ) )
            #endif
            #if ipconfigUSE_DNS == 1
                && ( pxProtPacket->xUDPPacket.xUDPHeader.usSourcePort !=
                     FreeRTOS_ntohs( ipDNS_PORT ) )
            #endif
            )
        {
            /* Drop this packet, not for this device. */
            xReturn = pdFALSE;
        }
    }
PS: the above code cannot be used from within an interrupt.
There is also code in FreeRTOS_IP.c in which packets are filtered on their IP address.
You wrote that you see long delays (200 ms - 3 sec) before ICMP messages are replied to. Early filtering should help to avoid that, but it isn't clear to me why you see such delays: I would expect echo messages to be dropped, not delayed by that much.
Hope this helps, Hein
Copyright (C) Amazon Web Services, Inc. or its affiliates. All rights reserved.