FreeRTOS_send() returns 0 after peer has closed socket, but expected -pdFREERTOS_ERRNO_ENOTCONN

The documentation for FreeRTOS_send() (from the +TCP stack) mentions that it returns -pdFREERTOS_ERRNO_ENOTCONN when the receiving party has closed its side of the connection. However, in practice I do not see this behavior; I only see FreeRTOS_send() returning 0, which as far as I know indicates that there is no space in the send buffer. Are we overlooking something? I ended up coding a special case to detect the closing of the client socket, like this:
~~~
Status socketSendAll(uint8_t *buffer, size_t bufferSize)
{
    Status result = STATUS_OK;

    uint8_t* bytes = buffer;
    size_t remaining = bufferSize;
    BaseType_t sent;
    do {
        sent = FreeRTOS_send(socket, (const void*)bytes, remaining, 0);
        bytes += sent;
        remaining -= sent;
    } while (remaining > 0 && sent > 0);

    /**
     * TODO RBO I have seen situations where sent = 0 and remaining > 0. This occurred
     * when capturing data with the Python client, after closing the client. This should cause
     * the socket to close in the <product> as well, and I would expect send() to return a negative
     * value (error code). However, this does not seem to be the case at the moment, so handle this case
     * separately here. */
    if (sent == 0 && remaining > 0) {
        result = SAMPLE_TRANSPORT_SEND_FAILED;
    }

    /* Further error handling omitted. */
    return result;
}
~~~

FreeRTOS_send() returns 0 after peer has closed socket, but expected -pdFREERTOS_ERRNO_ENOTCONN

Hi Ronald, thanks a lot for reporting this. Normally, a disconnection will be noticed by calling FreeRTOS_recv(). If you only call FreeRTOS_send() from within a loop, the disconnection may remain unnoticed. I wasn't aware of this. There is a simple patch, though, to be applied to FreeRTOS_Sockets.c, around line 2387:

~~~
static int32_t prvTCPSendCheck( FreeRTOS_Socket_t *pxSocket, size_t xDataLength )
{
    int32_t xResult = 1;

    /* Is this a socket of type TCP and is it already bound to a port number ? */
    if( prvValidSocket( pxSocket, FREERTOS_IPPROTO_TCP, pdTRUE ) == pdFALSE )
    {
        xResult = -pdFREERTOS_ERRNO_EINVAL;
    }
    else if( pxSocket->u.xTCP.bits.bMallocError != pdFALSE_UNSIGNED )
    {
        xResult = -pdFREERTOS_ERRNO_ENOMEM;
    }
-   else if( pxSocket->u.xTCP.ucTCPState == eCLOSED )
+   else if( ( pxSocket->u.xTCP.ucTCPState == eCLOSED ) ||
+            ( pxSocket->u.xTCP.ucTCPState == eCLOSE_WAIT ) ||
+            ( pxSocket->u.xTCP.ucTCPState == eCLOSING ) )
    {
        xResult = -pdFREERTOS_ERRNO_ENOTCONN;
    }
    else if( pxSocket->u.xTCP.bits.bFinSent != pdFALSE_UNSIGNED )
~~~

The check for eCLOSE_WAIT was missing: in this state, the IP-stack is done with the socket; it is just waiting for the user to actually close the socket by calling FreeRTOS_closesocket(). If I may take the liberty to rewrite your code a bit:

~~~
Status socketSendAll(uint8_t *buffer, size_t bufferSize)
{
    Status result = STATUS_OK;

    uint8_t* bytes = buffer;
    size_t remaining = bufferSize;
    BaseType_t sent;

    while( remaining > 0 )
    {
        sent = FreeRTOS_send(socket, (const void*)bytes, remaining, 0);
        if( sent < 0 )
        {
            /* HT : I assume that the socket has a reasonable time-out for transmission,
            set with 'SO_SNDTIMEO'. */
            if( sent == -pdFREERTOS_ERRNO_EWOULDBLOCK )
            {
                /* EWOULDBLOCK or EAGAIN error. */
                continue;
            }
            /* Tell the caller that this socket has lost connection
            and that it must be closed by calling FreeRTOS_closesocket(). */
            return CONNECTION_CLOSED;
        }
        else
        {
            bytes += sent;
            remaining -= sent;
        }
    }

    /* Now all data have been sent, or at least have
     * been added to the TX buffer of the socket. */
    return result;
}
~~~

I just tested the above patch, without ever calling FreeRTOS_recv(). Indeed FreeRTOS_send() returned -pdFREERTOS_ERRNO_ENOTCONN ( 128 ) once the connection was broken. Thanks again!

FreeRTOS_send() returns 0 after peer has closed socket, but expected -pdFREERTOS_ERRNO_ENOTCONN

Thanks Hein – I've captured this too.

FreeRTOS_send() returns 0 after peer has closed socket, but expected -pdFREERTOS_ERRNO_ENOTCONN

I wrote as a comment:

~~~
/* Now all data have been sent, or at least have
 * been added to the TX buffer of the socket */
~~~

When the remote device is being switched off, the data can not be delivered, of course. But suppose that the remote device starts a graceful shutdown(): your sending device will ignore the FIN packets and keep on sending data until all data have been delivered. This scenario is important when you are sending something like a file and you want all data to be transmitted.

The rest of this post is about some special +TCP functions and a socket option. FreeRTOS+TCP has added a few handy functions to check the buffer status of a TCP socket:

~~~
/* Number of bytes in the reception buffer. */
BaseType_t FreeRTOS_rx_size( Socket_t xSocket );

/* Available space in the transmission buffer. */
BaseType_t FreeRTOS_tx_space( Socket_t xSocket );

/* Number of bytes in the transmission buffer. */
BaseType_t FreeRTOS_tx_size( Socket_t xSocket );
~~~

Suppose that you are sending a file and you want to close ( shut down ) the connection immediately after the last byte has been delivered and confirmed; the following socket option can be used:

~~~
if( pxClient->uxBytesLeft == 0u )
{
    BaseType_t xTrueValue = 1;
    FreeRTOS_setsockopt( pxClient->xTransferSocket, 0,
        FREERTOS_SO_CLOSE_AFTER_SEND, ( void * ) &xTrueValue, sizeof( xTrueValue ) );
}

xRc = FreeRTOS_send( pxClient->xTransferSocket, pcFILE_BUFFER, uxCount, 0 );
~~~

The complete code can be found in “protocols/FTP/FreeRTOS_FTP_server.c”. After the above code, the socket must be kept open until FreeRTOS_recv() returns a negative value other than -pdFREERTOS_ERRNO_EWOULDBLOCK.
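
A minimal sketch of that last step, keeping the socket open until FreeRTOS_recv() reports that the connection is really gone and only then closing it (xTransferSocket and the small throw-away buffer are just for illustration):

~~~
uint8_t ucRxBuffer[ 128 ];
BaseType_t xRc;

/* Drain the socket; -pdFREERTOS_ERRNO_EWOULDBLOCK only means that
 * nothing has been received yet, so keep waiting in that case. */
for( ;; )
{
    xRc = FreeRTOS_recv( xTransferSocket, ucRxBuffer, sizeof( ucRxBuffer ), 0 );

    if( ( xRc < 0 ) && ( xRc != -pdFREERTOS_ERRNO_EWOULDBLOCK ) )
    {
        break;
    }
}

FreeRTOS_closesocket( xTransferSocket );
~~~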

FreeRTOS_send() returns 0 after peer has closed socket, but expected -pdFREERTOS_ERRNO_ENOTCONN

Thanks for the quick reply, and glad I helped improve +TCP 🙂

FreeRTOS_send() returns 0 after peer has closed socket, but expected -pdFREERTOS_ERRNO_ENOTCONN

Ronald, your remarks have led to a pull request on GitHub. Thanks again, Hein