Disable Kernel tick

I have a timing problem in my application. The controller used is an AT91SAM7S. Between 2 calls for sending data over SPI there is a delay that comes close to the kernel tick period. For my application I need the time between 2 SPI calls to be as close to 3.5 us as possible. My first idea was to somehow disable the kernel tick (if possible) for as long as I need to do time-critical SPI calls, but because the SPI call depends on xQueueReceive I fear it might not work. Another idea was to use a software interrupt; my first tests with directly calling the ISR function caused a data abort (DABT). Are there examples of how to set up and use software interrupts?

Disable Kernel tick

Since your timing requirement is in the 1-100 usec range, you can achieve this slight delay with the help of the SPI controller. For example, you can chain two transfers and use the DLYBCT field in SPI_CSR to delay the second one. Whether this is suitable depends on your SPI configuration and the SPI protocol you need. Moreover, it will not be easy to achieve this resolution while you are using queues to receive SPI data, because the delay in an xQueueSend-xQueueReceive chain is more than 10 usec. (I'm not 100% sure of this; the figures are from a test case about a year old and I don't remember if the test setup was reliable enough.) Just another idea, in case... :) If you can say something about your SPI data protocol and SPI configuration, maybe we can find a better solution. Caglar AKYUZ

Disable Kernel tick

Basic configuration: MCK = 44 MHz. The SPI Mode Register is initialised with 0x0A000015, meaning master mode, fixed peripheral select, CS lines connected to a decoder, mode fault detection disabled, no loopback, and delay between chip selects = 10/MCK. All four CS registers are initialised with 0x00010202, meaning the inactive state of SCK is logic zero, data captured on the leading edge and changed on the following one, CS lines rise after the last transfer, 8-bit transfers, SCK = MCK/2 (about 22 MHz for me), delay before SCK = 1/MCK, and no delay between transfers on the same CS. For sending and receiving data the PDC is used.

And now the crucial part. The decoder is a PLD that either forwards the corresponding CS signals to dataflash, a DAC and 3 ADCs, or handles the MISO/MOSI lines for 8-bit write-only or read-only registers. These registers use 1-byte transfers, the DAC needs 3-byte transfers and the ADCs work with 2-byte transfers. For single transfers all works fine so far, but 2 of the ADCs shall work as a transient recorder that stores measured data in dataflash, so I can't afford too much delay between SPI transfers.

The first byte sent to an ADC is a control byte initialising the next measurement; at the same time the first byte of the last result is transferred. The second byte of the measurement result comes with the next 8 clocks (I simply resend the control byte). Then, when CS for the ADC is inactive, it starts converting, which lasts up to 3.5 us. Now I'd like to initialise the next conversion and get the recent one, but there is a delay of nearly the kernel tick period (my kernel tick is 66 us). I think that's more than enough information, and while writing this I got an idea I'm going to try out now :)

Disable Kernel tick

Is this an absolute timing requirement or a maximum time requirement? You could use the PDC to start sending one frame after the other. The PDC has a current and a next pointer, so it can switch from one frame to the next very quickly.

Disable Kernel tick

You can adjust the ADC CS lines to 16-bit transfer mode instead of 8-bit. This way you can use DLYBCT to delay transfers on the *same* chip select, so that you can send 2 commands to the *same* ADC with a 3.5 usec delay. By the way, you should use variable mode instead of fixed mode, plus the PDC (in case you don't already). In this way you can read/write all your data to your DACs, ADCs, etc. while interrupting the SAM7S only once (with the help of the nice SPI features present in the SAM7S). Since you make at most 3-byte transfers, memory bandwidth is not a problem for your case. I hope you won't need to read this post :) Caglar AKYUZ

Disable Kernel tick

Your idea is good, but the CS lines don't go directly to the ADC; the decoder in the PLD generates a suitable CS, so all CSRs would need the same setting. Actually I wanted to stay with one setting for all jobs, but I think I have to adjust the settings on the fly depending on the width of the transfer. And FYI: I'm still using FreeRTOS version 4.2.0.

Disable Kernel tick

There are 4 CS lines coming from the SAM7S; when you use a 4-to-16 decoder you get 15 usable CS lines, and each of the SAM7S chip select registers then covers a group of at most 4 decoder outputs. For example, you could connect the ADCs to CS lines 0-2 and leave 3 unconnected, connect your DAC to 4 and leave 5-7 unconnected, connect the registers starting from 8, etc. Then you don't have to change the CSRs on the fly. If your hardware is not suitable for such a change then it's impossible, of course. Other than that, I don't see any way to meet such a timing requirement while still leaving some processor time for other tasks.

Disable Kernel tick

OK, I have adjusted the SPI functions to set the CSRs corresponding to the transfer width. The different delay times between transfers work, but unfortunately the CS lines stay active until the end of the whole transfer. I'm using the PDC with fixed peripheral select and CSAAT = 0. I presume it would work if I changed to variable peripheral select with CSAAT = 1 and set LASTXFER on the transferred data. Or is it otherwise possible to deactivate the CS lines between PDC transfers?

Disable Kernel tick

I don't think so. I guess you're chaining two transfers on the *same* CS using the PDC, but you want CS to become deselected between the two transfers so that your slaves see them as two separate commands. (Correct me if I'm wrong.) According to the Atmel datasheet (Master Mode Flow Diagram), after data is read into the serializer from TDR, the SPI will transmit/receive this chunk. In the meantime the PDC will put the next data into TDR, so when the transfer finishes the SPI will not see the TDRE flag and will continue sending the next data without raising CS (even though there will be the delay). I think you should send commands one by one to each of your ADCs so that CS rises automatically, i.e. READ_ADC1 + READ_ADC2 + INIT_NEXT_ADC1 + INIT_NEXT_ADC2.