2 Replies Latest reply on Oct 17, 2019 11:36 PM by rira_979871

    FX3 Slave FIFO Issue


      I have a simple application that streams data to an FX3 via the GPIF-II. I'm using 16-bit mode, an external clock, and slave mode. My GPIF-II state machine has only two states, IDLE and IN; the IN state accepts data (IN_ADDR and IN_DATA actions, repeated). I use SLWR_N in my state machine, but it's hard-wired low (asserted) for now. I use the Streamer application to receive data. My IN endpoint is BULK, 1024-byte packets, burst of 16, with a CY_U3P_DMA_TYPE_AUTO_MANY_TO_ONE channel. I also drive address lines A0 and A1, toggling A0 every 8192 clocks and A1 every 4096 clocks.


      In Streamer, I can choose almost any buffer size (Packets per Xfer); for example, 16 through 128 all work, BUT ONLY when Xfers to Queue is set to 1. In that case, I get the data I expect at the rate I expect (25 MHz external clock on a 16-bit bus = 50 MB/s).


      However, if I use 32 packets per Xfer and 8 Xfers to Queue, the first megabyte transfers correctly; then each subsequent Xfer fails and times out. After that, it alternates: one good Xfer, then a timeout, one good, then a timeout, and so on. This occurs whenever Xfers to Queue is anything other than 1, and the behavior is consistent regardless of the timeout value.


      The fact that the data rate is correct when Xfers to Queue == 1 seems to suggest my slave interface must be working. I have confirmed all GPIF I/O with a logic analyzer.


      Has anyone seen this? I'm sure it's something dumb on my part.



        • 1. Re: FX3 Slave FIFO Issue



          According to your transfer settings (Burst - 16, Packets per Xfer - 32), the host requests 32 * 16 * 1024 = 512 KB in one transfer. Since you say the first 1 MB is successful, do you mean that the first two Xfers succeeded without error and subsequent Xfers then fail, i.e. as below:

          1st Xfer - Pass; 2nd Xfer - Pass; 3rd - Fail; 4th - Pass;...


          Every time an Xfer succeeds, are you getting the full 512 KB from the device?


          Regarding your comment, "I also use A0 and A1, toggling A0 every 8192 clocks and A1 every 4096 clocks" -- assuming you start with Thread 0 (A1:A0 = 0:0), do you change the thread as follows: Thread 0 (for 4096 clocks) -> Thread 2 (for 4096 clocks) -> Thread 3 (for 4096 clocks) -> Thread 1 (for 4096 clocks)...? Is this correct?

          If yes, then I assume you have created a channel between 4 P-port sockets (producers) and 1 U-port socket (consumer), with a buffer size of 8 KB in the channel configuration (I'm not sure of the count).

          Do you sample the address lines in the IDLE state? (Please share your state machine.)


          Which cyusb3 driver version are you using?


          Can you take a USB trace and see if there are any NRDYs from the device when you get Xfer timeout?


          As a quick test, you can try increasing the buffer size and see the difference in behavior of Xfers.




          • 2. Re: FX3 Slave FIFO Issue

            I misspoke in the first message: A1 toggles every 16384 clocks, not 4096. In other words, I present A1:A0 as a count of 0, 1, 2, 3, 0, 1, 2, 3, ..., changing after every 8192 words (buffer size 16 KB).


            Yes, for the sockets I use "Many to one" and specify as follows:


            static CyU3PDmaMultiChannelConfig_t dmaDataInConfig = {
              .size          = DMA_BUFFER_SIZE,   // (16384)
              .count         = DMA_BUFFER_COUNT,  // (2)
              .validSckCount = 4,
              .prodSckId[0]  = CY_U3P_PIB_SOCKET_0,
              .prodSckId[1]  = CY_U3P_PIB_SOCKET_1,
              .prodSckId[2]  = CY_U3P_PIB_SOCKET_2,
              .prodSckId[3]  = CY_U3P_PIB_SOCKET_3,
              .consSckId[0]  = (u16)DATA_IN_ENDPOINT_SOCKET,
              .dmaMode       = CY_U3P_DMA_MODE_BYTE,
            };

            // Create multi-DMA AUTO channel (many-to-one)
            CyU3PDmaMultiChannelCreate(&dmaUsbDataIn,
                CY_U3P_DMA_TYPE_AUTO_MANY_TO_ONE,
                (CyU3PDmaMultiChannelConfig_t *)&dmaDataInConfig);

            // Set transfer size to infinite
            CyU3PDmaMultiChannelSetXfer(&dmaUsbDataIn, 0, 0);


            Driver is C:\WINDOWS\System32\Drivers\CYUSB3.sys (Version:  Date: 2018-08-13)


            My GPIF state diagram:


            The trace and other tests will have to wait until tomorrow.