4 Replies Latest reply on May 29, 2015 2:41 PM by mifuc_1366091

    16-bit data to USB from GPIF-II in 32-bit mode

      Our application requires 16 GPIF data pins for reading from a peripheral device and 4 GPIF data pins for writing to it. Unfortunately, this means the GPIF must be configured in 32-bit mode to accommodate those 4 extra data pins (24-bit mode appears not to be an option currently). As a result, every read event transfers 32 bits, only half of which (the lower half) contain valid data.


      We want to transfer this 16-bit data as rapidly as possible over USB to our host. What is the fastest way to strip out those upper 2 bytes to avoid doubling the size of our data transfer?


      Can the GPIF be placed in 16-bit mode for the data readout phase, and back to 32-bit for the write phase? If so, can this be done on the fly, or would separate state machines need to be loaded?


      Alternatively, the CPU could perform the byte stripping or memory remapping, but this could add substantial latency to the overall data transfer to the USB endpoint.
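To make the CPU approach concrete, here is a minimal sketch (plain C, not FX3 firmware; the function name and buffer handling are hypothetical) of compacting 32-bit GPIF words, whose valid data sits in the low 16 bits, into a contiguous 16-bit buffer before handing it to the USB endpoint. It assumes a little-endian core (as on the FX3's ARM9) so that two packed samples can be emitted with a single 32-bit store:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Compact an array of 32-bit GPIF words into 16-bit samples.
 * Only the low 16 bits of each word are valid; the upper 16 bits
 * are discarded. Returns the number of bytes written to dst.
 * (Hypothetical helper -- in actual firmware this would run on a
 * DMA buffer before it is committed toward the USB endpoint.) */
static size_t strip_upper_halves(const uint32_t *src, size_t nwords,
                                 uint16_t *dst)
{
    size_t i;

    /* Process two words per iteration: packing a pair into one
     * 32-bit value halves the number of memory writes versus a
     * naive per-sample loop. */
    for (i = 0; i + 1 < nwords; i += 2) {
        uint32_t packed = (src[i] & 0xFFFFu) | (src[i + 1] << 16);
        /* On a little-endian core this 32-bit value has the same
         * byte layout as two consecutive uint16_t samples; memcpy
         * compiles down to a single aligned store. */
        memcpy(&dst[i], &packed, sizeof packed);
    }
    if (i < nwords)               /* odd trailing word, if any */
        dst[i] = (uint16_t)src[i];

    return nwords * sizeof(uint16_t);
}
```

Even so, touching every word with the CPU adds latency proportional to the frame size, which is why a pure-DMA solution would be preferable if one exists.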


      Any suggestions? Thanks in advance.

        • 1. Re: 16-bit data to USB from GPIF-II in 32-bit mode



          If I understood correctly, you are trying to bit-bang the 4 GPIF lines used for writing. In that case, you can use the GPIF in 16-bit mode alone and drive those other 4 GPIF lines as ordinary GPIOs with bit-banging.




          -Madhu Sudhan

          • 2. Re: 16-bit data to USB from GPIF-II in 32-bit mode

            Thanks for your reply, Madhu. Unfortunately, our application has very tight timing requirements: those extra 4 GPIF write lines must drive data (received from the host over USB) at the GPIF clock rate, so GPIO is not an option. Every "write" event (using these four pins) is followed by a ~50KB read event, and this data must be sent back to the host over USB as rapidly as possible. That is why we need a way either to switch back and forth between 32-bit and 16-bit GPIF modes as quickly as possible, or to strip those upper 2 bytes of useless data quickly (using the CPU, tricks with DMA, etc.) so that our effective USB bandwidth back to the host is not cut in half. Does that make sense?

            • 3. Re: 16-bit data to USB from GPIF-II in 32-bit mode

              To hopefully keep this dialogue going, let me state our requirements a different way. Ideally we could configure the DMA channels between the USB endpoints and GPIF data lines such that USB==>GPIF DMA channels are 32 bits wide (so that we can drive DQ[16:19]) and GPIF==>USB DMA channels are 16 bits wide (so that we're only sending valid 16-bit data over USB). It seems like there should be some register setting(s) we could force or special way to configure the DMA channels to accomplish this even though that is not the standard/default usage for the GPIF in 32-bit mode.




              As a side note, I ran code posted to the forum in Feb 2014 by a Cypress employee (SRMS) that switches the GPIF from 16-bit mode to 32-bit mode "on the fly" in response to a particular vendor command. I probed the GPIF clock line to measure how long the switch takes (the clock is inactive during the interim period), and observed ~70ms. That is far too long for our requirements (recall we would have to perform the 16<==>32 switch twice for every 50KB frame), so I am hoping a better method is available for our particular situation. Thanks in advance.

              • 4. Re: 16-bit data to USB from GPIF-II in 32-bit mode

                CORRECTION: The time it takes to switch back and forth between 16-bit and 32-bit modes is ~0.4ms, not the ~70ms I stated above. I had mistaken a reset event for the 16/32-bit reconfiguration process; the GPIF clock turns off during a reset as well as after the GPIF is explicitly disabled, hence my confusion.




                (I confirmed the 0.4ms figure using a state machine that toggles a GPIF CTL pin after coming out of reset, which let me measure the interval between when the GPIF state machine is disabled and when it is restarted in the new 32-bit configuration.)




                With some trimming of this delay, this workaround will probably work for our application after all. Hopefully there are no repercussions from repeatedly reconfiguring the GPIF state machine in this manner hundreds of times per second!  ;-)