If I understood correctly, you are trying to bit-bang the 4 extra GPIF lines for writing. In that case, you can use the 16-bit GPIF alone and configure those 4 other GPIF lines as ordinary GPIOs, driving them by bit-banging.
Thanks for your reply, Madhu. Unfortunately, our application has very tight timing requirements: those extra 4 GPIF write lines must drive data (received from the host over USB) at the GPIF clock rate, so GPIO is not an option. Every "write" event (using these four pins) is followed by a ~50KB read event, and this data must be sent back to the host over USB as rapidly as possible. That is why we need either a way to switch back and forth between 32-bit and 16-bit GPIF modes as rapidly as possible, or a way to quickly strip the upper 2 bytes of padding (using the CPU, tricks with DMA, etc.) so that our effective USB bandwidth back to the host is not cut in half. Does that make sense?
To hopefully keep this dialogue going, let me state our requirements a different way. Ideally we could configure the DMA channels between the USB endpoints and the GPIF data lines such that the USB==>GPIF DMA channels are 32 bits wide (so that we can drive DQ[16:19]) while the GPIF==>USB DMA channels are 16 bits wide (so that we're only sending valid 16-bit data over USB). It seems like there should be some register setting(s) we could force, or a special way to configure the DMA channels, to accomplish this even though that is not the standard/default usage for the GPIF in 32-bit mode.
As a side-note, I ran a set of code posted to the forum in Feb 2014 by a Cypress employee (SRMS) that switches the GPIF from 16-bit mode to 32-bit mode "on the fly" in response to sending a particular vendor command. I probed the GPIF clock line to measure how long it takes to perform this switch (the clock is inactive during this interim period), and observed ~70ms. That is far too long for our requirements (recall we would have to perform a 16<==>32 switch operation twice for every 50KB frame), so I am hoping a better method is available for our particular situation. Thanks in advance.
CORRECTION: The time it takes to switch back and forth between 16-bit and 32-bit modes is ~0.4ms, not the ~70ms I stated above. I was confusing a reset event with the 16/32-bit reconfiguration process. Of course the GPIF clock turns off during reset as well as after explicitly disabling the GPIF, so that was my mistake.
(I confirmed the 0.4ms timing using a state machine that toggles a GPIF CTL pin after coming out of reset, which let me measure the interval between when the GPIF state machine is disabled and when it is started again in the new 32-bit configuration.)
With some trimming of this delay, this workaround will probably work for our application after all. Hopefully there are no repercussions from repeatedly reconfiguring the GPIF state machine in this manner hundreds of times per second! ;-)