I am experimenting with an FX2LP device connected to an FPGA. The FPGA generates a data load of approx. 24 MB/s. With the Streamer program (C#) the speed was excellent, >30 MB/s. But then I noticed the load is not continuous; it is bursty, and it depends on the packets-per-transfer and transfers-per-queue settings. For example, with 64 packets/transfer and 64 transfers/queue I get 1-2 ms gaps while the queue is empty and the C# program refills it. The endpoints are configured as Bulk.
The problem is that there is no additional RAM connected to the FPGA, so it would be great if the load were more continuous (without the 1-2 ms gaps).
So I had an idea, but I don't know if the driver supports it:
Is it possible to double-buffer the queue? Then, when the first queue empties, the second would already be running, there would be no gaps, and the transfer would be smooth.
If you look at the Streamer code, you will see that the queue is never empty. When you choose 64 transfers to queue, BeginDataXfer is called 64 times before WaitForXfer and FinishDataXfer are ever called. From then on, for each WaitForXfer/FinishDataXfer pair, one BeginDataXfer is called, so there are always 64 transfers in the queue.
Could you please explain in more detail what you mean when you say the queue is empty for 1-2 ms? A little more explanation of your double-buffer idea would also help us understand.
Ok, thanks for the reply,
but then it is more complicated than I thought.
I measured the 1-2 ms dead time by connecting a FLAGx pin of the CY7C68013A (configured as the full flag of the endpoint) to an oscilloscope. What I saw was a transfer in every microframe (the flag indicated an empty buffer), followed by 1-2 ms of silence (the flag indicated a full buffer). The number of microframes with transfers matched the queue setting: 64 transfers/queue with 64 packets/transfer resulted in 4096 consecutive microframes filled with my data, then the dead time. I assume there was a transfer in every microframe because the full flag toggled with a 125 µs period.
That is why I thought the queue was being emptied and refilled, and why I wanted to double-buffer it. But if the queue is kept full all the time, then I have no idea why the transfer behaves like this.
Could it be caused by the operating system (Win7 x86, dual-core 3 GHz, 4 GB of RAM), or by other devices connected to USB? Only a keyboard and mouse were connected to the PC over USB at the time.
I would also suggest you carry out the test on multiple systems (say, an XP machine or one with a USB 3.0 host controller) just to rule out a host-related issue.
I am not sure whether a USB keyboard and mouse would affect your data transfer.
FYI, AN61345 will be updated next week. I recorded a speed of 44.3 MBps with the updated stream-in project (not on the web as of today, 22 March 2013), and the same speed with the stream-out project, on Win7 64-bit with Intel's USB 3.0 host controller.
You could also check out the projects once they are on the web.
I did some investigation of the C# and C++ Streamer applications.
What I found surprised me. With the C++ Streamer at 64 packets / 4 queued transfers I got 43800 KB/s and a completely consistent load on USB. With the C# Streamer at the same settings I got only 7600 KB/s and 10-12 ms gaps in the transfer.
I took some screenshots; they are in the attachment. It contains two screenshots of the applications and three oscilloscope captures taken on FLAGD, configured as the full flag of the endpoint. Please note that two of the measurements use a 5 ms time base and the detailed one uses 50 µs.
I am using Cypress SuiteUSB 3.4.7 on a Win7 x86 machine.
Has the issue mentioned in http://www.cypress.com/?app=forum&id=167&rID=63567 been solved? And what is causing this behavior?