1 Reply Latest reply on Jun 22, 2012 3:59 AM by anandsrinivasana_

    BeginDataXfer, WaitForXfer, and FinishDataXfer approach

       Hi all,


      I'm using these functions to transfer data from an FPGA to a PC using the Slave FIFO approach, and it works, but I have some doubts about how it actually operates and its limitations. I'm using a single burst-enabled BULK endpoint.


      If I understand correctly, the USB 3.0 specification says only one IN transfer can be active at a time.


      So BeginDataXfer sets up a bunch of transfers (1 per call) but only initiates the first one. Then WaitForXfer waits for the transfer to complete or for the specified timeout to expire, and FinishDataXfer finishes the transfer (I don't know exactly what FinishDataXfer does).


      Is this the right behaviour? I ask because some doubts arise for me at this point.


      1. Is there any limitation on the maximum number of BeginDataXfer calls that can be made in a row?


      2. Does the endpoint timeout start counting down when BeginDataXfer is called? If so, and I configure a large number of transfers but can't service all of them before the endpoint timeout expires, are those transfers aborted?


      The problem is that my throughput is highly variable (from 1.48 Gbps down to 0.15 Gbps), so I have to configure the queue size and transfer size depending on the current throughput provided by the FPGA, and I need to know the limitations of the above approach.


      Best regards,



        • 1. Re: BeginDataXfer, WaitForXfer, and FinishDataXfer approach

          To give a high level description,


          BeginDataXfer queues up the request, i.e. it hands the buffer and its location to the USB stack.


          WaitForXfer waits for the transfer to complete. It waits up to the timeout value specified; if the transfer doesn't complete by then, it times out the request.


          FinishDataXfer retrieves the data that was received. If only part of the data was received and a timeout occurred, it releases the rest of the buffer.




          The limitation on the number of BeginDataXfer calls has to do with the amount of buffer space that can be queued on the host controller driver. Please look at http://msdn.microsoft.com/en-us/library/ff538112.aspx for details.


          The more transfers you queue, the faster the data rate: by the time one transfer completes, the next is already queued up, so it starts immediately.


          XferData uses the above 3 APIs internally, but there is a performance difference: only after one XferData returns do we queue up the next, so there is latency between two transfers at the application level.
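
          The queued pattern described above can be sketched roughly like this with CyAPI. This is only a sketch, not a complete application: the queue depth, transfer size, and timeout are hypothetical values you would tune to your throughput, and it assumes the CyAPI headers/library and a device exposing a bulk IN endpoint.

          ```cpp
          // Hedged sketch of the BeginDataXfer/WaitForXfer/FinishDataXfer queued pattern.
          // QUEUE_DEPTH, XFER_SIZE, and the 1500 ms timeout are assumptions; tune them.
          #include <windows.h>
          #include <cstring>
          #include "CyAPI.h"   // Cypress CyAPI library; link against CyAPI.lib

          int main()
          {
              const int  QUEUE_DEPTH = 8;       // transfers kept in flight (assumption)
              const long XFER_SIZE   = 65536;   // bytes per transfer (assumption)

              CCyUSBDevice *dev = new CCyUSBDevice(NULL);
              CCyBulkEndPoint *ept = dev->BulkInEndPt;  // the single bulk IN endpoint
              if (!ept) return 1;

              UCHAR      *buffers[QUEUE_DEPTH];
              UCHAR      *contexts[QUEUE_DEPTH];
              OVERLAPPED  ov[QUEUE_DEPTH];

              // Queue everything up front: each BeginDataXfer hands a buffer
              // to the USB stack, so the next transfer is ready the moment
              // the previous one completes.
              for (int i = 0; i < QUEUE_DEPTH; i++) {
                  buffers[i] = new UCHAR[XFER_SIZE];
                  memset(&ov[i], 0, sizeof(OVERLAPPED));
                  ov[i].hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
                  contexts[i] = ept->BeginDataXfer(buffers[i], XFER_SIZE, &ov[i]);
              }

              // Reap transfers in order and immediately re-queue each buffer.
              for (int i = 0; /* loop until your app decides to stop */ ; i = (i + 1) % QUEUE_DEPTH) {
                  long len = XFER_SIZE;
                  if (!ept->WaitForXfer(&ov[i], 1500)) {     // timed out
                      ept->Abort();                          // cancel the pending request
                      WaitForSingleObject(ov[i].hEvent, INFINITE);
                  }
                  // FinishDataXfer retrieves whatever was received (len is
                  // updated) and releases the rest of the buffer on a timeout.
                  ept->FinishDataXfer(buffers[i], len, &ov[i], contexts[i]);
                  // ... consume len bytes from buffers[i] here ...
                  contexts[i] = ept->BeginDataXfer(buffers[i], XFER_SIZE, &ov[i]);
              }
          }
          ```

          By contrast, a plain XferData call blocks until that one transfer finishes before the next can be submitted, which is where the application-level latency mentioned above comes from.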