USB superspeed peripherals Forum Discussions
In the bulkloop source code, in the function public unsafe void TransfersThread(), I changed it to send out 1024 bytes at a time; inEndpoint.FinishDataXfer then returns false and the data fails to be received. The change is as follows. There is no problem if xferLen is set to 1023 or 1025. I have attached the source code.
xferLen = lenOut;
// calls the XferData function for bulk transfer (OUT/IN) in cyusb.dll
bResult = outEndpoint.XferData(ref outBuffer, ref xferLen);
change to
xferLen = 1024; // has problem
// calls the XferData function for bulk transfer (OUT/IN) in cyusb.dll
bResult = outEndpoint.XferData(ref outBuffer, ref xferLen);
Dear Supplier,
Could you please provide the TSCA statement for a customer request? Thank you very much.
CYP | CY7C68013A-56BAXC |
CYP | CY7C67200-48BAXI |
CYP | CYUSB3014-BZXC |
CYP | S29GL064N90TFI040 |
CYP | CY7C2263KV18-550BZXI |
CYP | S29GL032N90TFI040 |
CYP | S25FL127SABMFI001 |
CYP | CY7C4142KV13-106FCXC |
Hi Cypress team,
I'm using your FX3 USB3 controller with software that uses LibUSB. So far, I'm able to communicate without any problem using BULK transfers with Auto DMA. My code is mostly the CyFxSlFifoSync example adapted to WinUSB.
Right now, I have a problem where the FX3 device doesn't re-enumerate automatically whenever I restart my software. I found the code example AN73609, which is obsolete but still seems to match the procedure on my software side.
I don't know if there is an event I should receive on the Cypress firmware side that I could parse in order to reset the communication by calling the CyU3PConnectState sequence whenever I close my connection with libusb_exit.
I tried to handle the problem by sending a control transfer to call the CyU3PConnectState sequence, but there seems to be a race between the firmware and the software side. In other words, my software needs the connection to still be open in order to close correctly, but I also still need to send the communication reset at the right time.
If you have any ideas of how I should handle it, please let me know 😄
Thank you,
Keven
When I debug my application by clicking Debug at Run -> Debug Configurations... (which I have configured), the Console window displays:
&"load\n"
load
You can't do that when your target is `exec'
&"You can't do that when your target is `exec'\n"
53^error,msg="You can't do that when your target is `exec'"
(gdb)
The Debug Configuration is as follows:
set prompt (arm-gdb)
# This connects to a target via netsiliconLibRemote
# listening for commands on this PC's tcp port 3333
target remote localhost:3333
monitor reset halt
# Set the processor to SVC mode
monitor reg cpsr 0xd3
# Disable all interrupts
monitor mww 0xFFFFF014 0xFFFFFFFF
# Enable the TCMs
monitor mww 0x40000000 0xE3A00015
monitor mww 0x40000004 0xEE090F31
monitor mww 0x40000008 0xE240024F
monitor mww 0x4000000C 0xEE090F11
# Set the PC to 0x40000000
monitor reg pc 0x40000000
si
si
si
si
Hello,
Can someone help me with some advice?
I am using a sensor that has a fixed frequency for transmitting the pixels, which allows changing the frame rate by increasing the line and frame blanking. I want to use the maximum resolution of this sensor, but I'm having problems complying with the Output Pixel Clock in the CX3 MIPI Receiver Configuration.
This is a normal configuration for the sensor:
The first thing I don't understand is that the red cross says the minimum is 192 MHz, while the box on the right says the minimum is 159 MHz. Which is the correct one? Why are there two minimum values?
I want to change the sensor configuration so the Output Pixel Clock is below 100 MHz. I've noticed that the frame rate and the blanking influence this clock's minimum value. I tried decreasing the sensor frame rate, which can be achieved by increasing the blanking.
The configuration below is the maximum vertical and horizontal blanking possible in the sensor, which leads to the lowest frame rate possible.
The minimum Output Pixel Clock on the right goes down significantly, but the red cross still shows a very high minimum value.
What else could I change to make this work? What other parameters influence this clock?
I appreciate any help!
Best regards,
Renato.
I have used the Eclipse IDE in the past with Altera/Intel Nios II. In that case, once the application has been built, we can just right-click the project, select "Run as" or "Debug as", and the Nios II option is there. With FX3, all I get is "Run as Local C/C++ application", and this always fails with a "Binary not found" error even though the project builds successfully.
Now I have got the FX3 SuperSpeed Explorer Kit. I have followed the instructions to copy all example projects into an Eclipse workspace. However, the instructions say I must open another program called USB Control Center, select the "Cypress FX3 USB BootLoader Device", go to Program -> FX3 -> RAM, and then browse all the way to an .img file and select it, which is quite a long-winded way.
Why can't Eclipse do this work directly? Also, how am I supposed to add breakpoints, load the program, and step through it line by line? Breakpoints can't be used if I just load an .img file and let the program run.
What am I missing here?
Dears,
We have an FX3 device which uses GPIF to continuously read large datasets from an FPGA and then send them to the PC host through a bulk-IN endpoint.
With Wireshark, we find that sometimes the PC sends a Clear Feature request to the device to reset the bulk-IN endpoint (before this, a bulk-IN read failure occurred on the PC side). We think this may be caused by bulk-IN transaction failures (CRC errors, timeouts, a stalled endpoint, etc.) detected by the PC host.
From the examples provided by Cypress, after receiving a Clear Feature request, FX3 needs to reset the DMA channel associated with the bulk-IN endpoint, flush and reset the endpoint, etc. This may cause the untransmitted data in the DMA buffers to be lost.
Is it possible to retransmit the lost data after the communication recovers (with the GPIF interface between FX3 and the FPGA, it is hard to back up the data before transmission)? If so, how can it be done?
Thanks,
Scott
I've connected the FX3 Explorer Kit to an Altera Cyclone V dev board (using the HSMC breakout). The FX3 is programmed as "sync slave fifo" (2-bit address). We're using it in 32-bit data mode, and we only perform (write) transfers from the FPGA to the FX3. Currently the FPGA is running some test code to continuously send data to the FX3 (burst writing). On the host side we're using the ControlCenter app to read and display the data sent from the FPGA. We're using the "standard" version of the sync slave fifo, where flag-a = "current dma thread ready" and flag-b = "current dma thread watermark".
The problem is that with our current implementation, after a (short) while both flag-a and flag-b go from '1' to '0'. Even after reading all data on the host (using ControlCenter), the flags don't reset to '1'. For now we can only make this work (sort of) by ignoring both flags and simply sending data every clock cycle, but this is obviously not suitable for our final design.
This is basically what our FPGA does:
- Set address to "00", slcs# = '0', sloe# = '1', slrd# = '1' (these values are fixed and never changed)
- In a (simple) statemachine we do:
1) Check flag-a and wait until it is '1';
2) Check flag-b and wait until it is '1';
3) Setup data on dq[31:0] + assert slwr# (='1')
4) Continue doing 3) until flag-b (partial full) becomes '0' after which we make slwr# = '0'
5) Finally we wait until flag-b becomes '1' again and return to 1) and start all over
What are we doing wrong here? We closely inspected the documentation that comes with the "sync slave fifo", but we have the feeling we're missing something crucial, specifically concerning the use of the (partial) flag.
Any help would be much appreciated.