Do you actually need one full buffer before sending the first buffer and one after the last buffer, or do you just want a few bytes of data before the start and after the end of a frame?
Please let us know the significance of this additional data before and after every frame.
What are all the video resolutions that you want to stream in this way?
Yes, if I want to correctly implement the USB3 Vision standard. Under this standard the device must send two extra packets of data individually: one at the beginning, called the Leader, and one at the end, called the Trailer.
I want to be able to stream any available ROI.
Anyway, could you tell me if someone has been able to implement this standard using the CX3 MCU?
Please try the following to implement the above.
1. Create a DMA MANUAL_IN channel from GPIF to CPU.
2. Create a DMA MANUAL_OUT channel from CPU to USB.
3. Identify the start and end of each frame from the data flowing through the GPIF-to-CPU DMA channel, then send the Leader and Trailer to USB via the CPU-to-USB DMA channel.
4. Send the payload data as-is between the Leader and the Trailer.
There is another method to implement this.
Use an FPGA/ASIC which gets the data from the sensor and adds the Leader and Trailer to the payload. Connect the FPGA's parallel interface to the FX3 GPIF II interface to transfer the data to USB.
I tried using this method, but it is not applicable in my case, because I have (and need) a control DMA channel, and the CX3 forbids having two DMA channels to and from the CPU.
About the second suggestion: we have already thought about it and it is an option (we are looking into possible FPGA solutions), but we are targeting a cheap solution and therefore want to avoid an FPGA.
And the good news: I have managed to stream images at good speed, almost correctly. The catch is that I am tricking the U3V standard by telling it that I am streaming a VGA (30 fps) image, while really I am streaming a bit less. In plain words, the first and last payload buffers are replaced by the Leader and Trailer packets.
There are still some issues with the DMA size, GPIF size and the sensor, but it's a good start.
That's great. Please let us know your implementation details and the current issues that you are facing.
I have been performing tests with different formats, specifically 720p and full resolution, trying the pixel formats Mono8, Mono12 and "YUV422" (although the output format is Mono12). Using YUV422 it has been quite straightforward, which was unexpected, because the other two formats have been another story. I tested using different grabber applications with different drivers.
Mono8 and Mono12 are being tricky. Sometimes I manage to transmit some images before the firmware stops, and sometimes I just get an acquisition timeout.
The configuration is the following: 1280x720 @ 30 fps Mono8 (payload 1280x712)
CSI TX Clk @ 336MHz
Sensor Clk 84 MHz
CX3: PCLK: 84 MHz
CSI <-> HS 84 MHz
(H-Active = 7.42 us)
MIPI Cfg: RAW8
GPIF II: 8-bit width
Buffer size: 5120 bytes (buffer count = 23)
U3V: 1280x712p Mono8
The theoretical grab rate will be a bit more than 30 fps, because the U3V application thinks it is receiving less data than what is really being transmitted.
The problem is well known: after StartAcquisition and some packets, error 71 is triggered by the DMA commit API inside the DMA callback. Let me explain my code: when payload data is received as a PROD event, if it is the first packet (counter == 0), that packet is discarded and the Leader is transmitted instead. The counter increases every time a packet is committed, until it reaches the top value, an integer equal to NUM_OF_PAYLOAD_BUFFERS + 2 (Leader + Trailer); the last packet is likewise discarded and the Trailer is transmitted in its place. This is similar to the U3Vision example project for FX3.
At some point during the transmission the error happens. It may be at the 3rd packet, or at the 150th, etc. (always before reaching the last one).
So, reading the note about this error, the only suggested solution that I can implement is increasing the DMA buffer size. The question is: how much can I increase it? The FX3 TRM says that, of the 512 KB RAM, I can only reclaim the 8 KB second-stage boot area, but it seems that is not enough. Then again, the ARM has more "unused" space that I am not sure whether I can use. Any inputs on this?
Also, knowing that a Mono8 frame is 1280x720 bytes, much less data to send than the 1280x720x2 bytes of YUV422, how is it not working here while it works in YUV422 format at the same speed? I know, less data means faster transmission, but is it really producing events that fast?
Another issue regards the OV5640 sensor. I am able to read/write the registers (we have an NDA with OVT), but I am unable to configure it outside the default ROIs and formats. Given the situation explained above, I would like to stream a few extra lines so that the U3V side transmits a correct ROI, e.g. 1280x720, meaning the sensor would be configured as 1280x728. I cannot understand why I cannot configure it, or why it is so difficult (fact: this is my first time with a MIPI image sensor).
Anyway, thank you for the support, and tell me if something is not clear, haha.
By default, you have 224 KB of DMA buffer space. Since 4 KB of this is used for the debug channel and the control endpoint buffer, you are left with 220 KB of buffer space, all of which you can allocate for your application. In case you are not using the second-stage boot loader, you get an additional 32 KB of buffer space.
As per the DMA configuration mentioned, you are using a small buffer (5 KB). This may be the reason you are seeing error 71. Please increase it to 16 KB and check the functionality.
It is not clear to me how you receive a producer event in time to send the Leader and Trailer packets. Is the sensor sending some additional data (beyond the frame data) so that the first buffer gets filled and triggers the producer event, whereupon you discard that buffer and commit the Leader packet to USB?
Sensor configuration: Please talk to OVT regarding the register settings issue.