I have been using the firmware shown here: https://community.cypress.com/thread/16971?q=Streaming%20RAW%20image%20data%20using%20Cypress%20driver, heavily modified to interface with the OV5647 sensor and to apply the CyU3PMipicsiSetPhyTimeDelays settings suggested in other posts. It streams RAW 10-bit data from the OV5647 to a Windows 10 app that converts the raw Bayer data to an RGB image, which OpenCV displays. In general the approach is working, but a few major issues prevent it from being a production-ready solution; others have noted the same issues in the original post.
For background, I'll explain my Windows app. It is a 64-bit command-line app that runs two threads. The first thread (collection thread) starts streaming, then performs continuous XferData calls (using CyUSB3.dll) to retrieve camera data into one of N frame-sized buffers, where N is between 1 and 32. A second thread (rendering thread) waits for a complete (filled) frame buffer, translates the Bayer data to an RGB image (an OpenCV Mat object), and calls imshow to display it. Simple concept. Logic ensures that the buffers are used sequentially and that a buffer's contents aren't overwritten before they are displayed. So, in the single-buffer case, the collection thread fills a buffer, then waits for the rendering thread to display it before collecting the next frame. These threads generally keep up with the sensor's 15 fps (5 Mp) and 30 fps (1080p) rates; rendering typically takes 5 to 30 ms, with an average of about 20 ms.
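For concreteness, the pipeline looks roughly like this. This is a minimal sketch written against the C++ CyAPI classes (the names differ slightly from the CyUSB3.dll interface, but the structure is the same); the endpoint, frame dimensions, and Bayer pattern are placeholders, and the RAW10 unpacking is elided:

```cpp
// Minimal sketch of the two-thread pipeline. Endpoint, dimensions, and
// Bayer pattern are placeholders; RAW10 -> 8-bit unpacking is elided.
#include <windows.h>
#include "CyAPI.h"
#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <vector>

static const int  N           = 4;                 // 1 to 32 frame buffers
static const LONG FRAME_BYTES = 1920L * 1080L * 2; // placeholder frame size
static std::vector<std::vector<UCHAR>> frames(N, std::vector<UCHAR>(FRAME_BYTES));
static bool filled[N] = {};
static std::mutex mtx;
static std::condition_variable cond;

void collectionThread(CCyBulkEndPoint *ep)         // producer
{
    for (int i = 0; ; i = (i + 1) % N) {
        std::unique_lock<std::mutex> lk(mtx);
        cond.wait(lk, [&] { return !filled[i]; }); // never overwrite an undisplayed frame
        lk.unlock();

        LONG len = FRAME_BYTES;
        if (!ep->XferData(frames[i].data(), len))  // this is the call that times out
            continue;                              // (timeout/recovery handling elided)

        lk.lock();
        filled[i] = true;
        cond.notify_all();
    }
}

void renderingThread()                             // consumer
{
    for (int i = 0; ; i = (i + 1) % N) {
        std::unique_lock<std::mutex> lk(mtx);
        cond.wait(lk, [&] { return filled[i]; });
        lk.unlock();

        cv::Mat bayer8(1080, 1920, CV_8UC1);       // assume RAW10 already unpacked into this
        cv::Mat rgb;
        cv::cvtColor(bayer8, rgb, cv::COLOR_BayerBG2BGR); // pattern is a guess here
        cv::imshow("frame", rgb);
        cv::waitKey(1);

        lk.lock();
        filled[i] = false;                         // release the buffer to the collector
        cond.notify_all();
    }
}
```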
I have experimented with two camera resolutions, 5 Mp and 1080p. 1080p works much better than 5 Mp, but both have the same issue: after some amount of time, ranging randomly from a few frames to several minutes, the XferData call times out (regardless of how long or short I set the timeout). Sometimes the system recovers from the timeout; other times it doesn't, and the firmware must be reloaded to recover. My debugging shows that the rendering thread occasionally falls behind, causing the collection thread to delay the frame transfer long enough that the firmware detects a DMA timeout and its timer resets the stream. When the system does recover from this timeout, the image randomly shifts (vertically and horizontally) in the display window.
My idea to fix this issue goes like this. Instead of implementing start- and stop-scan vendor commands (0x99/0x88), I would start the system with the sensor streaming. The DMA callback would simply toss the incoming MIPI data until it receives a frame-grab command; at that point the next incoming MIPI frame is sent to the PC. After that frame is sent, the firmware reverts to tossing incoming MIPI data until the next grab command. This way the PC is always ready to collect MIPI data when it is actually being sent, so there should be no timeout issues. A sketch of the callback I have in mind follows.
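This is only a sketch, assuming a manual DMA multi-channel as in the CX3 example projects (my actual channel setup may differ); glGrabRequested is a hypothetical flag set by the new grab vendor command, and GPIO17 is the scope trigger described below. Aligning the grab to the start of the next frame is elided here:

```cpp
// Sketch of the intended grab-on-demand DMA callback (firmware side).
#include "cyu3dma.h"
#include "cyu3gpio.h"

volatile CyBool_t glGrabRequested = CyFalse;  /* set by the grab vendor command */
volatile CyBool_t glInGrabFrame   = CyFalse;  /* currently forwarding a frame   */

void
GrabDmaCallback (
        CyU3PDmaMultiChannel *chHandle,
        CyU3PDmaCbType_t      type,
        CyU3PDmaCBInput_t    *input)
{
    if (type == CY_U3P_DMA_CB_PROD_EVENT)
    {
        CyU3PGpioSetValue (17, CyTrue);       /* scope trigger: callback entered */

        if (glGrabRequested || glInGrabFrame)
        {
            glInGrabFrame = CyTrue;

            /* Forward this buffer to the USB consumer socket. */
            CyU3PDmaMultiChannelCommitBuffer (chHandle, input->buffer_p.count, 0);

            /* A partially filled buffer marks the end of the frame. */
            if (input->buffer_p.count != input->buffer_p.size)
            {
                glInGrabFrame   = CyFalse;
                glGrabRequested = CyFalse;    /* revert to tossing */
            }
        }
        else
        {
            /* Toss the data by releasing the buffer back to the producer.
               My understanding is that every PROD_EVENT buffer must be
               committed or discarded, or the producer eventually runs
               out of buffers. */
            CyU3PDmaMultiChannelDiscardBuffer (chHandle);
        }

        CyU3PGpioSetValue (17, CyFalse);
    }
}
```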
I have almost succeeded in the first phase, rewriting the DMA callback routine to toss incoming data, *except* that the callbacks stop after two MIPI frames are collected, and nothing I've tried makes them continue beyond the first two buffers. I've attached the DMA callback routine. You'll see I am using GPIO17 as an oscilloscope trigger to show when the DMA callback is active; I see one burst of GPIO17 toggles during the imager's first frame, then none.
I think the problem has something to do with buffer commit, but I can't find any good details on how this works in the API beyond a few brief comments here and there. Can someone explain how the DMA handler "knows" a buffer is ready for processing (e.g. sending to USB), along with any other pertinent details, such as how to manipulate whatever variable controls this?
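In case it matters, the channel is created roughly like this (simplified, with socket IDs, buffer sizes, and counts patterned on the CX3 example code rather than copied from my build; GrabDmaCallback is the callback sketched above):

```cpp
// Simplified channel setup for context; values are placeholders.
#include "cyu3dma.h"
#include "cyu3utils.h"

static CyU3PDmaMultiChannel glDmaChannel;

static CyU3PReturnStatus_t
CreateVideoDmaChannel (void)
{
    CyU3PDmaMultiChannelConfig_t dmaCfg;
    CyU3PReturnStatus_t status;

    CyU3PMemSet ((uint8_t *)&dmaCfg, 0, sizeof (dmaCfg));
    dmaCfg.size          = 12288;                    /* per-buffer size (placeholder) */
    dmaCfg.count         = 4;                        /* buffers per producer socket   */
    dmaCfg.validSckCount = 2;
    dmaCfg.prodSckId[0]  = CY_U3P_PIB_SOCKET_0;      /* GPIF/MIPI producers */
    dmaCfg.prodSckId[1]  = CY_U3P_PIB_SOCKET_1;
    dmaCfg.consSckId[0]  = CY_U3P_UIB_SOCKET_CONS_3; /* USB bulk-in endpoint */
    dmaCfg.dmaMode       = CY_U3P_DMA_MODE_BYTE;
    dmaCfg.notification  = CY_U3P_DMA_CB_PROD_EVENT; /* callback per produced buffer */
    dmaCfg.cb            = GrabDmaCallback;

    status = CyU3PDmaMultiChannelCreate (&glDmaChannel,
                                         CY_U3P_DMA_TYPE_MANUAL_MANY_TO_ONE, &dmaCfg);
    if (status != CY_U3P_SUCCESS)
        return status;

    return CyU3PDmaMultiChannelSetXfer (&glDmaChannel, 0, 0); /* infinite transfer */
}
```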
Attachment: callback.txt.zip (1.7 K)