Streaming RAW image data using Cypress driver


ScGr_289066
Level 5

Hi All,

I have been using the firmware shown here: https://community.cypress.com/thread/16971?q=Streaming%20RAW%20image%20data%20using%20Cypress%20driv... heavily modified to interface with the OV5647 sensor, and with the CyU3PMipicsiSetPhyTimeDelays call suggested in other posts, to stream RAW 10-bit data from the OV5647 to a Windows 10 app that translates the raw Bayer data to an RGB image displayed with OpenCV.  In general the approach works, but there are some significant issues preventing it from being a production-ready solution.  Others have noted the same issues in the original post.

I'll explain my Windows app for background.  It is a 64-bit command-line app that runs two threads.  The first thread (collection thread) starts streaming, then performs continuous XferData calls (using CyUSB3.dll) to retrieve camera data into one of N frame-sized buffers, where N is between 1 and 32.  The second thread (rendering thread) waits for a complete (filled) frame buffer, translates the Bayer data to an RGB image (an OpenCV Mat object), and calls imshow to display the image.  Simple concept.  Logic ensures that the buffers are used sequentially and that a buffer's content isn't overwritten before it is displayed.  So, in the case of a single buffer, the collection thread collects a buffer and then waits for the rendering thread to display it before collecting the next frame.  These processes generally keep up with the sensor's 15 fps (5 MP) and 30 fps (1080p) rates.  Rendering typically takes 5 to 30 ms, averaging about 20 ms.

I have experimented with two camera resolutions, 5 MP and 1080p.  1080p works much better than 5 MP, but both have the same issue: after some amount of time, ranging randomly from a few frames to several minutes, the XferData call times out (regardless of how long or short I set the timeout).  Sometimes the system recovers from the timeout; other times it doesn't, and the firmware must be reloaded to recover.  My debugging shows that the rendering thread occasionally falls behind, causing the collection thread to delay frame transfer long enough that the firmware detects a DMA timeout and its timer resets the stream.  When the system does recover from this timeout, the image randomly shifts (vertically and horizontally) in the display window.

My idea to fix this issue goes like this.  Instead of implementing start- and stop-scan vendor commands (0x99/0x88), I would start the system with the sensor streaming.  The DMA callback simply tosses the incoming MIPI data until it receives a frame-grab command, at which point the next incoming MIPI frame is sent to the PC.  After the grabbed frame is sent, the firmware reverts to tossing the incoming MIPI data until the next grab command.  This way the PC is ready to collect MIPI data whenever it is being sent, so there should be no timeout issues.

I have almost succeeded in the first phase, rewriting the DMA callback routine to toss incoming data, *except* that callbacks stop after two MIPI frames are collected, and nothing I've done makes them continue beyond the first two buffers.  I've attached the DMA callback routine.  You'll see I am using GPIO17 as an oscilloscope trigger to show when the DMA callback is active.  I see one burst of GPIO17 toggles during the imager's first frame, then none.

I think the problem has something to do with buffer commit, but I can't find any good details on how this works in the API beyond a few brief comments here and there.  Can someone explain how the DMA handler "knows" a buffer is ready for processing (e.g., sending to USB), and any other pertinent details, such as how to manipulate whatever variable controls it?

Thanks,

Scott

4 Replies
KandlaguntaR_36
Moderator

Scott,

I have a query on your implementation.

When do you issue the grab command, and how do you ensure that the next frame is going to start after the grab command?

I can see that you are setting grab to False once a frame is sent to the host (in the DMA consumer event).

Please remove the debug prints in the DMA callback and use GPIOs for debugging instead.

The information on the CommitBuffer API is limited. Please check the SDK source code for more details.

I will try to provide any related information if I can find it.


Hi KandlaguntaR_36,

The firmware uses two booleans, grab and graben.  grab is set when the host issues the grab (vendor) command.  The DMA callback constantly processes producer events and sums the accumulated frame size.  Once the size matches the running resolution (5 MP in my case), that is the end of a frame.  The accumulated size is then cleared, and graben is set if grab is also true.  When graben is true, incoming producer buffers are routed to consumer events by calling CyU3PDmaMultiChannelCommitBuffer() instead of being tossed with CyU3PDmaMultiChannelDiscardBuffer().  Once the grabbed frame is complete, grab and graben are both cleared.  Although this is somewhat academic, since DMA callbacks stop after the first frame's worth of producer events is processed.

On the PC side, the collection loop does nothing but issue a grab command, then call XferData() until the sum of bytes received matches the expected frame size, 5 MP (2592x1944*2 bytes) in my case.

Initially I had no debug print statements in the DMA callback.  It wasn't until I discovered it doesn't work that I added them.  So, in effect, I had already run the code with no print statements.

Thanks,

Scott


Scott,

Please check whether there are any commit buffer failures in your case.
If there are any, you need to reset the DMA channel, as you do on the timer timeout.

Also, the endpoint may be stalled in some cases; send a clear feature request to clear the stall.

In the UVC case, the host application takes care of identifying the stall and clearing it by sending a clear feature request. Here, you need to take care of it in your application.

You need to capture a USB trace, see what is happening over the bus, and take appropriate action in both the firmware and the application.


Hi KandlaguntaR_36,

Thanks for the advice.

Best,

Scott
