It is not recommended to perform heavy processing on the incoming video stream inside FX3 in UVC applications. Doing so can overload the CPU, and you may experience frame loss and data corruption.
So, you can try adding the 127 (0x7F) chroma byte to the stream in the FPGA itself instead of performing it in FX3. This would let the video stream run undisturbed, provided the FPGA correctly streams the converted greyscale image to FX3 in YUV format.
Alternatively, you can send the RAW8 data to the host via FX3 as a packed YUV frame, but you will need a custom host application to decode the video stream and extract the image from it. This way, there is no need to append the 127 chroma byte.
Standard UVC host applications cannot decode an incoming RAW8 video stream and would discard it.
To avoid this, you can pack the RAW8 stream as YUV and send it to the host. The UVC driver will accept the data, assuming it is YUV formatted, and try to decode it as such, but the resulting video will not be correct.
So, a custom host application is needed to properly decode this RAW8 stream that the driver assumes to be YUV.
Thanks for the quick and detailed answer.
If we pack the pixels in YUV format (16-bit), we won't have enough bandwidth over the parallel 32-bit interface to transmit the 5120x720 images at 60 frames per second from the FPGA to the FX3. That is why we wanted to do it in the FX3.
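The bandwidth limit can be sanity-checked with a quick calculation. Assuming the GPIF II interface runs at its maximum of 100 MHz (so a 32-bit bus moves at most 400 MB/s; this ceiling is an assumption about the design), YUV-packed frames exceed the budget while RAW8 fits:

```python
# Throughput needed for 5120x720 at 60 fps over the FPGA-to-FX3 interface.
WIDTH, HEIGHT, FPS = 5120, 720, 60

raw8_bps = WIDTH * HEIGHT * 1 * FPS   # 1 byte per pixel (RAW8)
yuv_bps  = WIDTH * HEIGHT * 2 * FPS   # 2 bytes per pixel (YUY2)

# Assumed ceiling: 32-bit GPIF II bus at 100 MHz = 400 MB/s.
gpif_limit = 4 * 100_000_000

print(f"RAW8: {raw8_bps / 1e6:.1f} MB/s")  # 221.2 MB/s -> fits
print(f"YUY2: {yuv_bps / 1e6:.1f} MB/s")   # 442.4 MB/s -> exceeds 400 MB/s
```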
So it seems that in our case the best option would be to send the 5120x720 video to the PC packed as YUV, but instead of using a standard UVC host application we would design our own application for displaying/reading the images. Can we use OpenCV to read the images?
"It is not recommended to perform heavy processing on the incoming video stream inside FX3 in UVC applications. Doing so can overload the CPU, and you may experience frame loss and data corruption."
Do you have suggestions on how to implement this, even with the risk of data loss?
Appending a byte to every pixel within the FX3 firmware in a UVC application is not possible. It has to be done either in the FPGA itself or in the GPIF. If the appending cannot be done in the FPGA, the 2nd and 4th byte lanes of the GPIF data bus can be hardwired to 0x7F using the API CyU3PGpioSetIoMode(), leaving those two byte lanes unconnected on the FPGA side. At the same time, the first byte coming out of the FPGA should be connected to the first byte lane of the GPIF, and the second byte from the FPGA to the third byte lane. That way, on each PCLK the FPGA drives 2 bytes, while the GPIF, configured for 32 bits with the 2nd and 4th byte lanes pulled to the required value, samples 4 bytes - thereby appending 0x7F to each byte.
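As a quick illustration of the byte-lane mapping described above, here is a small simulation (a hypothetical helper, not FX3 code) of what the 32-bit GPIF samples on one PCLK when the FPGA drives byte lanes 0 and 2 and lanes 1 and 3 are tied to 0x7F:

```python
def gpif_sample(fpga_byte0: int, fpga_byte1: int) -> list:
    """One PCLK: FPGA drives lanes 0 and 2; lanes 1 and 3 are tied to 0x7F."""
    return [fpga_byte0, 0x7F, fpga_byte1, 0x7F]

# Two greyscale pixels in -> four bytes sampled; 0x7F is appended to each.
print(gpif_sample(0x10, 0x20))  # [16, 127, 32, 127]
```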
But, as you have mentioned, even the above option would cause bandwidth issues at 60 fps for your resolution. So you will have to consider reducing the frame rate or the resolution for your solution. Or, as discussed in the previous posts, you can send the 5120x720 RAW8 data to the host as YUY2 and build your own host application that gets the data from the UVC driver and converts it to real YUY2 (by appending 0x7F to each byte) for display.
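To answer the OpenCV question: yes, OpenCV can be used for the custom host application. Below is a minimal sketch, assuming the camera enumerates as 2560x720 YUY2 (so each 16-bit "YUY2 pixel" actually carries two RAW8 pixels) and that OpenCV is asked for the raw, unconverted buffer via `CAP_PROP_CONVERT_RGB`. The exact array shape returned with conversion disabled varies by backend, so the buffer is flattened before reshaping; the device index and helper names are assumptions.

```python
import numpy as np

RAW_WIDTH, RAW_HEIGHT = 5120, 720  # real RAW8 resolution

def raw8_from_yuy2_buffer(buf) -> np.ndarray:
    """Reinterpret a buffer the UVC driver treated as YUY2 as a RAW8 image.

    The device enumerates as (RAW_WIDTH // 2) x RAW_HEIGHT YUY2 (16 bpp),
    so the buffer holds RAW_WIDTH * RAW_HEIGHT bytes and every byte is a
    real RAW8 pixel.
    """
    data = np.frombuffer(bytes(buf), dtype=np.uint8)
    if data.size != RAW_WIDTH * RAW_HEIGHT:
        raise ValueError(f"unexpected buffer size: {data.size}")
    return data.reshape(RAW_HEIGHT, RAW_WIDTH)

def preview_loop(device_index: int = 0) -> None:
    """Grab raw frames from the FX3 UVC device and display them (sketch)."""
    import cv2  # requires opencv-python

    cap = cv2.VideoCapture(device_index)
    cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)  # hand us the raw YUY2 buffer
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = raw8_from_yuy2_buffer(np.asarray(frame).ravel())
        cv2.imshow("raw8", gray)  # shown directly as 8-bit greyscale
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

With this approach there is no need to re-append the 0x7F bytes just to view the image: once the bytes are reshaped, OpenCV can display or process the 8-bit greyscale frame directly. Re-creating real YUY2 is only needed if the frame must be fed to a component that insists on that format.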