AN75779: Which media format to use for grayscale 10 bpp image


LeGa_3963206
Level 4

Hi,

I'm sending an 800x480 grayscale 10 bpp image from an FPGA to FX3 through a 32-bit bus.

Which streaming-encoding format GUID should I specify in the descriptor (CyFxUSBHSConfigDscr)?

And how should the data bytes be organized within the stream?

Thanks

25 Replies

Rashi_Vatsa
Moderator

Hello,

Is your application UVC or non-UVC?

If it is a UVC application, you cannot send RAW/RGB data. The code associated with AN75779 follows the UVC 1.0 spec, which supports only the YUY2 color format. The UVC 1.5 spec supports the YUY2, NV12, M420, and I420 image formats.

You can set the image format to YUY2 (GUID) (refer to the AN75779 firmware). If the format reported in the UVC descriptor is RAW, then the UVC driver will discard the data. Once you receive the data, your application has to process it to convert the image to the desired format and then display it.

You can refer to this KBA: UVC Troubleshooting Guide – KBA226722

If your application is non-UVC, you can stream RAW10 by changing the GUID field to the GUID of the RAW10 format, and you can use the Cypress driver to receive the data.

How is the RAW10 data from the FPGA mapped to 32 bits (GPIF)? Generally, for RAW10 data, the GPIF bus width is configured as 16 bits. (Please refer to the notes of section 2 in the above-mentioned KBA.)
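
[Editorial note: to make that mapping concrete, here is a minimal host-side sketch, not from the application note. It assumes the 6 unused GPIF lines are pulled low and little-endian byte order on the wire, so each 16-bit word carries one RAW10 pixel in its 10 LSBs.]

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical host-side helper: recover RAW10 pixels from 16-bit GPIF
     * samples. Unused lines DQ[15:10] are assumed pulled low, so each
     * little-endian 16-bit word carries one pixel in its 10 LSBs. */
    static void unpack_raw10(const uint8_t *buf, size_t n_pixels, uint16_t *out)
    {
        for (size_t i = 0; i < n_pixels; i++) {
            uint16_t word = (uint16_t)buf[2 * i] | ((uint16_t)buf[2 * i + 1] << 8);
            out[i] = word & 0x03FF;   /* mask off the 6 padded bits */
        }
    }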

Regards,
Rashi

Thanks, Rashi

A couple of questions though:

1) Where can I find the GUIDs for RAW10, NV12, M420, and I420?

2) Which Cypress driver helps to receive RAW10? Could you please provide a link?

Thanks


Hello,

If your application is non-UVC, you can use the Cypress API with CyUSB3.sys to receive the data and a Microsoft API to display the data.

The GUID of RAW10 is not necessary for non-UVC applications.

You can refer to the following thread, which uses the Cypress driver for streaming RAW data by making modifications to the AN75779 firmware:

FX3 / CX3 Firmware for Streaming RAW Image Data using Cypress Driver

Regards,
Rashi

Hello,

I would like to work with UVC (YUY2). I'm working with the 32-bit bus / 16 bpp FX3 configuration. That means I'm sending two 16-bit pixels per clock.

Could you please clarify how my source 10-bit grayscale pixel should be placed within these 16 bits?

Thanks


Hello,

As per section 2 of the UVC Troubleshooting Guide – KBA226722:

If these 10 lines are connected to GPIF and the bus width of the GPIF is configured as 16, then for each PCLK, all 16 lines would be sampled. If the remaining 6 lines are pulled down on the board, then FX3 would be sampling logic ‘zero’ on these lines. If it is pulled up, logic ‘one’ would be sampled. So, for each PCLK 2 bytes are sampled and not 10 bits. Host application should take care of the extra bits in each pixel data.

Now, as per your previous response, 20 bits of output per clock (two 10-bit pixels) will be fed to GPIF, but the GPIF state machine will sample 32 bits, i.e. 12 bits will be sampled extra (as per the status of the GPIF lines: either 1 if pulled up or 0 if grounded).

Regards,
Rashi

Hello,

I'm sorry I wasn't clear enough. I use all 16 bits.

For example, when the upper 6 bits of every 16 are low, my image has a "green overlay". That means I still see my source image, but everything is green. And when the upper 6 bits are high, the image is more violet.

So I found that having 0x20 in the upper 6 bits makes my image almost identical to the source, but it is still a little greenish.

As far as I understand, FX3 doesn't care about pixels; it sends bytes of data, and it's up to the transmitter and receiver how to interpret these bytes. Am I right?

If so, then when I specify 16 bpp YUY2 in the UVC part of the descriptor, my receiving app (Windows UVC driver/DirectShow/whatever) will take this into account when receiving bytes of data and converting them to pixels.

That's why I asked about YUY2. Maybe within these 16 bits some bits are responsible for Y and others for U and V.

I understand that my question is not so much related to FX3, but still, do you have any idea how to feed FX3 with a grayscale image so that the Windows UVC driver would understand that it's grayscale?

Thanks,

Leonid


Hello Leonid,

As far as I understand, FX3 doesn't care about pixels; it sends bytes of data, and it's up to the transmitter and receiver how to interpret these bytes. Am I right?

>> Yes

The host application should be a custom one which reads the data as RAW10 itself and then displays it. Using UVC applications like AMCap or VLC would lead to the display you mentioned in your post (greenish). It will not be possible to view the exact video output (RAW) using these applications, as they do not support the RAW color format. We recommend designing a custom application.

Regards,
Rashi

Hi, Rashi

Thanks for the explanation.

1.

As far as I understand, FX3 doesn't care about pixels; it sends bytes of data, and it's up to the transmitter and receiver how to interpret these bytes. Am I right?

>> Yes

If so, then why, if I write something wrong in the UVC descriptors regarding the input data format (image size, bpp, etc.), does FX3's GPIF SM fail to even receive data?

2.

My idea is to send the grayscale image in YUV format just by providing the pixel values as Y while leaving U and V as 0.

That implies that the data stream should look like Y0-0-Y1-0-Y2-0-Y3-0 instead of the usual color YUY2 stream Y0-U0-Y1-V0-Y2-U1-Y3-V1.

And that means I have to send double the number of pixels (regardless of the number of bits per pixel).
But in that case FX3 thinks that the input data doesn't correspond to the descriptor information and fails to transmit.

E.g. my image is 800x480 8bpp, so I write that in the UVC descriptors but actually provide 1600 bytes per line. That causes DMA to fail - the UART output says "UVC: Completed 0 frames and 0 buffers", meaning there are not even PROD events from GPIF.

How can I solve this?

Thanks,

Leonid

Attachments are accessible only for community members.

Hello Leonid,

If so, then why, if I write something wrong in the UVC descriptors regarding the input data format (image size, bpp, etc.),

>> The descriptors and the probe control settings are for the UVC driver or the host application to know what kind of data it will be receiving.

does FX3's GPIF SM fail to even receive data?

>> The GPIF state machine samples the GPIF lines agnostic of the color format. It just samples the status of the GPIF lines and fills the buffer with that data.

You don't need to change the resolution, but you need to set the bits-per-pixel field to 16 bits and set the GUID of the YUY2 format (refer to the attachment).

For example, the resolution from the sensor is 640*480 and RAW (10 bpp) is carried as YUY2 (16 bpp); you can also keep the sensor bpp at 8.

So the new frame size would be 640*480*2 bytes.

If you have configured the GPIF bus width to 32 bits, that means 2 pixels per PCLK. The frame size is not increased, but the data will be received more quickly. The only parameter that would change is the frame rate. You need to check the FV and LV from the sensor and then set the frame rate and related fields, like min bit rate and max bit rate, in the descriptors as well as in the probe control structure (glProbeCtrl) in uvc.c accordingly.
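
[Editorial note: as a worked example of those fields, for a hypothetical 640*480, 16 bpp YUY2 stream at 30 fps; the quantities below are what dwMaxVideoFrameSize, dwMinBitRate/dwMaxBitRate, and dwFrameInterval in the descriptors and glProbeCtrl must agree on.]

    /* Worked example: 640 x 480, YUY2 (16 bits/pixel), 30 fps (assumed). */
    #define WIDTH           640
    #define HEIGHT          480
    #define BITS_PER_PIXEL  16
    #define FPS             30

    #define FRAME_SIZE     (WIDTH * HEIGHT * (BITS_PER_PIXEL / 8)) /* 614400 bytes          */
    #define BIT_RATE       (FRAME_SIZE * 8 * FPS)                  /* 147456000 bits/s      */
    #define FRAME_INTERVAL (10000000 / FPS)                        /* 333333 (100 ns units) */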

For your reference Change Resolution in FX3 UVC Camera - KBA220269

The reason for not getting PROD events can be something else (not related to the descriptors). Please confirm whether you changed the LD_DATA_COUNT and LD_ADDR_COUNT values after changing the GPIF bus width. Please refer to this KBA for the same: Configuring Buffer Sizes in AN75779 UVC Firmware – KBA90744
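
[Editorial note: a quick sketch of that counter calculation, using the formula this thread itself applies later (counter limit = (buffer size - 16-byte header area) / bus width in bytes - 1) with the default 16 KB DMA buffer from AN75779.]

    /* GPIF counter limits after changing the bus width (16 KB DMA buffer,
     * 16 bytes reserved for the UVC payload header, per AN75779 defaults): */
    #define BUF_BYTES    (16 * 1024)
    #define DATA_BYTES   (BUF_BYTES - 16)        /* 16368 bytes of video data */

    #define COUNT_16BIT  (DATA_BYTES / 2 - 1)    /* 8183 for a 16-bit bus     */
    #define COUNT_32BIT  (DATA_BYTES / 4 - 1)    /* 4091 for a 32-bit bus     */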

If this doesn't work, please share the debug prints.

Regards,
Rashi

Hello, Rashi

For my experiments I use VirtualDub and VLC on Windows and ffmpeg on Linux, with the FPGA as an 800x480x10bpp test grayscale image source.

So far I have the following for the 32-bit GPIF bus:

1. For 8 bpp my DV should be 800*8/8/4 = 200 clocks exactly, otherwise DMA chokes. And I have to throw away two bits of every pixel. That means I can transfer only 800 bytes (= pixels) per line, while YUY2 supposes transferring Y-U-Y-V-..., which is 1600 bytes.
No wonder both players give me a black rectangle, and ffmpeg says that the expected frame size is 768000 (800x480x2) while the received one is only 384000 (800x480).

2. For 16 bpp my DV should be 800*16/8/4 = 400 clocks. Again I can't send 1600 bytes. In this case I can see the image, but it is a little green in some gray pixels, as I mentioned above. I played with various shifts of my source 10 bits within the 16 bits of the output pixel - it didn't help.

3. For 32 bpp my DV should be 800*32/8/4 = 800 clocks. Here I can see the image, but it has all the colors except grayscale, regardless of the position of the data within the 32-bit pixel.

I suppose the main problem is that FX3 won't transfer double the number of pixels (not bytes).

So my question is: how exactly can I transfer a 10 bpp grayscale image so that it remains grayscale?

Thanks,

Leonid


Hello Leonid,

For a UVC application, you need to pad 6 bits onto the 10 bpp data.

When you send the RAW10 data as YUY2 to the host, the host application (VLC) will sample the pixels as done for the YUY2 format and not as RAW10 (grayscale).

1. For 8 bpp my DV should be 800*8/8/4 = 200 clocks exactly, otherwise DMA chokes. And I have to throw away two bits of every pixel. That means I can transfer only 800 bytes (= pixels) per line, while YUY2 supposes transferring Y-U-Y-V-..., which is 1600 bytes.

No wonder both players give me a black rectangle, and ffmpeg says that the expected frame size is 768000 (800x480x2) while the received one is only 384000 (800x480).

>> You were not able to see the video because the input from the sensor was (800*480*1) bytes while the UVC driver was expecting (800*480*2) bytes.

2. For 16 bpp my DV should be 800*16/8/4 = 400 clocks. Again I can't send 1600 bytes. In this case I can see the image, but it is a little green in some gray pixels, as I mentioned above. I played with various shifts of my source 10 bits within the 16 bits of the output pixel - it didn't help.

>> This shows the greenish video and not grayscale because the pixels are sampled according to the YUY2 format.

If you are able to see the greenish video, it means that the streaming through FX3 is successful but there is a problem on the host side in displaying the data. So the host application should be able to sample the data as grayscale (separating the 6 non-image bits).

You can't directly view a grayscale image in a UVC host application (like VLC). To view a grayscale image you need a non-UVC application. You can use the Cypress driver (CyUSB3.sys) to grab the data and a custom application to display it, which would separate the 6 padded bits and display the image data only.

Or, if you want a UVC application, you need to build a custom host application to view the data that is streaming through the UVC driver.
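
[Editorial note: a minimal sketch of that host-side step, as hypothetical code run after a buffer has been read via the Cypress API; display is left to whatever framework the application uses. It strips the 6 padded bits and reduces each 10-bit value to 8 bits for display.]

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical post-processing: turn RAW10-in-16-bit samples into an
     * 8-bit grayscale buffer a display framework can render directly. */
    static void raw10_to_gray8(const uint8_t *in, size_t n_pixels, uint8_t *out)
    {
        for (size_t i = 0; i < n_pixels; i++) {
            uint16_t w = (uint16_t)in[2 * i] | ((uint16_t)in[2 * i + 1] << 8);
            out[i] = (uint8_t)((w & 0x03FF) >> 2);  /* 10-bit -> 8-bit */
        }
    }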

Regards,
Rashi

Hi, Rashi,

Thanks for the explanation.

As we know, a grayscale image definitely can be sent through UVC (YUY2); just in that case U and V will be constants. According to the YUV standard, the Y component is grayscale. As I wrote, I need to send Y0-0-Y1-0-Y2-0... and so on, which gives double the number of pixels regardless of the number of bits per pixel.

I'm using an FPGA as the image source, so I can produce any image size with all possible formats, but the only problem is that FX3 somehow doesn't allow me to send a double-sized line.

So is there any way to make it do so?

Thanks,

Leonid


Hello Leonid,

Can you set the bits per pixel to 8 and the GPIF bus width to 16 bits and then try streaming? The first 8 bits should be the grayscale data from the sensor, and the second 8 bits should be zeros (GPIF pins pulled down).

The USB descriptors and the probe control should report H_Active * V_Active * 2 bytes/pixel * fps, YUY2 format.

The first 8 bits of the YUY2 16 bits are sampled as the Y component.
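
[Editorial note: to make the byte order concrete, here is a sketch with a hypothetical helper, assuming an 8-bit grayscale source and an even width, of the YUY2 layout the host expects. Note that 0x80, not 0x00, is the neutral chroma value, which is why zeroed U/V bytes can still leave a color cast.]

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical packer showing the YUY2 layout: two pixels per 4 bytes,
     * ordered Y0 U0 Y1 V0. U = V = 0x80 is neutral chroma (true gray);
     * values far from 0x80 tint the image (hence the greenish cast). */
    static void gray8_to_yuy2(const uint8_t *gray, size_t width, uint8_t *yuy2)
    {
        for (size_t i = 0; i < width; i += 2) {
            yuy2[2 * i + 0] = gray[i];        /* Y0 */
            yuy2[2 * i + 1] = 0x80;           /* U  */
            yuy2[2 * i + 2] = gray[i + 1];    /* Y1 */
            yuy2[2 * i + 3] = 0x80;           /* V  */
        }
    }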

Please let me know the results

Regards,
Rashi

Hi, Rashi

The USB descriptors and the probe control should report H_Active * V_Active * 2 bytes/pixel * fps, YUY2 format.

What does it mean? The only relevant parameter in the probe control is bpp.

Here is what I changed according to your recommendation:

1) GPIF designer: 16-bit bus, data/addr counters: (16*1024 - 16)/2 - 1 = 8183

2) Descriptors/probe control: changed to 8bpp

3) FPGA sends image as follows:

[Attached timing diagram: wavedrom (4).png]

As result:

     - Debug UART: UVC: Completed 0 frames and 0 buffers

     - Windows (VirtualDub/VLC): blank screen

     - Linux (mplayer): blank screen with message Frame too small! (384000<768000) Wrong format ?

Any ideas?

Thanks,

Leonid


Hello Leonid,

The descriptors and the probe control are used to inform the host about the video data that will be streamed.

When you keep the GPIF bus width at 16 bits and the data coming from the sensor is 8 bits, 8 bits are added to every pixel, so the frame size increases.

For example, say the resolution is 720 * 640 at 8 bits/pixel from the sensor. When you keep the GPIF bus width at 16 bits, 8 bits are padded to each pixel, so you need to mention the frame size in the descriptors as 720 * 640 * 16 bits/pixel.

2) Descriptors/probe control: changed to 8bpp

>> This should be 16 bpp

/* GUID, globally unique identifier used to identify streaming-encoding format: YUY2 */
    0x59, 0x55, 0x59, 0x32,    /* MEDIASUBTYPE_YUY2 GUID: 32595559-0000-0010-8000-00AA00389B71 */
    0x00, 0x00, 0x10, 0x00,
    0x80, 0x00, 0x00, 0xAA,
    0x00, 0x38, 0x9B, 0x71,
    0x10,                      /* Number of bits per pixel: 16 */

Regards,
Rashi

Hi Rashi

Thanks for the idea.

Now I transmit only 8-bit pixels within the 16-bit bus, like this: pixel0 - 0x7F - pixel1 - 0x7F - pixel2... And it works fine - I see a grayscale image without any colors.

So how can I transmit 10-bit pixels?

Regards,

Leonid


Hello Leonid,

The first 8 bits of the pixel are sampled as Y in the YUY2 format, i.e. grayscale. If you set the bits per pixel to 10, then 2 bits would be sampled as U or V and you would be getting a greenish color video. To display the video as grayscale, you have to send the pixel data in those 8 bits only.
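
[Editorial note: to see why those extra bits tint the image, consider where the two bytes of a 16-bit word land when the host interprets the stream as YUY2; a sketch under the little-endian 16-bit framing used in this thread.]

    #include <stdint.h>

    /* Where a 10-bit pixel's bits land when a 16-bit word is read as YUY2:
     * the low byte falls in a Y (luma) slot, but the high byte - the pixel's
     * top 2 bits plus padding - falls in a U or V (chroma) slot. Chroma
     * values near 0 are far from the neutral 0x80, hence the color cast. */
    static void yuy2_view_of_raw10(uint16_t pix10, uint8_t *luma, uint8_t *chroma)
    {
        *luma   = (uint8_t)(pix10 & 0xFF);  /* sampled as Y                */
        *chroma = (uint8_t)(pix10 >> 8);    /* 0..3: strongly off-neutral  */
    }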

Regards,
Rashi

Hello, Rashi

There are lots of UVC cameras on the market which transmit 10-bit color images, which means grayscale is also possible.

Even the EZ-USB® FX3™ HD 720p Camera Kit, based on the Aptina sensor MT9M114, can output 10-bit Bayer RAW (see https://www.onsemi.com/pub/Collateral/MT9M114-D.PDF page 30).

The question is how to configure FX3 (GPIF, descriptors, etc.) to report the image format correctly to the streaming app on the PC?

Regards,

Leonid


Hi, Rashi,

Small update:

I succeeded in receiving 10-bit grayscale by using a 16-bit bus and 16 bpp and sending pixels like this: PIX0 - 0x7F - PIX1 - 0x7F - PIX2 - 0x7F...

Here the 10 LSBs of the 16-bit pixel are my raw 10 bits of image data and the 6 MSBs are zeros.

The only problem is that the line time is twice the image width. E.g. for 800x480, DV is 1600 clocks.

Then I tried to use a 32-bit bus with 16 bpp, where on each clock I send the lower 16 bits as the pixel and the upper 16 bits as 0x7F. But this doesn't work.

Any ideas?

Regards,

Leonid


Hello Leonid,

The only problem is that the line time is twice the image width. E.g. for 800x480, DV is 1600 clocks.

>> This is because the frame size is increased as you are padding zeros to the pixel data. The GPIF II bus width cannot be configured as 10 bits, so you will see an increase in the frame size.

Please let me know what configuration you need. Is it 10 bpp or 16 bpp?

Regards,
Rashi

Hi Rashi,

This is because the frame size is increased as you are padding zeros to the pixel data. The GPIF II bus width cannot be configured as 10 bits, so you will see an increase in the frame size.

When I send a "double" (1600-clock) line, the frame size determined by the PC (VirtualDub/VLC/etc.) is the same - 800x480.

Please let me know what configuration you need. Is it 10 bpp or 16 bpp?

I need to send a 10 bpp 800x480 grayscale image, but it looks like the only way to do it is to use 16 bpp and pad it with 6 zeros. This doesn't affect the frame size.

My question is: how come an image that is fine on a 16-bit bus / 16 bpp becomes corrupted on a 32-bit bus / 16 bpp?

Regards,

Leonid


Hi Rashi,

Sorry, there was a mistake.

I can configure an 8-bit bus and send 16 bpp pixels within a "double" line, but when the bus is 16-bit, I can't send a "double" line - I get a blank screen.

Regards,

Leonid


Hello Leonid,

but when the bus is 16-bit, I can't send a "double" line

>> Does this mean GPIF bus width = 16 bits and bits per pixel = 32 bits?

If so, please confirm: did you change the bits/pixel in the descriptors and the probe control?

I need to send a 10 bpp 800x480 grayscale image, but it looks like the only way to do it is to use 16 bpp and pad it with 6 zeros

>> You said in a previous response that this configuration worked. Please let me know why you are trying another configuration.

Regards,
Rashi

Hi Rashi,

Sorry for the misunderstanding.

What I want to see is my source 10 bpp grayscale image.

The only configuration that works right now is an 8-bit bus at 16 bpp, but in this case I have to truncate my source 10 bpp image to 8 bpp and send it like this: pixel0(7:0) - 0x7F - pixel1(7:0) - 0x7F - pixel2(7:0) - 0x7F.

In this case the DV length obviously doubles; in my case it's 1600 clocks.

The strange thing is that if I set the bus to 16-bit or 32-bit and leave DV at double size, Windows programs stop showing the image, while Linux ones work but show the image with invalid colors.


Hello Leonid,

For displaying the video as grayscale you have to send 8 bits/pixel, because the first 8 bits would be sampled as Y and the rest, U and V, as zero.

If you want to send 10 bits/pixel, the GPIF bus width should be 16 bits, and you would need to strip the padded zeros in a custom host application. Otherwise, you will see a greenish image in a UVC host application.

If you configure the GPIF bus width to 32 bits, i.e. padding 22 zeros (10 bits/pixel), your frame size will increase.
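
[Editorial note: in numbers, for the 800x480 source discussed in this thread; a sketch of the padding cost at each bus width.]

    /* Bytes per frame on the wire for 800 x 480 RAW10, by GPIF bus width: */
    #define W  800
    #define H  480

    #define FRAME_16BIT  (W * H * 2)   /*  768000 bytes: 6 padded bits/pixel  */
    #define FRAME_32BIT  (W * H * 4)   /* 1536000 bytes: 22 padded bits/pixel */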

The descriptors and the probe control should be configured as per the input frame size so that the host application is aware of the video source.

So the changes you make on the sensor side (bits/pixel, resolution, frame rate), and any increase in the frame size (padding zeros/ones) from increasing the GPIF bus width, need to be reflected in the descriptors and the probe control.

If you are getting a blank screen, there is a possibility that the input from the sensor/FX3 is not the same as mentioned in the descriptors and the probe control settings.

The sensor you mentioned in a previous response, the MT9M114, can output RAW10 but would need an ISP for conversion to the YUY2 format, or else a custom host application to sample the data appropriately.

You can't directly display RAW10 video in a standard UVC host application.

Regards,
Rashi