
    AN75779: Which media format to use for grayscale 10 bpp image

    LeGa_3963206

      Hi,

       

      I'm sending an 800x480 grayscale 10 bpp image from an FPGA to the FX3 through a 32-bit bus.

      Which stream format GUID should I specify in the descriptor (CyFxUSBHSConfigDscr)?

      And how should data bytes be organized within the stream?

       

      Thanks

        • 1. Re: AN75779: Which media format to use for grayscale 10 bpp image
          RashiV_61

          Hello,

           

          Is your application UVC or non-UVC?

           

          If it is a UVC application, you cannot send RAW/RGB data. The code associated with AN75779 follows the UVC 1.0 spec, which supports only the YUY2 color format. The UVC 1.5 spec supports the YUY2, NV12, M420, and I420 image formats.

          You can report the image format as YUY2 (GUID) (refer to the AN75779 firmware). If the format reported in the UVC descriptor is RAW, the UVC driver will discard the data. Once you receive the data, your host application has to process it to convert the image to the desired format and then display it.
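
          For reference, here is a minimal sketch of the uncompressed format descriptor carrying the YUY2 GUID (the standard value 32595559-0000-0010-8000-00AA00389B71 in the little-endian byte order used by USB descriptors). The array name is only illustrative; in the AN75779 firmware these bytes sit inline inside the configuration descriptor in cyfxuvcdscr.c:

          #include <stdint.h>

          /* Sketch: UVC uncompressed (YUY2) video format descriptor */
          static const uint8_t gVsFormatYuy2Dscr[] =
          {
              0x1B,                       /* bLength: 27 bytes */
              0x24,                       /* bDescriptorType: CS_INTERFACE */
              0x04,                       /* bDescriptorSubtype: VS_FORMAT_UNCOMPRESSED */
              0x01,                       /* bFormatIndex */
              0x01,                       /* bNumFrameDescriptors */
              0x59, 0x55, 0x59, 0x32,     /* guidFormat: YUY2                */
              0x00, 0x00, 0x10, 0x00,     /*   {32595559-0000-0010-          */
              0x80, 0x00, 0x00, 0xAA,     /*    8000-00AA00389B71}           */
              0x00, 0x38, 0x9B, 0x71,
              0x10,                       /* bBitsPerPixel: 16 for YUY2 */
              0x01,                       /* bDefaultFrameIndex */
              0x00, 0x00,                 /* bAspectRatioX, bAspectRatioY */
              0x00,                       /* bmInterlaceFlags */
              0x00                        /* bCopyProtect */
          };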

          You can refer to this KBA: UVC Troubleshooting Guide – KBA226722

           

          If your application is non-UVC, you can stream RAW10 by replacing the GUID field with the GUID of the RAW10 format, and you can use the Cypress driver to receive the data.

           

          How is the RAW10 data from the FPGA mapped onto the 32-bit GPIF bus? Generally, for RAW10 data, the GPIF bus width is configured as 16 bits. (Please refer to the notes in section 2 of the above-mentioned KBA.)

           

          Regards,

          Rashi

          • 2. Re: AN75779: Which media format to use for grayscale 10 bpp image
            LeGa_3963206

            Thanks, Rashi

             

            A couple of questions though:

            1) Where can I find the GUIDs for RAW10, NV12, M420, and I420?

            2) Which Cypress driver helps to receive RAW10? Could you please provide a link?

             

            Thanks

            • 3. Re: AN75779: Which media format to use for grayscale 10 bpp image
              RashiV_61

              Hello,

               

              If your application is non-UVC, you can use the Cypress API with CyUSB3.sys to receive the data and a Microsoft API to display the data.

              The GUID of RAW10 is not necessary for non-UVC applications.

              You can refer to the following thread, which uses the Cypress driver for streaming RAW data by making modifications to the AN75779 firmware:

              FX3 / CX3 Firmware for Streaming RAW Image Data using Cypress Driver

               

              Regards,

              Rashi

              • 4. Re: AN75779: Which media format to use for grayscale 10 bpp image
                LeGa_3963206

                Hello,

                 

                I would like to work with UVC (YUY2). I'm working with the 32-bit bus / 16 bpp FX3 configuration. That means I'm sending two 16-bit pixels per clock.

                Could you please clarify how my source 10-bit grayscale pixel should be placed within these 16 bits?

                 

                Thanks

                • 5. Re: AN75779: Which media format to use for grayscale 10 bpp image
                  RashiV_61

                  Hello,

                   

                  As per section 2 of the UVC Troubleshooting Guide – KBA226722:

                   

                  If these 10 lines are connected to GPIF and the bus width of the GPIF is configured as 16, then for each PCLK, all 16 lines would be sampled. If the remaining 6 lines are pulled down on the board, then FX3 would be sampling logic ‘zero’ on these lines. If it is pulled up, logic ‘one’ would be sampled. So, for each PCLK 2 bytes are sampled and not 10 bits. Host application should take care of the extra bits in each pixel data.

                   

                  Now, as per your previous response, 20 bits are output per PCLK and fed to the GPIF, but the GPIF state machine samples 32 bits, i.e. 12 extra bits are sampled (as 1 if the corresponding GPIF lines are pulled up, or 0 if grounded).
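
                  As an illustration (assuming the packing described above: the FPGA drives 20 data bits per PCLK and the remaining 12 GPIF lines are pulled down), a host-side sketch of extracting the two 10-bit pixels from each sampled 32-bit word:

                  #include <stdint.h>

                  /* Sketch: split one 32-bit GPIF sample into its two 10-bit pixels.
                   * Bits [9:0] and [19:10] carry image data; bits [31:20] are the 12
                   * padded bits (0 if the unused lines are pulled down, 1 if pulled up). */
                  static inline void unpack_two_pixels(uint32_t word, uint16_t *p0, uint16_t *p1)
                  {
                      *p0 = (uint16_t)(word & 0x3FFu);         /* bits [9:0]   */
                      *p1 = (uint16_t)((word >> 10) & 0x3FFu); /* bits [19:10] */
                  }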

                   

                  Regards,

                  Rashi

                   

                   

                  • 6. Re: AN75779: Which media format to use for grayscale 10 bpp image
                    LeGa_3963206

                    Hello,

                     

                    I'm sorry I wasn't clear enough. I use all 16 bits.

                    For example, when the upper 6 bits of every 16 are low, my image has a "green overlay". That is, I still see my source image, but everything is green. And when the upper 6 bits are high, the image is more violet.

                    So I found that putting 0x20 in the upper 6 bits makes my image almost identical to the source, but it is still a little bit greenish.

                     

                    As far as I understand FX3 doesn't care about pixels, it sends bytes of data and it's up to transmitter and receiver how to interpret these bytes. Am I right?

                    If so, then when I specify 16 bpp YUY2 in the UVC part of the descriptor, my receiving app (Windows UVC driver / DirectShow / whatever) will take this into account when receiving the bytes of data and converting them to pixels.

                     

                    That's why I asked about YUY2. Maybe within these 16 bits some bits are responsible for Y and others for U and V.

                    I understand that my question is not so much related to the FX3, but still: do you have any idea how to feed the FX3 with a grayscale image so that the Windows UVC driver would understand that it's grayscale?

                     

                     

                    Thanks,

                    Leonid

                    • 7. Re: AN75779: Which media format to use for grayscale 10 bpp image
                      RashiV_61

                      Hello Leonid,

                       

                      As far as I understand FX3 doesn't care about pixels, it sends bytes of data and it's up to transmitter and receiver how to interpret these bytes. Am I right?

                      >> Yes

                       

                       

                      The host application should be a custom one which reads the data as RAW10 itself and then displays it. Using UVC applications like AMCap or VLC will lead to the display you mentioned in your post (greenish). It will not be possible to view the exact video output (RAW) using these applications, as they do not support the RAW color format. We recommend designing a custom application.

                       

                      Regards,

                      Rashi

                      • 8. Re: AN75779: Which media format to use for grayscale 10 bpp image
                        LeGa_3963206

                        Hi, Rashi

                         

                        Thanks for explanation.

                        1.

                        As far as I understand FX3 doesn't care about pixels, it sends bytes of data and it's up to transmitter and receiver how to interpret these bytes. Am I right?

                        >> Yes

                        If so, then why, if I write something wrong in the UVC descriptors regarding the input data format (image size, bpp, etc.), does FX3's GPIF SM fail to even receive data?

                         

                        2.

                        My idea is to send the grayscale image in YUV format just by providing the pixel values as Y while leaving U and V as 0.

                        That implies that the data stream should look like Y0-0-Y1-0-Y2-0-Y3-0 instead of the usual color YUY2 stream Y0-U0-Y1-V0-Y2-U1-Y3-V1.

                        And that means that I have to send double the number of pixels (regardless of the number of bits per pixel).
                        But in that case FX3 thinks that the input data doesn't correspond to the descriptor information and fails to transmit.

                             E.g. my image is 800x480 8 bpp, so that is what I write in the UVC descriptors, but I actually provide 1600 bytes per line. That causes the DMA to fail: the UART output says "UVC: Completed 0 frames and 0 buffers", meaning there are not even PROD events from the GPIF.

                         

                        How can I solve this?

                         

                        Thanks,

                        Leonid

                        • 9. Re: AN75779: Which media format to use for grayscale 10 bpp image
                          RashiV_61

                          Hello Leonid,

                           

                          If so, then why, if I write something wrong in the UVC descriptors regarding the input data format (image size, bpp, etc.),

                          >> The descriptors and the probe control settings are for the UVC driver or the host application to know what kind of data it will be receiving.

                           

                          does FX3's GPIF SM fail to even receive data?

                          >> GPIF state machine samples the GPIF lines agnostic of the color format. It just samples the status of the GPIF lines and fills the buffer with that data.

                           

                          You don't need to change the resolution, but you need to set the bits-per-pixel field to 16 bits and set the GUID of the YUY2 format (refer to the attachment).

                          For example, the resolution from the sensor is 640*480 and RAW (10 bpp) is sent as YUY2 (16 bpp); you can keep the bpp as 8 bpp.

                          So the new frame size would be 640*480*2 bytes

                           

                          If you have configured the GPIF bus width to 32 bits, that means 2 pixels per PCLK. The frame size is not increased, but the data will be received more quickly. The only parameter that changes is the frame rate. You need to check the FV and LV timing from the sensor and then set the frame rate and related fields such as the min bit rate and max bit rate in the descriptors, as well as the probe control settings in the Probe Control structure (glProbeCtrl) in uvc.c, accordingly.
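
                          As a worked sketch of how these fields are typically derived for a YUY2 stream (the numbers assume 800x480 at 30 fps; the macro names are only illustrative, not the exact AN75779 values):

                          /* Illustrative UVC descriptor / probe-control values, 800x480 YUY2 @ 30 fps */
                          #define WIDTH       800u
                          #define HEIGHT      480u
                          #define FPS         30u

                          #define FRAME_SIZE  (WIDTH * HEIGHT * 2u)   /* dwMaxVideoFrameSize: 768000 bytes  */
                          #define FRAME_IVAL  (10000000u / FPS)       /* dwFrameInterval: 333333 (100 ns)   */
                          #define BIT_RATE    (FRAME_SIZE * 8u * FPS) /* dwMin/MaxBitRate: 184320000 bits/s */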

                           

                          For your reference Change Resolution in FX3 UVC Camera - KBA220269

                           

                          The reason for not getting PROD_EVENTs can be different (not related to the descriptors). Please confirm that you changed the LD_DATA_COUNT and LD_ADDR_COUNT values after changing the GPIF bus width. Please refer to this KBA for the same: Configuring Buffer Sizes in AN75779 UVC Firmware – KBA90744
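
                          For example (a sketch assuming the default 16 KB DMA buffer with 16 bytes reserved for the UVC header, i.e. the same calculation used later in this thread):

                          /* Counter limit = (DMA buffer size - header) / (bus width in bytes) - 1 */
                          #define DMA_BUF_SIZE  16384u  /* default AN75779 DMA buffer size   */
                          #define HEADER_SIZE   16u     /* bytes reserved for the UVC header */

                          #define COUNT_16BIT   ((DMA_BUF_SIZE - HEADER_SIZE) / 2u - 1u)  /* 8183 */
                          #define COUNT_32BIT   ((DMA_BUF_SIZE - HEADER_SIZE) / 4u - 1u)  /* 4091 */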

                           

                          If this doesn't work please share the debug prints.

                           

                          Regards,

                          Rashi

                           

                           

                          • 10. Re: AN75779: Which media format to use for grayscale 10 bpp image
                            LeGa_3963206

                            Hello, Rashi

                             

                            In my experiments I use VirtualDub and VLC on Windows and ffmpeg on Linux, with an FPGA as an 800x480 10 bpp test grayscale image source.

                            So far I have the following for 32-bit GPIF bus:

                            1. For 8 bpp my DV should be 800*8/8/4 = 200 clocks exactly, otherwise the DMA chokes. And I have to throw away two bits of every pixel. That means I can transfer only 800 bytes (= pixels) per line, while YUY2 supposes transferring Y-U-Y-V-..., which is 1600 values per line.
                            No wonder that both players give me a black rectangle. And ffmpeg says that the expected frame size is 768000 (800x480x2), while the received one is only 384000 (800x480).

                             

                            2. For 16 bpp my DV should be 800*16/8/4 = 400 clocks. Again I can't send 1600 values. In this case I can see the image, but it is a little bit green in some gray pixels, as I mentioned above. I played with various shifts of my source 10 bits within the 16 bits of the output pixel; it didn't help.

                             

                            3. For 32 bpp my DV should be 800*32/8/4 = 800 clocks. Here I can see the image, but it has all the colors except grayscale, regardless of the position of the data within the 32-bit pixel.

                             

                            I suppose that the main problem is that FX3 won't transfer double the number of pixels (not bytes).

                             

                            So my question is: how exactly can I transfer a 10 bpp grayscale image so that it remains grayscale?

                             

                            Thanks,

                            Leonid

                            • 11. Re: AN75779: Which media format to use for grayscale 10 bpp image
                              RashiV_61

                              Hello Leonid,

                               

                              For a UVC application, you need to pad the 10 bpp data with 6 extra bits (to make 16-bit samples).

                               

                              - When you send the RAW10 data as YUY2 to the host, the host application (VLC) will sample the pixels as it does for the YUY2 format, not as RAW10 (grayscale).

                               

                              -

                              1. For 8 bpp my DV should be 800*8/8/4 = 200 clocks exactly, otherwise the DMA chokes. And I have to throw away two bits of every pixel. That means I can transfer only 800 bytes (= pixels) per line, while YUY2 supposes transferring Y-U-Y-V-..., which is 1600 values per line.

                              No wonder that both players give me a black rectangle. And ffmpeg says that the expected frame size is 768000 (800x480x2), while the received one is only 384000 (800x480).

                               

                              >> You were not able to see the video because the input from the sensor was (800*480*1) bytes while the UVC driver was expecting (800*480*2) bytes.

                               

                              2. For 16 bpp my DV should be 800*16/8/4 = 400 clocks. Again I can't send 1600 values. In this case I can see the image, but it is a little bit green in some gray pixels, as I mentioned above. I played with various shifts of my source 10 bits within the 16 bits of the output pixel; it didn't help.

                               

                              >> This shows the greenish video and not grayscale because the pixels are sampled according to the YUY2 format.

                               

                              If you are able to see the greenish video, that means the streaming through the FX3 is successful, but there is a problem on the host side in displaying the data. So the host application should be one that is able to interpret the data as grayscale (separating out the 6 non-image bits).

                               

                              You can't directly view a grayscale image in a UVC host application (like VLC). To view the grayscale image you need a non-UVC application. You can use the Cypress driver (cyusb3) to grab the data and a custom application to display it (one which separates out the 6 padded bits and displays the image data only).
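
                              As a rough illustration (names are hypothetical, and this assumes each pixel arrives as a 16-bit word with the image data in bits [9:0]), the core of such a custom application would look something like:

                              #include <stdint.h>
                              #include <stddef.h>

                              /* Sketch: convert a grabbed frame of 16-bit samples (10 valid bits
                               * plus 6 padded bits each) into 8-bit grayscale for display. */
                              void frame_to_gray8(const uint16_t *samples, uint8_t *gray, size_t pixels)
                              {
                                  for (size_t i = 0; i < pixels; i++)
                                  {
                                      uint16_t p10 = samples[i] & 0x03FFu; /* drop the 6 padded bits */
                                      gray[i] = (uint8_t)(p10 >> 2);       /* scale 10-bit to 8-bit  */
                                  }
                              }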

                               

                              Or, if you want a UVC application, you need to build a custom host application to view the data that is streamed through the UVC driver.

                               

                              Regards,

                              Rashi

                               

                               

                               

                               

                               

                               

                              • 12. Re: AN75779: Which media format to use for grayscale 10 bpp image
                                LeGa_3963206

                                Hi, Rashi,

                                 

                                Thanks for explanation.

                                As we know, a grayscale image can definitely be sent through UVC (YUY2); in that case U and V will just be constants. According to the YUV standard, the Y component is the grayscale. As I wrote, I need to send Y0-0-Y1-0-Y2-0... and so on, which gives double the number of values regardless of bits per pixel.
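
                                For illustration, a minimal sketch of that packing on the transmitter side, assuming 8-bit Y values (note that in YUY2 the neutral chroma value is 0x80; a constant chroma of 0x00 renders with a green cast, which may explain the greenish images mentioned earlier):

                                #include <stdint.h>
                                #include <stddef.h>

                                /* Sketch: expand one line of 8-bit grayscale into a YUY2 line by
                                 * inserting a constant chroma byte after every Y sample. */
                                void gray8_to_yuy2(const uint8_t *gray, uint8_t *yuy2, size_t pixels)
                                {
                                    for (size_t i = 0; i < pixels; i++)
                                    {
                                        yuy2[2 * i]     = gray[i]; /* Y                                  */
                                        yuy2[2 * i + 1] = 0x80;    /* U or V (alternating), kept neutral */
                                    }
                                }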

                                 

                                I'm using an FPGA as the image source, so I can produce any image size in any format; the only problem is that the FX3 somehow doesn't allow sending a double-sized line.

                                 

                                 

                                So is there any way to make it do so?

                                 

                                Thanks,

                                Leonid

                                • 13. Re: AN75779: Which media format to use for grayscale 10 bpp image
                                  RashiV_61

                                  Hello Leonid,

                                   

                                  Can you set the bits per pixel to 8 and the GPIF bus width to 16 bits and then try streaming? The first 8 bits should be the grayscale data from the sensor and the second 8 bits should be zeros (GPIF pins pulled down).

                                  The USB descriptors and the probe control should be set for H_Active * V_Active * 2 bytes/pixel * fps, YUY2 format.

                                   

                                  The first 8 bits of the 16-bit YUY2 pixel are sampled as the Y component.

                                   

                                  Please let me know the results

                                   

                                  Regards,

                                  Rashi

                                  • 14. Re: AN75779: Which media format to use for grayscale 10 bpp image
                                    LeGa_3963206

                                    Hi, Rashi

                                     

                                    The USB descriptors and the probe control should be set for H_Active * V_Active * 2 bytes/pixel * fps, YUY2 format.

                                    What does this mean? The only relevant parameter in the probe control is the bpp.

                                     

                                    Here is what I changed according to your recommendation:

                                    1) GPIF designer: 16-bit bus, data/addr counters: (16*1024 - 16)/2 - 1 = 8183

                                    2) Descriptors/probe control: changed to 8bpp

                                    3) FPGA sends image as follows:

                                    (attached timing diagram: wavedrom (4).png)

                                     

                                     

                                    As result:

                                         - Debug UART: UVC: Completed 0 frames and 0 buffers

                                         - Windows (VirtualDub/VLC): blank screen

                                         - Linux (mplayer): blank screen with message Frame too small! (384000<768000) Wrong format ?

                                     

                                    Any ideas?

                                     

                                    Thanks,

                                    Leonid
