6 Replies Latest reply on Oct 4, 2019 4:42 AM by BoJo_4384951

    Create UVC class camera based on an_75779

    BoJo_4384951

      Hello FX3 gurus,

       

      We want to take uncompressed video streams from Sony's IMX253 CMOS sensor and transfer them to the PC with the help of an FX3 module. The best we can get from the IMX253 sensor is an image 4096 x 3000 pixels in size. We have an FPGA device as a bridge between the IMX253 and the FX3 module. The FPGA is used to apply demosaicing, noise reduction, and color/gamma correction algorithms. What we have at the output of all those blocks are pixels in RGB space with 12 bits of resolution per pixel (or 10 bits per pixel if we don't have enough FPGA resources). 

       

      I found that the AN75779 application note could be the perfect place to start. Given that we want to get the maximum out of the FX3 module, our data bus is 32 bits wide. Consequently, I created a new project in GPIF II Designer, following the AN75779 document, in order to generate an appropriate cyfxgpif2config.h header file. Basically, I changed the width of the data bus and the limit value of the LD_ADDR_COUNT and LD_DATA_COUNT counters to 4091.

       

      Here is what I did on the FX3 firmware side (SDK 1.3.4 is used, by the way):

       

      In uvc.c file:

      io_cfg.isDQ32Bit    = CyTrue;

       

      I also changed the bit depth, image size, frame size, and frame rate in the SuperSpeed configuration descriptor in the cyfxuvcdscr.c file:

          0x20,                       /* Number of bits per pixel - 32 */
          0x00, 0x10,                 /* Width in pixel - 4096*/
          0xB8, 0x0B,                 /* Height in pixel - 3000*/
          0x00,0x20,0xBC,0xBE,        /* Max bit rate bits/s. - 3.2Gbps */
          0x5B, 0xCC, 0x15, 0x00,     /* 7fps */

       

      I did not touch the Min bit rate bits/s field, and I left the streaming encoding format as YUY2 for the moment.

       

      Is there anything else we need to change? I did not change the descriptor fields of the High Speed configuration descriptor because we are mainly interested in USB 3.0 transfers.

       

      We would like to use some open-source video players (e.g. VLC, VirtualDub...) to play our streams on the PC side. Consequently, we should send them one of the streaming encoding formats they support. It would be most appropriate for us to send uncompressed streams in RGB format, but I've read that sending the stream in RGB format is not supported by those open-source video players. Can you confirm this?

       

      Is there any other streaming encoding format in addition to YUY2 that is supported by open-source video players? How about YCbCr?

       

      Thank you very much for your time and effort. It is really appreciated.

       

      Sincerely,

      Bojan.

        • 1. Re: Create UVC class camera based on an_75779
          RashiV_61

          Hello,

           

          - The changes made to the LD_ADDR_COUNT and LD_DATA_COUNT counters (for the 32-bit bus width) and to the uvc.c file are correct.

          - The changes done to the descriptors
             

                  No. of bits per pixel: should match the input. In your case it's either 10 bits or 12 bits.

                  Why did you change the bits/pixel to 32 bits?

           

                  0x00, 0x10,                 /* Width in pixel */                                  // 4096
                  0xB8, 0x0B,                 /* Height in pixel */                                 // 3000
                  0xXX, 0xXX, 0xXX, 0xXX,     /* Min bit rate bits/s. */                            // 4096 * 3000 * 7 * bits/pixel
                  0xXX, 0xXX, 0xXX, 0xXX,     /* Max bit rate bits/s. */                            // 4096 * 3000 * 7 * bits/pixel
                  0x00, 0xA4, 0x1F, 0x00,     /* Maximum video or still frame size in bytes (Deprecated) */  // 4096 * 3000 * bytes/pixel
                  0x5B, 0xCC, 0x15, 0x00,     /* 7 fps */

           

          - Please let me know the exact bits/pixel that you will stream to FX3. If you keep the encoding format as YUY2, then the bits/pixel has to be 16.

           

          - You also need to make changes to the probe control in uvc.c (fields: fps and max video frame size). If the DMA buffer size is changed, only the "No. of bytes device can rx in single payload = 16 KB" field needs to be changed accordingly.

           

          /* UVC Probe Control Settings for a USB 3.0 connection. */

          uint8_t glProbeCtrl[CY_FX_UVC_MAX_PROBE_SETTING]

           

          You can play YUY2 (which can be referred to as YCbCr 4:2:2) in e-CAM or VLC.

           

          VLC does not stream RGB data. It can stream YUY2 or MPEG formats. Similarly for E-con Systems' eCAM.

           

           

          Regards,

          Rashi

          • 2. Re: Create UVC class camera based on an_75779
            BoJo_4384951

            Hello RashiV_61,

             

            Thanks for your reply, and sorry for the confusion in my question. What we actually have after the demosaicing block is 12 bits for the R component of the pixel, 12 bits for the G component, and 12 bits for the B component. If we don't have enough FPGA resources, we will pass only 10 bits of each of the R, G, and B components to the correction blocks. Consequently, after the correction blocks we might have 36 or 30 bits per pixel in RGB format.

             

            We would need to convert that into some 32-bit pixel representation format that is appropriate to be played by open-source video players. Do you have any recommendation? Our idea is to use YCbCr format. Is that OK?

             

            According to your suggestions, I changed the following lines within glProbeCtrl[] settings:

            0x5B, 0xCC, 0x15, 0x00,/* Desired frame interval in the unit of 100ns: 7 fps */
            0x00, 0x00, 0xEE, 0x02,/* Max video frame size in bytes - 4096 x 3000 x 4Bytes per pixel*/

             

            We would not change the size of the DMA buffers because, to the best of my understanding, it is perfectly tailored to perform 16-burst transfers of 1 KB packets over the bulk endpoint and thus make the most of USB 3.0. Am I wrong? Would it be better for us to change the size of the DMA buffers?

             

            Following your suggestions, I put the following for the Min/Max bit rate in bits/s:

            0x00,0x00,0x10,0xA4,      /* Min bit rate bits/s. - 4096 x 3000 x 7fps x 32-bits per pixel */
            0x00,0x00,0x10,0xA4,      /* Max bit rate bits/s. - 4096 x 3000 x 7fps x 32-bits per pixel */

             

            Thanks once again for your support.

            Sincerely,

            Bojan.

            • 3. Re: Create UVC class camera based on an_75779
              RashiV_61

              Hello,

               

              Apologies for the late reply.

              As your requirement is a UVC application, you need to use a color format that is supported by the UVC driver.

               

              - UVC 1.5 supports 4 uncompressed formats (YUY2, NV12, M420, I420). So it is recommended to convert the RGB data to one of these color formats (on the FPGA). As you have mentioned YCbCr, you can use the YUY2 format; see Recommended 8-Bit YUV Formats for Video Rendering - Windows applications | Microsoft Docs. YUY2 uses 16 bits/pixel.

               

              - If this conversion is done (RGB -> YUY2), changes need to be made in the descriptors and the probe control structure, as the bits/pixel would change from 32 to 16.

               

              - The GUID mentioned in the descriptors indicates to the host the color format of the data. So the GUID should be that of the YUY2 color format.

               

              - VLC media player is open source and supports YUY2 format.

               

              - Please confirm that the output signals from the FPGA would be Frame_Valid, Line_Valid, data lines, and PCLK.

               

              Regards,

              Rashi

              • 4. Re: Create UVC class camera based on an_75779
                BoJo_4384951

                Hi, RashiV_61,

                 

                Thanks for your reply.

                 

                I am not an expert in video processing so I might be wrong, but it seems to me that if we have 36 or 30 bits per pixel after the demosaicing, noise reduction, and color/gamma correction blocks and we transfer only 8 bits per pixel, or even 16 bits per pixel in YUY2 format, we will lose image quality. Moreover, it is important to us not only to display video on the screen but also to process images on the host (PC) side. Consequently, we should find a way to transfer as much information about each pixel as we can.

                 

                You say UVC 1.5 supports only 4 uncompressed formats (YUY2, NV12, M420, I420). However, in Appendix C of the UVC 1.5 class specification from August 2012, page 164, I found the following:

                 

                "C.1. Supported video and still image formats
                This specification is designed to be format-agnostic, and will support any present or future video
                or still image format. The video and still image formats supported by the device are reported to
                the host software via Format descriptors (see section 3.9.2.3, "Payload Format Descriptors").
                "

                 

                It might be that open-source UVC-class players can only play videos in the above 4 uncompressed formats. However, if we develop an appropriate UVC driver on the host side, we should be able to transfer and play any video format we want. Am I right in this conclusion?

                 

                For example, we might use the AYUV 4:4:4 format with 32 bits per pixel and no downsampling of the chroma channels:

                Recommended 8-Bit YUV Formats for Video Rendering - Windows applications | Microsoft Docs

                 

                - Yes, our FPGA device will provide PCLK, Data Lines, Frame_Valid, and Line_Valid lines.

                 

                Sincerely,

                Bojan.

                • 5. Re: Create UVC class camera based on an_75779
                  RashiV_61

                  Hello,

                   

                  If a host application really sticks to UVC, then there are only select formats that are supported. The UVC 1.5 spec supports 4 (YUY2, NV12, M420, I420). However, if the application is able to implement a different color format, UVC can still be used as a transport medium.

                   

                  If the UVC descriptor reports a bit depth of 32 bits instead of 16 bits per pixel, the driver will expect a total of H_Active * V_Active * 4 bytes. This way, an image in AYUV color format can be transferred. But on the host side, the host application will have to read the data as AYUV and not YUY2.

                   

                  Regards,

                  Rashi

                   

                  • 6. Re: Create UVC class camera based on an_75779
                    BoJo_4384951

                    Thanks, RashiV_61!

                     

                    It was important to me to confirm that we can use UVC firmware on FX3 side to transfer video data over USB 3.0.

                     

                    Thanks for your time and effort in clarifying things and helping me.

                     

                    Sincerely,

                    Bojan.