I have a similar question, so I will jump on this thread. I am making a device which will capture audio and video at high quality and stream them to the PC with as little latency as possible. For video, the way is clear (with many examples, including UVC).
However, for audio, and for simultaneous audio/video, the path is much less clear to me. I can choose from various video standards, but my audio source is I2S, and I see that the FX3's I2S interface only supports transmission, not reception. I am willing to reduce my video data to 24 bits, so there should be enough pins left over. Any help or examples would be wonderful.
How about a custom protocol with 24-bit video in bits [23:0] and the top byte [31:24] carrying audio? Or insert an audio sample every few lines of video?
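To make the first idea concrete, here is a minimal host-side sketch of that packing scheme. It assumes (my assumption, not anything FX3-specific) that each 32-bit word off the bus carries a 24-bit video sample in bits [23:0] and one audio byte in bits [31:24]; the host splits the incoming buffer back into separate video and audio streams.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical packing: bits [23:0] = 24-bit video sample,
 * bits [31:24] = one audio byte riding along in the top byte. */
static inline uint32_t demux_word(uint32_t word, uint8_t *audio_byte)
{
    *audio_byte = (uint8_t)(word >> 24);   /* top byte: audio */
    return word & 0x00FFFFFFu;             /* low 24 bits: video */
}

/* Split a buffer of packed words into separate video and audio buffers. */
static void demux_buffer(const uint32_t *in, size_t n,
                         uint32_t *video, uint8_t *audio)
{
    for (size_t i = 0; i < n; i++)
        video[i] = demux_word(in[i], &audio[i]);
}
```

The device-side firmware would do the mirror-image packing before committing the DMA buffer; the audio byte rate then comes out to one byte per pixel clock, so in practice you would only populate the top byte every Nth word to match your actual I2S sample rate.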
Good to know others are struggling with the same issue.
Yes, an I2S receiver needs to be implemented separately in another chip; an FPGA would work well for that.
I would suggest sending an audio frame after every video frame, or at a fixed interval, so the full 32-bit bus stays available for video.
Also, if the frames are neatly separated, the video can be sent using UVC while the audio can be sent using UAC.
Let me know what you think about it.
Currently I'm trying to insert an audio frame after every few video lines. Hope that works for me.
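For anyone trying the same line-interleave approach, here is a rough host-side de-interleave sketch. The line width and the video-to-audio ratio are assumptions for illustration; the real values would come from your GPIF II state machine design.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Assumed interleave: after every VIDEO_LINES_PER_AUDIO video lines,
 * the device inserts one audio line of the same width. Both constants
 * are placeholders, not FX3 requirements. */
#define LINE_BYTES            16
#define VIDEO_LINES_PER_AUDIO 4

/* Strips audio lines out of an interleaved buffer.
 * Returns the number of video lines written to 'video';
 * the stripped audio lines are appended to 'audio'. */
static size_t deinterleave(const uint8_t *in, size_t total_lines,
                           uint8_t *video, uint8_t *audio)
{
    size_t v = 0, a = 0;
    for (size_t i = 0; i < total_lines; i++) {
        const uint8_t *line = in + i * LINE_BYTES;
        /* every (VIDEO_LINES_PER_AUDIO + 1)-th line is audio */
        if ((i + 1) % (VIDEO_LINES_PER_AUDIO + 1) == 0)
            memcpy(audio + (a++) * LINE_BYTES, line, LINE_BYTES);
        else
            memcpy(video + (v++) * LINE_BYTES, line, LINE_BYTES);
    }
    return v;
}
```

The main caveat is that the UVC frame size reported in the descriptors has to account for the extra audio lines (or the audio has to be stripped before the UVC payload headers are applied), otherwise the host driver will reject the frames.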
To solve my issue of streaming both audio and video, would it help if I basically combined the UAC and UVC examples with an appropriate GPIF II state machine?