We are using the Cypress FX3 UAC firmware example as a starting point for streaming audio data to an Isochronous endpoint as a standard USB Audio Class device. The example reads 16-bit PCM audio data from SPI flash. As a test we've modified it to instead obtain data buffers via CyU3PDmaChannelGetBuffer() and fill them with fixed patterns in memory (ultimately the actual audio data will be delivered via the Slave FIFO interface). We can successfully stream the data to the host PC and record it in Audacity. However, the recorded data doesn't match what we're expecting.
For example, if we use a signed 16-bit audio sample fixed at a value of 16384, we expect the recorded floating-point counterpart to have a corresponding value of 0.5 (i.e. 16384 / 2^15). The data below does not produce a recorded value of 0.5:
*(dmaBuffer.buffer+8) = 0x00;
*(dmaBuffer.buffer+9) = 0x40; // a sample of 0x4000 (16384)
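For reference, the mapping we're expecting can be sketched as below. These are hypothetical host-side helpers (not FX3 code) that reassemble the little-endian byte pair written above and apply the standard scaling for 16-bit signed linear PCM, a divide by 2^15:

```c
#include <assert.h>
#include <stdint.h>

/* Reassemble a sample from the little-endian byte pair written into the
   DMA buffer above. */
static int16_t le16_sample(uint8_t lo, uint8_t hi)
{
    return (int16_t)(lo | (hi << 8));
}

/* Scale the way a host recorder treats 16-bit signed linear PCM:
   divide by 2^15, so 0x4000 (16384) maps to 0.5. */
static float pcm16_to_float(int16_t s)
{
    return s / 32768.0f;
}
```

Under this mapping the byte pair (0x00, 0x40) should come back as exactly 0.5 in the recording, which is what we do not observe.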
However, if we right-shift the data by 5 before committing it, the recorded value is essentially our expected 0.5. This seems to hold for other values between +32767 and -32768 as well.
*(dmaBuffer.buffer+12) = 0x00;
*(dmaBuffer.buffer+13) = 0x02; // a sample of 0x0200 (512)
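Numerically, the shift we observed is equivalent to an integer divide by 32, and it throws away the low 5 bits of each sample. A small sanity check on the arithmetic (the helper name is hypothetical, just illustrating the workaround):

```c
#include <assert.h>
#include <stdint.h>

/* The right-shift by 5 that makes the recordings line up: an integer
   divide by 32 that discards the low 5 bits of each sample. */
static int16_t shift_committed_sample(int16_t s)
{
    return (int16_t)(s >> 5);  /* 0x4000 -> 0x0200, i.e. 16384 -> 512 */
}
```

Because the low 5 bits are discarded, 32 consecutive input values all collapse to the same shifted output, which is the precision loss mentioned below.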
We can see left/right channel pairs in the recording that correspond to our data buffers, but we don't yet understand why we need to shift the data to get the expected result (with loss of precision in the shifted-out lower bits). Is the data expected to be 16-bit signed linear PCM? We didn't change the USB audio descriptors from the example. Any ideas as to what might be occurring here? Is there somewhere we can locate the SPI flash audio data referenced in the UAC example? Perhaps there is something basic we're missing. Any suggestions you have would be appreciated.
Can you please probe your SPI data lines to confirm that this is how the data arrives in the first place? You can also verify this using the UART debug prints.
- Madhu Sudhan
Thanks for your reply. We aren't using SPI flash as the source for audio data as is done in the Cypress UAC example. We've modified the example slightly so that the audio data does NOT get sourced from flash memory.
In the Cypress UAC example, CyFxUacSpiTransfer() is used within the UAC application thread to retrieve the audio data, which is then copied into a buffer obtained via CyU3PDmaChannelGetBuffer(). Once the audio data has been copied, it is committed via CyU3PDmaChannelCommitBuffer().
In our modification, however, instead of retrieving audio data from SPI flash and copying it into the buffer (as the example app does), we simply fill the buffer with fixed values to represent the audio data. The idea was to start with fixed buffers representing 48 samples of audio data, stream them to an Isochronous endpoint, and record the data on the host PC. For the most part this works as expected. [Ultimately we're hoping to send the audio data over the FX3 Slave FIFO.]
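The fill step we describe looks roughly like the sketch below. The helper is plain C and mirrors the explicit byte writes shown earlier; the interleaved L/R layout, the sample value, and the commit length are assumptions for illustration, and the channel-handle name in the comment is illustrative rather than taken from our code:

```c
#include <assert.h>
#include <stdint.h>

#define UAC_SAMPLE_PAIRS  48   /* 48 stereo sample pairs per buffer, as in our test */

/* Fill a buffer with one fixed 16-bit signed sample on both channels,
   little-endian, interleaved L/R -- the same pattern as the explicit
   byte writes shown earlier. */
static void fill_fixed_pattern(uint8_t *buf, int16_t sample)
{
    for (int i = 0; i < UAC_SAMPLE_PAIRS; i++) {
        uint8_t lo = (uint8_t)(sample & 0xFF);
        uint8_t hi = (uint8_t)((sample >> 8) & 0xFF);
        buf[4 * i + 0] = lo;  /* left, low byte   */
        buf[4 * i + 1] = hi;  /* left, high byte  */
        buf[4 * i + 2] = lo;  /* right, low byte  */
        buf[4 * i + 3] = hi;  /* right, high byte */
    }
}

/* In the firmware thread this sits between the same two SDK calls the
 * UAC example uses (handle name illustrative):
 *
 *   CyU3PDmaChannelGetBuffer (&streamHandle, &dmaBuffer, CYU3P_WAIT_FOREVER);
 *   fill_fixed_pattern (dmaBuffer.buffer, 0x4000);
 *   CyU3PDmaChannelCommitBuffer (&streamHandle, 192, 0);
 *
 * (192 bytes = 48 stereo 16-bit sample pairs.)
 */
```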
The questionable part, though, is the data itself. The recorded data on the host PC didn't seem to correspond to the values in our buffers. For example, we expected that a value of 16384 (0x4000) at the FX3 would correspond to a floating-point value of 0.5 on the host PC, but it didn't. We found that if we first right-shifted the data by 5 before committing it, the recorded values looked appropriate; for example, a value of 512 then produced a corresponding recorded value of 0.5 on the host PC. This doesn't seem right, but perhaps we're missing something obvious. We'd definitely appreciate your help in understanding what we're missing here...