9 Replies Latest reply on Sep 24, 2018 10:27 AM by shams_2888731

    Audio Streaming PSoC 4

    shams_2888731

      Dear msur,

       

      Following up from the discussion at Audio Streaming and Signal Processing using DMA, I ran into the same problem on a custom board using a PSoC 4 chip (CY8C4248LQI-BL583) while trying to stream audio from an external codec. However, the suggestion that worked for PSoC 6 does not work for PSoC 4, and I suspect I am not setting up the DMA properly for this case, since for the given sampling rate and sample size I should have at least about 8 ms for signal processing. I have attached part of the project, without the code for the codec I am using. As before, I added a delay to imitate some signal processing, but adding this introduces echoes and corrupts the audio. Any help would be appreciated.

       

      Kind regards,

       

      Shams

        • 1. Re: Audio Streaming PSoC 4
          anks

          You can check this project, which is available with the CY8CKIT-046 kit's firmware. It is PSoC 4200L based and may be helpful.

          • 2. Re: Audio Streaming PSoC 4
            msur

            Hello shams_2888731,

             

            Sorry for the delay in responding; I was away and then had login issues for the past two days.

             

            To your query, note that the PSoC 4 I2S component has only one FIFO element (i.e., FIFO level 1), whereas PSoC 6 has a large buffer (255 bytes). When you stop your RX and start it again, you are literally cutting out 2 ms worth of audio roughly every 10 ms (if 8 ms is the 64-sample period, the stream becomes 8 ms data > 2 ms delay/no data > 8 ms data ...). That is about 20% audio loss. I am not sure how that translates to echo, but you should definitely hear clipped audio because of it.

             

            For better output, you should implement a ping-pong buffer configuration (two buffers), as I explained earlier, for capturing the audio. Use two descriptors and chain them together. Descriptor 0 transfers data from I2S to the first SRAM buffer and generates an interrupt on completion (64 bytes). At that point control chains to descriptor 1, which transfers I2S data to the second buffer. During the descriptor 1 transfer, you process the data transferred by descriptor 0 (make sure you complete within the time). Note that both descriptors trigger the same interrupt, so read the descriptor status (Invalid ==> just completed) before setting the appropriate flag. Invalidate a descriptor after its transfer is complete and validate it again after processing the data it transferred.

             

            Let me know if you have any queries about this implementation. The example project that Ankita shared demonstrates it.

             

            Regards,

            Meenakshi Sundaram R

            • 3. Re: Audio Streaming PSoC 4
              shams_2888731

              Dear Meenakshi Sundaram R,

               

              No worries. Thank you for the reply.

               

              Yes, I am going through the project, but it has a lot going on and I am finding it hard to identify the parts that are useful to me. I see that in the AudioIn.c file, the ping-pong buffer is configured in the ProcessAudioIn() function.

               

              1- First, how does the code call this function ProcessAudioIn()? It seems to be a callback function, but I am not sure what condition triggers the call.

              2- Do I also need two more DMAs, or are 2 descriptors enough for both Rx and Tx?

              3- Once descriptor 1 is done, it will generate an interrupt and I do the processing again, as I did when descriptor 0's completion triggered the interrupt. Is that correct?

              4- Also, do I need 2 conditions like in ProcessAudioIn() for the cases when to use either one or two descriptors?

               

              Sorry, I have too many questions, but for now I will settle for these to get me started. Thanks for the help.

               

              Kind regards,

               

              Shams

              • 4. Re: Audio Streaming PSoC 4
                shams_2888731

                One more thing I wanted to confirm: in order to know which descriptor has completed, and hence which buffer to use for processing, will I have to check the status of both descriptors, using something like this:

                 

                if ((RxDMA_GetDescriptorStatus(0) & CYDMA_RESPONSE) == CYDMA_DONE) {
                    /* descriptor 0 finished: buffer 1 holds the new samples */
                    inBuffer_SetSrcAddress(0, (void *) streamingSamples1);
                }
                else if ((RxDMA_GetDescriptorStatus(1) & CYDMA_RESPONSE) == CYDMA_DONE) {
                    /* descriptor 1 finished: buffer 2 holds the new samples */
                    inBuffer_SetSrcAddress(0, (void *) streamingSamples2);
                }

                • 5. Re: Audio Streaming PSoC 4
                  msur

                  Actually, the ProcessAudioIn() API is for the USB IN endpoint, which transfers data from the I2S Rx buffer to the USB IN EP buffer. The USB audio path is slightly complicated and might be confusing for your use case, as it involves a lot of synchronization defined by the USB audio spec that is not required in your case.

                   

                  You should check the RxDMA implementation, the isr_RxDMADone interrupt implementation, and its handler for your case. Here is how it works:

                  1. RxDMA is configured to transfer one sample at a time from the I2S Rx; it is set up to transfer 1152 bytes (circular buffer) and keeps transferring as long as tr_in (Rx_DMA_tr) is high (i.e., bytes are available in the I2S Rx FIFO).
                  2. RxDMA tr_out (RxCount) is used to count the number of samples transferred by the DMA.
                  3. When 144 bytes (the IN_TRANS_SIZE macro) of data have been read, RxDMADone is triggered (by the ByteCounter_Rx counter). This interrupt maintains inLevel along with the amount of data transferred out on the USB IN EP (calculated as the "removed" variable in the RxDMADone_interrupt ISR).
                  4. The I2S Rx path is enabled when the USB Audio IN path is enabled, and it stays active until either USB is disconnected or inLevel overflows (checked in the ISR), meaning USB was not able to take the data out as fast as I2S Rx filled it in.

                   

                  In your case, you can use the ProcessAudioIn API implementation for the Tx path, but with much less complexity. Use inBuffIndex as the read pointer into the buffer for transferring to I2S Tx. Transfer the number of bytes you want using the TxDMA, update inBuffIndex, keep a count of the bytes transferred, and subtract it from inLevel after the transfer completes. ProcessAudioIn can be called every time you have some 'x' bytes to transfer to I2S Tx; for example, it can be called from the RxDMADone_interrupt ISR if you want to send Tx data every 144 bytes. But make sure the Tx data is transferred completely before initiating the next DMA transfer for Tx. Or follow a similar approach for Tx too: use two 144-byte buffers and ping-pong between them.

                   

                  Now to your questions -

                  1. ProcessAudioIn - See above

                  2. You will need two channels, one for Tx and one for Rx. The example uses ByteCounter_Rx to implement the two-descriptor scheme I described in my previous post; that is, instead of chaining descriptors, it generates a counter-based interrupt after every 'x' bytes. You can use either approach, whichever you are comfortable with, and do the same for Tx.

                  3. You can either do that or use the ByteCounter_Tx method, which splits the same descriptor into multiple packets by generating an interrupt based on the number of bytes transferred.

                  4. Again, that condition is not required in your case; in USB Audio the buffer sizes differ based on the selected sample rate and size, and I doubt you need that. So you don't need the two-descriptor handling done in that API. You can still try the two-descriptor approach from my previous post, though.

                   

                  I know it might be a bit overwhelming, but you can always ask for clarifications here and we will be happy to help.

                   

                  Regards,

                  Meenakshi Sundaram R

                  • 6. Re: Audio Streaming PSoC 4
                    shams_2888731

                    Okay, I have implemented the two-descriptor solution you suggested and attached that part of the project. It works much better now, but if I add a delay of anything more than 3 ms, I don't hear anything. This is probably because GetDescriptorStatus() returns CYDMA_INVALID_DESCR, so the TxDMA source is not set. However, I am not sure why this happens, since I should have about 7-8 ms of maximum delay available. Also, do I need to call RxDMA_ChEnable() after RxDMA_ValidateDescriptor()? For now I have only added the two-descriptor solution on the Rx end and kept everything else unchanged.

                    • 7. Re: Audio Streaming PSoC 4
                      shams_2888731

                      Actually, I think I found out why I only had 3 ms of room for signal processing: my codec transfers 2 bytes per sample at the given sample rate, but the I2S can only transfer 1 byte at a time, hence the discrepancy. Still, I would appreciate it if you could check the above project and let me know whether that is the recommended way to use the two descriptors.

                       

                      A few more questions:

                      1- Is there a need for me to add the ping-pong to the Tx end as well?

                      2- If, for example, I want to double the signal processing time from ~8 ms to ~16 ms, is there any way other than increasing the number of samples or decreasing the sampling rate?

                      3- My codec is little-endian while the I2S is big-endian. In order to make sense of the data and store it as int16, is there a DMA setting that combines the bytes into a halfword after byte swapping is enabled at Rx? (I know I can combine each pair of received bytes and store them in an int16 array in code.)

                      • 8. Re: Audio Streaming PSoC 4
                        msur

                        To your questions -

                        1. You can do that; it lets you transfer one Tx buffer to I2S while the other Tx buffer is filled with the processed audio data.

                        2. The best option would be to increase the buffer size/number of samples, if you have the RAM available for the buffer.

                        3. I believe the I2S component has a "byte swap" option for exactly this purpose. Why don't you try that?

                         

                         

                        And yes, the descriptor usage looks fine in the project.

                         

                        Regards,

                        Meenakshi Sundaram R

                        • 9. Re: Audio Streaming PSoC 4
                          shams_2888731

                          Thank you very much for all the help.