Hi,
I'm trying to run a simple experiment using the Delta-Sigma ADC on a CY8CKIT-059. For some reason, the ADC measures a constant offset of about 18mV.
Even when I short the ADC input to GND on the board, I still get an 18mV offset. I tried the same experiment with a SAR ADC and it behaves correctly: 0.0V at the input produces ~0 counts at the output. What am I missing here?
Thanks in advance, Boris.
ADC Configuration:
- Single ended
- Single sample
- 14-bit resolution
- Conversion rate: 1000 SPS
- Input range: Vssa to 2.048V
- Buffer mode: Bypass
- Reference: Internal 1.024 V
what is return value of "result"?
result = ~139
result2 = 17
Can you check what happens when you try to enable the input buffer and set DelSig_ADC to 16-bit?
That's interesting. At 16-bit resolution I get 0.0 mV offset. Why does it work correctly for 16 bits but not 14?
It looks to me like the offset correction for the decimator is being calculated incorrectly for certain resolutions. For 14 bits, if you add the following lines after ADC_Start() they will fix the problem, but this is just a workaround.
ADC_DEC_OCOR_REG = 0x44;
ADC_DEC_OCORM_REG = 0x1E;
If you use the Vss to Vref range, everything is fine as well, because the offset correction register will always be zero. The way the Vss to 2*Vref (single-ended) range works is that it connects the minus input to the reference voltage. To make the numbers come out correctly, 1/2 * full_range * n is added in the decimator as an offset, where "n" compensates for the decimator gain. If this weren't done, the ADC would read -1.024V at Vss, 0V at Vref, and 1.024V at 2*Vref. By adding 1.024V to each of these inputs you get 0V (0 counts), 1.024V (8192 counts), and 2.048V (16384 counts) for the inputs Vss, Vref, and 2*Vref respectively, assuming Vref is 1.024V.
I will post a bug report on this issue immediately. I found the same problem for 12, 13, and 15 bits as well.
Mark
Thank you for your explanation, and yes, it did correct the offset. I assume those register values only work for 14 bits and that different values would be needed if I change the resolution to 12 or 13. Is there a formula that would give me the correct register values based on the resolution, or do I have to determine them experimentally? How did you calculate the values of those registers for 14-bit resolution?
Thanks,
Boris
The input cap-ratio attenuates the input to compensate for the non-linearities that occur in the modulator near the ends of the input range. The decimator then compensates for the attenuation. This ratio is also used to adjust the offset value, the "n" I mentioned above. I want to look into this issue a bit more to make sure I understand what is going on with the calculated offset value before I post an overall fix.
Mark