This parameter sets how the averaging mode operates. If the accumulate option is selected,
each ADC result is added to the sum and allowed to grow until the sum would exceed a 16-bit value, at which point it is truncated. If the Fixed Resolution mode is selected, the LSb is
truncated so that the value does not grow beyond the maximum value for the given resolution.
Can someone give me an example of these 2 options?
OK, just to make some initial assumptions: the alternate ADC resolution is configured to 10 bits (so we have a range of 0..1023), and the number of averaged samples is 256. Also, for the sake of easy calculation, assume that all samples read 512.
Then what happens is (when a channel is configured to use averaging): 256 samples are taken, and their values are added together.
In 'accumulate mode' this will be 256 times the value 512. This means that at the 128th sample the sum overflows to 0 (128*512=65536), and the same happens again at the 256th sample (so the final result is 0).
In 'fixed resolution' mode, the 8 LSBs of each sample are truncated, and only the two most significant bits are added (which for 512 give the value 2). So we get a result of 256*2=512.
Now assume we average only 64 samples. In the 'accumulate' case we get a result of 64*512=32768, which we would need to divide by 64 again. In 'fixed resolution' mode, the 6 LSBs are truncated, so each sample contributes 512>>6 = 8 and we get a result of 64*8=512.
Now the results will be a little bit different when the samples all have different values. Then the 'accumulate' mode with subsequent division might deliver more accurate results, since the LSBs are taken into account. But one needs to be careful not to run into an overflow situation (so the resolution and the number of samples should be checked).
If you go here -
there is an Architecture TRM manual you can download, and an extensive chapter
in the analog section on the SAR including its averaging modes and the registers
that control them.
Maybe I am not reading this correctly but I think there is not any
situation where overflow can occur from averaging. From TRM -
The second one is more like
sum over n samples of (sample / n)
where the /n part is done by right-shifting each value before the addition. So this loses some precision.
Yes, the averaging math....
I have learned a simplification when calculating a gliding average that works astonishingly well:
As shown in Dana's links, the next average is calculated by subtracting the oldest element and adding the newest. This means you have to maintain an array of k element values when calculating a gliding average over k samples. A space- and code-saving approximation is to use the last calculated average as the element to subtract. The whole algorithm then shortens to the formula I gave above, without the need to maintain an element array.
For low k in Bob's design the error rises. You can establish
that by taking the limit of the function as k approaches some
value. Choice of algorithm all depends on accuracy, latency,
and of course HW & SW cycles needed.
The traditional running average of a set of samples has its
own issues, essentially the convolution of a window onto the
dataset. You could always run some tests with a noise source
generated by PSOC.