Oversampling is used when the signal does not carry clock information. In the case of Manchester encoding, the data signal also carries the clock information, so oversampling is not used. A transition on the data signal is taken as the cue for decoding it.
Thanks so much for replying.
Is there any point, then, in setting up the sampling timer higher than x4 the bit rate?
If I understand the setup, it is based on center sampling - taking the sample as far away from transitions as possible. So in this case it's at 1/4 or 3/4 of the bit period. (This should handle jitter around the edges?)
What's the advantage of going x16 the bit rate (as the AN seems to suggest), while still taking only one sample at 3/4 bit time?
(For tick granularity?)
I am referring you to page 4 of this Application Note.
"VC3 (counter clock frequency) counts three-fourths of bit time; this requires a VC3 clock at least four times the bit rate. However, tolerance must be added to cope with intrinsic precision and jitter of the transmitter and receiver.
Selecting an x16 oversampling rate gives more than 10 percent frequency tolerance on each side and is the retained value for the design."
From your first reply on my question:
In case of Manchester encoding, the data signal contains clock information also. Hence, oversampling is not used.
I guess I'm not understanding, then, what the AN means in the quote from your last post, as the two statements look conflicting.
Selecting an x16 oversampling rate gives more than 10 percent frequency tolerance
I sure would like to understand this better. How do you achieve this x16 oversampling while taking only one sample? (As in the AN's implementation.)
(If you'd rather redirect me to read about this somewhere else, that's OK too, as long as it contains the needed info.)
I agree that "oversampling" may not be the correct word to use here. "Selecting an x16 counter clock" would have been more accurate. I hope that clears up the confusion.