Inside an ADC there is a quantizer: when the sampled signal passes through it, the signal values are discretized. The number of discrete levels into which the signal is divided is called the resolution of the quantizer. I read that if we oversample the input signal, pass it through the quantizer, and then pass the output through a low-pass filter called a decimation filter, the signal-to-quantization-noise ratio (SQNR) of the quantizer improves, resulting in better resolution.
How is this possible? It is true that the SQNR improves, but how does that affect the resolution of the quantizer? Isn't the number of levels inside the quantizer fixed by its hardware, and thus the resolution of the quantizer, i.e., the number of discrete levels into which it breaks the input, also fixed?
Answer
There are two aspects to how this works. First, since the signal is oversampled, there is a great deal of correlation between neighbouring samples that the low-pass filter can take advantage of. The quantization noise, on the other hand, is uncorrelated from sample to sample (assuming it behaves like white noise), so when the filter averages samples together the signal adds coherently while the noise partially cancels itself out.
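Here is a minimal numerical sketch of that argument in Python/NumPy. Everything in it, from the ideal mid-tread quantizer to the boxcar "decimation filter" and the half-LSB dither, is an assumption chosen for illustration, not a model of a real converter:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits):
    """Ideal uniform mid-tread quantizer over [-1, 1)."""
    step = 2.0 / 2**bits
    return np.clip(np.round(x / step) * step, -1.0, 1.0 - step)

def sqnr_db(sig, quant):
    noise = quant - sig
    return 10 * np.log10(np.sum(sig**2) / np.sum(noise**2))

osr = 16                                  # oversampling ratio
n_out = 4096                              # samples after decimation
t = np.arange(n_out * osr) / (n_out * osr)
x = 0.9 * np.sin(2 * np.pi * 50.5 * t)    # in-band test tone

# Half-LSB dither whitens the quantization error so the white-noise
# assumption used in the argument above actually holds.
step = 2.0 / 2**8
q = quantize(x + (rng.random(x.shape) - 0.5) * step, bits=8)

# Crude decimation filter: average each block of `osr` samples
# (a boxcar low-pass), then keep one output sample per block.
x_dec = x.reshape(-1, osr).mean(axis=1)
q_dec = q.reshape(-1, osr).mean(axis=1)

print(f"SQNR at the quantizer output: {sqnr_db(x, q):5.1f} dB")
print(f"SQNR after filter + decimate: {sqnr_db(q=q_dec, sig=x_dec):5.1f} dB")
```

Each doubling of the oversampling ratio halves the in-band noise power, which buys about 3 dB of SQNR, or half a bit of effective resolution; the 16x ratio above is therefore worth roughly 12 dB, about two extra bits.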
Your question seems to be more about how the actual bit growth happens, though. If we have, for example, a 12-bit ADC, how can the number of bits grow to, say, 16? It is actually very simple. FIR filters (and the same basic argument applies to IIR filters as well) essentially do a lot of multiplying and adding. When you multiply two 12-bit values you get a 24-bit value, and when you add two of those 24-bit values together you get a 25-bit value. The full-precision (non-scaled) output bit width of an FIR filter whose coefficients have the same width as its input is: $$ 2\,bw_i + \lceil \log_2(\mathit{numTaps}) \rceil $$ where $bw_i$ is the bit width of the input and $\mathit{numTaps}$ is the number of taps in the filter. As you can see, if we have 12-bit inputs and we want 16-bit outputs, there is no trouble getting the 16 bits. In fact, you usually have to scale the filter outputs down to restrict them to the bit width you want.
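As a sanity check, here is a small Python sketch of that formula. It uses unsigned widths for simplicity (signed two's-complement bookkeeping differs slightly), and the 12-bit/16-tap numbers are just example values:

```python
import math

def fir_full_precision_bits(input_bits, coeff_bits, num_taps):
    """Width needed to hold a full-precision FIR accumulator:
    the product width plus the growth from summing num_taps products."""
    return input_bits + coeff_bits + math.ceil(math.log2(num_taps))

in_bits = coeff_bits = 12
num_taps = 16
print(fir_full_precision_bits(in_bits, coeff_bits, num_taps))  # -> 28

# Brute-force confirmation: the largest possible (unsigned) accumulator
# value for these widths needs exactly that many bits.
worst_case = num_taps * (2**in_bits - 1) * (2**coeff_bits - 1)
print(worst_case.bit_length())                                 # -> 28

# To get a 16-bit output you scale (shift) the accumulator down,
# keeping its top 16 bits:
shift = fir_full_precision_bits(in_bits, coeff_bits, num_taps) - 16
y = worst_case >> shift   # a real design would round rather than truncate
```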