Monday 25 May 2015

resampling - What are the relative merits of various upsampling schemes?


I recently encountered a DSP system which did some internal upsampling via zero padding. Expecting zero-order-hold, I was surprised to find that a DC signal did not produce a DC output; many harmonics of the internal (lower) sampling frequency were also present in the output.


This leads to my question: What upsampling techniques are commonly used, and what are their relative merits? Why would I choose zero padding, zero order hold, or first order hold, and what other techniques are available?


Some clarifications:



  • The system is real-time, so the upsampling scheme must be causal.


  • The upsampler is followed by an anti-aliasing filter that can also be specified.



Answer



For the purposes of this answer I will use Matlab's terminology and define "upsampling" as the process of inserting $m-1$ zeros in between the input samples, and "interpolation" as the combined process of upsampling and filtering to remove the $m-1$ aliases ($m$ being the interpolation factor) that upsampling introduces. For an explanation of how/why upsampling introduces aliases, please see this thread.
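As a quick illustration (a NumPy sketch with an assumed factor of $m = 3$; the signal and lengths are arbitrary), upsampling a DC signal by zero insertion produces spectral images at multiples of the original sample rate, which is exactly the behaviour described in the question:

```python
import numpy as np

# A short DC signal at the original (lower) sample rate.
x = np.ones(64)
m = 3  # upsampling factor -- arbitrary choice for this sketch

# Upsample by inserting m-1 zeros between samples (Matlab's upsample()).
x_up = np.zeros(len(x) * m)
x_up[::m] = x

# Besides DC, an image appears at the original sample rate,
# which is 1/m of the new sample rate.
spectrum = np.abs(np.fft.rfft(x_up))
peaks = np.fft.rfftfreq(len(x_up))[spectrum > 1.0]
print(peaks)  # [0.  0.33333333] (normalized to the new sample rate)
```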


It is important to understand that any low-pass filter can be used to get rid of the aliases and thus complete the interpolation. Some filters have advantages when used in interpolation, though. I'll discuss the various flavors of interpolation filtering below.
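For example, here is a minimal sketch (Python/SciPy, with an assumed factor of $m = 3$ and an arbitrary 63-tap filter) of completing the interpolation with an ordinary FIR low-pass whose cutoff sits at the original Nyquist frequency; the gain of $m$ makes up for the amplitude lost to the inserted zeros:

```python
import numpy as np
from scipy import signal

m = 3                                            # interpolation factor (assumed)
x = np.sin(2 * np.pi * 0.05 * np.arange(100))    # any band-limited test signal

# Step 1: upsample by zero insertion.
x_up = np.zeros(len(x) * m)
x_up[::m] = x

# Step 2: any decent low-pass with its cutoff at the original Nyquist
# frequency (1/m of the new Nyquist) removes the m-1 images.
h = m * signal.firwin(numtaps=63, cutoff=1.0 / m)  # cutoff relative to new Nyquist
y = signal.lfilter(h, 1.0, x_up)                   # the interpolated signal
```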


FIR Filter


Interpolating FIR filters are efficient because they combine upsampling and alias filtering into one step. This is most easily seen in an example. Suppose we have a data sequence $x[n]$ and we want to interpolate it by a factor of two. The first step is to upsample by a factor of two. This changes the original data sequence from $x_0, x_1, ... x_N$ to $x_0, 0, x_1, 0, ... x_N$.


Now suppose we have a low-pass FIR filter, $h[n]$, that we will use to remove the alias. When we convolve the upsampled data sequence with the filter, half the filter taps are stimulated by the non-zero samples, and half the taps are inactive because they correspond to the zero samples. The half that is stimulated and the half that is inactive flips back and forth as the filter goes through the data. These two sets of taps are sometimes referred to as the filter phases.


This same effect can be achieved implicitly by eliminating the upsampling and filtering the original data sequence with an interpolating FIR filter. The interpolating FIR filter produces $m$ outputs for every input sample. For all $m$ outputs the filter operates on the same $\lceil K/m \rceil$ input samples (where $K$ is the number of filter taps and $\lceil \cdot \rceil$ is the ceiling function).


An example will hopefully illustrate how this works. Suppose that we have a six-tap filter and we are interpolating by a factor of two. The filter taps are [1 -2 4 4 -2 1]. If we literally upsampled and then filtered, the samples and filter taps would line up (once there was full overlap) as follows:



$$ 0 : 1\\ x_2 : -2 \\ 0 : 4 \\ x_1 : 4 \\ 0 : -2 \\ x_0 : 1 \\ $$ Next sample...


$$ x_3 : 1\\ 0 : -2 \\ x_2 : 4 \\ 0 : 4 \\ x_1 : -2 \\ 0 : 1 \\ $$ Next sample...


$$ 0 : 1\\ x_3 : -2 \\ 0 : 4 \\ x_2 : 4 \\ 0 : -2 \\ x_1 : 1 \\ $$ And so on. The point of the interpolating filter is that it skips actually inserting the zeros and simply alternates which set of taps it uses instead. Thus, the preceding sequence would now look like the following:


$$ x_2 : -2 \\ x_1 : 4 \\ x_0 : 1 \\ $$


$$ x_3 : 1 \\ x_2 : 4 \\ x_1 : -2 \\ $$


$$ x_3 : -2 \\ x_2 : 4 \\ x_1 : 1 \\ $$
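Here is a small Python/NumPy sketch of that idea (the helper name polyphase_interp is made up for illustration): each of the $m$ phases filters the original samples directly, and the phase outputs are interleaved. SciPy's upfirdn, which performs upsample-then-filter, is used only to check that the two approaches agree:

```python
import numpy as np
from scipy import signal

def polyphase_interp(x, h, m):
    """Interpolate x by factor m with FIR taps h, never forming the zeros.

    Phase p uses taps h[p], h[p+m], h[p+2m], ... applied directly to the
    original samples (illustrative sketch, not optimized or fully general).
    """
    phases = [h[p::m] for p in range(m)]
    # Phase p of the output is the original data filtered with phase p's taps.
    outs = [np.convolve(x, ph) for ph in phases]
    # Interleave the phase outputs to form the high-rate sequence.
    n = min(len(o) for o in outs)
    y = np.empty(n * m)
    for p in range(m):
        y[p::m] = outs[p][:n]
    return y

h = np.array([1.0, -2.0, 4.0, 4.0, -2.0, 1.0])   # the six-tap example filter
x = np.arange(1.0, 9.0)                          # x_0 ... x_7

y_poly = polyphase_interp(x, h, m=2)
y_ref = signal.upfirdn(h, x, up=2, down=1)       # upsample-then-filter reference
print(np.allclose(y_poly, y_ref[:len(y_poly)]))  # True: identical results
```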


Zero Order Hold


A zero-order hold interpolator is one that simply repeats each sample $m-1$ times. So a factor-of-two zero-order hold interpolator converts $x_0, x_1, ... x_N$ into $x_0, x_0, x_1, x_1, ... x_N, x_N$. This method is attractive because it is exceedingly easy to implement, both in terms of coding and computational load.


The problem with it is that its low-pass filtering is quite poor. We can see that when we recognize that the zero-order hold interpolator is a special case of FIR interpolation. It corresponds to upsampling followed by an $m$-wide rectangle filter. The Fourier transform of a rectangle filter is a sinc function, which is a rather shabby low-pass filter. Its shabbiness can be fixed with a compensating FIR filter, but if you are going to do that you might as well just use a good low-pass filter to begin with.
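A short NumPy sketch (assumed factor of two, arbitrary data) showing that repeating samples is the same as zero-insertion upsampling followed by an $m$-wide rectangle filter:

```python
import numpy as np

m = 2
x = np.array([1.0, 2.0, 3.0, 4.0])

# Zero-order hold: just repeat each input sample m times.
y_zoh = np.repeat(x, m)                  # [1, 1, 2, 2, 3, 3, 4, 4]

# Equivalent view: zero-insertion upsampling followed by an m-wide
# rectangle (boxcar) FIR filter.
x_up = np.zeros(len(x) * m)
x_up[::m] = x
y_rect = np.convolve(x_up, np.ones(m))[:len(y_zoh)]

print(np.array_equal(y_zoh, y_rect))     # True
```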


First Order Hold



First order hold is a step up from the zero-order hold interpolator in that it fills in the inserted samples by linearly interpolating between the two nearest input samples. So, a factor-of-two first-order hold interpolator would convert $x_0, x_1, ... x_N$ into $x_0, \frac{x_0+x_1}{2}, x_1, \frac{x_1+x_2}{2}, ... x_N$.


Like the zero-order hold interpolator, the first-order hold interpolator is a special case of FIR interpolation. It corresponds to upsampling and filtering with a triangle filter. For factor-of-two interpolation the filter is $[\frac{1}{2}\ 1\ \frac{1}{2}]$, for factor-of-three interpolation the filter is $[\frac{1}{3}\ \frac{2}{3}\ 1\ \frac{2}{3}\ \frac{1}{3}]$, and so on.


The triangle filter is two rectangle filters convolved together, which corresponds to sinc squared in the frequency domain. This is a definite step up from the zero-order hold, but is still not great.
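A small NumPy sketch (assumed factor of two, arbitrary data) of first-order hold as upsampling plus the triangle filter $[\frac{1}{2}\ 1\ \frac{1}{2}]$; apart from the filter's one-sample delay and edge effects, the output interleaves the original samples with their midpoints:

```python
import numpy as np

m = 2
x = np.array([1.0, 3.0, 2.0, 4.0])

# First-order hold: zero-insertion upsampling followed by a triangle filter.
tri = np.array([0.5, 1.0, 0.5])

x_up = np.zeros(len(x) * m)
x_up[::m] = x
y = np.convolve(x_up, tri)

# After the one-sample delay: x0, (x0+x1)/2, x1, (x1+x2)/2, x2, ..., plus edge tails.
print(y)   # [0.5 1.  2.  3.  2.5 2.  3.  4.  2.  0. ]
```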


IIR Filter


I have never used an interpolating IIR filter, so I won't say a lot about them. I assume that the same arguments apply as in regular filtering: IIR filters are more efficient, can be unstable, don't have linear phase, etc. I do not believe that they can combine the upsampling and filtering steps like an FIR filter can, but I could be wrong about that.


FFT Interpolation


I'll throw this one in even though it's not very common (of course, I don't think the zero-order hold is common either). This thread discusses FFT resampling, where "resampling" covers both interpolation and decimation.
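For reference, SciPy's resample performs this kind of FFT-based resampling (a minimal usage sketch with arbitrary test data). Note that it operates on a whole block of samples at a time, so it is not a natural fit for the causal, real-time constraint mentioned in the question:

```python
import numpy as np
from scipy import signal

# FFT-based interpolation: transform, zero-pad the spectrum, inverse transform.
x = np.cos(2 * np.pi * 0.1 * np.arange(32))
y = signal.resample(x, 64)       # interpolate by a factor of two via the FFT
```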


Higher Order Holds


Second-order hold interpolators are usually referred to as "quadratic interpolators". They are non-linear and thus cannot be implemented as FIR filters, which are linear. I do not understand the math behind them well, so I won't discuss their performance. I will say, though, that I believe that they are somewhat common outside of signal processing.


Higher order (three or more) methods also exist. These are referred to as "polynomial regressions".



EDIT:


Cascade Integrator Comb (CIC) Filters


I forgot to mention CIC filters. CIC filters are used for two reasons: they use only adders/subtractors (not as big a deal now that multiplies are fast and cheap), and they can do really large sample-rate changes pretty efficiently. Their downside is that they are essentially an efficient implementation of a cascaded rectangle filter, so they have all of the disadvantages of rectangle filters discussed above. CIC interpolators are pretty much always preceded by a compensating FIR filter that pre-distorts the signal to cancel out the distortion introduced by the CIC. If the sample-rate change is large enough, the cost of the pre-distortion filter is worth it.
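A minimal sketch of a CIC interpolator (Python/NumPy; the function name and parameter choices are made up for illustration, and no compensating FIR is included): $N$ comb stages run at the low rate, the signal is upsampled by zero insertion, and $N$ integrator stages run at the high rate. The check at the end confirms it matches the equivalent cascaded-rectangle-filter implementation:

```python
import numpy as np

def cic_interpolate(x, R, N=3, M=1):
    """Minimal CIC interpolator sketch: N comb stages at the low rate,
    zero-insertion upsampling by R, then N integrator stages at the
    high rate.  Equivalent to upsampling followed by N cascaded
    R*M-wide rectangle filters (no compensation filter included)."""
    y = np.asarray(x, dtype=float)
    # Comb (differentiator) stages at the input rate: y[n] - y[n - M].
    for _ in range(N):
        y = y - np.concatenate((np.zeros(M), y[:-M]))
    # Upsample by zero insertion.
    up = np.zeros(len(y) * R)
    up[::R] = y
    # Integrator stages at the output rate.
    y = up
    for _ in range(N):
        y = np.cumsum(y)
    return y

# Sanity check against the equivalent cascaded rectangle filters.
x = np.random.default_rng(0).standard_normal(50)
R, N = 4, 3
ref = np.zeros(len(x) * R)
ref[::R] = x
for _ in range(N):
    ref = np.convolve(ref, np.ones(R))
print(np.allclose(cic_interpolate(x, R, N), ref[:len(x) * R]))   # True
```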


