Consider a white Gaussian noise signal x(t).
If we sample this signal and compute the discrete Fourier transform, what are the statistics of the resulting Fourier amplitudes?
Answer
We can do the calculation using three basic facts from probability theory and Fourier analysis (we denote the probability density of a random variable $X$ at value $x$ as $P_X(x)$); a quick numerical check of the last two appears just after the list:
1. Given a random variable $X$ with distribution $P_X(x)$, the distribution of the scaled variable $Y = aX$ is $P_Y(y) = \frac{1}{|a|} P_X(y/a)$.
2. The probability distribution of a sum of two independent random variables is the convolution of the probability distributions of the summands. In other words, if $Z = X + Y$ then $P_Z(z) = (P_X \otimes P_Y)(z)$, where $\otimes$ denotes convolution.
3. The Fourier transform of the convolution of two functions is the product of the Fourier transforms of those two functions. In other words,
$$\int dx\,(f \otimes g)(x)\, e^{-ikx} = \left(\int dx\, f(x)\, e^{-ikx}\right)\left(\int dx\, g(x)\, e^{-ikx}\right).$$
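As a sanity check of rules #2 and #3, here is a minimal NumPy sketch (all parameter values are arbitrary examples, not taken from the text): it histograms the sum of two independent Gaussian variables, compares that against the numerical convolution of their densities, and confirms the result matches the Gaussian that the product-of-Fourier-transforms argument predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rule #2 check: the density of Z = X + Y should equal the convolution
# of the densities of X and Y (X, Y independent).
sigma_x, sigma_y = 1.0, 2.0
z = rng.normal(0, sigma_x, 200_000) + rng.normal(0, sigma_y, 200_000)

grid = np.linspace(-15, 15, 2001)
dx = grid[1] - grid[0]
gauss = lambda u, s: np.exp(-u**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)

# Convolution of the two densities (by rule #3 this could equally be computed
# by multiplying their Fourier transforms and transforming back).
p_conv = np.convolve(gauss(grid, sigma_x), gauss(grid, sigma_y), mode="same") * dx

hist, edges = np.histogram(z, bins=100, range=(-15, 15), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max |histogram - convolution| =",
      np.max(np.abs(hist - np.interp(centers, grid, p_conv))))

# The convolution also matches a Gaussian of width sqrt(sigma_x^2 + sigma_y^2),
# which is what the Gaussian-transform argument below predicts.
print("max |convolution - Gaussian| =",
      np.max(np.abs(p_conv - gauss(grid, np.hypot(sigma_x, sigma_y)))))
```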
Denote the random process as $x(t)$. Discrete sampling produces a sequence of values $x_n$, which we assume to be statistically uncorrelated (and hence, for Gaussian variables, independent). We also assume that each $x_n$ is Gaussian distributed with zero mean and standard deviation $\sigma$. We denote the normalized Gaussian function with standard deviation $\sigma$ by the symbol $G_\sigma$, so we would say that $P_{x_n}(x) = G_\sigma(x)$.
The discrete Fourier transform amplitudes are defined as
$$X_k \equiv \sum_{n=0}^{N-1} x_n e^{-i 2\pi n k / N}.$$
Focusing for now on just the real part, we have
$$\Re X_k = \sum_{n=0}^{N-1} x_n \cos(2\pi n k / N).$$
This is just a sum, so by rule #2 the probability distribution of $\Re X_k$ is the multiple convolution of the probability distributions of the terms being summed. We rewrite the sum as $\Re X_k = \sum_{n=0}^{N-1} y_n$ where $y_n \equiv x_n \cos(2\pi n k / N)$. The cosine factor is a deterministic scale factor, and we know that the distribution of $x_n$ is $G_\sigma$, so we can use rule #1 from above to write the distribution of $y_n$:
$$P_{y_n}(y) = \frac{1}{|\cos(2\pi n k / N)|}\, G_\sigma\!\left(\frac{y}{\cos(2\pi n k / N)}\right) = G_{\sigma |c_{n,k}|}(y),$$
where for brevity of notation we've defined $c_{n,k} \equiv \cos(2\pi n k / N)$.
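Rule #1, as used here, is easy to confirm numerically: scaling Gaussian samples by a fixed cosine factor simply scales the standard deviation. The values of $\sigma$, $N$, $n$, and $k$ below are arbitrary examples.

```python
import numpy as np

# Check that y_n = x_n * cos(2*pi*n*k/N) is Gaussian with standard deviation
# sigma * |cos(2*pi*n*k/N)|.  (Example values; not taken from the text.)
rng = np.random.default_rng(1)
sigma, N, n, k = 1.5, 64, 5, 3
c_nk = np.cos(2 * np.pi * n * k / N)

y = c_nk * rng.normal(0, sigma, 1_000_000)
print("measured std:", y.std(), " expected:", sigma * abs(c_nk))
```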
Therefore, the distribution of $\Re X_k$ is the multiple convolution of the functions $G_{\sigma |c_{n,k}|}$:
$$P_{\Re X_k}(x) = \left( G_{\sigma |c_{0,k}|} \otimes G_{\sigma |c_{1,k}|} \otimes \cdots \otimes G_{\sigma |c_{N-1,k}|} \right)(x).$$
It's not obvious how to evaluate the multiple convolution directly, but rule #3 makes it easy. Denoting the Fourier transform of a function by $\mathcal{F}$, we have
$$\mathcal{F}(P_{\Re X_k}) = \prod_{n=0}^{N-1} \mathcal{F}\big(G_{\sigma |c_{n,k}|}\big).$$
The Fourier transform of a Gaussian with width $\sigma$ is (up to normalization) another Gaussian with width $1/\sigma$, so we get
$$\mathcal{F}(P_{\Re X_k})(\nu) \propto \prod_{n=0}^{N-1} G_{1/(\sigma |c_{n,k}|)}(\nu)
= \prod_{n=0}^{N-1} \sqrt{\frac{\sigma^2 c_{n,k}^2}{2\pi}}\, \exp\!\left[-\frac{\nu^2}{2\,(1/\sigma^2 c_{n,k}^2)}\right]
= \left(\frac{\sigma^2}{2\pi}\right)^{N/2} \left(\prod_{n=0}^{N-1} |c_{n,k}|\right) \exp\!\left[-\frac{\nu^2}{2}\, \sigma^2 \sum_{n=0}^{N-1} \cos^2(2\pi n k / N)\right].$$
All of the factors preceding the exponential are independent of $\nu$ and therefore only affect the normalization, so we ignore them. The sum is just $N/2$ (for $k \neq 0$, and $k \neq N/2$ when $N$ is even), so we get
$$\mathcal{F}(P_{\Re X_k}) \propto \exp\!\left[-\frac{\nu^2}{2}\, \frac{\sigma^2 N}{2}\right] \propto G_{\sqrt{2/(\sigma^2 N)}}(\nu),$$
and therefore, since $P_{\Re X_k}$ is a normalized probability density,
$$P_{\Re X_k} = G_{\sigma \sqrt{N/2}}.$$
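The cosine-squared sum used in the last step can be verified in one line ($N$ and $k$ below are arbitrary example values):

```python
import numpy as np

# Sum of cos^2(2*pi*n*k/N) over n = 0..N-1 equals N/2 for k != 0
# (and k != N/2 when N is even).
N, k = 64, 3
n = np.arange(N)
print(np.sum(np.cos(2 * np.pi * n * k / N) ** 2), "vs", N / 2)
```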
We have therefore computed the probability distribution of the real part of the Fourier coefficient $X_k$: it is Gaussian with standard deviation $\sigma\sqrt{N/2}$. Note that the distribution is independent of the frequency index $k$, which makes sense for uncorrelated noise. By symmetry, the imaginary part is distributed in exactly the same way (except at $k = 0$ and, for even $N$, $k = N/2$, where the imaginary part vanishes identically).
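This result is straightforward to confirm by Monte Carlo. The sketch below (arbitrary example parameters) generates many independent realizations of white Gaussian noise, takes the unnormalized DFT with `np.fft.fft` (which uses the same sign and normalization convention as the definition above), and measures the spread of the real and imaginary parts of one coefficient.

```python
import numpy as np

# Monte Carlo check: Re(X_k) and Im(X_k) should each be Gaussian with
# standard deviation sigma * sqrt(N/2).  (Example values for sigma, N, k.)
rng = np.random.default_rng(2)
sigma, N, k = 2.0, 128, 10
trials = 20_000

x = rng.normal(0, sigma, size=(trials, N))
Xk = np.fft.fft(x, axis=1)[:, k]   # unnormalized DFT, same convention as the text

print("std(Re X_k):", Xk.real.std(),
      " std(Im X_k):", Xk.imag.std(),
      " predicted:", sigma * np.sqrt(N / 2))
```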
Intuitively, we expect that adding more integration should reduce the width of the resulting noise distribution. However, we found that the standard deviation of the distribution of $X_k$ grows as $\sqrt{N}$. This is just due to our choice of normalization of the discrete Fourier transform. If we had instead normalized it as
$$X_k = \frac{1}{N} \sum_{n=0}^{N-1} x_n e^{-i 2\pi n k / N},$$
then we would have found $P_{\Re X_k} = G_{\sigma/\sqrt{2N}}$, which agrees with the intuition that the noise distribution narrows as we add more data. With this normalization, a coherent signal demodulates to a fixed-amplitude phasor, so we recover the usual relation that the ratio of signal to noise amplitudes scales as $\sqrt{N}$.
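The scaling can also be illustrated numerically: with the $1/N$-normalized DFT, a coherent tone at bin $k$ demodulates to a fixed amplitude (half its time-domain amplitude) while the noise standard deviation shrinks as $1/\sqrt{2N}$, so the amplitude signal-to-noise ratio grows as $\sqrt{N}$. The sketch below (arbitrary example parameters) prints these quantities for a few values of $N$.

```python
import numpy as np

# Signal-to-noise scaling with the 1/N-normalized DFT (example parameters).
rng = np.random.default_rng(3)
sigma, amp, k = 1.0, 0.2, 7
trials = 2_000

for N in (256, 1024, 4096):
    n = np.arange(N)
    tone = amp * np.cos(2 * np.pi * k * n / N)       # coherent tone at bin k
    x = tone + rng.normal(0, sigma, size=(trials, N))
    Xk = np.fft.fft(x, axis=1)[:, k] / N             # 1/N-normalized DFT
    signal_amp = np.abs(Xk.mean())                   # ~ amp/2, independent of N
    noise_std = Xk.real.std()                        # ~ sigma / sqrt(2N)
    print(f"N={N:5d}  signal={signal_amp:.4f}  noise std={noise_std:.5f}"
          f"  predicted={sigma / np.sqrt(2 * N):.5f}"
          f"  SNR={signal_amp / noise_std:.1f}")
```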