Assume you have a single sinusoid in bandlimited Gaussian noise with unknown amplitude $A$, known frequency $f_0$, and known noise spectral density $S(f)$ in $\frac{\mathrm{units}^2}{\mathrm{Hz}}$:
$$x(t) = A\sin(2\pi f_0t) + n(t)$$
The signal is sampled for a known finite duration $T$, such that the frequency component of the sinusoid alone has finite magnitude $$\bigg\lvert\mathscr{F}\left\{A\sin(2\pi f_0t)\cdot \mathrm{rect}\left(\frac{t}{T}\right)\right\}(f_0)\bigg\rvert = \frac{AT}{2}.$$
How would one then quantify the 'goodness' of the amplitude estimate $\hat{A} = \frac{2|X(f_0)|}{T}$? I assume the estimate (and hence the SNR) would follow some distribution, since the signal peak is divided by an integral of zero-mean noise, which is itself zero-mean.
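One concrete way to see the 'goodness' is a Monte Carlo sketch: simulate many noisy realizations, form the amplitude estimate from the Fourier coefficient at $f_0$, and inspect the resulting distribution. All parameters below ($A$, $f_0$, $f_s$, $T$, $S_0$) are made-up illustrative values, and the estimator is normalized as $\hat{A} = 2|X(f_0)|/T$ on the assumption that the windowed sine's peak magnitude is $AT/2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (made-up) parameters
A, f0 = 1.0, 100.0      # amplitude, sinusoid frequency (Hz)
fs, T = 1000.0, 1.0     # sample rate (Hz), observation duration (s)
S0 = 1e-3               # two-sided noise PSD, units^2/Hz
N = int(fs * T)
t = np.arange(N) / fs

trials = 2000
A_hat = np.empty(trials)
for k in range(trials):
    # Sampled white noise with two-sided PSD S0 has sample variance S0*fs
    n = rng.normal(0.0, np.sqrt(S0 * fs), N)
    x = A * np.sin(2 * np.pi * f0 * t) + n
    # Riemann-sum approximation of the continuous-time FT at f0
    X_f0 = np.sum(x * np.exp(-2j * np.pi * f0 * t)) / fs
    A_hat[k] = 2 * np.abs(X_f0) / T  # peak of the windowed sine is A*T/2

print("mean(A_hat) =", A_hat.mean())  # close to A
print("std(A_hat)  =", A_hat.std())   # close to sqrt(2*S0/T)
```

At high SNR the estimate is approximately Gaussian around $A$ with standard deviation $\sqrt{2S_0/T}$; at low SNR the magnitude operation makes it Rician, which biases $\hat{A}$ upward.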
Edit: Actually, using the definition of SNR as $\frac{E_s}{\sigma^2}$, the SNR is a constant, since we know both $E_s\;(=\frac{A^2T}{2})$ and $\sigma^2\;(= S(f_0) \cdot BW)$. However, now I'm a bit confused: this would seem to imply that reducing the BW to an infinitesimal amount makes the SNR arbitrarily high. On the other hand, I thought the Fourier transform essentially applies a very selective band-pass filter at each frequency, yet noise still corrupts the frequency-magnitude response, because the 'smeared' (time-windowed) sinusoid also carries only a finite amount of energy per unit bandwidth.
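A possible resolution of the BW puzzle: the finite window itself sets a floor on the effective bandwidth. A length-$T$ rect window has an equivalent noise bandwidth on the order of $1/T$, so the Fourier coefficient at $f_0$ collects noise power $E\big[|N(f_0)|^2\big] = S_0 T$ (for two-sided white PSD $S_0$), and the variance of the amplitude estimate scales as $S_0/T$ rather than vanishing. A small numerical check, with made-up parameters, that the per-coefficient noise power indeed grows as $S_0 T$:

```python
import numpy as np

rng = np.random.default_rng(1)
S0, fs, f0 = 1e-3, 1000.0, 100.0  # made-up: two-sided PSD, sample rate, bin freq (Hz)

ratios = []
for T in (0.5, 2.0):              # two different window lengths (s)
    N = int(fs * T)
    t = np.arange(N) / fs
    acc = 0.0
    trials = 3000
    for _ in range(trials):
        # Sampled white noise with two-sided PSD S0 (variance S0*fs)
        n = rng.normal(0.0, np.sqrt(S0 * fs), N)
        # Riemann-sum approximation of the windowed FT of the noise at f0
        Xn = np.sum(n * np.exp(-2j * np.pi * f0 * t)) / fs
        acc += abs(Xn) ** 2
    ratio = (acc / trials) / (S0 * T)
    ratios.append(ratio)
    print(f"T = {T}: E|N(f0)|^2 / (S0*T) ~ {ratio:.3f}")  # near 1 for both T
```

So narrowing the analysis bandwidth below roughly $1/T$ buys nothing for a fixed observation time; only lengthening $T$ (shrinking the bin and the noise it admits together) improves the estimate.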
Edit 2: I believe the main sub-problem is the frequency-magnitude distribution of the noise.