This question is based on the book *Fundamentals of Statistical Signal Processing* by Steven Kay, Chapter 4, Eq. (4.21), which gives the variance of the estimated coefficients, when the input is a PRN sequence, as
$$\textrm{Var}\left(\hat{h}_i\right)= \frac{\sigma_w^2}{Nr_{uu}[0]} \tag{1}$$
where $\sigma_w^2$ is the variance of the measurement noise and $r_{uu}[0]$ is the autocorrelation of the input at lag zero, which for a zero-mean input equals the input variance $\sigma_u^2$.
Let the model be: $$x[n] = \sum_{i=1}^m h[i]u[n-i] + w[n] \tag{2}$$
where $u[\cdot]$ is the input process and $w[n]$ is zero-mean white Gaussian noise. $\sigma_{\rm input}^2$ is the variance of the input $u[n]$, and $\sigma_{\rm noise}^2$ is the variance of the measurement noise $w[n]$. The cross-correlation function $R_{x,u}[\cdot]$ between $x[\cdot]$ and $u[\cdot]$ is the periodic convolution of the sequence $h[\cdot]$ with the periodic autocorrelation function $R_{u,u}[\cdot]$.
Performance is evaluated using the Cramér-Rao lower bound (CRLB) on the channel coefficients.
Can somebody please help clear up my confusion about what the expression for the variance of the estimator, i.e. the Cramér-Rao lower bound for the channel coefficients, will be
- (a) when $u[n]$ takes real values. Is it $\frac{m\sigma_{\rm noise}^2}{N \sigma_{\rm input}^2}$
- (b) when $u[n]$ takes values from a finite symbol set $\{+1,-1\}$. Is it $\displaystyle \frac{m\sigma^2_{\rm noise}}{N}$ where $\sigma^2_u = 1$ and $r_{uu} =1 $
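Not from the book, but a quick way to sanity-check case (b) numerically: the sketch below (my own construction; all parameter values are arbitrary) fits the FIR coefficients by least squares from a random $\pm1$ input and compares the empirical per-coefficient variance against $\sigma_w^2/N$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, sigma_w, trials = 4, 256, 0.5, 2000
h = np.array([0.9, -0.4, 0.2, 0.1])  # example channel of length m (arbitrary)

est = np.empty((trials, m))
for t in range(trials):
    u = rng.choice([-1.0, 1.0], size=N + m)  # +/-1 input, so r_uu[0] = 1
    # Convolution matrix: row for time n holds [u[n-1], ..., u[n-m]]
    U = np.column_stack([u[m - i : m - i + N] for i in range(1, m + 1)])
    x = U @ h + sigma_w * rng.normal(size=N)  # x[n] = sum_i h[i] u[n-i] + w[n]
    est[t] = np.linalg.lstsq(U, x, rcond=None)[0]

print(est.var(axis=0))   # empirical variance of each coefficient estimate
print(sigma_w**2 / N)    # predicted sigma_w^2 / (N * r_uu[0])
```

With these settings the empirical variances come out close to $\sigma_w^2/N \approx 9.8\times10^{-4}$, consistent with expression (b).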
The expression in the book is for a PRN sequence, but I don't quite understand what a PRN sequence is. Is it another term for symbolic values (not real values)? By PRN do we mean the values are $\{+1,-1\}$?
EDIT: update based on the answer and offline discussions.
Just to confirm whether I am on the right track. The general formula is
$$\operatorname{var}\left(\mathbf{\Delta h}\,\mathbf{\Delta h}^T\right) = \frac{\text{variance of measurement noise}}{\text{normalizing factor}\times\text{autocorrelation of input}} = \frac{\sigma^2}{D\cdot r_{uu}}$$
where $D = N\times\text{variance} = N\sigma^2_u$.
Therefore, $\operatorname{var}\left(\mathbf{\Delta h}\,\mathbf{\Delta h}^T\right) = \dfrac{\sigma^2}{N\sigma^2_u \sum_n u[n]u^*[n]} = \dfrac{\sigma^2}{N\left(\sigma^2_u\,\sigma^{*2}_u\right)^2}$ [the autocorrelation containing the complex-conjugate term] ??
For a $+1/-1$ sequence, this formula evaluates to $\operatorname{var}\left(\mathbf{\Delta h}\,\mathbf{\Delta h}^T\right) = \dfrac{\sigma^2}{N\sigma^2_u}$.
For QAM: $\operatorname{var}\left(\mathbf{\Delta h}\,\mathbf{\Delta h}^*\right) = \dfrac{\sigma^2}{D\sum_n u[n]u^*[n]} = \dfrac{\sigma^2}{\left(N\sigma_u\sigma_u^*\right)\left(\sum_n u[n]u^*[n]\right)}$
Answer
A PRN sequence is a Pseudo-Random Noise sequence, often generated using a Linear Feedback Shift Register (LFSR) with the feedback taps chosen according to a primitive irreducible polynomial in GF(2), the Galois field of two elements. When such a polynomial is used, the LFSR produces a "maximum length sequence", meaning the pattern at its output is the longest possible for the number of registers in the LFSR.

An example generated from the polynomial $1+x^3+x^4$ is shown in the picture below. This polynomial is primitive and irreducible in GF(2) (and in this case we use the symbols $-1$ and $+1$ as the two elements of the field), so this LFSR as configured will produce a maximum length sequence: the state of the generator, being the values after each cycle at the output of each register, will cycle pseudo-randomly through every possible combination before repeating, except for the all-zero state. (So in the picture shown, the states are $0001$, $0010$, $0011$, $0100$, $\ldots$, $1111$ in pseudo-random order, and the output sequence will have $15$ values before repeating.) In general the maximum length is $2^N-1$, where $N$ is the order of the polynomial; the $-1$ is because the all-zero state is not allowed.
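A minimal sketch of such a generator (my own code, using the common Fibonacci-LFSR convention for the tap positions; not from any particular library):

```python
def mls_chips(seed=0b0001):
    """One period of a maximum length sequence from a 4-register Fibonacci
    LFSR realizing the primitive polynomial 1 + x^3 + x^4 over GF(2).
    Output chips are mapped from {0, 1} to {-1, +1}."""
    state, chips = seed, []
    for _ in range(2**4 - 1):                    # 15 nonzero states
        chips.append(1 if state & 1 else -1)     # output = last register
        fb = (state ^ (state >> 1)) & 1          # XOR of the two tap bits
        state = (state >> 1) | (fb << 3)         # shift right, feed back
    return chips

seq = mls_chips()
print(len(seq))   # -> 15, i.e. 2^4 - 1 chips before the pattern repeats
print(sum(seq))   # -> 1: a maximum length sequence is balanced to within one chip
```

Starting from any nonzero seed, the state visits all $15$ nonzero 4-bit values exactly once per period, which is what makes the output pseudo-random yet perfectly repeatable.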
For clarity we will refer to each output of the LFSR as a "chip", and the complete sequence is referred to as a "symbol".
When you correlate the input (a matched PRN sequence plus noise) against the code, you multiply the input by each coefficient in the sequence and sum the result (correlation = multiply and accumulate). If you are aligned to the code, the multiplication turns the $-1$'s in the input into $+1$'s and leaves the $+1$'s as $+1$'s, essentially converting the pseudo-random pattern into all $1$'s. This output of all ones is then accumulated to produce the correlator output. (Below is the multiplication result that would then be fed to an accumulator.)
The accumulation on the signal component (which is all $1$'s when aligned) will go up in magnitude by $N$ given $N$ samples in the sequence (if we assume one sample per chip).
Assuming the input noise is uncorrelated with the PRN sequence and white within our sample space (meaning each noise sample is uncorrelated with all the others), the multiplication by $+1/-1$ will not change the variance of the noise. Therefore the standard deviation of the noise in the accumulator will grow by $\sqrt{N}$: when you add $N$ uncorrelated noise samples, the standard deviation goes up by $\sqrt{N}$.
So the variance of the noise increases by a factor of $N$, while the variance of the signal increases by $N^2$ (since the signal magnitude, not power, went up by a factor of $N$). The end result is that the signal-to-noise ratio (SNR) improves by a factor of $N$ in power, or $10\log_{10}(N)$ in dB, and this is referred to as the "processing gain" of the PRN correlator.
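Both effects can be seen numerically. In the sketch below (my own illustration; $N$, $\sigma$, and the trial count are arbitrary, and a random $\pm1$ code stands in for a true LFSR-generated PRN), correlating an aligned code plus white noise shows the signal accumulating like $N$ while the noise standard deviation grows only like $\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, trials = 1023, 1.0, 500
prn = rng.choice([-1.0, 1.0], size=N)        # stand-in for a matched PRN code

outs = []
for _ in range(trials):
    rx = prn + sigma * rng.normal(size=N)    # aligned code + white noise
    outs.append(np.dot(rx, prn))             # multiply and accumulate

outs = np.asarray(outs)
print(outs.mean())   # signal part accumulates to ~N
print(outs.std())    # noise std grows only like sqrt(N)*sigma
```

The output SNR in power is thus $\sim N^2/(N\sigma^2) = N/\sigma^2$, a factor of $N$ better than the per-sample input SNR of $1/\sigma^2$.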
If we normalize the signal level to be the same at input and output, the noise standard deviation will have changed by a factor of $1/\sqrt{N}$, and its variance by $1/N$, matching your equation.
Stating this mathematically, consider our samples at the output of the multiplier as $$X_i\quad\text{for}\quad i=1, 2, \ldots, N$$
The multiplication by $+1/-1$ does not change the distribution of the noise on the input samples (they will still be independent and identically distributed, IID), with the same variance as the input given as $$\textrm{Var}(X_i) = \sigma^2$$
Multiplying by $\pm 1$ with a synchronized PRN causes the mean to be 1 as previously described, $$E(X_i)=1$$
We will denote the accumulator output over all $N$ samples, divided by $N$ so that its mean matches the normalized autocorrelation $r_{uu}=1$, as $$Y = \frac{1}{N}\sum_{i=1}^N X_i, \qquad E(Y) = r_{uu} = 1$$
Where $X_i$ is the product of the input waveform with the PRN.
This results in the expression for the mean and variance of $Y$ as follows:
\begin{align} E(Y) &= E\left[\frac{1}{N}\left(X_1+X_2+ \ldots +X_N\right)\right] = \frac{1}{N}E\left[(X_1+X_2+ \ldots +X_N)\right]\\ \textrm{Var}(Y) &= \textrm{Var}\left[\frac{1}{N}\left(X_1+X_2+ \ldots +X_N\right)\right] = \frac{1}{N^2}\textrm{Var}\left(X_1+X_2+ \ldots+X_N\right) \end{align}
\begin{align} E\left[(X_1+X_2+ \ldots +X_N)\right] &= E[X_1]+E[X_2]+ \ldots E[X_N] = N\\ \textrm{Var}(X_1+X_2+ \ldots +X_N) &= \textrm{Var}(X_1)+\textrm{Var}(X_2) + \ldots \textrm{Var}(X_N) = N\sigma^2 \end{align}
Therefore \begin{align} E(Y) &= \frac{1}{N}E\left[(X_1+X_2+ \ldots +X_N)\right] = \frac{N}{N} = 1\quad\text{and}\\ \textrm{Var}(Y) &= \frac{1}{N^2}\textrm{Var}(X_1+X_2+ \ldots +X_N) = \frac{1}{N^2}N\sigma^2 = \frac{\sigma^2}{N} \end{align}
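As a check on this result (my own sketch; the parameter values are arbitrary), simulating $X_i = u[i]\,(u[i]+w[i])$ for a $\pm1$ code reproduces $E(Y)\approx 1$ and $\textrm{Var}(Y)\approx\sigma^2/N$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma, trials = 128, 2.0, 5000
prn = rng.choice([-1.0, 1.0], size=N)   # stand-in for a synchronized PRN code

# X_i = prn_i * (prn_i + w_i) = 1 + prn_i * w_i  -> mean 1, variance sigma^2
X = prn * (prn + sigma * rng.normal(size=(trials, N)))
Y = X.mean(axis=1)                      # Y = (1/N) * sum_i X_i

print(Y.mean())   # ~ 1
print(Y.var())    # ~ sigma^2 / N
```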