Setup
Suppose we have a complex $L\times 1$ signal $\mathbf{x}$ with two tones at unknown frequencies and phases, defined as: $$ x_n = A_1 e^{j (\omega_1 n + \varphi_1)} + A_2 e^{j (\omega_2 n + \varphi_2)} $$ for $0 \leq n \leq L-1$. We observe $\mathbf{x}$ corrupted by additive white Gaussian noise: $$ \mathbf{y} = \mathbf{x} + \mathbf{w} $$ where $\mathbf{w} \sim \mathcal{CN}(0, \sigma^2 \mathbf{I})$.
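For concreteness, here is a minimal NumPy sketch of this signal model. The amplitudes, frequencies, phases, and noise level below are illustrative choices of my own, not values given by the problem.

```python
import numpy as np

# Illustrative (hypothetical) parameter values -- the problem leaves these unknown
L = 64
A1, A2 = 2.0, 1.0
w1, w2 = 0.3 * np.pi, 0.5 * np.pi    # tone frequencies in rad/sample
phi1, phi2 = 0.7, -1.2               # initial phases

n = np.arange(L)
x = A1 * np.exp(1j * (w1 * n + phi1)) + A2 * np.exp(1j * (w2 * n + phi2))

# Circularly symmetric complex AWGN with per-sample variance sigma^2
sigma = 0.1
rng = np.random.default_rng(0)
w = (sigma / np.sqrt(2)) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
y = x + w
```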
Problem
I'm trying to show a connection between the signal amplitudes and the eigenvalues from the eigendecomposition of the autocovariance matrix $\mathbf{R_y}$ of $\mathbf{y}$. This is related to the MUltiple SIgnal Classification (MUSIC) algorithm, which splits the $L$-dimensional space into a 2-dimensional signal subspace and an $(L-2)$-dimensional noise subspace.
$$ \mathbf{R_y} := \mathbf{E}[\mathbf{y}\mathbf{y}^H] = \mathbf{x}\mathbf{x}^H + \sigma^2 \mathbf{I}. $$
Let $\mathbf{s}_1 = \begin{bmatrix} 1 & e^{j\omega_1} & e^{j2\omega_1} & \ldots & e^{j(L-1)\omega_1}\end{bmatrix}^T$ and similarly define $\mathbf{s}_2$. Then, $$ \mathbf{x} = \begin{bmatrix} \mathbf{s}_1 & \mathbf{s}_2 \end{bmatrix} \begin{bmatrix} A_1 e^{j\varphi_1} \\ A_2 e^{j\varphi_2} \end{bmatrix} $$ and the signal autocovariance matrix can be written as: \begin{eqnarray*} \mathbf{x}\mathbf{x}^H &=& \begin{bmatrix} \mathbf{s}_1 & \mathbf{s}_2 \end{bmatrix} \begin{bmatrix} A_1 e^{j\varphi_1} \\ A_2 e^{j\varphi_2} \end{bmatrix} \begin{bmatrix} A_1 e^{-j\varphi_1} & A_2 e^{-j\varphi_2} \end{bmatrix} \begin{bmatrix} \mathbf{s}_1^H \\ \mathbf{s}_2^H \end{bmatrix} \\ &=& \underbrace{\begin{bmatrix} \mathbf{s}_1 & \mathbf{s}_2 \end{bmatrix}}_{\mathbf{S}} \underbrace{\begin{bmatrix} A_1^2 & A_1 A_2 e^{j(\varphi_1-\varphi_2)} \\ A_1 A_2 e^{-j(\varphi_1-\varphi_2)} & A_2^2 \end{bmatrix}}_{\mathbf{M}} \underbrace{\begin{bmatrix} \mathbf{s}_1^H \\ \mathbf{s}_2^H \end{bmatrix}}_{\mathbf{S}^H} \end{eqnarray*} where we note that $\mathbf{M}$ is Hermitian and therefore has an eigendecomposition $\mathbf{M} = \mathbf{Q\Lambda Q}^H$, where $\mathbf{\Lambda}$ is a $2\times 2$ diagonal matrix and $\mathbf{Q}^H\mathbf{Q} = \mathbf{Q}\mathbf{Q}^H = \mathbf{I}$.
Therefore, \begin{eqnarray*} \mathbf{R_y} &=& \mathbf{SQ \Lambda Q^HS^H} + \sigma^2 \mathbf{I}. \end{eqnarray*}
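The factorization $\mathbf{x}\mathbf{x}^H = \mathbf{SMS}^H$ above is easy to sanity-check numerically. This sketch (again with arbitrary illustrative parameters) builds both sides and compares them:

```python
import numpy as np

# Illustrative (hypothetical) parameter values
L = 16
A1, A2 = 2.0, 1.0
w1, w2 = 0.3 * np.pi, 0.5 * np.pi
phi1, phi2 = 0.7, -1.2

n = np.arange(L)
s1 = np.exp(1j * w1 * n)
s2 = np.exp(1j * w2 * n)
S = np.column_stack([s1, s2])

# x = S a, where a is the vector of complex amplitudes
a = np.array([A1 * np.exp(1j * phi1), A2 * np.exp(1j * phi2)])
x = S @ a

# M = a a^H is the 2x2 Hermitian matrix from the derivation
M = np.outer(a, a.conj())
lhs = np.outer(x, x.conj())   # x x^H
rhs = S @ M @ S.conj().T      # S M S^H
```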
Now I am stuck. I suspect I should be able to eventually express $\mathbf{R_y} = \mathbf{PDP^H}$ where the diagonal matrix $\mathbf{D}$ should have two large eigenvalues related to the signal amplitudes $A_1$ and $A_2$, but since $\mathbf{SS^H}\neq \mathbf{I}$ I have no way to absorb the $\sigma^2$'s into the diagonal matrix $\mathbf{\Lambda}$.
Answer
The piece I was missing was the distribution of the initial phases $\varphi_1$ and $\varphi_2$. It is standard to assume that these are independent and uniformly distributed on $[0, 2\pi)$ [^]. Under this assumption $\mathbf E[e^{j(\varphi_1 - \varphi_2)}] = 0$, so the off-diagonal terms of $\mathbf M$ vanish in expectation, leading to: $$ \mathbf{R_x} = \mathbf{E[xx^H]} = \mathbf{S \Lambda S^H} $$ where $\mathbf{\Lambda} = \begin{bmatrix} A_1^2 & 0 \\ 0 & A_2^2 \end{bmatrix}$ and $\mathbf{S} = \begin{bmatrix}\mathbf s_1 & \mathbf s_2 \end{bmatrix}$, so that $$ \mathbf{R_y} = \mathbf{S \Lambda S^H} + \sigma^2 \mathbf{I}. $$
Now note that $\mathbf{R_y}$ is Hermitian and therefore has an eigendecomposition with real eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3 \geq \cdots \geq \lambda_L$. Also note that $\mathbf{R_x}$ has exactly two nonzero eigenvalues, say, $\tilde\lambda_1 \geq \tilde\lambda_2$. Since $\mathbf{R_y} = \mathbf{R_x} + \sigma^2 \mathbf{I}$, every eigenvector of $\mathbf{R_x}$ with eigenvalue $\tilde\lambda$ is an eigenvector of $\mathbf{R_y}$ with eigenvalue $\tilde\lambda + \sigma^2$, so $\lambda_1 = \tilde \lambda_1 + \sigma^2$, $\lambda_2 = \tilde \lambda_2 + \sigma^2$, and $\lambda_i = \sigma^2$ for $3\leq i \leq L$.
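This eigenvalue structure can be confirmed numerically by building the exact $\mathbf{R_y}$ and inspecting its spectrum (illustrative parameters again):

```python
import numpy as np

# Illustrative (hypothetical) parameter values
L = 32
A1, A2 = 2.0, 1.0
w1, w2 = 0.3 * np.pi, 0.5 * np.pi
sigma = 0.1

n = np.arange(L)
S = np.column_stack([np.exp(1j * w1 * n), np.exp(1j * w2 * n)])
Lam = np.diag([A1**2, A2**2])

# Exact covariance under the uniform-phase assumption
Ry = S @ Lam @ S.conj().T + sigma**2 * np.eye(L)

# eigvalsh returns ascending eigenvalues; flip to descending
eigvals = np.linalg.eigvalsh(Ry)[::-1]
# Two "signal" eigenvalues exceed sigma^2; the remaining L-2 equal sigma^2
```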
In the eigendecomposition of $\mathbf{R_y}$, let $\mathbf e_1$ and $\mathbf e_2$ be the eigenvectors corresponding to $\lambda_1$ and $\lambda_2$ and $\mathbf g_1, \ldots, \mathbf g_{L-2}$ be the eigenvectors corresponding to the eigenvalues $\lambda_3, \lambda_4,\ldots, \lambda_L$. One can now show that $\mbox{span}\{\mathbf e_1, \mathbf e_2\} = \mbox{span}\{\mathbf s_1, \mathbf s_2\}$. This is often called the "signal subspace" and the orthogonal subspace $\mbox{span}\{ \mathbf g_1,\ldots \mathbf g_{L-2}\}$ is called the "noise subspace."
Here's the punchline: to estimate $\omega_1$ and $\omega_2$ we solve for $\omega$ in $\mathbf s(\omega)^H \mathbf G \mathbf G^H \mathbf s(\omega) = 0$, where $\mathbf G = \begin{bmatrix} \mathbf g_1 & \ldots & \mathbf g_{L-2}\end{bmatrix}$ and $\mathbf s(\omega) = \begin{bmatrix} 1 & e^{j\omega} & e^{j2\omega} & \ldots & e^{j(L-1)\omega} \end{bmatrix}^T$. In practice the estimated noise subspace is not exact, so one instead locates the two largest peaks of the pseudospectrum $1/\big(\mathbf s(\omega)^H \mathbf G \mathbf G^H \mathbf s(\omega)\big)$.
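As a sketch of this punchline, the following builds the exact $\mathbf{R_y}$ (illustrative parameters), extracts the noise subspace $\mathbf G$, and locates the two largest peaks of $1/\lVert \mathbf G^H \mathbf s(\omega)\rVert^2$ on a frequency grid. In a real application $\mathbf{R_y}$ would be replaced by a sample covariance estimate from the data:

```python
import numpy as np

# Illustrative (hypothetical) parameter values
L = 32
w1, w2 = 0.3 * np.pi, 0.5 * np.pi
sigma = 0.1

n = np.arange(L)
S = np.column_stack([np.exp(1j * w1 * n), np.exp(1j * w2 * n)])
Ry = S @ np.diag([4.0, 1.0]) @ S.conj().T + sigma**2 * np.eye(L)

vals, vecs = np.linalg.eigh(Ry)       # eigenvalues in ascending order
G = vecs[:, :L - 2]                   # noise-subspace eigenvectors

# Evaluate the MUSIC pseudospectrum 1 / ||G^H s(omega)||^2 on a grid
omegas = np.linspace(0, np.pi, 4096)
Sw = np.exp(1j * np.outer(n, omegas))             # columns are s(omega)
P = 1.0 / np.sum(np.abs(G.conj().T @ Sw) ** 2, axis=0)

# Keep the two largest local maxima as the frequency estimates
loc = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
est = np.sort(omegas[loc[np.argsort(P[loc])[-2:]]])
```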
Reference
[^] P. Stoica and R. Moses, *Spectral Analysis of Signals*, 1st ed., Ch. 4.