I was wondering if someone could suggest a method to differentiate between two
signals on an algorithmic scale, for example a criterion to calculate that
shows how noisy a signal is. I have found a post about signal-to-noise
ratio, but I am not sure whether it applies to my case. This is an example of the signal.
Thank you
Answer
Assumptions
- Your calculations do not need to be made in real time.
- You know the times when the signal is high and when it is low.
Suggested solution(s)
SNR
There are many ways to quantify signal quality. SNR certainly is a valid way to achieve this in your case. Calculate
$\textrm{SNR} = \frac{\bar{S}_\textrm{high}}{\sigma_\textrm{low}}$
with $\bar{S}_\textrm{high}$ being the mean value of the signal during the high phase and $\sigma_\textrm{low}$ the standard deviation of the values during the low phase. This measure lets you differentiate the two signals.
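A minimal numerical sketch of this SNR definition (assuming NumPy; the signal, phase boundaries, and noise level are synthetic, made up for illustration):

```python
import numpy as np

# Synthetic example: a noisy two-level signal where the high and low
# phases are known in advance (here: first half high, second half low).
rng = np.random.default_rng(0)
n = 1000
high = 1.0 + 0.1 * rng.standard_normal(n)  # samples during the high phase
low = 0.1 * rng.standard_normal(n)         # samples during the low phase

# SNR = mean of the high phase / standard deviation of the low phase
snr = np.mean(high) / np.std(low)
print(snr)
```

With a high level of 1 and a noise standard deviation of 0.1, the value comes out near 10; a noisier signal yields a smaller value.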
CNR
You could also calculate the contrast-to-noise ratio (CNR) as
$\textrm{CNR} = \textrm{SNR}_\textrm{high} - \textrm{SNR}_\textrm{low} =\frac{\bar{S}_\textrm{high}}{\sigma_\textrm{high}} - \frac{\bar{S}_\textrm{low}}{\sigma_\textrm{low}}$
with the signal mean and standard deviation taken in both phases. However, if you have reason to assume that the low phase contains only zero-mean noise (with the same noise level as in the high phase), then both measures coincide.
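The CNR formula above can be sketched the same way (again assuming NumPy and the same synthetic two-level signal; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
high = 1.0 + 0.1 * rng.standard_normal(n)  # known high phase
low = 0.1 * rng.standard_normal(n)         # known low phase

# CNR = SNR_high - SNR_low, each phase using its own mean and std
cnr = np.mean(high) / np.std(high) - np.mean(low) / np.std(low)
print(cnr)
```

Since the low phase here is zero-mean noise, the second term is close to zero and the CNR is close to the SNR from the previous section.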
Regression quality parameters
Additionally, you could do a linear regression of a box-car function to your data and check the quality-of-fit measures, such as $\chi^2$, $R^2$, etc.
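One way to sketch this (a hypothetical setup, assuming NumPy and a known box-car model; fitting the box-car plus a constant offset by least squares, then computing $R^2$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
boxcar = np.concatenate([np.ones(n), np.zeros(n)])  # known model signal
y = boxcar + 0.1 * rng.standard_normal(2 * n)       # noisy measured signal

# Least-squares fit of the box-car (plus offset) to the data
A = np.column_stack([boxcar, np.ones_like(boxcar)])
coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
y_fit = A @ coef

# R^2 as a quality-of-fit measure: close to 1 for a clean signal
ss_res = np.sum((y - y_fit) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(r_squared)
```

A clean signal gives $R^2$ close to 1; the noisier the signal, the smaller $R^2$ becomes, so it can serve as the quality criterion.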
Remaining questions
The remaining question is whether you have any limitations or special needs for the measure. Does it have to lie in a certain range (e.g. always positive, or bounded between 0 and 1)? Is computation speed a limitation? Do you have to detect very small differences between signals?
Linear Regression
In this thread, there is a post giving the relationship between SNR and the correlation coefficient with the known, noise-free signal. I'd like to take a second to point out the relationship with linear regression, which is essentially the same. Assume that the (column vector) $\vec{s}^\prime$ is the noise-free signal (a box-car function: 0 in the low phase, 1 in the high phase), and the (column vector) $\vec{y}^\prime$ is the signal we are interested in. The primes are there for convenience; they will be dropped after the next step.
To capture the noise in the low phase properly, it is necessary to rescale the signal. If we did not do this, we would only take the noise in the high phase into consideration, since the model signal $\vec{s}^\prime$ is zero in the low phase by definition. Hence, let us drop the primes and set:
$\vec{s} = \vec{s}^\prime - \bar{s}, \qquad \vec{y} = \vec{y}^\prime - \bar{y}$
where $\bar{s}$ denotes the mean value of $\vec{s}^\prime$, and likewise for $\bar{y}$. Please note that these vectors are centralized (the mean is subtracted) but not normalized, i.e. in general $\vec{s}^T\vec{s} \neq 1$. This is not a problem for the linear regression, but it will matter below regarding the correlation: for a proper correlation between two vectors, both should be centralized and normalized, so that the correlation coefficient lies between -1 and 1.
A least squares regression can be done via $A\vec{x} = \vec{y}$, with the system matrix $A$ holding the regressors and $\vec{x}$ holding the contribution of each regressor to the signal. By using a pseudo-inverse, this can be solved in a least-squares sense via:
$A^TA\vec{x} = A^T\vec{y} \Longrightarrow \vec{x} = (A^TA)^{-1}A^T\vec{y}$
In our case, since there is no offset to the signal, we have only one regressor, namely $\vec{s}$. This simplifies the equation to:
$x = \frac{\vec{s}^T\vec{y}}{\vec{s}^T\vec{s}}$.
The vector symbol was dropped, since $x$ is a scalar (only one regressor, hence only one contribution to the signal).
Please note that the scalar product $\vec{s}^T\vec{s}$ is $N$ times the variance of the signal: $\vec{s}^T\vec{s} = \sum_i{(s_i^\prime - \bar{s})^2} = N\sigma_s^2$. As mentioned above, to compute a proper correlation, the vectors should be normalized. Hence, we split off one square root of $\vec{s}^T\vec{s}$ to normalize $\vec{s}$:
$x = \frac{\vec{s}^T\vec{y}}{\sqrt{\vec{s}^T\vec{s}}\sqrt{\vec{s}^T\vec{s}}} = \frac{\vec{s}_{\textrm{norm}}^T\vec{y}}{\sqrt{\vec{s}^T\vec{s}}}$, with $\vec{s}_{\textrm{norm}} = \vec{s}/\sqrt{\vec{s}^T\vec{s}}$.
We see from this result that the regression coefficient $x$ is the same as the correlation coefficient $\rho = \frac{\vec{s}^T\vec{y}}{\sqrt{\vec{s}^T\vec{s}}\sqrt{\vec{y}^T\vec{y}}}$, except for the factor $\frac{\sqrt{\vec{y}^T\vec{y}}}{\sqrt{\vec{s}^T\vec{s}}}$. Hence $\rho = x\,\frac{\sqrt{\vec{s}^T\vec{s}}}{\sqrt{\vec{y}^T\vec{y}}}$ can be used in the formula for SNR given in the other post to directly determine the SNR of the signal. Please note that the factor
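A quick numerical check of the relation between the regression coefficient and the correlation coefficient (a sketch assuming NumPy; the box-car length, amplitude, and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
s_prime = np.concatenate([np.zeros(n), np.ones(n)])        # box-car model
y_prime = 2.0 * s_prime + 0.3 * rng.standard_normal(2 * n)  # noisy signal

# Centralize both vectors (subtract the mean), as in the derivation
s = s_prime - s_prime.mean()
y = y_prime - y_prime.mean()

# Regression coefficient: x = s^T y / s^T s
x = (s @ y) / (s @ s)

# Correlation coefficient, and the same quantity recovered from x
rho = (s @ y) / (np.sqrt(s @ s) * np.sqrt(y @ y))
rho_from_x = x * np.sqrt(s @ s) / np.sqrt(y @ y)
print(x, rho, rho_from_x)
```

Here `x` recovers the true amplitude (about 2), and rescaling it by $\sqrt{\vec{s}^T\vec{s}}/\sqrt{\vec{y}^T\vec{y}}$ reproduces the correlation coefficient exactly.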
$\frac{\frac{1}{N}}{\sqrt{\frac{1}{N}}\sqrt{\frac{1}{N}}} = 1$
was dropped/never introduced for convenience.