I have trouble distinguishing between these two concepts. This is my understanding so far.
A stationary process is a stochastic process whose statistical properties do not change with time. For a strict-sense stationary process, this means that its joint probability distribution is constant; for a wide-sense stationary process, this means that its 1st and 2nd moments are constant.
An ergodic process is one where its statistical properties, like variance, can be deduced from a sufficiently long sample. E.g., the sample mean converges to the true mean of the signal, if you average long enough.
Now, it seems to me that a signal would have to be stationary in order to be ergodic.
- And what kinds of signals could be stationary, but not ergodic?
- If a signal has the same variance for all time, for example, how could the time-averaged variance not converge to the true value?
- So, what is the real distinction between these two concepts?
- Can you give me an example of a process that is stationary without being ergodic, or ergodic without being stationary?
Answer
A random process is a collection of random variables, one for each time instant under consideration. Typically this may be continuous time ($-\infty < t < \infty$) or discrete time (all integers $n$, or all time instants $nT$ where $T$ is the sample interval).
- Stationarity refers to the distributions of the random variables. Specifically, in a stationary process, all the random variables have the same distribution function, and more generally, for every positive integer $n$ and $n$ time instants $t_1, t_2, \ldots, t_n$, the joint distribution of the $n$ random variables $X(t_1), X(t_2), \cdots, X(t_n)$ is the same as the joint distribution of $X(t_1+\tau), X(t_2+\tau), \cdots, X(t_n+\tau)$. That is, if we shift all time instants by $\tau$, the statistical description of the process does not change at all: the process is stationary.
- Ergodicity, on the other hand, doesn't look at statistical properties of the random variables but at the sample paths, i.e. what you observe physically. Referring back to the random variables, recall that random variables are mappings from a sample space to the real numbers; each outcome is mapped onto a real number, and different random variables will typically map any given outcome to different numbers. So, imagine that some higher being has performed the experiment which has resulted in an outcome $\omega$ in the sample space, and this outcome has been mapped onto (typically different) real numbers by all the random variables in the process: specifically, the random variable $X(t)$ has mapped $\omega$ to a real number we shall denote as $x(t)$. The numbers $x(t)$, regarded as a waveform, are the sample path corresponding to $\omega$, and different outcomes will give us different sample paths. Ergodicity then deals with properties of the sample paths and how these properties relate to the properties of the random variables comprising the random process.
Now, for a sample path $x(t)$ from a stationary process, we can compute the time average $$\bar{x}(T) = \frac{1}{2T} \int_{-T}^T x(t) \,\mathrm dt,$$ but what does $\bar{x}(T)$ have to do with $\mu = E[X(t)]$, the mean of the random process? (Note that it doesn't matter which value of $t$ we use: all the random variables have the same distribution and so have the same mean, if the mean exists.) As the OP says, the average value or DC component of a sample path converges to the mean value of the process if the sample path is observed long enough, provided the process is ergodic and stationary, etc. That is, ergodicity is what enables us to connect the results of the two calculations and to assert that $$\lim_{T\to \infty}\bar{x}(T) = \lim_{T\to \infty}\frac{1}{2T} \int_{-T}^T x(t) \,\mathrm dt$$ equals
$$\mu = E[X(t)] = \int_{-\infty}^\infty uf_X(u) \,\mathrm du.$$ A process for which such equality holds is said to be mean-ergodic, and a process is mean-ergodic if its autocovariance function $C_X(\tau)$ has the property: $$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T C_X(\tau) \mathrm d\tau = 0.$$
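To make the mean-ergodic idea concrete, here is a minimal numerical sketch (my own illustration, not from the original answer): an i.i.d. Gaussian process is stationary and mean-ergodic, so the time average of a single long sample path should land close to the ensemble mean. The mean $\mu = 2.0$, the seed, and the sample size are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One long sample path of an i.i.d. Gaussian process with mean mu = 2.0.
# i.i.d. noise is stationary and mean-ergodic: C_X(tau) = 0 for tau != 0,
# so the averaged autocovariance condition in the text is trivially met.
mu = 2.0
x = rng.normal(loc=mu, scale=1.0, size=100_000)

# Time average of this single sample path; by mean-ergodicity it should be
# close to the ensemble mean mu (standard error here is about 0.003).
x_bar = x.mean()
print(x_bar)
```

Averaging one realization over time here stands in for averaging many realizations at one instant, which is exactly the substitution that ergodicity licenses.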
Thus, not all stationary processes need be mean-ergodic. But there are other forms of ergodicity too. For example, for an autocovariance-ergodic process, the autocovariance function of a finite segment (say, for $t\in (-T, T)$) of the sample path $x(t)$ converges to the autocovariance function $C_X(\tau)$ of the process as $T\to \infty$. A blanket statement that a process is ergodic might mean any of the various forms or it might mean a specific form; one just can't tell.
As an example of the difference between the two concepts, suppose that $X(t) = Y$ for all $t$ under consideration. Here $Y$ is a random variable. This is a stationary process: each $X(t)$ has the same distribution (namely, the distribution of $Y$), same mean $E[X(t)] = E[Y]$, same variance etc.; each $X(t_1)$ and $X(t_2)$ have the same joint distribution (though it is degenerate) and so on. But the process is not ergodic because each sample path is a constant. Specifically, if a trial of the experiment (as performed by you, or by a superior being) results in $Y$ having value $\alpha$, then the sample path of the random process that corresponds to this experimental outcome has value $\alpha$ for all $t$, and the DC value of the sample path is $\alpha$, not $E[X(t)] = E[Y]$, no matter how long you observe the (rather boring) sample path. In a parallel universe, the trial would result in $Y = \beta$ and the sample path in that universe would have value $\beta$ for all $t$. It is not easy to write mathematical specifications to exclude such trivialities from the class of stationary processes, and so this is a very minimal example of a stationary random process that is not ergodic.
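A quick simulation of the $X(t) = Y$ example above makes the failure of ergodicity visible. The choice of $Y$ uniform on $\{-1, +1\}$ is a hypothetical concrete case (any nondegenerate $Y$ would do); then $E[Y] = 0$, yet every time average equals the drawn value $\alpha$, never $0$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical concrete choice: Y uniform on {-1, +1}, so E[Y] = 0.
# Each trial of the experiment fixes Y = alpha, and the entire sample
# path is then the constant alpha.
n_samples = 10_000
for _ in range(5):
    alpha = rng.choice([-1.0, 1.0])
    path = np.full(n_samples, alpha)   # the (rather boring) sample path
    time_avg = path.mean()             # equals alpha, never E[Y] = 0
    print(alpha, time_avg)
```

No matter how long the averaging window, each path reports only its own $\alpha$; the ensemble mean $E[Y] = 0$ is simply not recoverable from a single realization.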
Can there be a random process that is not stationary but is ergodic? Well, NO, not if by ergodic we mean ergodic in every possible way one can think of: for example, if we measure the fraction of time during which a long segment of the sample path $x(t)$ has value at most $\alpha$, this is a good estimate of $P(X(t) \leq \alpha) = F_X(\alpha)$, the value of the (common) CDF $F_X$ of the $X(t)$'s at $\alpha$ if the process is assumed to be ergodic with respect to the distribution functions. But, we can have random processes that are not stationary but are nonetheless mean-ergodic and autocovariance-ergodic. For example, consider the process $\{X(t)\colon X(t)= \cos (t + \Theta), -\infty < t < \infty\}$ where $\Theta$ takes on four equally likely values $0, \pi/2, \pi$ and $3\pi/2$. Note that each $X(t)$ is a discrete random variable that, in general, takes on four equally likely values $\cos(t), \cos(t+\pi/2)=-\sin(t), \cos(t+\pi) = -\cos(t)$ and $\cos(t+3\pi/2)=\sin(t)$. It is easy to see that in general $X(t)$ and $X(s)$ have different distributions, and so the process is not even first-order stationary. On the other hand, $$E[X(t)] = \frac 14\cos(t)+ \frac 14(-\sin(t)) + \frac 14(-\cos(t))+\frac 14 \sin(t) = 0$$ for every $t$ while \begin{align} E[X(t)X(s)]&= \frac 14\left[\cos(t)\cos(s) + (-\cos(t))(-\cos(s)) + \sin(t)\sin(s) + (-\sin(t))(-\sin(s))\right]\\ &= \frac 12\left[\cos(t)\cos(s) + \sin(t)\sin(s)\right]\\ &= \frac 12 \cos(t-s). \end{align} In short, the process has zero mean and its autocorrelation (and autocovariance) function depends only on the time difference $t-s$, and so the process is wide sense stationary. But it is not first-order stationary and so cannot be stationary to higher orders either.
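The two ensemble moments just derived can be checked numerically by averaging over the four equally likely values of $\Theta$ (an averaging over outcomes, not over time). This is my own sketch; the test points $(t, s)$ are arbitrary.

```python
import numpy as np

# Ensemble average over the four equally likely values of Theta.
thetas = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def ensemble_mean(t):
    # E[X(t)] = (1/4) * sum over theta of cos(t + theta)
    return np.mean(np.cos(t + thetas))

def ensemble_corr(t, s):
    # E[X(t)X(s)] = (1/4) * sum over theta of cos(t + theta) cos(s + theta)
    return np.mean(np.cos(t + thetas) * np.cos(s + thetas))

for t, s in [(0.3, 1.7), (2.0, -0.5)]:
    print(ensemble_mean(t))                     # 0 for every t
    print(ensemble_corr(t, s), 0.5 * np.cos(t - s))  # agree for every t, s
```

The mean is identically zero and the correlation matches $\frac 12 \cos(t-s)$ at every pair of instants, confirming the wide-sense stationarity argument above.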
Now, when the experiment is performed and the value of $\Theta$ is known, we get the sample function, which clearly must be one of $\pm \cos(t)$ and $\pm \sin(t)$. These all have DC value $0$, which equals $E[X(t)]$, and autocorrelation function $\frac 12 \cos(\tau)$, same as $R_X(\tau)$, and so this process is mean-ergodic and autocorrelation-ergodic even though it is not stationary at all. In closing, I remark that the process is not ergodic with respect to the distribution function; that is, it cannot be said to be ergodic in all respects.
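The time-average side of this example can also be verified numerically: each of the four possible sample paths of $\cos(t + \Theta)$ should have time-averaged mean $0$ and time-averaged correlation at lag $\tau$ close to $\frac 12 \cos(\tau)$. This is a rough sketch using a discretized finite segment; the segment length, grid resolution, and test lag $\tau = 1$ are arbitrary choices.

```python
import numpy as np

# A long segment, an integer number of periods so edge effects are tiny.
t = np.linspace(0.0, 1000 * np.pi, 1_000_000)
dt = t[1] - t[0]

# Time-averaged mean of each possible sample path: all close to 0 = E[X(t)].
for theta in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
    x = np.cos(t + theta)
    print(theta, x.mean())

# Time-averaged correlation of one sample path at lag tau = 1.0,
# approximated on the grid; should be close to 0.5 * cos(1.0).
x = np.cos(t)
lag = int(round(1.0 / dt))
r_hat = np.mean(x[:-lag] * x[lag:])
print(r_hat, 0.5 * np.cos(1.0))
```

So each individual path reproduces the process mean and autocorrelation, which is exactly the mean- and autocorrelation-ergodicity claimed above, even though the process is not first-order stationary.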