Given Gaussian white noise with mean 0 and $\sigma = 1$, I'm interested in the PSD (power spectral density) when the signal is plainly up- or downsampled. No interpolation, no filtering, nothing.
There are some related topics on the web, but they always end with someone considering lowpass or bandpass filters. They also often speak about the aliases which fold back into the Nyquist band when downsampling, but they never properly describe it with math.
So, what happens to the given PSD (which should have an amplitude of $1/f_s$ between $-f_s/2$ and $f_s/2$) when the signal is led through a downsampler and an upsampler?
I assume that upsampling then downsampling, or the opposite order, should result in the same PSD; otherwise one could just up- and downsample to benefit from some noise reduction. I could also imagine that up- and downsampling make no change at all to white noise. On the other hand, there might be an effect when downsampling, due to aliases.
Let me try to explain what happens to the PSD (power spectral density) of a discrete-time Gaussian white noise $x[n]$ of power $\sigma_x^2$ after it's either expanded by $L$ (insertion of $L-1$ zeros between consecutive samples of $x[n]$) or compressed by $M$ (a.k.a. decimation, selecting every $M$-th sample of $x[n]$).
First, let's set up a convenient framework by defining the required quantities and their assumptions.
A discrete-time random process (RP) is constructed from a set of random variables (RVs) and represented by $\{X[n,s)\}$, where the integer $n$ denotes the time index of each RV and $s$ denotes the outcome of the random experiment associated with those RVs (or the RP). Customarily, an outcome $s$ selects a complete discrete-time waveform $x[n]$, an instance of the RP, from its ensemble, the set of all possible discrete-time waveforms. For notational simplicity, the sequence $x[n]$ is referred to as the random process, unless stated otherwise.
Probabilistic analysis of the RP $x[n]$ proceeds by defining various joint PDFs (probability density/distribution functions) between the random variables of the RP. Among those joint PDFs and their associated moments, two become most useful and important for practical engineering applications: the mean and the autocorrelation sequence (ACS) of the process, defined below for a real $x[n]$:
1- Mean of a RP is: $\mu_x[n] = \mathcal{E}\{x[n]\} $
2- ACS of a RP is: $\phi_{xx}[n,m] = \mathcal{E}\{x[n]x[n+m]\}$
As can be seen, in general the mean and ACS of a RP depend on the time index $n$, with the ACS also depending on the lag $m$. A very important simplification for the analysis of RPs is made by defining what's known as a wide-sense stationary (WSS) RP, whose mean and ACS are independent of time (a quick numerical check follows these two definitions); i.e.,
1- Mean $\mu_x[n] = \mu_x = \mathcal{E}\{x[n]\} $
2- ACS $\phi_{xx}[n,m] = \phi_{xx}[m] = \mathcal{E}\{x[n]x[n+m]\}$
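As a quick sanity check of these definitions, here is a minimal numpy sketch (not part of the argument; the ensemble size and sample length are arbitrary choices of mine) that approximates $\mathcal{E}\{\cdot\}$ by averaging over an ensemble of realizations and shows that, for a WSS process, the estimated mean and lag-1 ACS do not depend on $n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of realizations of a WSS process (here: unit-variance white Gaussian noise).
n_realizations, n_samples = 20000, 64
X = rng.standard_normal((n_realizations, n_samples))

# Ensemble (vertical) averages approximate the expectation operator E{.}.
mean_est = X.mean(axis=0)                       # estimate of mu_x[n] for each n
acs_lag1 = (X[:, :-1] * X[:, 1:]).mean(axis=0)  # estimate of phi_xx[n, 1] for each n

# For a WSS process both estimates are (approximately) flat over n.
print(np.round(mean_est[:5], 3))   # ~0 for every n
print(np.round(acs_lag1[:5], 3))   # ~0 for every n (white noise: phi_xx[1] = 0)
```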
Now, for a given WSS random process $x[n]$, its ACS $\phi_{xx}[m]$ holds a very important position. The power spectral density (PSD), $S_{xx}(e^{j\omega})$, of a WSS RP is defined to be the discrete-time Fourier transform (DTFT) of its ACS:
$$ S_{xx}(e^{j\omega}) = \text{DTFT} \{ \phi_{xx}[m] \} = \sum_{m=-\infty}^{\infty} \phi_{xx}[m] e^{-j\omega m} $$
At this point, it's clear that the PSD of a RP exists in this form only if the RP is WSS and, further, its ACS is stable (absolutely summable) so that its DTFT converges. I assume @DilipSarwate may comment if a (major) mis-statement was made up to here.
We are now ready to tackle your problem!
The compression-by-integer-$M$ operation (which you refer to as downsampling without filtering) is:
$$ x[n] \longrightarrow \boxed{ M \downarrow} \longrightarrow y[n] ; ~~~y[n]=x[Mn] $$
Given an i.i.d. (independent, identically distributed), zero-mean, WSS (Gaussian) white noise RP $x[n]$ with power $\sigma_x^2$, it can be shown that its ACS and PSD are:
$$\phi_{xx}[m] = \sigma_{x}^2 \delta[m]$$ $$S_{xx}(e^{j\omega}) = \sigma_x^2 ~~~, ~~~\text{ for all } \omega $$
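For a concrete check of these two formulas, here is a small sketch of my own (hedged: `scipy.signal.welch` with density scaling and normalized $f_s = 1$ is just one reasonable PSD estimator) that time-averages one long realization to estimate the ACS and the PSD:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
sigma_x = 1.0
x = sigma_x * rng.standard_normal(2**18)   # one long realization of white Gaussian noise

# Time-averaged ACS estimate (ergodicity lets us trade ensemble averages for time averages).
lags = np.arange(-5, 6)
acs = np.array([np.mean(x[:len(x) - abs(m)] * x[abs(m):]) for m in lags])
print(dict(zip(lags.tolist(), np.round(acs, 3))))   # ~1 at m = 0, ~0 at all other lags

# PSD estimate: flat at sigma_x^2 over the whole band (two-sided, normalized fs = 1).
f, Sxx = welch(x, fs=1.0, nperseg=1024, return_onesided=False)
print(np.round(Sxx.mean(), 3), np.round(Sxx.std(), 3))   # mean ~ 1.0 = sigma_x^2
```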
Then what is the PSD associated with the compressed output $y[n]$? To find it, we must first compute its ACS and see that the RP $y[n]$ is indeed WSS:
$$\phi_{yy}[n,m] = \mathcal{E}\{y[n]y[n+m]\} = \mathcal{E}\{x[Mn]x[M(n+m)]\} = \mathcal{E}\{x[Mn]x[Mn+Mm]\} = \phi_{xx}[Mm] = \sigma_x^2 \delta[Mm] = \sigma_x^2\delta[m]$$
Hence we see (with the slightly bold step of treating $Mn$ the same as $n$ alone, which is justified by the WSS property and whiteness of the input $x[n]$) that the ACS of the compressed signal $y[n]$ is identical to the ACS of the input $x[n]$. Hence their PSDs are also the same:
$$S_{yy}(e^{j\omega}) = \sigma_x^2 ~~~,~~~\text{ for all } \omega $$
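To back this up numerically, here is a short sketch (the choices $M = 4$ and Welch estimation at normalized frequency are my own) comparing the PSD estimates of $x[n]$ and of the plainly compressed $y[n] = x[Mn]$:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
M = 4
x = rng.standard_normal(2**20)   # white Gaussian noise, sigma_x^2 = 1
y = x[::M]                       # compression by M: y[n] = x[Mn], no filtering

# Both PSD estimates are flat at sigma_x^2 (normalized frequency, density scaling),
# so plain downsampling of white noise leaves the PSD level unchanged.
_, Sxx = welch(x, fs=1.0, nperseg=1024, return_onesided=False)
_, Syy = welch(y, fs=1.0, nperseg=1024, return_onesided=False)
print(np.round(np.mean(Sxx), 3), np.round(np.mean(Syy), 3))   # both ~ 1.0
```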
Next we apply the same analysis to the expansion process (which you refer to as upsampling), defined as: $$ x[n] \longrightarrow \boxed{ L \uparrow} \longrightarrow y[n] = \begin{cases} x[n/L], & n=rL \text{ for integer } r \\ 0, & \text{otherwise} \end{cases} $$
Let's try to compute the ACS of the expanded output $y[n]$: $$\phi_{yy}[n,m] = \mathcal{E}\{y[n]y[n+m]\} = \begin{cases} \sigma_x^2, & m=0 \text{ and } n=rL \\ 0, & m \neq 0 \text{ or } n \neq rL \end{cases} $$
When computing $\phi_{yy}[n,m]$, I treated the stuffed zeros as degenerate (deterministic) random variables with the pdf $f_X(x)=\delta(x)$, which take the value $0$ with probability one. They are not independent among themselves but are independent of the nonzero samples. Furthermore, since their value is always zero, whenever they appear as a factor inside the expectation operator the result is zero.
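As an empirical illustration of this ACS (a sketch under the same assumptions; $L = 3$ and the ensemble size are arbitrary), the following estimates $\phi_{yy}[n,0] = \mathcal{E}\{y^2[n]\}$ at two different time indices and shows that it depends on $n$:

```python
import numpy as np

rng = np.random.default_rng(3)
L = 3
n_realizations, n_samples = 50000, 32

# Ensemble of white-noise realizations, each expanded by L (L-1 zeros between samples).
X = rng.standard_normal((n_realizations, n_samples))
Y = np.zeros((n_realizations, n_samples * L))
Y[:, ::L] = X                     # y[n] = x[n/L] at n = rL, 0 otherwise

# Ensemble estimate of phi_yy[n, 0] = E{y[n]^2} at two different time indices:
print(np.round(np.mean(Y[:, 0]**2), 3))   # n = 0 (a multiple of L): ~ sigma_x^2
print(np.round(np.mean(Y[:, 1]**2), 3))   # n = 1 (a stuffed zero):  exactly 0
```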
Now there is a problem! Even though the input $x[n]$ is WSS, the output is not, since its ACS $\phi_{yy}[n,m]$ depends not only on the lag $m$ but also on the exact time index $n$. Hence its ACS is not of the form that's suitable for the DTFT expression, and at this point we can say that the PSD associated with $y[n]$ does not exist in the form $$S_{yy}(e^{j \omega}) = \text{DTFT}\{ \phi_{yy}[m] \} $$
Maybe a parametric Fourier transform, a short-time Fourier transform (STFT), or a 2D Fourier transform could be utilized, but I'm not very sure about it.
Finally, we can make the following observation: since the expansion and compression operators are not LTI systems (they are time-varying), we are not surprised to see that their outputs need not be WSS even if their inputs are, which would not be possible with LTI systems.