I have a function that performs a Gaussian blur on an image for a given σ (the standard deviation).
It first computes a kernel of size ⌈3σ⌉ and then convolves the image with that kernel.
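For concreteness, here is a minimal sketch of what such a function might look like in 1D, assuming the kernel is sampled out to radius ⌈3σ⌉ and normalized (the function names are illustrative, not from the question):

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    """Gaussian kernel sampled out to radius ceil(3*sigma), normalized to sum 1."""
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x / sigma) ** 2 / 2)
    return k / k.sum()

def gaussian_blur_1d(signal, sigma):
    """Convolve a 1-D signal with the Gaussian kernel; 'same' keeps the length."""
    return np.convolve(signal, gaussian_kernel_1d(sigma), mode="same")
```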
However, I would like to specify blurring radius in pixels rather than σ.
I suppose that the blur radius (in pixels) is just σ², since that is the variance of the random variable.
Is that right? Can the same thought be extended to 2D?
UPDATE:
The problem is that I need to do things like building a gaussian pyramid (successively blurred and downsampled image).
When the image gets downsampled to 1/2 of its width, I suppose I need a Gaussian blur of radius 2 pixels (σ = √2?), and for 1/4 subsampling I would need a blur of 4 pixels (σ = 2?)... but I am not sure about that.
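For reference, a Gaussian pyramid of the kind described can be sketched as follows. This is one common construction, not necessarily the questioner's: each level blurs with a fixed σ (in pixels, here assumed σ = 1 by default) and then decimates by 2 using a separable kernel:

```python
import numpy as np

def blur2d(img, sigma):
    """Separable Gaussian blur: 1-D kernel applied along rows, then columns."""
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x / sigma) ** 2 / 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def gaussian_pyramid(img, levels, sigma=1.0):
    """Blur with a fixed sigma (in pixels), then keep every other row/column."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(blur2d(pyr[-1], sigma)[::2, ::2])
    return pyr
```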
Answer
The standard deviation σ is itself the appropriate linear scale for a Gaussian. For example, in 1D the Gaussian is f(x, σ) = f(x/σ) ∝ e^(−(x/σ)²), i.e. σ has the same units as x. As Arrigo notes, these units can be pixel units.
The 2 in the exponent of σ² does not have anything to do with the dimensionality: it is the same in 1D or 2D (or nD).
The use of t = σ² to index the "scale" in scale-space is mainly for (1) mathematical convenience, and (2) its connection to the time scale of diffusion.
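One reason t = σ² is convenient: variances add under repeated blurring, so blurring with σ₁ and then σ₂ is equivalent to a single blur with σ = √(σ₁² + σ₂²) (t₁ + t₂ in scale-space terms). A quick numerical check of this semigroup property, applied to a unit impulse (the kernel support of 6σ here is just a wide truncation so the comparison is not dominated by clipping):

```python
import numpy as np

def gk(sigma):
    """Normalized 1-D Gaussian kernel, truncated at radius ceil(6*sigma)."""
    r = int(np.ceil(6 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-(x / sigma) ** 2 / 2)
    return k / k.sum()

sig = np.zeros(201)
sig[100] = 1.0  # unit impulse

s1, s2 = 1.2, 1.6
# blur twice, with sigma1 then sigma2
twice = np.convolve(np.convolve(sig, gk(s1), "same"), gk(s2), "same")
# blur once, with the combined sigma = sqrt(s1^2 + s2^2)
once = np.convolve(sig, gk(np.sqrt(s1**2 + s2**2)), "same")

# the two results agree up to tiny sampling/truncation error
err = np.max(np.abs(twice - once))
```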