Are there any algorithms that attempt to perform the inverse of color quantization on an image? In other words, is there a smart way to increase the bit depth of image?
The obvious answer would be to simply stretch the values of all pixels to fit the new range; however, that would leave gaps in the histogram, and the resulting image would still have the same number of different values. Are there any algorithms that, for example, take neighboring pixels into consideration in order to determine a better estimation for each pixel?
Answer
Are there any algorithms that, for example, take neighboring pixels into consideration in order to determine a better estimation for each pixel?
That would essentially be a low-pass filter.
So, yes, that exists, and is commonly used.
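A minimal sketch of that idea in NumPy (the 3×3 box kernel and the helper names are my own choices, not from the answer): stretch an 8-bit image into a wider range, then low-pass filter it so neighboring quantization levels blend into in-between values.

```python
import numpy as np

def box_filter(a, k=3):
    """Average each pixel over its k*k neighborhood (edge-padded)."""
    r = k // 2
    p = np.pad(a, r, mode="edge")
    out = np.zeros(a.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def expand_depth(img8, target_bits=16, k=3):
    """Stretch an 8-bit image to target_bits, then low-pass filter it.

    The stretch alone leaves gaps in the histogram; the filter fills
    them with averages of neighboring pixels.
    """
    stretched = img8.astype(np.float64) * ((2 ** target_bits - 1) / 255.0)
    return box_filter(stretched, k=k)
```

The stretched image still has exactly as many distinct values as the 8-bit original; only after the filter does the histogram fill in.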
You can see that very nicely if you take, for example, an old computer graphics sprite and scale it up once without interpolation and once with linear interpolation:
While the simply scaled image has only as many discrete colors as the original, the linearly interpolated one has many more, and its histogram looks much more "continuous".
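That comparison can be sketched in plain NumPy (the function names and the tiny two-value test image are mine, purely for illustration): nearest-neighbor scaling keeps exactly the original set of values, while bilinear interpolation introduces new intermediate ones.

```python
import numpy as np

def upscale(img, f, bilinear=True):
    """Upscale a 2-D grayscale image by integer factor f."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * f)
    xs = np.linspace(0, w - 1, w * f)
    if not bilinear:
        # Nearest neighbor: just pick the closest source pixel.
        return img[np.rint(ys).astype(int)[:, None],
                   np.rint(xs).astype(int)[None, :]]
    # Bilinear: blend the four surrounding source pixels.
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    a = img[y0][:, x0]  # top-left neighbors
    b = img[y0][:, x1]  # top-right
    c = img[y1][:, x0]  # bottom-left
    d = img[y1][:, x1]  # bottom-right
    return (a * (1 - wx) + b * wx) * (1 - wy) + \
           (c * (1 - wx) + d * wx) * wy
```

Counting `np.unique` values on both outputs shows the effect directly: the nearest-neighbor result has the same value set as the input, the bilinear one a much larger set.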
But:
There's no free lunch, though: you can't simply add back color information that was lost. (Not being able to recover lost information is one of the fundamental truths of information theory, by the way.) You have to sacrifice spatial resolution instead. Essentially, it's the same trade-off that underlies Heisenberg's uncertainty principle: you can have precise knowledge of a value (the color of a pixel, the momentum of an electron) or of its position (sharp edges in an image, the location of an electron), but not both.