
I wanted to talk a bit more about a subject that — despite good efforts — keeps getting misunderstood, especially by newcomers to digital graphics.

The term “resolution independence” does not mean that one can zoom into a picture indefinitely and see ever more detail. That is mathematically impossible. Instead, it means that a picture is described in such a way that it can be rendered at any desired resolution. For example, video game objects are described as polygons, whose edges stay crisp no matter how large they appear.
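As a minimal Python sketch of this idea (illustrative only, not production rendering code), a disc described geometrically by a centre and a radius can be rasterised at any resolution from the same description:

```python
def render_circle(size):
    # Rasterise a geometric description (a disc with centre (0.5, 0.5)
    # and radius 0.4 in unit coordinates) into a size x size pixel grid.
    # The description itself has no resolution; only the output does.
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            # centre of this pixel in unit coordinates, relative to the disc centre
            cx = (x + 0.5) / size - 0.5
            cy = (y + 0.5) / size - 0.5
            row.append(1 if cx * cx + cy * cy <= 0.4 ** 2 else 0)
        img.append(row)
    return img

# the same description rendered at two different resolutions
small = render_circle(16)
large = render_circle(512)
```

At every resolution the boundary is re-derived from the geometry, so the edge is as crisp as the output grid allows; nothing was upscaled from pixels.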

For pixel-based images such as photographs, the best we can do when upscaling is to keep the apparent edges of differently colored regions sharp. This is done by finding some geometric way to describe the pixel regions, and then rendering at the desired size using that geometry.
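A tiny one-dimensional Python sketch shows the two extremes (both functions are illustrative stand-ins, not from any particular library): replicating pixels keeps an edge sharp but blocky, while linear interpolation smooths it into grays that were never in the source.

```python
# a 1-D row of pixels containing one sharp black-to-white edge
row = [0, 0, 0, 255, 255, 255]

def upscale_nearest(row, factor):
    # replicate each pixel: edges stay sharp, but the result is blocky
    return [p for p in row for _ in range(factor)]

def upscale_linear(row, factor):
    # interpolate between neighbours: smooth, but the edge blurs into grays
    out = []
    for i in range(len(row) - 1):
        for k in range(factor):
            t = k / factor
            out.append(round(row[i] * (1 - t) + row[i + 1] * t))
    out.append(row[-1])
    return out

sharp = upscale_nearest(row, 4)   # contains only the original two values
smooth = upscale_linear(row, 4)   # contains intermediate grays at the edge
```

The geometric approaches described above aim for a middle ground: locate the edge, then re-render it sharply at the new size.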

In the case of fractals, and the PIFS algorithm in particular, the image is described as a set of recursively iterated blocks, which produces a sharp but quasi-random blocky look. Other algorithms try to convert groups of similar pixels into spline paths or polygons, whose edges can be scaled without pixellation.

None of these approaches can increase actual detail, e.g. showing pores as a person’s skin is enlarged, or making the numbers on a distant licence plate readable. Despite what popular crime-fighting TV shows would have us believe, one cannot enhance video this way. Any detail that does get added is far more likely to be random than meaningful.

Consider an analogy to compression, because we are in effect trying to describe a picture with many pixels using fewer pixels. A file cannot be compressed indefinitely down to a single bit or byte, because at some point the number of different files that can be represented becomes too small. A single byte has eight bits, giving 256 different possible values, which means it can expand, at best, into 256 different possible files.
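The counting argument can be made concrete in a few lines of Python. The `decompress` function below is a hypothetical stand-in for any decompressor whatsoever; since it is deterministic and has only 256 possible inputs, it can produce at most 256 distinct outputs.

```python
def decompress(b: int) -> bytes:
    # hypothetical decompressor mapping one byte to a file;
    # any deterministic function from a byte would behave the same way
    return bytes([b]) * (b + 1)

# feed it every possible one-byte input and collect the distinct outputs
outputs = {decompress(b) for b in range(256)}
# no matter how clever the function, there cannot be more than 256 of them
```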

If any file, no matter how little redundant data it contained, could be compressed, then we could simply compress huge files over and over until they shrank to a single byte. But we already know that a byte cannot come anywhere close to representing the billions of files people have. Once a file has no redundant data, it cannot be compressed further.
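This is easy to observe with a real compressor. The sketch below uses Python’s zlib purely as an illustration: random bytes contain no redundancy, so repeated compression never shrinks them, and each pass only adds the compressor’s own header and block overhead.

```python
import os
import zlib

# random data is (with overwhelming probability) incompressible
data = os.urandom(100_000)
sizes = [len(data)]

# compress the output of the previous pass, five times over
for _ in range(5):
    data = zlib.compress(data)
    sizes.append(len(data))

# sizes never drop below the original: there was no redundancy to remove
```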

For images, imagine trying to zoom into a single gray pixel. What could it possibly resolve into when enlarged to, say, 100 x 100 pixels? A 2 x 2 block of black pixels could be enlarged to a black square or to a circle, but there is not enough information to say which shape is the right one. The ambiguity results from information that was irretrievably lost when the image was first created, and computers are not artists able to creatively decide how to fill in the gaps.
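The ambiguity can be demonstrated directly (a small Python sketch, with averaging standing in for how a camera or downscaler produces one pixel from many): two very different patterns collapse to exactly the same gray pixel, so no upscaler could tell from that pixel which pattern was originally there.

```python
def downscale_to_one(img):
    # average every pixel: this single number is all the gray pixel "remembers"
    flat = [p for row in img for p in row]
    return sum(flat) / len(flat)

# two very different 4 x 4 source patterns...
checkerboard = [[255 if (x + y) % 2 == 0 else 0 for x in range(4)]
                for y in range(4)]
stripes = [[255 if y % 2 == 0 else 0 for x in range(4)]
           for y in range(4)]

# ...both reduce to the identical mid-gray pixel (value 127.5),
# so the original pattern is irretrievably gone
```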

If such magic were possible we would hardly need, for example, telescopes. Astronomers could just snap pictures of the night sky with their smartphones and enhance them until the surfaces of distant planets were visible.