
If I read Dimension’s press release about the 2014 NAB show correctly, they are trying to market only their upscaling technology.

On the surface, this is not a bad idea. With the SoftVideo codec obsolete, Dimension may still extract value by implementing their 8,639,053 patent in hardware as a realtime image upscaler. The question is whether this would work. The two main hurdles are: a) can it run fast enough, and b) is the quality high enough for anyone to care? High-resolution TVs already ship with good upscaling hardware.

The upscaler spends most of its time searching for blocks: up to sixteen 6 x 6 blocks for each pixel to be zoomed. That could be implemented as a GPU fragment shader, but it means a fair number of texture lookups per output pixel. The search itself is nontrivial, because blocks are compared via a summary of their pixel values, which requires additional math. And there are other tasks too:

– Converting the input frame to the optimal color space (if it is not in it already).
– Pre-downsampling the input frame prior to block matching.
– Copying a 4 x 4 block to the output frame with color scaling/shifting.
– Artifact filtering.
– Applying a low-pass (smoothing) filter after all pixels have been upscaled.
– Converting the output frame back to RGB (if necessary).
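The per-pixel cost of that search can be counted roughly. Assuming a naive fragment shader that re-reads every candidate block for every output pixel (no sharing of summaries between neighbouring pixels):

```python
# Rough count of texture reads per zoomed pixel for a naive shader
# that re-fetches every candidate block for every output pixel.
candidates = 16          # up to sixteen blocks searched per pixel
block = 6 * 6            # each candidate is a 6 x 6 block
reads = candidates * block
print(reads)             # → 576 texture lookups per output pixel
```

And on top of those 576 reads, each candidate's summary has to be computed and compared, which is the "additional math" mentioned above. A real implementation would cache summaries between neighbouring pixels, but the naive figure shows why the shader route is not free.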

It gets harder if the zoom exceeds 200%, e.g. from standard definition to 4K. The entire process has to run twice, and the result must then be downsampled to the output resolution to avoid enlarging too much. Since the second pass works on a frame four times the size of the first pass's input, this costs at least five times as much computation as a single pass.
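The factor of five can be checked with back-of-the-envelope arithmetic, assuming per-pixel cost is constant so that a pass costs roughly its number of output pixels (the PAL input resolution is an assumption for illustration):

```python
# Rough cost model for zooming beyond 200%: two 2x passes, then a downsample.
sd = 720 * 576        # standard-definition input frame (PAL, an assumption)
pass1 = sd * 4        # first 2x pass outputs 4x the pixels
pass2 = pass1 * 4     # second 2x pass works on the already-enlarged frame
total = pass1 + pass2 # final downsample to 4K is comparatively cheap, ignored
print(total / pass1)  # → 5.0, i.e. five times the cost of a single pass
```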

A custom FPGA circuit with dedicated RAM would be the safest bet. GPUs, especially the slower ones in mobile devices, would probably be too slow, and all that work would drain batteries heavily. The upscaler also has to run at 30 fps, perhaps even 60 fps, which means completing all that work in under 33 or 16 milliseconds per frame.
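Those frame budgets translate into a very small per-pixel budget at 4K, before any parallelism is taken into account:

```python
# Per-pixel time budget at 4K output resolution.
pixels_4k = 3840 * 2160                   # 8,294,400 output pixels per frame
for fps in (30, 60):
    frame_ms = 1000 / fps                 # ~33 ms or ~16 ms per frame
    ns_per_px = frame_ms * 1e6 / pixels_4k
    print(f"{fps} fps: {frame_ms:.1f} ms/frame, {ns_per_px:.1f} ns/pixel")
# → roughly 4 ns per pixel at 30 fps, 2 ns at 60 fps (serial budget)
```

A sequential processor gets only a few nanoseconds per pixel, which is why the work has to be spread across many parallel units, whether shader cores or FPGA logic.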

So if it works, would it look good enough? To answer that, we would need Dimension to offer the upscaler as a testable application or library. Alternatively, we could develop a reference upscaler using the patent as a specification, but that is nontrivial work we should not have to do. However, a simple upscaler that performs only the block searching is easy enough to develop, and it would give us some idea of the quality.
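Such a block-search-only test upscaler might be sketched like this. It is a toy fractal-style 2x zoom: the 6 x 6 block size and the sixteen-candidate search come from the description above, but the summary statistics, the naive downsample, and the tile mapping are assumptions for illustration, not the patented method:

```python
import numpy as np

B = 6  # 6 x 6 comparison blocks, per the patent description

def summary(block):
    # Assumed summary statistics; the patent's exact metric is not given.
    return np.array([block.mean(), block.std()])

def upscale2x(img, radius=2):
    """Toy fractal-style 2x zoom modelling only the block search step.
    Each B x B tile of the input is matched (by summary, up to 16
    candidates) against B x B blocks of a half-resolution copy; the
    full-resolution 2B x 2B area behind the winning block then fills the
    corresponding output tile. Filtering, color scaling/shifting and the
    other pipeline steps are omitted, and trailing partial tiles are
    left empty in this toy."""
    img = img.astype(np.float64)
    small = img[::2, ::2]                      # crude pre-downsample
    h, w = img.shape
    out = np.zeros((h * 2, w * 2))
    for y in range(0, h - B + 1, B):
        for x in range(0, w - B + 1, B):
            target = summary(img[y:y + B, x:x + B])
            # search centre in the small image, clamped to valid positions
            cy0 = min(y // 2, small.shape[0] - B)
            cx0 = min(x // 2, small.shape[1] - B)
            best, best_cost = (cy0, cx0), np.inf
            for dy in range(-radius, radius):  # 4 x 4 = 16 candidates
                for dx in range(-radius, radius):
                    cy, cx = cy0 + dy, cx0 + dx
                    if not (0 <= cy <= small.shape[0] - B and
                            0 <= cx <= small.shape[1] - B):
                        continue
                    cost = np.abs(summary(small[cy:cy + B, cx:cx + B])
                                  - target).sum()
                    if cost < best_cost:
                        best, best_cost = (cy, cx), cost
            cy, cx = best
            out[2 * y:2 * y + 2 * B, 2 * x:2 * x + 2 * B] = \
                img[2 * cy:2 * cy + 2 * B, 2 * cx:2 * cx + 2 * B]
    return out

frame = np.arange(144, dtype=np.float64).reshape(12, 12)
zoomed = upscale2x(frame)
print(zoomed.shape)  # → (24, 24)
```

Running this on real photographs against a plain bicubic enlargement would give a first impression of whether summary-based block matching recovers any extra detail.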

The tiny search area could be a problem, even though most block matches occur close to the block being magnified. Whatever the result, it has to be good enough for TV manufacturers to consider a “premium” upscaler worthwhile. If it is merely as good as existing technology, there is no point: it becomes just one method among many, and none of the large TV manufacturers like Sony or Samsung will bother, since their existing upscalers already work well enough.

Going by the results from the VDK, I am not optimistic. The benefit too often appears only in high-contrast imagery. However, the VDK compounds block-mapping errors by using block matching for compression as well as for upscaling, so the upscaler on its own may fare better.

In the end, even if upscaling is something Dimension (or TMM, if they win their lawsuit) can make commercially viable, it is small potatoes next to the codec business, and it severely lowers the revenue and growth expectations. The original promise of the technology was to be the main event, not a sideshow.