How much of a display is physics and how much of it is computation?

Certainly some parts are physical hardware components. Broadly, the minimum requirements of a display are a light source (even if that light source is ambient), a mechanism for modulating the intensity and color of that source (for example, the tiny mirrors of a DLP chip that control how much light reaches the viewer’s eye, paired with a spinning color wheel), and optics to direct the colored rays of light to the viewer. Beyond these foundational components, however, other aspects of a display are based on computation, and those can be replaced by software.

Solving Display Challenges with Software

The Mersive team uses this principle – Solving Display Challenges with Software – to write code that empowers display designers to build ever larger, higher-resolution, and reconfigurable displays by using other displays as their building blocks. By clustering displays together and adding software that replicates, for example, a single lens system, all of those smaller, lower-resolution displays can act in concert to produce an image whose quality, size, and brightness are far beyond those of any of the underlying components. In doing this we have replaced traditional hardware-based approaches to building displays with software.
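To make the geometric half of that idea concrete, here is a minimal sketch (not Mersive’s actual pipeline) of replacing a lens and alignment rig with a software pre-warp. It assumes a calibration step has already measured where one projector’s framebuffer corners land on the shared canvas, expressed as a homography; the resolutions and corner coordinates below are invented for illustration, and OpenCV and NumPy do the heavy lifting.

```python
import numpy as np
import cv2

# Assumed calibration result: where this projector's framebuffer corners
# land on the shared canvas (e.g., measured with a camera). Illustrative
# numbers only -- a real system would measure many more correspondences.
PROJ_W, PROJ_H = 1920, 1080          # one projector's native resolution
CANVAS_W, CANVAS_H = 3840, 1080      # the larger virtual display

fb_corners = np.float32([[0, 0], [PROJ_W, 0], [PROJ_W, PROJ_H], [0, PROJ_H]])
canvas_corners = np.float32([[40, 25], [1900, 10], [1935, 1060], [10, 1070]])

# Homography H maps framebuffer pixels -> canvas pixels.
H, _ = cv2.findHomography(fb_corners, canvas_corners)

def render_projector_framebuffer(canvas_image):
    """Sample the desired canvas image at the locations this projector
    actually illuminates, so the projected result lines up on the wall."""
    return cv2.warpPerspective(canvas_image, np.linalg.inv(H),
                               (PROJ_W, PROJ_H))

# Example: pre-warp a synthetic grid pattern for this one projector.
canvas = np.zeros((CANVAS_H, CANVAS_W, 3), np.uint8)
canvas[::64, :] = 255                 # horizontal grid lines
canvas[:, ::64] = 255                 # vertical grid lines
framebuffer = render_projector_framebuffer(canvas)
```

Each projector in the cluster gets its own homography, so adding or moving a projector becomes a recalibration step rather than a new optical design.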

[Image: Mersive display under construction]

Time at the optics bench building shorter-throw lenses can be replaced by clustering a lot of projectors close to the screen. Time spent designing a DLP chip that supports beyond-HD resolution is replaced by software that ties two HD projectors together. Of course, clustering commodity components to achieve better results through software isn’t a new idea. It has been used to build supercomputers out of Linux boxes, to achieve incredible real-time ray tracing with GPU render farms, to build distributed storage…the list goes on.
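As a deliberately simplified illustration of tying two HD projectors together, the NumPy sketch below splits a wider-than-HD canvas into two overlapping 1920x1080 framebuffers and cross-fades the shared columns so the doubled light in the overlap sums back to the intended brightness. The 3200-pixel canvas and 640-pixel overlap are assumptions chosen for the example; a real system also corrects geometry, color, and black level.

```python
import numpy as np

CANVAS_W, CANVAS_H = 3200, 1080   # target "beyond-HD" canvas
TILE_W = 1920                     # each projector is 1920x1080
OVERLAP = 2 * TILE_W - CANVAS_W   # 640 columns lit by both projectors

def split_with_edge_blend(canvas):
    """Split a (H, W, 3) canvas into two HD framebuffers with a linear
    cross-fade in the overlap so the combined light adds back to unity."""
    assert canvas.shape[:2] == (CANVAS_H, CANVAS_W)
    left = canvas[:, :TILE_W].astype(np.float32)
    right = canvas[:, CANVAS_W - TILE_W:].astype(np.float32)

    ramp = np.linspace(1.0, 0.0, OVERLAP, dtype=np.float32)    # 1 -> 0
    left[:, TILE_W - OVERLAP:] *= ramp[None, :, None]          # fade out
    right[:, :OVERLAP] *= (1.0 - ramp)[None, :, None]          # fade in
    return left.astype(canvas.dtype), right.astype(canvas.dtype)

# Example: a synthetic horizontal gradient sent to the two projectors.
canvas = np.tile(np.linspace(0, 255, CANVAS_W, dtype=np.uint8)[None, :, None],
                 (CANVAS_H, 1, 3))
left_fb, right_fb = split_with_edge_blend(canvas)
```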

Other researchers have spent time exploring what other parts of the display can be replicated in software. Some of this work is really cool. At UNC, folks realized that some of the operations contained in the optics of a display can be replicated in a more traditional rendering path; researchers at HP Labs also tackled this. My team at the “Metaverse Lab” at the University of Kentucky demonstrated how overlapping projectors and intelligent software can lead to super-resolution and full-screen anti-aliasing by rendering images that, when overlaid on one another, lead to a crisper image. There has even been work that uses software to generate a focused image (something traditionally done with a dial on the optics path) regardless of the underlying surface shape (something that I have yet to see accomplished through clever optical engineering). A great SIGGRAPH paper from Zhang and Nayar on this type of work focused on using a camera to estimate defocus on a projected display and then derive (in software) what image should be projected so that when it hits that surface it appears focused. (See the video.)
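Zhang and Nayar formulate that defocus-compensation problem as a constrained optimization driven by camera measurements of per-pixel blur; the sketch below is only a toy stand-in for the core intuition, not their method. It assumes a single, already-estimated Gaussian blur kernel and applies a Wiener-style pre-filter so that the projected (and then blurred) image lands closer to the desired one; all names and parameters here are illustrative.

```python
import numpy as np

def gaussian_psf(size=15, sigma=3.0):
    """Toy stand-in for the defocus kernel a camera would estimate."""
    ax = np.arange(size, dtype=np.float64) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def precompensate(target, psf, noise_to_signal=0.01):
    """Wiener-style pre-filter: compute the framebuffer image whose blurred
    projection approximates the desired target image."""
    h, w = target.shape
    kernel = np.zeros((h, w))
    kernel[:psf.shape[0], :psf.shape[1]] = psf
    # Center the kernel on pixel (0, 0) so the filter introduces no shift.
    kernel = np.roll(kernel, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                     axis=(0, 1))
    otf = np.fft.fft2(kernel)
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + noise_to_signal)
    comp = np.real(np.fft.ifft2(np.fft.fft2(target) * wiener))
    # Projectors can only emit intensities in [0, 1]; handling this clipping
    # constraint rigorously is where the real method does much more work.
    return np.clip(comp, 0.0, 1.0)

# Example: pre-compensate a synthetic diagonal stripe pattern.
target = (np.indices((256, 256)).sum(axis=0) % 64 < 32).astype(np.float64)
framebuffer = precompensate(target, gaussian_psf())
```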

This is all part of an exciting trend that should help move displays from a traditional “heavy iron” and hardware-based technology to flexible, software-driven clusters of displays that exploit the GPU, intelligent software, and commodity components.

About Christopher Jaynes

Jaynes received his doctoral degree at the University of Massachusetts, Amherst, where he worked on camera calibration and aerial image interpretation technologies now in use by the federal government. Jaynes received his BS degree with honors from the School of Computer Science at the University of Utah. In 2004, he founded Mersive and today serves as the company's Chief Technology Officer. Prior to Mersive, Jaynes founded the Metaverse Lab at the University of Kentucky, recognized as one of the leading laboratories for computer vision and interactive media and dedicated to research related to video surveillance, human-computer interaction, and display technologies.
