The Head-Mounted Display (HMD) is almost synonymous with Virtual Reality. So much so that iconic "VR" images almost always involve someone wearing a 1980s-chic headpiece wrapped around their head, arms extended into empty space, elegantly manipulating some virtual object. It makes sense that the head-mounted display was the centerpiece of early visions of VR. After all, if virtual reality entails replacing your current surroundings with computer-generated models, a fairly obvious approach is to supplant your visual surround with a pair of displays, one in front of each eye, that simply occlude the real world.

In fact, the first VR system was built around the first head-mounted display. In 1968, computer science visionary Ivan Sutherland, with the help of his student Bob Sproull, developed a head-mounted display system that immersed the user in a virtual computer graphics world. The system was incredibly forward-thinking and included binocular rendering (an appropriate perspective view for each eye), head tracking (the rendered scene was driven by changes in the user's head position), and a vector rendering system. The whole apparatus was so cumbersome that the display had to be mounted directly to the ceiling, hanging over the user in a somewhat intimidating manner, which earned it the nickname the "Sword of Damocles." So what happened? Why didn't the head-mounted display take off? Outside of niche military training applications and some very limited exposure in entertainment, the head-mounted display is very rarely seen. This is despite commercial development of HMDs by some very large players like Sony, which introduced the LCD-based Glasstron in 1997.

It turns out that HMDs suffered (and to some extent still do) from some very serious technical limitations. When a display sits that close to the eye, the effective pixel size for the user can be quite large: for a given resolution, a single pixel spans a larger angle of the total field of view. Large pixels mean that image quality suffers and an artifact known as "aliasing" becomes apparent. This arc-minutes-per-pixel problem can be addressed by adding more pixels to each display, but doing so while retaining a lightweight, easy-to-power system at reasonable cost has proved difficult. A second problem is the effective field of view of the display.

The point of VR is to create an immersive experience for the viewer. Initial HMDs had very limited viewing angles, somewhere in the neighborhood of 35 degrees per eye. This means you're looking at your world through a very narrow tube, which leads to all sorts of problems beyond the loss of a sense of presence, including difficulty navigating a scene and observing objects in context. If you have ever used an HMD (or watched the behavior of someone wearing one), you'll notice that the viewer continuously rotates their head back and forth, scanning the virtual environment to build a cognitive map of the scene. Again, things have gotten far better, and today there are HMDs with a horizontal field of view as broad as 123 degrees. At the same time, the resolution in these systems has improved to 1920×1200 per eye. Unfortunately, HMDs of this quality typically cost around $40,000.
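To make the trade-off concrete, here is a rough sketch of the angular size of a single pixel for the field-of-view and resolution figures above. The 35-degree and 123-degree fields of view and the 1920-pixel width come from the post; the 640-pixel width for the older narrow-field display is an assumption for illustration.

```python
# Rough angular size of a single pixel, in arc-minutes, for a display that
# fills a given horizontal field of view. The 640-pixel width for the early
# HMD is an assumed figure, not a specification of any particular product.

def arcmin_per_pixel(fov_degrees, horizontal_pixels):
    """Average angular span of one pixel across the horizontal field of view."""
    return (fov_degrees * 60.0) / horizontal_pixels

print(arcmin_per_pixel(35, 640))    # ~3.3 arc-min per pixel (early narrow-field HMD)
print(arcmin_per_pixel(123, 1920))  # ~3.8 arc-min per pixel (modern wide-field HMD)
```

Normal 20/20 vision resolves detail on the order of one arc-minute, so in both cases a pixel is several times larger than the eye can distinguish, and widening the field of view without adding pixels actually makes each pixel appear larger.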

Perhaps one of the worst problems with HMDs is the latency between the moment a user repositions his or her head and the moment the scene is re-rendered to match. When the user's head rotates, the resulting rotational lag in the scene can be disconcerting and lead to disorientation and dizziness. There are some proposed techniques to address this, but fundamentally, because the virtual room behind you doesn't exist until you rotate your head to look at it, latency will always be a challenge for HMDs.
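A quick back-of-the-envelope calculation shows why even small delays matter: the angular error is simply head angular velocity multiplied by the system's motion-to-photon latency. The rotation speed and latency values below are assumptions chosen for illustration.

```python
# Angular offset between where the user is actually looking and what the
# display is showing, given a head rotation speed and a rendering latency.
# The 200 deg/s head turn and the latency figures are illustrative assumptions.

def rotational_lag_degrees(head_velocity_deg_per_s, latency_ms):
    """Angular error introduced by motion-to-photon latency during a head turn."""
    return head_velocity_deg_per_s * (latency_ms / 1000.0)

print(rotational_lag_degrees(200, 50))  # 10.0 degrees behind on a brisk head turn
print(rotational_lag_degrees(200, 20))  # 4.0 degrees even with a fast 20 ms pipeline
```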

What I've found interesting is the parallel path to visual immersion that uses projectors to paint an immersive room. The original CAVE system was an alternative approach to visual immersion for VR that was quite successful. You can achieve higher visual fidelity simply by adding more projectors (we've deployed a CAVE-like display for a German university with a total resolution of over 30 million pixels; a rough sketch of how that kind of pixel count adds up appears below), so the burden of achieving high-resolution immersion is no longer placed strictly on the hardware manufacturer. Field of view isn't as much of an issue (think of a 360-degree dome covered in projection), and because you can render an immersive scene that includes data behind the viewer, rotational latency isn't as much of a problem. I'm obviously biased (I work at a company that develops visual computing software for multi-projector systems), but I believe this is a clearer path forward for immersive VR. What do other people think?
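For a sense of how resolution scales with projector count, here is a sketch of the effective pixel count of a hypothetical tiled array. The grid layout, per-projector resolution, and blend-overlap fraction are all assumptions for illustration, not the specifics of the installation mentioned above.

```python
# Effective resolution of a hypothetical tiled-projector display. Adjacent
# projectors share an edge-blend region, so those pixels are not unique.
# The 6x3 grid, 1920x1200 projectors, and 15% overlap are illustrative assumptions.

def effective_pixels(cols, rows, px_w, px_h, overlap=0.15):
    """Approximate unique pixels after subtracting edge-blend overlap regions."""
    total_w = cols * px_w - (cols - 1) * overlap * px_w
    total_h = rows * px_h - (rows - 1) * overlap * px_h
    return int(total_w * total_h)

print(effective_pixels(6, 3, 1920, 1200))  # ~32.7 million unique pixels
```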

About Christopher Jaynes

Jaynes received his doctoral degree at the University of Massachusetts, Amherst where he worked on camera calibration and aerial image interpretation technologies now in use by the federal government. Jaynes received his BS degree with honors from the School of Computer Science at the University of Utah. In 2004, he founded Mersive and today serves as the company's Chief Technology Officer. Prior to Mersive, Jaynes founded the Metaverse Lab at the University of Kentucky, recognized as one of the leading laboratories for computer vision and interactive media and dedicated to research related to video surveillance, human-computer interaction, and display technologies.

1 Comment for this entry

  • L Fitzpatrick
    March 9th, 2011

    I would like to believe that I will be able to use VR hardware in the privacy of my own home. I believe the VR market will be far bigger on the entertainment side, and a fully immersive HMD is needed for that. Price, field-of-vision problems, and latency can all, I believe, be fixed with time, and hopefully everyone will be able to explore VR environments.
