Someone asked me this morning what I thought was the coolest thing at CES.  Although not nearly as hyped as other announcements there, I was most excited by the advances in lifelogging technology, things like Sony’s eye-tracking glasses.

Ever since Gordon Bell started wearing that camera around his neck over a decade ago, I’ve been intrigued by the lifelog concept.  Lifelogging refers to the idea that by augmenting ourselves (or our environment) with the appropriate devices, we can construct a searchable archive that, in some sense, is a shadow of our life experiences.  The idea has its roots in a number of areas and reminds me of the wearable computing initiative championed by folks like Alex Pentland and the Media Lab’s Human Dynamics Group in the mid-1990s.  I still remember seeing Steve Mann at a DARPA computer vision conference, augmented with his camera system and wearable computer, looking uncomfortable but strikingly utilitarian.  Lifelogging is still being explored today by a research community that has much to say about how we interact with our computational worlds, and about the technical, social, and legal barriers to widespread use of the technology.

I view the work I’ve done on visual computing software as being fundamentally about increasing pixel availability, but until recently the general population has been content to live in a very text-heavy world, not particularly concerned with how many pixels are available for anything else. I imagine that, as with other pursuits that need to make sense of (or even curate) tremendous amounts of data, very inexpensive, widely available pixels will play a role.

Wearable computing innovations by people like Alex Lightman get me excited to think that the day is coming when broad-based, video-rich content will be the default medium for our communications; when it arrives, visual computing software will truly be mainstream.

In the meantime, we’ll keep coding to make super-high-resolution, inexpensive displays a ubiquitous part of our environment. So… when the rest of the world catches up and wants all those pixels for their lifelogs and other applications, we’ll be ready.

About Christopher Jaynes

Jaynes received his doctoral degree from the University of Massachusetts Amherst, where he worked on camera calibration and aerial image interpretation technologies now in use by the federal government. Jaynes received his BS degree with honors from the School of Computer Science at the University of Utah. In 2004, he founded Mersive and today serves as the company's Chief Technology Officer. Prior to Mersive, Jaynes founded the Metaverse Lab at the University of Kentucky, recognized as one of the leading laboratories for computer vision and interactive media and dedicated to research in video surveillance, human-computer interaction, and display technologies.
