Useful human-computer interaction (HCI) interfaces haven’t advanced much since Xerox PARC experimented with the desktop-and-mouse metaphor, commercialized and made famous by Apple in the 1980s. Luckily, some brainiacs at the University of Toronto’s Dynamic Graphics Project are helping Minority Report-style science fiction become a plausible reality.
Grad student Xiang Cao and professor Ravin Balakrishnan are working on some fascinating HCI techniques using a pen and a handheld projector. The device projects a small square of light onto a flat surface, yet simulates the effect of a flashlight illuminating areas of a gigantic, three-dimensional display. The pen can be used for annotation and input, and apparent resolution increases as the user moves the projector closer to the surface while the scale stays the same (zooming in on a map, for example). This is all difficult to explain in lay terms, so you kinda have to see it in action.
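If you want a rough feel for that flashlight metaphor, here’s a minimal sketch, not the group’s actual code: the projector lights up a window onto a big virtual canvas, and the window shrinks as you move closer, so the same patch of canvas gets more projector pixels. The field of view and pixel count below are made-up placeholder numbers.

```python
import math

def flashlight_viewport(center, distance, fov_deg=30.0, projector_px=800):
    """Which patch of the virtual canvas the projector's square of light
    covers, and how densely it is sampled, given how far away it is held.
    fov_deg and projector_px are illustrative assumptions, not real specs."""
    # Side length of the lit square on the surface grows with distance.
    side = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    half = side / 2.0
    # Region of the virtual canvas currently illuminated.
    viewport = (center[0] - half, center[1] - half,
                center[0] + half, center[1] + half)
    # Apparent resolution: projector pixels per canvas unit. Moving closer
    # shrinks the window, so the same canvas area gets more pixels --
    # effectively zooming in without changing the map's scale.
    px_per_unit = projector_px / side
    return viewport, px_per_unit

# Held at 2 m the window is large but coarse; at 0.5 m it is 4x denser.
_, far_res = flashlight_viewport((0.0, 0.0), 2.0)
_, near_res = flashlight_viewport((0.0, 0.0), 0.5)
```

Halving the distance halves the window’s side, which is exactly why a map stays at the same scale but reveals more detail as you lean in.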
The handheld aspect of this technology could potentially be of use in small devices, which are traditionally bound by tiny screens and inconvenient input methods. Imagine being in an unfamiliar city and being able to project a Google map of your location on the nearest sidewalk, or pointing your PDA at a piece of art in a gallery and seeing notations about it. Of course, this technology would probably find its first fans in the gaming community, so perhaps look for it in PlayStation 12.
Using Vicon motion tracking technology similar to that used in Hollywood “mo-cap” rigs, the handheld device records its own movement and adjusts what it projects accordingly. It’s actually the image that changes, but it gives the optical illusion of a stationary “desktop” relative to the surface. The image is also warped on the fly to give the projection a less distorted appearance to the observer.
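The on-the-fly warping is the kind of thing you can sketch with a planar homography: from the tracked pose you know how a tilted projector distorts the image (keystoning), so you pre-warp each frame by the inverse and the distortion cancels on the wall. This is a toy illustration with a made-up tilt matrix, not the researchers’ implementation.

```python
def apply_homography(H, pt):
    """Map a 2-D point through a 3x3 homography (projective warp)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def invert_3x3(M):
    """Invert a 3x3 matrix via the adjugate; enough to undo a warp."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

# Made-up homography standing in for the distortion caused by holding
# the projector at an angle to the surface (keystoning).
H_tilt = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.25, 0.05, 1.0]]

# Pre-warp each frame by the inverse, so the physical tilt cancels and
# the image lands on the surface looking rectangular again.
H_prewarp = invert_3x3(H_tilt)
corner = apply_homography(H_tilt, apply_homography(H_prewarp, (1.0, 1.0)))
# corner comes back (almost exactly) to (1.0, 1.0)
```

In a real pipeline the tilt homography would be recomputed every frame from the tracked device pose rather than hard-coded.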
There’s a lot more to this, especially as it pertains to the pen input, but our heads exploded just trying to describe it, so just take a look at the video.