Computer Interfaces: Just Act Naturally
September 2011
Topics: Information Interfaces, Human Computer Interaction, Human Factors Engineering
Evolution, but Not Revolution
In the Steven Spielberg sci-fi film Minority Report, Tom Cruise's character, a law enforcement agent, steps in front of a computer screen to track down a man who is minutes away from killing his wife. Cruise doesn't start pecking away at a keyboard or rolling his thumb over a trackball. Instead, he moves his hands through the air with the grand gestures of a symphony conductor, and each twist of a wrist, slash of a finger, or grip of a fist causes images and data to blossom and retreat across the computer screen. His search quickly completed, he and his team rush to the rescue.
It is disappointing that, for most of us, such a simple and efficient computer interface system can only be found in a sci-fi film. Since the 19th century, automatic computing hardware in one form or another has dramatically reduced time, cost, and errors in our daily tasks. But although computers have long made our lives easier, working with them has never been easy.
During the early years, the use of computers was restricted to expert technicians with special training. Early electronic computers featured panels of lights and switches and hard-copy printouts dominated by text. In the 1950s, these gave way to electronic keyboards and cathode ray tube monitors that displayed text and (less often) graphics, a model that lasted about 30 years.
In the late 1970s, the mouse and the graphical user interface (GUI) were developed at Xerox PARC and soon after popularized by Apple and Microsoft. This ushered in an era of computers usable enough to become a fixture in nearly every office and many homes. The pointer-and-GUI model has improved over time, especially in terms of support for rich audiovisual media, connectivity, and mobility, but has remained unchanged in its essential character for about 30 years.
The Incredible Disappearing Pointing Device
Today, we are in the midst of another massive shake-up in how people use computers. The humble "single-pointer" model of the mouse, trackpad, or touchscreen has given way to multi-touch, a simple technological innovation that pushes the usability of computers far beyond what traditional pointing devices would allow. Multi-touch allows people to rapidly and intuitively pan, zoom, rotate in two and three dimensions, do simple drawing, and manipulate multiple data objects at the same time. And they do it directly on the display rather than with a device off to the side. The dramatic success of the first major commercial multi-touch devices, the iPhone and iPad, has been driving competitors to play catch-up for the last four years.
Multi-touch devices are so intuitive that a volunteer two-year-old "test subject" grabbed a MITRE-owned iPad out of an adult's hand and went to work, knowing what to do after just a few seconds of watching others. When the child was later shown a desktop computer, she tried to use her fingers on the screen. Upon being told that she needed to use the mouse, she picked up the mouse, placed it directly on the screen, and moved it across the screen as she had moved her finger earlier.
The Touchless Touchscreen
Multi-touch interfaces have their drawbacks, however: the user must remain close to the screen, fingers and hands can block portions of the display, and accurate operations like drawing are difficult. The next step in usable and intuitive interfaces will free us from having to touch anything at all.
Oblong Industries has introduced a system based on the ideas shown in Minority Report. The system requires the user to operate in a special room with motion-capture gloves and to learn a special "hand-jive" of motor commands. Microsoft recently released its Kinect gaming system, which uses an infrared camera-based technology that senses gamers' complete range of motions, allowing them to control an interface with their whole body.
These "stand-off" interfaces are being developed mostly for the entertainment sector, in an effort to leapfrog the massive appeal of the Nintendo Wii gaming console. MITRE is partnering with Microsoft Research to go beyond entertainment to develop stand-off interfaces for business and productivity applications, ranging from control of a computer's functions to media-augmented communication and collaboration.
Brain-computer interfaces could, in theory, eliminate the need to move one's body at all to control a computer. Noninvasive methods for brain-computer interfaces have shown promise for allowing locked-in quadriplegic patients to regain some control of their surroundings. An application of special interest to the government is in combat casualty care, restoring function to wounded combat veterans. Methods like functional near-infrared spectroscopy and electroencephalography are compact enough to be deployable for such purposes, unlike their unwieldy counterparts functional magnetic resonance imaging and magnetoencephalography, which require patients to place their heads inside large, heavy machinery in special laboratory facilities.
Another way brain-computer interfaces can be useful to the government is by measuring the mental activity and physiological state of users. MITRE is using brain-computer interfaces to measure user stress responses during controlled simulations of air traffic control operations. This knowledge can help in designing air traffic control systems that can anticipate and avoid situations in which controllers are overworked.
However, the ability to measure brain activity is not yet finely tuned enough to design a truly effective human-computer interface. The "telepathy headband" and the "mind control headset" will have to remain within the realm of science fiction, at least for now. But as a toddler fiddling with an iPad demonstrates, yesterday's sci-fi has a way of becoming tomorrow's reality.
by Jeff Colombe