MITRE's Human-Computer Interface Gets a GRIP on the Future

April 2011
Topics: Human Computer Interaction, Sensor Technology, Information Interfaces
Emerging platforms will allow applications to be controlled via multi-touch and 3-D gestures, and a MITRE-developed software framework called GRIP is helping make it possible.

The days of reliance on a mouse and keyboard to interact with computers are numbered—and emerging interfaces will make it possible to develop intriguing new applications for addressing government challenges.

MITRE's Gestural Reference Implementation Portfolio (GRIP) team is working on several such applications, all built on the team's newly developed software framework for next-generation human-computer interaction technologies. GRIP supports applications running on multi-touch platforms and also allows applications to be controlled via 3-D gestures through sensor devices such as the Microsoft Kinect (originally developed for the Xbox 360).
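The core idea is to decouple gesture input from the applications that consume it. A minimal sketch of that pattern in Python (the class and method names here are illustrative, not GRIP's actual API):

    # Illustrative sketch only -- names are hypothetical, not GRIP's API.
    from dataclasses import dataclass

    @dataclass
    class GestureEvent:
        """A device-agnostic gesture, e.g. 'zoom' with a magnitude."""
        name: str         # e.g. "zoom", "pan", "select"
        magnitude: float  # gesture-specific parameter

    class GestureSource:
        """Base class for input adapters (multi-touch table, 3-D sensor)."""
        def __init__(self):
            self.listeners = []

        def subscribe(self, callback):
            self.listeners.append(callback)

        def emit(self, event: GestureEvent):
            for callback in self.listeners:
                callback(event)

    # An application subscribes to gestures without knowing the device:
    def on_gesture(event: GestureEvent):
        print(f"handling {event.name} ({event.magnitude:+.2f})")

    source = GestureSource()   # could wrap Surface touches or Kinect skeletons
    source.subscribe(on_gesture)
    source.emit(GestureEvent("zoom", 0.25))

Because the application only ever sees generic gesture events, the same code works whether the events originate from a touch table or a 3-D sensor.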

"The primary motivation for this work is to let our government sponsors try these interfaces before they buy them," explains Jason Letourneau, a MITRE senior software systems engineer who works on the project.

At the request of a sponsor, the team recently developed an emergency-response training prototype and tested it on Microsoft Surface, a multi-user, coffee-table-sized PC that responds to touch commands and is equipped with low-resolution cameras. Users can also control the action with specially developed byte tags, printed patterns that the cameras recognize.

"This allows people to interact with multiple objects at the same time," explains John Dewsnap, a MITRE senior software application development engineer who also works on the project.

Virtual Patients, Real Breakthroughs

MITRE's prototype application runs medical training simulations in which trainees can "treat" wounded individuals. The "patients" appear as flat images on the computer's screen.

To interact with the flat images, Dewsnap and his team created small blocks containing the byte tags. As trainees move the blocks around the screen, each tag represents a different medical instrument that can be manipulated within the screen image.
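A byte tag encodes an 8-bit value that the table's cameras can read along with its position. A minimal sketch of the lookup this enables (the tag values and instrument names below are invented for illustration):

    # Illustrative sketch: map byte-tag values (0-255) to medical instruments.
    # Tag values and instrument names are hypothetical.
    INSTRUMENTS = {
        0x01: "tourniquet",
        0x02: "bandage",
        0x03: "stethoscope",
    }

    def on_tag_placed(tag_value: int, x: float, y: float):
        """Called when the table detects a tagged block at position (x, y)."""
        instrument = INSTRUMENTS.get(tag_value)
        if instrument is None:
            return  # unknown block; ignore
        print(f"show {instrument} UI at ({x:.0f}, {y:.0f})")

    on_tag_placed(0x02, 512, 384)   # -> show bandage UI at (512, 384)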

"When you put the block on the table, it brings up a 2-D user interface that allows for physical interaction with the virtual patient," Dewsnap explains. "The sponsor asked us to make sure the students felt empathy towards the fictional patient. So we came up with the idea that during the curriculum the students would learn the patient's back story."

Trainees assess and treat the patient's wounds by applying byte tags to the screen. "We built this prototype so multiple students would have to work together to complete the exercise, just as they would in the real world," Dewsnap notes.

Using a Natural-looking Environment Reinforces Training

In a related application, the team used GRIP with the Kinect sensor to manipulate Google Earth via 3-D gestures.

"Our goal is to determine how quickly people can become adept at 3-D gestures, just as they would with a keyboard and mouse," Letourneau says. "If you can give that person a natural 3-D interface, the result may be a more efficient interaction."

He adds that Kinect, which combines a regular color camera with an infrared camera, provides information about the depth of objects in a scene along with visible-spectrum imagery. "The combination opens up new possibilities for scene recognition."
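One simple way to exploit that pairing, sketched below in Python, is to use the depth image to mask the color image, isolating whatever stands within a chosen distance band. This assumes the two frames are already aligned as same-sized arrays, with depth in millimeters; it is an illustration, not Kinect SDK code:

    # Illustrative sketch, not Kinect SDK code. Assumes an aligned color frame
    # (H x W x 3) and depth frame (H x W, in millimeters) as NumPy arrays.
    import numpy as np

    def isolate_by_depth(color, depth, near_mm=800, far_mm=2500):
        """Zero out color pixels whose depth falls outside [near_mm, far_mm]."""
        mask = (depth >= near_mm) & (depth <= far_mm)  # True where the user stands
        return color * mask[..., np.newaxis]           # broadcast mask over RGB

    color = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    depth = np.random.randint(400, 4000, (480, 640), dtype=np.uint16)
    foreground = isolate_by_depth(color, depth)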

With Google Earth fed into the GRIP framework, a person can stand in front of the Kinect sensor and zoom into images of a landscape, projected on a screen, by gesturing with their arms.
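A recognizer for such a gesture might, for example, track the distance between the user's hands and map its change to a zoom step. The sketch below uses made-up thresholds and gains; it is not the team's actual recognizer:

    # Illustrative sketch with hypothetical thresholds -- not GRIP's recognizer.
    import math

    def hand_distance(left, right):
        """Euclidean distance between two (x, y, z) hand positions, in meters."""
        return math.dist(left, right)

    def zoom_from_hands(prev_dist, curr_dist, deadband=0.02):
        """Return a zoom step: positive = zoom in, negative = zoom out."""
        delta = curr_dist - prev_dist
        if abs(delta) < deadband:   # ignore jitter from the sensor
            return 0.0
        return delta * 10.0         # arbitrary gain for illustration

    prev = hand_distance((-0.3, 1.2, 2.0), (0.3, 1.2, 2.0))  # hands 0.6 m apart
    curr = hand_distance((-0.4, 1.2, 2.0), (0.4, 1.2, 2.0))  # spread to 0.8 m
    print(zoom_from_hands(prev, curr))  # ~ +2.0 -> zoom in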

"We've decoupled the gesture recognition from both the source and the receiving applications," Letourneau says. "When the system recognizes a gesture, it sends the information off to an application running in conjunction with Google Earth to control the Google Earth application."

Beyond Tabletop Games to Tabletop Learning

Kinect interprets 3-D scene information from a continuously projected infrared structured-light pattern. It pairs a color video camera with a depth sensor, enabling full-body 3-D motion capture along with advanced gesture, facial, and voice recognition.
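The depth measurement itself comes from triangulation: the sensor projects a known infrared dot pattern and measures how far each dot shifts (its disparity) between projector and camera. A toy illustration, using made-up baseline and focal-length values rather than the Kinect's actual calibration:

    # Toy triangulation example -- baseline and focal length are made-up
    # values, not the Kinect's calibration constants.
    def depth_from_disparity(disparity_px, baseline_m=0.075, focal_px=580.0):
        """Depth (meters) from the pixel shift of a projected dot: z = f * b / d."""
        return focal_px * baseline_m / disparity_px

    print(depth_from_disparity(20.0))  # dot shifted 20 px -> ~2.2 m away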

Surface can be used in a table format or mounted on a wall. It was originally developed for use in hotels, restaurants, and retail businesses (for applications such as tabletop video games), but is also seen as having potential for government or military applications.

Through this type of experimentation, the team hopes to develop insights into emerging human-computer interfaces that can be applied across a broad spectrum of MITRE work. The need for next-generation interfaces isn't limited to a single domain, Letourneau adds.

"In healthcare, airspace management, intelligence, surveillance and reconnaissance, and many other areas of MITRE's work, people are constantly looking for better ways to interact with complex systems," he says. What's more, GRIP is "hardware agnostic," meaning people aren't limited to a single type of device, Dewsnap adds. "We can tie into any multi-touch system that's out there."

Next Steps Give a Leg Up

According to Dewsnap and Letourneau, the next steps for the GRIP team include analyzing emerging input technologies for 2-D and 3-D gesturing and experimenting with immersive combinations of display and interaction to address sponsors' human-computer interaction needs. The team also plans to promote the GRIP software both within the company and in the open source community.

"We really want to encourage others at MITRE to do rapid prototyping with the interface," Letourneau says. "Within MITRE, anyone who wants to work with this type of interface can get a leg up by building on what we've learned, which means we can get solutions to our sponsors faster."

—by Maria S. Lee
