Do You Understand and Trust Your Teammate, the Robot?
December 2018
Topics: Autonomous Systems, Artificial Intelligence, Social Behavior, Innovation
Today, robots build cars, help doctors perform operations, and remove IEDs from roadways. Autonomous systems are everywhere: in aviation, transportation, and even the stock market. Many more applications are in development.
Knowing the best ways to develop these systems and introduce them into the workplace is critical to ensuring human-machine teams work together successfully and effectively. Together, they can enhance each other's strengths and compensate for each other's weaknesses.
MITRE’s Human-Machine Social Systems (HMSS) Lab works with government, industry, and academia to address this challenge by investigating and improving the ways that robotic systems interact with humans.
"The engineers who design and program robots or autonomous systems need a greater understanding of inherent social cues," says Dr. Charlene Stokes, director of the HMSS Lab. "There's a reason why Amazon's virtual assistant, Alexa, has a human name and voice. And it's why many robots have faces that imitate human expressions. These factors help us relate to our new colleagues."
Improving Trust in Human-Machine Teaming
"Improved interaction capabilities can translate into increased trust by humans," she adds. "Trust—believing these systems will do what they were designed to do—is critical to deriving the most value from autonomous systems. That's vital in terms of untapped efficiency and effectiveness, for a host of new applications in domains such as health, defense, aviation, and agriculture.
"Many companies are propelling the technology boom by developing or investing in the most advanced robots and autonomous systems we've ever seen. But the most advanced technologies won't matter if humans don't adopt, trust, or feel comfortable using them," she says.
Stokes has explored the nuances of human-machine interaction for 15 years. She works with a team of researchers to better understand the social cues in human-machine teaming (HMT) and document best practices, so they can be used by other organizations.
"The word 'intelligence' in artificial intelligence [AI] is a social cue in itself, which users unconsciously respond to," Stokes adds. "As true AI comes to fruition over the years and systems get better at learning not only our likes and responses, but also our human idiosyncrasies, our interactions with and trust in these systems will become stronger.
"Our focus is on understanding and shaping how users perceive and respond to the increasing social dynamics of future technologies."
In MITRE's HMSS Lab, researchers infuse varied social cues at nearly every stage of development, from system design to joint human-system training and field operations. They target cross-cutting technologies to uncover the fundamental hurdles and points of leverage for social cues. Working across multiple domains, including the military and commercial sectors, the MITRE team investigates the application of common social cues. The goal is to promote key human-machine teaming factors, such as trust, reliance, communication, coordination, and rapport.
Building Social Cues into Autonomous Systems
"Industry is leading the way in the development of AI technology and systems," Stokes says. "Partnering with the commercial tech industry makes sense for MITRE because we can play the role of 'bridge,' connecting our government sponsors to high-tech industry. We help business understand the specific needs of our government sponsors, and we connect non-traditional innovation and innovators to government agencies." As a manager of federally funded research and development centers, MITRE serves the government as an unbiased advisor and does not compete with industry.
For example, MITRE is working with two startup companies that grew out of Boston's MassRobotics innovation hub and the MassChallenge business accelerator to help them understand and apply the nuances of HMT. One of these companies, American Robotics, has developed a fully automated drone system that allows farmers to do season-long automated crop scouting.
The system, called Scout, is housed in a weatherproof station in the field. Every step of the crop scouting process is automated—from planning and launch to imaging and data management. Piloting is not required. Farmers can simply set the drone to fly at pre-scheduled times. After it docks autonomously in its station, the drone automatically processes and uploads datasets to the cloud, so farmers can access the information on their devices.
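The hands-off cycle described above—plan, launch, image, dock, process, upload—can be sketched as a small simulation. All names and structure here are illustrative assumptions, not American Robotics' actual software:

```python
from dataclasses import dataclass, field

@dataclass
class ScoutStation:
    """Stand-in for the weatherproof base station: docks the drone,
    then processes and uploads datasets for the farmer."""
    cloud: list = field(default_factory=list)  # stands in for cloud storage

    def process_and_upload(self, raw_images):
        dataset = {"images": len(raw_images), "status": "processed"}
        self.cloud.append(dataset)  # farmer reads this from any device
        return dataset

def run_scheduled_flight(station, waypoints):
    """Simulate one pre-scheduled flight with no pilot in the loop:
    plan -> launch -> image -> dock -> process and upload."""
    route = list(waypoints)                      # automated route planning
    raw_images = [f"img@{wp}" for wp in route]   # imaging at each waypoint
    return station.process_and_upload(raw_images)  # autonomous dock + upload

station = ScoutStation()
result = run_scheduled_flight(station, ["field-N", "field-S"])
```

The point of the sketch is the workflow shape: every step is a function call, and the farmer only ever touches the schedule and the uploaded dataset.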
"We want farmers to think of our system, from the get-go, as an ever-vigilant teammate who will alert them about issues in the field before they become big problems" says Vijay Somandepalli, CTO of American Robotics. "Catching problems early and reliably not only means less input costs and more yield for the farmers, but also helps improve the sustainability of the food ecosystem and protects the environment from over-application or mis-application of harmful chemicals, which are used to treat myriad problems in farming and could end up in other food that we consume or as run-off in groundwater and rivers."
Where do social cues come into play in this system? Almost everywhere. They can be applied to the look of the drone and its docking station, to the sounds the system emits, to the ease of use of the interface, to the dependability of the system. How do farmers (and their neighbors) perceive the drones? Are they friendly? What do users like and not like about how the system works? How can the product be adapted to improve usability? MITRE's social cognitive scientists are helping the company address these issues as it tests its product in the field.
Developing Safer Autonomous Cars
Another member of the MITRE HMSS team, Dr. Monika Lohani, is also an assistant professor in the Department of Educational Psychology at the University of Utah. Lohani and Stokes are collaborating with a startup company, TeleLingo, to test its in-vehicle AI system called dreyev ("drive").
Designed for use in both manned and autonomous vehicles, dreyev observes driver attention, evaluates driving risks, and generates alerts in real time through a self-learning user interface, with the goal of preventing crashes and dangerous behaviors. The system uses computer vision and machine learning to detect drowsiness and distraction—in other words, to determine whether drivers are paying adequate attention and have their eyes on the road. It also builds a skill, attention, and risk-attitude profile of each driver by evaluating how responsive drivers are to alerts and how effective their corrective actions are.
The team is exploring how these systems can connect with users on a personal level. For example, should dreyev be programmed to identify risks and then tell the human driver, "Your eyes aren't on the road, Jim. This system is not designed to manage upcoming traffic conditions that I predict you will encounter any minute now. You will need to take over!"
"If users don’t like or trust the technology, they won’t use or rely on it," Lohani says. "For example, we all know that if you don’t like the voice and tone of your GPS system, you’ll get frustrated and may turn it off. We're looking at what users respond to in different situations so that they’ll use the technology and prevent crashes."
In the first phase of testing, the team is using the high-fidelity driving simulators at the Center for Driving Safety and Technology at the University of Utah. "We can run various scenarios, simulating real-world situations and distractions, to see what interventions help in which situations. Interventions include dialogues, sounds, lights, and even vibrations. We’ll use the data to train customized models that learn people's actions and preferences," explains Lohani.
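One simple way to turn that simulator data into a customized model is to log which intervention (dialogue, sound, light, vibration) actually corrected a given driver's behavior, then favor the one that works best for that driver. This is a deliberately minimal sketch of the idea—the team's actual models are not described in this article:

```python
from collections import defaultdict

def best_intervention(logs):
    """Pick the intervention with the highest success rate for one driver.

    logs: iterable of (intervention, succeeded) pairs from simulator runs,
    e.g. ("voice", True). A toy stand-in for the customized models trained
    on driving-simulator data.
    """
    stats = defaultdict(lambda: [0, 0])  # intervention -> [successes, trials]
    for intervention, succeeded in logs:
        stats[intervention][1] += 1
        if succeeded:
            stats[intervention][0] += 1
    return max(stats, key=lambda k: stats[k][0] / stats[k][1])

logs = [("voice", True), ("voice", True), ("chime", False),
        ("chime", True), ("vibration", False)]
choice = best_intervention(logs)  # "voice" wins at 2 successes in 2 trials
```

Even this crude success-rate tally captures the article's point: the system learns per-driver preferences from observed outcomes rather than applying one fixed alert style to everyone.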
Boosting Effectiveness of Military Robots
"Once we determine what works in one domain, we can translate that information into other domains," Stokes adds.
For example, the HMSS team is collaborating with the high-tech commercial sector on behalf of MITRE's defense sponsors, helping the U.S. Army enable soldiers to work seamlessly with the ever-increasing number of machines being put into the field. The military has been an early adopter of robots, unmanned ground vehicles, and unmanned aerial vehicles. In fact, the Army alone has about 800 fielded robots today.
"Eventually, robots could be programmed to act more 'human' with warfighters, using AI technology that identifies social cues from American soldiers," Stokes says. "Because lives will be on the line in many situations, it's critical to truly understand the risks and benefits of imbuing robots with social cues, real or perceived.
"Both government and commercial sectors need to invest the time and energy to get this right, and we can help them do that."
—by Beverly Wood
Learn more about MITRE’s work on applying human-machine teaming research findings to systems engineering methods in our new guide.