As our nation's critical infrastructure increasingly relies upon artificial intelligence, bad actors are finding ways to fool machine learning—with potentially dangerous consequences. Can AI red teams help protect against such attacks?
Creating an AI Red Team to Protect Critical Infrastructure
Today’s most powerful artificial intelligence (AI) tool is machine learning.
Just like the name suggests, machine learning (ML) programs, when exposed to enough data, can learn to recognize faces, navigate congested highways, and transcribe spoken sentences.
Yet—like humans—they can also make mistakes, and bad actors can fool them.
This is a growing concern for many, particularly because we're increasingly relying upon machine learning in our nation's critical infrastructure, including the transportation, healthcare, and energy sectors.
For instance, by placing a small sticker on his lapel that is barely noticeable to the human eye, a bad actor could fool a facial recognition system into identifying him as someone else—allowing him to bypass the machine learning system or evade detection entirely.
"AI attacks are different from cyberattacks, which insert code changes," says Mikel Rodriguez, who oversees MITRE’s Decision Science research programs. "Attacks to ML systems trick the artificial intelligence logic in ways that may not even be clear to the author of the original algorithm."
These vulnerabilities have been demonstrated in nearly every application domain of machine learning, including computer vision, speech recognition, and natural language processing. Numerous studies have shown how easily attackers can evade and confuse face and object recognition systems with specially crafted, imperceptible changes to images. Stickers on a stop sign could make a self-driving car's vision system mistake it for a yield sign. And attackers could subtly modify an audio signal so that it sounds like background noise to a human—but triggers a voice assistant like Alexa or Siri to perform a specific command.
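To make the mechanics concrete, the sketch below uses the Fast Gradient Sign Method (FGSM), one of the simplest and best-known techniques from the adversarial-examples research literature. It is an illustrative example only, not MITRE tooling; the pretrained torchvision classifier and the random stand-in image are assumptions made purely for the sake of a runnable demo.

```python
# Illustrative FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss, producing a near-invisible perturbation.
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` perturbed by at most `epsilon` per pixel (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the model's loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Illustrative usage: a pretrained classifier and a random stand-in "photo".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224)       # placeholder for a real image
label = model(image).argmax(dim=1)       # treat the model's own prediction as the label
adv = fgsm_attack(model, image, label)
print("prediction before:", label.item(), "| after:", model(adv).argmax(dim=1).item())
```

A physical-world attack, such as the stop-sign sticker, works on the same principle: the perturbation is simply constrained to a printable patch instead of being spread invisibly across every pixel.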
In response to these vulnerabilities in machine learning, MITRE is developing an ecosystem of vendor-agnostic AI supply chain vulnerability assessment tools. "These innovations will help the public sector to quantitatively assess counter-AI vulnerabilities, apply safeguards, provide transparency to vendors, and reduce risk to critical infrastructure," Rodriguez says.
What's more, he says MITRE's deep experience in cybersecurity holds part of the answer.
Red Teaming: Taking an Adversarial Approach to AI Security
Rodriguez says that with AI, we're now in a position similar to the one we were in with cybersecurity back in the mid-1990s.
He says there's also a lesson we can learn from cybersecurity—the power of an adversarial approach, or red team. A "red team" (or "ethical hacker") works to expose vulnerabilities with the goal of making a system stronger.
As our experience with MITRE's ATT&CK (a globally accessible knowledge base of adversary tactics and techniques based on real-world observations) has shown, a good offense truly is the best defense.
Rodriguez believes there's a need for an independent AI red team that works across not only government, but academia and industry as well. This is particularly important because the greatest innovations in AI (and by far the largest R&D expenditures) are within academia and industry, not government.
The Characteristics of a Successful AI Red Team
For an AI red team to be successful, Rodriguez says it must:
- Operate independently to ensure both security and a fresh perspective. An AI red team should not be located within a specific government agency or commercial company. Virtually all of AI is built upon proprietary algorithms, with the overwhelming share of that growth occurring in the private sector. A red team must be able to keep sensitive data secure while being transparent about the risks it identifies.
- Be informed about the rapidly evolving landscape of AI attack vectors. An AI red team must know what real adversaries know. Real adversaries may strike anywhere in the ML system lifecycle: poisoning a system's training data early on, or independently designing and testing attacks for their desired effect and releasing them only when ready, such as putting a sticker on a stop sign to fool an autonomous car. The red team must therefore collect and retain all available information about existing AI systems and their attack vectors.
- Develop and maintain a counter-AI threat model. This model would provide a framework for understanding potential threats, vulnerabilities, and risks to systems that depend on machine learning. It would identify the likelihood of a given counter-AI vulnerability being exploited and the impact an attack would have on a machine learning system—and on the infrastructure it supports.
- Base recommendations on quantitative evidence. Currently, much of the available evidence about AI threats rests on academic metrics that are not always relevant to actual operations. The AI red team must invent ways to measure both the vulnerability of real-world systems and the potential impact of adversaries attacking them, as sketched after this list.
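As a rough illustration of the last two points, the hypothetical sketch below scores threats by likelihood and impact and reports one operationally meaningful measurement: the rate at which an attack flips predictions the model previously got right. The class names, scales, and numbers are illustrative assumptions, not MITRE's actual threat model or metrics.

```python
# Hypothetical sketch of a counter-AI threat score and a quantitative evidence metric.
from dataclasses import dataclass

@dataclass
class CounterAIThreat:
    name: str
    likelihood: float  # assumed probability an adversary mounts this attack (0-1)
    impact: float      # assumed consequence to the supported infrastructure (0-1)

    def risk_score(self) -> float:
        """Simple likelihood-times-impact score, as in a classic risk matrix."""
        return self.likelihood * self.impact

def attack_success_rate(clean_correct: list[bool], adv_correct: list[bool]) -> float:
    """Fraction of initially correct predictions that an attack manages to flip."""
    flipped = sum(1 for c, a in zip(clean_correct, adv_correct) if c and not a)
    initially_correct = sum(clean_correct)
    return flipped / initially_correct if initially_correct else 0.0

# Toy threat register and toy evaluation results, for illustration only.
threats = [
    CounterAIThreat("training-data poisoning", likelihood=0.3, impact=0.9),
    CounterAIThreat("physical sticker evasion", likelihood=0.6, impact=0.7),
]
for t in sorted(threats, key=lambda t: t.risk_score(), reverse=True):
    print(f"{t.name}: risk {t.risk_score():.2f}")

print("attack success rate:", attack_success_rate(
    [True, True, True, False, True], [False, True, False, False, True]))
```

A measurement like this, collected against a deployed system rather than an academic benchmark, is the kind of quantitative evidence a red team could put in front of an infrastructure owner.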
"The good news is that we have the opportunity to start dealing with AI attacks at an earlier stage than we did with cybersecurity," Rodriguez says.
"The World Wide Web was developed with security as an afterthought, rather than a core design component—and we're still paying the price for it today. With AI, it is not too late to consider safety, security, and privacy before society increasingly relies on this technology.”
—by Bill Eidson