MITRE ATLAS Takes on AI System Theft
June 2021
Topics: Artificial Intelligence, Cybersecurity, Information Systems Security, Risk Management
Creating a new artificial intelligence (AI) system is no small feat for any company. It’s time-consuming and expensive. And there’s a lot riding on its effectiveness.
“Bad actors can reverse engineer valuable AI systems and training data by exploiting the fact that most AI systems remember and reveal too much,” says Mikel Rodriguez, Ph.D., director of MITRE’s AI & Autonomy Innovation Center.
Outright system theft is just one of the concerns. In the last three years, major companies such as Google, Amazon, and Tesla have had their machine learning (ML) systems compromised. And this trend is only set to rise: According to a Gartner report, by 2022, 30 percent of cyberattacks will involve data poisoning, model theft, or adversarial examples (think optical illusions for machines).
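The "optical illusions for machines" idea can be made concrete with a minimal sketch of an adversarial example. The toy model, weights, and inputs below are purely illustrative (not from any MITRE or ATLAS material): a tiny logistic-regression classifier is fooled by the fast gradient sign method, which nudges each input feature a small step in the direction that increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "model" -- weights and bias are illustrative values.
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) >= 0.5)

# A clean input the model correctly assigns to class 1.
x = np.array([0.3, 0.1])
y = 1

# Fast gradient sign method: step the input toward higher loss.
# For logistic loss, d(loss)/dx = (sigmoid(score) - y) * w.
grad = (sigmoid(w @ x + b) - y) * w
eps = 0.15  # perturbation budget -- small enough to look innocuous
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # prints: 1 0 -- the tiny nudge flips the label
```

A perturbation of 0.15 per feature is barely visible in the input, yet it flips the prediction, which is exactly why such attacks are hard to spot in deployed systems.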
Industry is unprepared to respond. According to a 2020 Microsoft report, 89 percent of organizations surveyed were “not equipped with tactical and strategic tools to protect, detect, and respond to attacks on their machine learning systems.”
Of course, problems go far beyond individual companies. “These critical technologies are fundamental to a broader technology competition between nation states,” says Charles Clancy, senior vice president and general manager of MITRE Labs.
“Countries that are successful at mitigating the challenges and vulnerabilities of the current crop of AI and autonomous systems will drive economies, shape societies, and exert influence and exercise power in the world.”
Trusting AI for More Than Movie Recommendations
Currently, the most ubiquitous applications of AI and autonomy in the United States are in narrow, low-risk environments like internet search engines.
“It’s one thing to let an AI system recommend movies to you, quite another to let it drive your car,” Clancy says. “But even beyond that, if adversaries can steal or reverse engineer valuable AI systems and get ahead by quickly replicating them at a fraction of the cost, it gives them a tremendous competitive advantage.”
Clancy says the United States needs to do more to keep pace with technology advances abroad, and that it cannot afford to let other nations take the lead in AI.
That’s why in the fall of 2020, Microsoft and MITRE released the Adversarial ML (AdvML) Threat Matrix in collaboration with Bosch, IBM, NVIDIA, Airbus, Deep Instinct, Two Six Technologies, the University of Toronto, Cardiff University, Software Engineering Institute/Carnegie Mellon University, PricewaterhouseCoopers, and Berryville Institute of Machine Learning. This open framework empowers security analysts to detect, respond to, and mitigate threats against ML systems.
“Security and privacy of AI systems is a cornerstone of Microsoft's Responsible AI principles,” says Ram Shankar Siva Kumar, principal lead for Microsoft Azure Trustworthy ML. “One of the ways we fulfill this commitment is continuous risk assessment of critical AI systems by automating parts of MITRE’s ATLAS framework using our recently open sourced tool. These exercises have led to increased security visibility into our enterprise AI systems.”
In spring 2021, Cardiff University, Citadel AI, McAfee, Palo Alto Networks, and several other organizations joined MITRE and Microsoft to release Version 2.0. They also gave the matrix a new name: MITRE ATLAS, short for Adversarial Threat Landscape for Artificial-Intelligence Systems.
“As the matrix matured between version 1.0 and 2.0, we knew it needed a stronger name to foster even more community adoption,” says Christina Liaghati, Ph.D., operations manager of MITRE’s AI & Autonomy Innovation Center.
ATLAS is presented in an interactive MITRE ATT&CK®-style format with connections between the case studies, threats, vulnerabilities, and the matrix itself. Version 2.0 also brings several additional case studies from the new contributors, as MITRE continues to build community engagement and collaborations to focus on the realities of AI security challenges.
Putting a Brand on AI Systems
Rodriguez says artificial intelligence and machine learning are starting to touch every part of our economy, national security, and daily life. “Yet, AI systems demand huge training datasets, are vulnerable to counter-AI attacks, and can be difficult to understand—and thus trust.”
To ensure the integrity and reliability of AI-enabled systems, MITRE is focused on identifying emerging threats through the ATLAS framework. We’re also developing vendor-agnostic tools to help organizations protect themselves, creating blue and red team handbooks, and more.
“This includes developing ways for organizations to ‘watermark’ their AI systems—basically being able to brand them, to help prevent theft,” says Jonathan Broadbent, solutions architect for our AI & Autonomy Innovation Center.
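One common flavor of AI watermarking works like a backdoor the owner controls: the model is trained to emit predetermined, unusual labels for a secret set of "trigger" inputs, and a suspect copy that reproduces those labels is likely stolen. The sketch below is a hypothetical illustration of the verification step only (the trigger set, labels, threshold, and `verify_watermark` helper are all invented for this example, not part of any MITRE tool):

```python
import numpy as np

# The owner keeps a secret set of out-of-distribution trigger inputs
# and the predetermined labels the model was trained to emit for them.
rng = np.random.default_rng(42)
triggers = rng.normal(size=(5, 4))     # secret trigger inputs (illustrative)
expected = np.array([1, 0, 1, 1, 0])   # predetermined watermark labels

def verify_watermark(model, triggers, expected, threshold=0.9):
    """Claim ownership if the model reproduces enough secret trigger labels."""
    preds = np.array([model(t) for t in triggers])
    return float(np.mean(preds == expected)) >= threshold

# Stand-in for a stolen copy that memorized the triggers (illustrative only).
lookup = {t.tobytes(): lab for t, lab in zip(triggers, expected)}
stolen_model = lambda x: lookup.get(x.tobytes(), 0)

# An independently trained model almost never matches the secret labels.
unrelated_model = lambda x: 0

print(verify_watermark(stolen_model, triggers, expected))     # prints: True
print(verify_watermark(unrelated_model, triggers, expected))  # prints: False
```

Because the triggers are secret and statistically unrelated to normal inputs, an independent model matching them at a high rate would be extremely unlikely, which is what gives the watermark its evidentiary value.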
Coming Together to Maintain the U.S. Lead in AI
According to Doug Robbins, vice president of innovation and capabilities for MITRE Labs, national security requires the cooperation of private actors as much as public investors. “We need to engage government and private-sector efforts if we’re to win the technology competition with our nation’s adversaries. That’s what it will take to enable the fairness, interpretability, privacy, and security necessary for future AI applications.
“We should not lose our current bottom-up innovation culture," Robbins adds. "But even large tech firms cannot be expected to compete with the resources of a global power competition nation state or make the big investments the U.S. will need to stay ahead.”
“That’s why we’re pleased to be working with our partners in the ATLAS framework collaboration to make AI more secure,” says Clancy. “This technology is vital to our nation’s security, prosperity, and health.
“Together, we must protect it from theft and manipulation—and we need to do it now.”
—by Bill Eidson