
MITRE, Microsoft, and 11 Other Organizations Take on Machine-Learning Threats

MITRE is partnering with Microsoft and collaborating with other organizations on the Adversarial Machine Learning Threat Matrix, an industry-focused open framework to empower security analysts to detect, respond to, and remediate threats.

Whether you welcome or fear it, artificial intelligence (AI) is hurtling down the tracks toward our future, explains Charles Clancy, MITRE’s chief futurist, senior vice president, and general manager of MITRE Labs.

MITRE has been involved in artificial intelligence research and development for decades, across many different domains. But recent technological improvements in machine learning (ML) are driving enormous transformations in critical areas such as finance, healthcare, and defense.

Many businesses, in their eagerness to capitalize on the advancements, have not scrutinized the security of their ML systems. Microsoft and MITRE, in collaboration with Bosch, IBM, NVIDIA, Airbus, Deep Instinct, Two Six Labs, the University of Toronto, Cardiff University, Software Engineering Institute/Carnegie Mellon University, PricewaterhouseCoopers, and Berryville Institute of Machine Learning, are releasing the Adversarial ML Threat Matrix, an industry-focused open framework, to empower security analysts to detect, respond to, and remediate threats against ML systems.

We asked Clancy to join Mikel Rodriguez, director of MITRE’s Artificial Intelligence and Autonomy Innovation Center, to discuss artificial intelligence, machine learning, and the work ahead with the AdvML Threat Matrix.

Why is machine learning receiving so much attention these days? 

Charles Clancy: Artificial intelligence has ebbed and flowed since the 1950s, as the algorithms and underlying technologies matured. But in 2012, there was a significant breakthrough in the underlying technology—the dawn of the so-called “deep learning revolution.” These algorithms, combined with a rapid increase in the availability of computing processing power and new crowdsourced data sets, unlocked a new domain of AI, deep learning, that we previously thought was impossible.

This led to many different applications, starting with image processing and image classification. These technologies intersected and now support a future that is heavily automated and relies on sophisticated machine learning algorithms. Communications, healthcare, transportation, the military—our critical infrastructure—the applications are virtually limitless.

Now, as we move from 4G to 5G communications, connectivity will dramatically increase. And that means we’ll be increasingly reliant on the Internet of Things. From the smart grid to intelligent transportation, internet connectivity is essential for taking advantage of AI to automate and gain efficiencies from everything around us.

But doesn’t machine learning combined with the Internet of Things create risk?

Clancy: Certainly, there’s some risk. Whether it’s just a failure of the system or because a malicious actor is causing it to behave in unexpected ways, AI can cause significant disruptions. Studies have shown a sticker on a stop sign can make a self-driving car’s vision system mistake it for a speed limit sign. Or a voice-enabled system can be tricked into releasing private records.
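
To make the stop-sign example concrete, here is a minimal, illustrative sketch (not from the interview) of the fast gradient sign method, one classic way to craft adversarial inputs against an image classifier. The model, input tensor, label, and epsilon value are assumed placeholders.

```python
# Illustrative sketch of an adversarial perturbation (FGSM), assuming a
# PyTorch image classifier; the model, input, label, and epsilon are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Nudge x in the direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)   # how wrong the model is on the true label
    loss.backward()                           # gradient of the loss w.r.t. the input pixels
    # Each pixel moves by at most epsilon: often imperceptible to a person,
    # yet enough to flip the predicted class (the "sticker" effect).
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```

The same basic idea underlies physical attacks like the stop-sign sticker: a perturbation the system’s sensors pick up but a human observer would ignore.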

But when it comes to understanding machine learning risk, there are at least two ways of looking at it: One, AI is going to add efficiencies and capabilities to systems all around us. And two, some fear that the systems we depend on, like critical infrastructure, will be under attack, hopelessly hobbled because of AI gone bad.

I believe the risks of AI are often overstated and the truth is in between. Typically, AI isn’t the first avenue for our adversaries, particularly regarding attacking our critical infrastructure. There’s a truism in the power industry that the most dangerous adversaries to our electric grid are…squirrels. Keep that in mind—there are risks to AI, but it’s also extremely valuable. Either way, the train is barreling down the tracks, so we need to ensure AI is as safe as possible from attackers.

That’s why I’m so pleased MITRE, Microsoft, and these other organizations are teaming up to support the Adversarial Machine Learning Threat Matrix.

You’ve said we’re now at the same stage with AI as we were with the internet in the late 1980s. What did you mean?

Mikel Rodriguez: Back then, people were just trying to make the internet work; they weren’t building in security. And we’ve been paying the price ever since. Same with the AI field. As Charles said, 2012 caught us by surprise. We were focused on getting AI to work in the real world, not baking in security.

Suddenly, machine learning began working and growing much faster than we expected. The good news with AI is that it's potentially not too late. So, I'm excited to work on this matrix, addressing technical challenges around security, privacy, and safety. While there will be plenty of big problems ahead that this initiative doesn’t address, we will be tackling the kind of fundamentals that were ignored during the early days of the internet.

Why is the concept of “red teaming” so important to understanding the threats to machine-learning systems?

Rodriguez: Red teaming came out of cybersecurity. It harnesses the power of an adversarial approach, the “ethical hacker” who works to expose vulnerabilities with the goal of making a system stronger. We have a great deal of experience with red teaming through the MITRE ATT&CK™ framework.

A key element of red teaming is working with real threats, because if you just try to imagine the universe of potential challenges and vulnerabilities, you’ll never get anywhere. Instead, with this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents and that emulate adversary behavior against machine learning systems.

The Adversarial Machine Learning Threat Matrix will also help security analysts think holistically. While there’s excellent work happening in the academic community that looks at specific vulnerabilities, it’s important to think about how these things play off one another. Also, by providing a common language, or taxonomy, for the different vulnerabilities, the threat matrix will spur better communication and collaboration across organizations.
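
As a rough illustration of what a shared taxonomy gives analysts, the hypothetical Python sketch below records an incident against named tactics and techniques so that different organizations can describe the same behavior in the same terms. The tactic and technique strings are illustrative placeholders, not entries taken from the matrix itself.

```python
# Hypothetical sketch of recording an ML-related incident against a shared
# taxonomy; the tactic/technique names below are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class ObservedTechnique:
    tactic: str       # the adversary's goal at this stage
    technique: str    # the concrete behavior observed
    evidence: str     # what the analyst actually saw

@dataclass
class Incident:
    name: str
    techniques: list = field(default_factory=list)

incident = Incident(
    name="Evasion attempt against an image-classification service",
    techniques=[
        ObservedTechnique("Reconnaissance", "Probe the public model API", "spike in scoring requests"),
        ObservedTechnique("Attack staging", "Craft adversarial examples offline", "perturbed samples recovered"),
        ObservedTechnique("Impact", "Evade the ML-based detector", "misclassified inputs in production"),
    ],
)

# Because every organization uses the same tactic names, this record can be
# shared and compared without re-explaining what each step means.
```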

Why did MITRE join the Adversarial Machine Learning Threat Matrix initiative?   

Clancy: MITRE has deep experience with technically complex multi-stakeholder problems, such as the Aviation Safety Information Analysis and Sharing (ASIAS) initiative as well as ATT&CK. We’re used to working on problems like this—where the work is never “done.”

To succeed, we know we need to bring the experience of a community of analysts sharing real threat data and improving defenses. And for that to work, all the organizations and analysts involved need to be assured they have a trustworthy, neutral party who can aggregate these real-world incidents and maintain a level of privacy—and they have that in MITRE.  

What’s your long-term outlook for AI, the Internet of Things, and threats against machine learning systems?

Clancy: I’m optimistic about the future and how AI and the Internet of Things will make for a safer world. But no doubt, AI will also bring challenges, many of them not strictly technical. We may see AI-enabled deepfakes empowering disinformation campaigns, and we’ve already seen algorithmic bias caused by bad data, among other problems. All of these are weighty issues, and I’m sure we’ll be involved in dealing with them.

Yet first we must focus on the fundamentals of machine learning safety, security, and privacy. And I really look forward to the great things we can accomplish with Microsoft and the full community.

To learn more, see Microsoft’s Security Blog post.

—interviews conducted by Bill Eidson