The global security organization’s dedicated exploratory teams will address two of AI’s most pressing challenges—effective adoption and security—on an international stage.
The dialogue and news surrounding artificial intelligence (AI) can be dizzying. For those who research and innovate around AI, it often feels incomplete.
“There’s a lot of chatter about the responsible and ethical use of AI, but not enough coherent voices from policymakers, AI developers, and end users to provide the appropriate guidance for effective safeguards and adoption,” explains MITRE’s Tien Pham, Ph.D., citing government and industry’s rush to adopt AI-supported technology.
Pham, an AI assurance solutions lead, is an at-large member of NATO’s Information Systems and Technology (IST) Panel, one of seven technical groups within the NATO Science and Technology Organization. He supports the U.S. national representatives from the Department of Defense on the IST Panel and promotes cooperative research alongside fellow scientists, engineers, and information specialists from the 31-nation member organization.
Two AI-Focused Exploratory Teams Are Born
The IST Panel oversees technical activities in data and information processing, communications and networks, and cybersecurity. Within this scope, exploratory teams (ETs) are one-year collaborative research activities designed to convene experts and share ideas around strategic topics related to emerging technologies or critical technology gaps. The ETs help the panels develop recommendations on future work programs and require buy-in from at least two member nations to get off the ground.
Last year, Pham helped initiate technical activity proposals in the IST Panel to establish two new ETs. Eric Bloedorn, Ph.D., and Diane Kotras lead one centered on AI adoption acceleration. Christina Liaghati, Ph.D., and Keith Manville lead the other, which focuses on the evolving threat and vulnerability landscape for coalition AI-enabled systems. All four are Pham’s MITRE colleagues.
The DNA of an ET is fluid and dictated by member nations who choose to participate. “Countries enlist subject matter experts to get involved where priorities align,” Pham says. “These groups offer opportunities to collaborate with coalition partners via NATO’s network of 6,000 scientists, engineers, and researchers.”
In total, more than 10 countries and NATO bodies backed the teams’ concepts (described below), providing more than enough traction to get started. The teams officially kicked off their efforts in March in Paris. NATO plans to use both teams’ findings to inform its 2023 biennial AI strategy update.
Exploratory Team Topic: Accelerating AI Adoption for NATO
Highly publicized AI tools have created a stir among organizations eager to take advantage of new technology. “It’s easy to forget Microsoft is investing $10 billion and years of dedicated resources in ChatGPT,” Bloedorn notes.
When sponsors express their excitement about entering the AI arena, Bloedorn and Kotras diplomatically point to an AI maturity assessment they’ve fine-tuned over the years.
“Before we start having conversations about what’s happening at the bleeding edge, we need to reference our roadmap,” Bloedorn explains. “These discussions lead to AI infrastructure requirement questions that aren’t as appealing.”
Relevant questions include:
- Do you have policies and governance in place?
- Do you have data to support these efforts?
- Do you have professionals to operate these tools?
- Do you have a budget for maintenance costs?
In short, MITRE’s AI maturity model focuses and grounds organizations eager to adopt AI across their workforce. The first step is establishing where they are; the second is determining where they want to go, Bloedorn says.
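To make the two steps concrete, here is a minimal sketch of how a readiness self-assessment along the lines of the questions above might be scored. The dimension names, 0–3 scale, and scoring logic are illustrative assumptions for this example, not MITRE’s actual maturity model.

```python
# Hypothetical sketch of an AI-readiness self-assessment.
# Dimensions mirror the checklist questions; the 0-3 scale and
# the "gap" threshold are illustrative, not MITRE's model.

READINESS_QUESTIONS = {
    "governance": "Do you have policies and governance in place?",
    "data": "Do you have data to support these efforts?",
    "workforce": "Do you have professionals to operate these tools?",
    "budget": "Do you have a budget for maintenance costs?",
}

def assess(scores: dict[str, int]) -> tuple[int, list[str]]:
    """Step one: total the scores to establish where an organization is.
    Step two: flag dimensions scoring below 2 as gaps to address next."""
    gaps = [dim for dim, score in scores.items() if score < 2]
    return sum(scores.values()), gaps

total, gaps = assess({"governance": 3, "data": 1, "workforce": 2, "budget": 0})
print(total, gaps)  # 6 ['data', 'budget']
```

The point of the sketch is the ordering: an honest baseline score comes before any roadmap conversation, which is exactly the sequencing Bloedorn describes.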
For NATO’s purposes, the team is now building on the AI maturity model methodology to enable groups within the NATO community to benchmark AI adoption practices and capabilities and eventually integrate AI into their strategy responsibly. One of MITRE’s government sponsors is supporting the effort.
“The success of NATO’s AI adoption strategy is predicated on interoperability—a common understanding of how to communicate effectively using AI technologies with a corresponding focus on implementing foundational AI principles responsibly,” Kotras explains.
The bottom line: Help NATO rapidly identify, acquire, test, deploy, and maintain trustworthy AI.
Exploratory Team Topic: Securing and Assuring AI-Enabled Systems
“Without proactive consideration of AI security and assurance threats and vulnerabilities, organizations run the risk of operating AI-enabled systems in environments that are susceptible to adversary attack or catastrophic failure,” says Liaghati, MITRE’s AI strategy execution and operations manager.
The ET led by Liaghati and Manville will primarily focus on AI security and tie directly to MITRE’s flagship AI security capability, ATLAS™, which was inspired by MITRE ATT&CK® as a mechanism to aggregate and share adversary tactics. Since its inception three years ago, an engaged community of 80-plus organizations has grown and shaped the ATLAS knowledge base and its capabilities.
“Organizations don’t realize how vulnerable they can become as AI is incorporated into their systems. ATLAS breaks down that threat to reveal new vulnerabilities,” Liaghati says. “Traditional cybersecurity methods don’t mitigate these risks nearly as well as people assume.”
The team’s broader goal is to expand beyond security into AI assurance—creating an ecosystem of trust to operate AI systems productively. Liaghati offers an example of a military officer trying to make a decision using AI-driven tools without fully understanding them.
“In this scenario, it’s very possible that the officer will revert to old-school methods because they know, trust, and understand them better,” she says. “We’re working together to help decision-makers use AI in high-consequence environments, with our work directly informing their risk assessments and ultimately their trust in AI-enabled systems.”
“MITRE’s goal is to increase the security of the entire community by sharing and building capabilities in AI security to help organizations better understand and mitigate risks,” Liaghati explains. “Government understands they’re not likely to develop all cutting-edge AI-enabled systems and that most major advancements in tech will be developed in industry.
“This is a situation where the community is only as strong as its weakest link, and we’re all incentivized to work together and make the entire community stronger.”
The bottom line: Share protected threat intelligence and vulnerability data, enhance defensive and mitigation techniques, and develop red-teaming capabilities and exercises.
A “Whole-of-World Approach” for Solving AI Challenges
The NATO community enthusiastically welcomed both ETs with what Pham calls a “tremendous level of interest and support.” If the work sustains momentum and interest, the teams can evolve into research task groups, which are more formal, three-year engagements.
“Everyone is recognizing that no one is going to solve AI assurance and security in a silo,” Liaghati says. “We need to share as much data as we can across these geographical and organizational boundaries.”
MITRE is uniquely suited to collect data from industry and governments, anonymize it, then push it back out to benefit the larger community. For example, the Aviation Safety Information Analysis and Sharing public-private collaboration has produced dramatic improvements in U.S. aviation safety over the years. It now serves as an effective model for a wide variety of other safety applications.
“How countries and companies adopt AI technology and respond to threats aren’t geographically bound issues,” Pham says. “These major challenges—involving healthcare, climate, financial markets, and more—require a whole-of-world approach.”