
MITRE’s Response to the NIST RFI on the Executive Order Concerning Artificial Intelligence

MITRE’s data-driven responses to a NIST RFI on several sections of the 2023 Executive Order on Artificial Intelligence.


What’s the issue? In response to the rapid pace of innovation in artificial intelligence (AI), the White House issued an Executive Order (EO) in October 2023 establishing new standards for AI safety and security, privacy protection, equity, and more. The National Institute of Standards and Technology (NIST) is charged with developing guidelines to implement the EO, with the goal of ensuring the safe and trustworthy development and responsible use of AI.

What did we do? MITRE operates two Federally Funded Research and Development Centers (FFRDCs) with roles supporting implementation of the AI EO: the Center for Enterprise Modernization, on behalf of the Department of Commerce, and the National Cybersecurity FFRDC, on behalf of NIST. To support a comprehensive view of the EO, MITRE’s Center for Data-Driven Policy led a joint analysis of the questions NIST posed, seeking data and evidence from our work in the public interest that would help the agency understand opportunities and develop plans that are evidence-based, actionable, and effective.

What did we find? Our inputs highlight a systematic approach for assessing the assurance of AI systems within their sociotechnical contexts, the importance of red-teaming, the sharing of AI incidents and risk mitigations, strategies for reducing the risks of synthetic content, and an AI lifecycle framework for managing risks and promoting trustworthiness.

Our overarching recommendation is that NIST adopt a holistic perspective in implementing the EO rather than viewing each requirement as an isolated task, an approach similar to NIST's own Cybersecurity Framework.

The benefits of this approach include:

  • Providing a coherent structure that links the disparate elements of AI safety, security, and global technical standards development;
  • Enabling NIST to prioritize activities based on potential impact on AI safety and security, while also facilitating identification and mitigation of risks throughout the AI lifecycle;
  • Allowing NIST the flexibility to adapt its approach as AI technologies and their associated risks evolve;
  • Helping NIST ensure that the guidelines, standards, and best practices it develops are comprehensive, consistent, and aligned with the overall goal of promoting AI safety and security.