AI Assurance: A Repeatable Process for Assuring AI-Enabled Systems—Executive Summary
The White House is encouraging federal agencies to remove barriers to innovation, accelerate the adoption of artificial intelligence (AI) tools, and leverage AI to better fulfill their missions, all while setting up guardrails to mitigate risks. In recent years, the U.S. has made progress in addressing these risks, but significant gaps remain in our understanding of the dangers posed by AI-enabled applications when they support consequential government functions. A repeatable engineering approach for assuring AI-enabled systems is required to extract maximum value from AI while protecting society from harm.
This document provides an executive summary of "AI Assurance: A Repeatable Process for Assuring AI-enabled Systems," which defines AI assurance as a process for discovering, assessing, and managing risk throughout an AI-enabled system's life cycle so that the system operates effectively for the benefit of its stakeholders. The process is designed to be adaptable to different contexts and sectors, making it relevant to the national discussion on regulating AI.