AI Assurance: A Repeatable Process for Assuring AI-enabled Systems

Extracting maximum value from AI while protecting against societal harm requires a repeatable engineering approach for assuring AI-enabled systems in mission contexts. This paper articulates such an approach and outlines how it can be used to develop an AI Assurance Plan to achieve and maintain assurance throughout the life cycle of an AI-enabled system.
Artificial intelligence increasingly informs consumer choices—from movie recommendations to routine customer service inquiries. However, high-consequence AI applications, such as autonomous vehicles, access to government benefits, or healthcare decisions, raise significant concerns. The MITRE-Harris Poll survey on AI trends finds that just 48% of the American public believes AI is safe and secure, whereas 78% voice concern about its potential for malicious use. Assuring AI-enabled systems to address these concerns is nontrivial and will require collaboration across government, industry, and academia.
In AI Assurance: A Repeatable Process for Assuring AI-enabled Systems, we define AI assurance as a process for discovering, assessing, and managing risk throughout an AI-enabled system's life cycle so that the system operates effectively for the benefit of its stakeholders. The process results in an AI Assurance Plan, a comprehensive artifact outlining the activities necessary to achieve and maintain the assurance of an AI-enabled system in a mission context. We also discuss how this process may be applied in different phases of the AI life cycle, such as system development, acquisition, certification, and deployment. We conclude by highlighting the need for sector-specific AI assurance labs and emphasizing the importance of public-private partnerships in advancing AI assurance.