This paper explores potential options for AI regulation and makes recommendations on how to establish guardrails to shape the development and use of AI.
Artificial intelligence can bring precision and speed to every sector—defense, healthcare, transportation, education, and more. At the same time, AI poses potential risks to people and property, raising social, ethical, geopolitical, and even existential questions. The prospect of an AI with a “mind” of its own, or of a human directing an AI to cause harm, is alarming. MITRE is applying our deep systems engineering expertise to enable impactful, secure, and equitable AI.
With AI technologies advancing at a staggering rate, now is the time to put best practices in place that promote AI for the public good. We’re drawing on six decades of experience advancing cybersecurity and AI in the defense and civil domains, and forging strategic collaborations with government, industry, and academia, to inform sensible AI regulation in the near and long term. We recommend where to place guardrails to best shape AI development and application.