A Sensible Regulatory Framework for AI Security

By Charles Clancy, Ph.D., Douglas Robbins, Ozgur Eris, Ph.D., Lashon Booker, Ph.D., Katie Enos

This paper explores potential options for AI regulation and makes recommendations on how to establish guardrails to shape the development and use of AI.

Artificial intelligence can bring precision and speed to every sector—defense, healthcare, transportation, education, and more. At the same time, AI poses potential risks to people and property, raising social, ethical, geopolitical, even existential questions. The prospect of AI with a “mind” of its own, or a human directing an AI to cause harm, is alarming. MITRE is applying our deep systems engineering expertise to enable impactful, secure, and equitable AI.

With AI technologies advancing at a staggering rate, the time to put best practices in place to promote AI for the public good is now. We're drawing on six decades of experience advancing cyber and AI capabilities in the defense and civil domains, and forging strategic collaborations with government, industry, and academia, to inform sensible AI regulation in the near and long term. We recommend where to place guardrails to best shape AI development and application.
