AI ASSURANCE
A Repeatable Process for Assuring AI-enabled Systems
Executive Summary
Federal agencies are being encouraged by the White House to remove barriers to innovation, accelerate the adoption of artificial intelligence (AI) tools, and leverage AI to better fulfill their missions, all while establishing guardrails to mitigate risk.
Increasing the use of AI in government activities will likely have consequential effects on the nation and the world, in areas ranging from transportation to more efficient government to strengthened national security. Given this promise, how do we assure that these systems function as intended and are safe, secure, and trustworthy?
In the last two years, the U.S. has made progress in addressing these concerns; most noteworthy are the creation and publication of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) (Tabassi, 2023) and the recent AI executive order (EO) from the Biden administration (U.S. Office of the President, 2023). Nevertheless, significant gaps remain in our current understanding of the risks posed by AI-enabled applications when they support consequential government functions. While the NIST AI RMF and the AI EO are useful catalysts, a repeatable engineering approach for assuring AI-enabled systems is required to extract maximum value from AI while protecting society from harm.
In this paper, we articulate AI assurance as a process for discovering,
assessing, and managing risk throughout an AI-enabled system’s
life cycle to ensure it operates effectively for the benefit of its
stakeholders. The process is designed to be adaptable to different
contexts and sectors, making it relevant to the national discussion on
regulating artificial intelligence.
MITRE defines AI assurance as a process for discovering, assessing, and managing risk throughout the life cycle of an AI-enabled system so that it operates effectively to the benefit of its stakeholders.