EDITORIAL
DARPA's explainable AI (XAI) program: A retrospective
The Defense Advanced Research Projects Agency (DARPA) formulated the explainable artificial intelligence (XAI) program in 2015 with the goal of enabling end users to better understand, trust, and effectively manage artificially intelligent systems. In 2017, the 4-year XAI research program began. Now, as XAI comes to an end in 2021, it is time to reflect on what succeeded, what failed, and what was learned. This article summarizes the goals, organization, and research progress of the XAI program.
1 | CREATION OF XAI
Dramatic success in machine learning has created an explosion of new AI capabilities. Continued advances promise to produce autonomous systems that perceive, learn, decide, and act on their own. These systems offer tremendous benefits, but their effectiveness will be limited by the machine's inability to explain its decisions and actions to human users. This issue is especially important for the United States Department of Defense (DoD), which faces challenges that require the development of more intelligent, autonomous, and reliable systems. XAI will be essential for users to understand, appropriately trust, and effectively manage this emerging generation of artificially intelligent partners.
The problem of explainability is, to some extent, the result of AI's success. In the early days of AI, the predominant reasoning methods were logical and symbolic. These early systems reasoned by performing some form of logical inference on (somewhat) human-readable symbols. Early systems could generate a trace of their inference steps, which could then become the basis for explanation. As a result, there was significant work on how to make these systems explainable.[1-5]
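To make this mechanism concrete, the toy sketch below shows how a forward-chaining rule engine can record each inference step it takes, so that the recorded trace doubles as a rudimentary explanation. The rules and facts are invented for illustration and are not drawn from any of the cited systems.

```python
# Toy forward-chaining reasoner that records its inference trace.
# The rules and facts are hypothetical, purely for illustration.
RULES = [
    ({"has_fever", "has_cough"}, "likely_flu"),  # premises -> conclusion
    ({"likely_flu"}, "recommend_rest"),
]

def infer(initial_facts):
    """Apply rules to a fixed point; return derived facts and the trace."""
    facts, trace = set(initial_facts), []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{' & '.join(sorted(premises))} => {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"has_fever", "has_cough"})
print("\n".join(trace))
# has_cough & has_fever => likely_flu
# likely_flu => recommend_rest
```

Because every conclusion is tied to an explicit rule firing, answering "why?" amounts to replaying the trace.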
Yet, these early AI systems were ineffective; they proved too expensive to build and too brittle against the complexities
of the real world. Success in AI came as researchers developed new machine learning techniques that could construct
models of the world using their own internal representations (eg, support vectors, random forests, probabilistic models,
and neural networks). These new models were much more effective, but necessarily more opaque and less explainable.
The year 2015 was an inflection point in the need for XAI. Data analytics and machine learning had just experienced a decade of rapid progress.[6] The deep learning revolution had just begun, following the breakthrough ImageNet demonstration in 2012.[6,7] The popular press was alive with animated speculation about Superintelligence[8] and the coming AI Apocalypse.[9,10] Everyone wanted to know how to understand, trust, and manage these mysterious, seemingly inscrutable AI systems.
The year 2015 also saw the emergence of initial ideas for providing explainability. Some researchers were exploring deep learning techniques, such as the use of deconvolutional networks to visualize the layers of convolutional networks.[11] Other researchers were pursuing techniques to learn more interpretable models, such as Bayesian Rule Lists.[12] Others were developing model-agnostic techniques that could experiment with a machine learning model (as a black box) to infer an approximate, explainable model, such as LIME.[13] Yet others were evaluating the psychological and human-computer interaction aspects of the explanation interface.[13,14]
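To make the black-box idea concrete, the sketch below captures the core of the LIME approach under simplifying assumptions: Gaussian perturbations around the instance, an RBF proximity kernel, and a ridge surrogate. The `black_box_predict` function is a hypothetical stand-in for any trained model's prediction function; this is a sketch of the idea, not the published algorithm or the authors' library.

```python
# Minimal sketch of the LIME idea: probe a black-box model around one
# instance and fit a weighted linear surrogate as a local explanation.
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(black_box_predict, x, num_samples=1000, kernel_width=0.75):
    """Return per-feature weights approximating the model's behavior near x."""
    rng = np.random.default_rng(0)
    # 1. Sample the neighborhood of x with Gaussian perturbations.
    samples = x + rng.normal(scale=0.5, size=(num_samples, x.shape[0]))
    # 2. Query the black box for predictions on the perturbed points.
    preds = black_box_predict(samples)
    # 3. Weight each sample by its proximity to x (RBF kernel).
    dists = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable linear surrogate on the weighted samples.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, preds, sample_weight=weights)
    return surrogate.coef_  # local feature importances near x
```

In the published method a sparse linear model keeps the explanation short; ridge regression is used here only for brevity.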
DARPA spent a year surveying researchers, analyzing possible research strategies, and formulating the goals and structure of the program. In August 2016, DARPA released DARPA-BAA-16-53, its call for proposals.
1.1 | XAI program goals
The stated goal of the XAI program was to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of AI systems.