EDITORIAL
DARPA's explainable AI (XAI) program: A retrospective
The Defense Advanced Research Projects Agency (DARPA) formulated the explainable artificial intelligence (XAI) program in
2015 with the goal of enabling end users to better understand, trust, and effectively manage artificially intelligent systems. In
2017, the 4-year XAI research program began. Now, as XAI comes to an end in 2021, it is time to reflect on what succeeded,
what failed, and what was learned. This article summarizes the goals, organization, and research progress of the XAI program.
1 | CREATION OF XAI
Dramatic success in machine learning has created an explosion of new AI capabilities. Continued advances promise to
produce autonomous systems that perceive, learn, decide, and act on their own. These systems offer tremendous bene-
fits, but their effectiveness will be limited by the machine's inability to explain its decisions and actions to human users.
This issue is especially important for the United States Department of Defense (DoD), which faces challenges that
require the development of more intelligent, autonomous, and reliable systems. XAI will be essential for users to under-
stand, appropriately trust, and effectively manage this emerging generation of artificially intelligent partners.
The problem of explainability is, to some extent, the result of AI's success. In the early days of AI, the predominant rea-
soning methods were logical and symbolic. These early systems reasoned by performing some form of logical inference on
(somewhat) human readable symbols. Early systems could generate a trace of their inference steps, which could then
become the basis for explanation. As a result, there was significant work on how to make these systems explainable.1-5
Yet, these early AI systems were ineffective; they proved too expensive to build and too brittle against the complexities
of the real world. Success in AI came as researchers developed new machine learning techniques that could construct
models of the world using their own internal representations (eg, support vectors, random forests, probabilistic models,
and neural networks). These new models were much more effective, but necessarily more opaque and less explainable.
The year 2015 was an inflection point in the need for XAI. Data analytics and machine learning had just experienced a
decade of rapid progress.6 The deep learning revolution had just begun, following the breakthrough ImageNet demonstration
in 2012.6,7 The popular press was alive with animated speculation about Superintelligence8 and the coming AI Apocalypse.9,10
Everyone wanted to know how to understand, trust, and manage these mysterious, seemingly inscrutable, AI systems.
2015 also saw the emergence of initial ideas for providing explainability. Some researchers were exploring deep
learning techniques, such as the use of deconvolutional networks to visualize the layers of convolutional networks.11
Other researchers were pursuing techniques to learn more interpretable models, such as Bayesian Rule Lists.12 Others
were developing model-agnostic techniques that could experiment with a machine learning model as a black box to
infer an approximate, explainable model, such as LIME.13 Yet others were evaluating the psychological and human-
computer interaction aspects of the explanation interface.13,14
DARPA spent a year surveying researchers, analyzing possible research strategies, and formulating the goals and
structure of the program. In August 2016, DARPA released DARPA-BAA-16-53, soliciting proposals.
1.1 | XAI program goals
The stated goal of explainable artificial intelligence (XAI) was to create a suite of new or modified machine learning
techniques that produce explainable models that, when combined with effective explanation techniques, enable end
users to understand, appropriately trust, and effectively manage the emerging generation of AI systems.