AD1121116 Trust in the Machine: AI, Autonomy, and Military Decision Making (2019)

REPORT DOCUMENTATION PAGE
Form Approved OMB No. 0704-0188

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing this collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Department of Defense, Washington Headquarters Services, Directorate for Information Operations and Reports (0704-0188), 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.
1. REPORT DATE (DD-MM-YYYY): 05-06-2019
2. REPORT TYPE: FINAL
3. DATES COVERED (From - To): July 2018 - June 2019
4. TITLE AND SUBTITLE: Trust in the Machine: AI, Autonomy, and Military Decision Making with Lethal Consequences
5a. CONTRACT NUMBER: N/A
5b. GRANT NUMBER: N/A
5c. PROGRAM ELEMENT NUMBER: N/A
5d. PROJECT NUMBER: N/A
5e. TASK NUMBER: N/A
5f. WORK UNIT NUMBER: N/A
6. AUTHOR(S): CDR Christi S. Montgomery. Paper Advisor (if any): CDR Michael O’Hara, PhD
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Ethics and Emerging Military Technology Certificate Program, Naval War College, 686 Cushing Road, Newport, RI 02841-1207
8. PERFORMING ORGANIZATION REPORT NUMBER: N/A
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): N/A
10. SPONSOR/MONITOR’S ACRONYM(S): N/A
11. SPONSOR/MONITOR'S REPORT NUMBER(S): N/A
12. DISTRIBUTION / AVAILABILITY STATEMENT: Distribution Statement A: Approved for public release; distribution is unlimited.
13. SUPPLEMENTARY NOTES: Submitted to the Faculty of the U.S. Naval War College, Newport, RI, in partial satisfaction of the requirements of the Certificate Program in Ethics and Emerging Military Technology (EEMT).
14. ABSTRACT

The U.S. military has employed artificially intelligent and autonomous-capable weapons systems since the 1980s, but technology and capabilities have changed drastically in the past three decades. To remain competitive, U.S. military leaders must reconsider AI and autonomous weapons employment doctrine across the spectrum of conflict and work to improve trust in AI and autonomous technology. As codified in Department of Defense policy, the development of artificial intelligence (AI) and autonomous weapons under mostly peacetime conditions has favored policy-maker insistence that military leaders who employ such technology exercise appropriate levels of human judgment over the use of force. In insisting on human judgment, policy-makers made outdated assumptions about the availability of time and implied a trust in the supremacy of human judgment over machine performance. Technological developments in recent years have compressed reaction times: the time available to bring lethal force to bear has decreased, while the amount of contextual information that enables decisions on the use of force has increased. At the same time, gray zone conflict activity is increasingly blurring the line between peacetime operations and warfare. U.S. military forces exerting forward, deterrent presence in areas prone to activity not in accordance with international norms or law are increasingly exposed to complex risk that may be misunderstood and lethally miscalculated.

When operating in the gray zone, where the distinction between peace and war blurs and where technology has compressed reaction times, servicemembers face a moral gray zone. In the moral gray zone, operators encounter a potential dilemma between the duty to abide by the principle of distinction in the law of armed conflict (LOAC) and the inherent right to self-defense. The ambiguity inherent in the operational gray zone contributes to a higher-than-average likelihood of human judgment failures in the accompanying moral gray zone. AI and autonomous technology have the potential to improve both the success of self-defensive actions and adherence to LOAC, particularly on compressed timescales, but only if humans and organizations are able to establish trust in the machine operating intelligently and autonomously. Establishing trust requires that humans perceive machine actions as predictable, transparent, and traceable. Trust also requires understanding how judgments of accountability, morality, and ethics differ between machine and human. Addressing these considerations in the development of new AI and autonomous systems will be necessary to ensure that servicemember and societal trust in the Department of Defense (DoD) is preserved.
15. SUBJECT TERMS: artificial intelligence, autonomous, autonomy, trust, decision-making, heuristics, gray zone, command, Aegis, Aegis Weapons System, Patriot missile system, ethics, morality, deontological, utilitarian, robot, machine, autonomous weapons systems, accountability, predictability
16. SECURITY CLASSIFICATION OF: UNCLASSIFIED
a. REPORT: UNCLASSIFIED
b. ABSTRACT: UNCLASSIFIED
c. THIS PAGE: UNCLASSIFIED
17. LIMITATION OF ABSTRACT:
18. NUMBER OF PAGES: 59
19a. NAME OF RESPONSIBLE PERSON: Director, EEMT Program
19b. TELEPHONE NUMBER (include area code): 401-841-7542

Standard Form 298 (Rev. 8-98)