REPORT DOCUMENTATION PAGE
Form Approved
OMB No. 0704-0188
1. REPORT DATE (DD-MM-YYYY): 05-06-2019
3. DATES COVERED (From - To): July 2018 - June 2019
4. TITLE AND SUBTITLE: Trust in the Machine: AI, Autonomy, and Military Decision Making with Lethal Consequences
5c. PROGRAM ELEMENT NUMBER: N/A
6. AUTHOR(S): CDR Christi S. Montgomery
Paper Advisor (if any): CDR Michael O'Hara, PhD
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES):
Ethics and Emerging Military Technology Certificate Program
Naval War College
686 Cushing Road
Newport, RI 02841-1207
8. PERFORMING ORGANIZATION REPORT NUMBER: N/A
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES):
10. SPONSOR/MONITOR'S ACRONYM(S):
11. SPONSOR/MONITOR'S REPORT NUMBER(S):
12. DISTRIBUTION / AVAILABILITY STATEMENT
Distribution Statement A: Approved for public release; distribution is unlimited.
13. SUPPLEMENTARY NOTES: Submitted to the Faculty of the U.S. Naval War College, Newport, RI, in partial satisfaction of the requirements of the Certificate Program in Ethics and Emerging Military Technology (EEMT).
14. ABSTRACT
The U.S. military has employed artificially intelligent and autonomous-capable weapons systems since the 1980s, but the technology and its capabilities have changed drastically in the past three decades. To remain competitive, U.S. military leaders must reconsider employment doctrine for artificial intelligence (AI) and autonomous weapons across the spectrum of conflict, and must work to improve trust in AI and autonomous technology. As codified in Department of Defense policy, the development of AI and autonomous weapons under mostly peacetime conditions has favored policy-makers' insistence that military leaders who employ such technology exercise appropriate levels of human judgment over the use of force. In insisting on human judgment, policy-makers made outdated assumptions about the availability of time and implied trust in the supremacy of human judgment over machine performance. Technological developments in recent years have compressed reaction times: the time available to bring lethal force to bear has decreased, while the amount of contextual information that informs decisions on the use of force has increased. At the same time, gray zone conflict activity is increasingly blurring the line between peacetime operations and warfare. U.S. military forces exerting forward, deterrent presence in areas prone to activity that violates international norms or law are increasingly exposed to complex risk that may be misunderstood and lethally miscalculated.
When operating in the gray zone, where the distinction between peace and war blurs and where technology has compressed reaction times, servicemembers also face a moral gray zone: a potential dilemma between the duty to abide by the principle of distinction in the law of armed conflict (LOAC) and the inherent right to self-defense. The ambiguity inherent in the operational gray zone makes failures of human judgment in the accompanying moral gray zone more likely. AI and autonomous technology have the potential to improve both the success of self-defensive actions and adherence to LOAC, particularly on compressed timescales, but only if humans and organizations can establish trust in the machine operating intelligently and autonomously. Establishing trust requires that humans perceive machine actions as predictable, transparent, and traceable. Trust also requires understanding how judgments of accountability, morality, and ethics differ between machine and human. Addressing these considerations in the development of new AI and autonomous systems will be necessary to preserve servicemember and societal trust in the Department of Defense (DoD).
15. SUBJECT TERMS
artificial intelligence, autonomous, autonomy, trust, decision-making, heuristics, gray zone, command, Aegis, Aegis Weapons System, Patriot missile system, ethics, morality, deontological, utilitarian, robot, machine, autonomous weapons systems, accountability, predictability
16. SECURITY CLASSIFICATION OF:
a. REPORT: UNCLASSIFIED
b. ABSTRACT: UNCLASSIFIED
c. THIS PAGE: UNCLASSIFIED
17. LIMITATION OF ABSTRACT: UNCLASSIFIED
18. NUMBER OF PAGES: 59
19a. NAME OF RESPONSIBLE PERSON:
19b. TELEPHONE NUMBER (include area code):
Standard Form 298 (Rev. 8-98)