MITRE STATEMENT TO THE DEFENSE INNOVATION BOARD’S PROJECT ON AI PRINCIPLES
By Eliahu Niewood, Director, Cross-Cutting Urgent Innovation Cell
The MITRE Center for Technology & National Security

First, we’d like to thank the Defense Innovation Board for the opportunity to briefly touch
on the ethical considerations for military application of artificial intelligence. The MITRE
Corporation is deeply committed both to ethical approaches to modern warfare and to
enabling our Service men and women to have at hand the best technology available to
protect them and to help them achieve their mission. Artificial intelligence clearly impacts
both of those commitments. AI is a key emerging technology that will enable the Joint
Force to fight and win future wars. Yet for several reasons, the Department has struggled
to field relevant capabilities leveraging this technology. Some of these reasons revolve
around AI’s being developed largely in the commercial sector for consumer applications.
Some revolve around technical challenges with dirty data and complex system dynamics.
Some, however, revolve around ethical concerns related to AI weapons and military decision making. Clearly, the Department’s integration of AI into military operations must be done in a manner consistent both with our country’s ethics and with the laws of warfare. We believe, however, that from an ethical perspective, AI is similar to a host of technologies that have preceded it and that have been fielded and used in ethical ways.
In fact, we believe integrating AI into military systems and operations can help to reduce
civilian casualties while providing our troops a critical military advantage.
For example, take the Claymore mine, a remotely triggered anti-personnel device not banned by the Ottawa Convention. Yet it can be detonated by tripwire or by other means that don’t require actually seeing the target. What if instead it came equipped with a sensor that allowed detonation only if the targets were determined to be adult-sized humans carrying weapons? Or take the tragic 1988 downing of an Iranian airliner by the
USS Vincennes. The crew of the Vincennes was forced to make a split-second decision
about the threat posed by an unknown aircraft before they fired the missile.
What if instead the missile had an AI-based seeker that could distinguish between a civilian airliner and an enemy aircraft and shut off its fuze, or even guide itself away from the aircraft? In both cases, as well as in many others, AI could enable both enhanced capabilities for our warfighters and reductions in the likelihood of non-combatant casualties. These examples highlight two of the three points we’d like to bring to your attention about AI’s use in DoD systems.
The first point is that AI is not a fundamental change in the way we employ advanced
weapons. Many of the weapons in our inventory today select their own aimpoints
or home in on a target within a set of constraints. The Tomahawk cruise missile, for
example, uses seekers and guidance algorithms which correlate the surrounding terrain
to onboard digital maps to guide itself to its target. Many air-to-air missiles “lock on”
after launch, meaning that the weapon finds its own aimpoint when its seeker is turned
on during flight. Torpedoes search out specific acoustic signatures, matching those
signatures against onboard libraries. All of these weapons already make “autonomous decisions” about where they go and what they do once a human makes the decision to launch them. With AI technologies, we may have less real-time visibility into how the weapon makes a decision in a specific scenario, and we may have more difficulty testing the weapon because of the complexity of the AI, but at a fundamental level the human has given up control and decision making with many existing weapons once they are launched. That launch decision, with or without AI inside the weapon, must be an ethical
one that balances risk to others with risk to the warfighter. That was true in WWII, that is
true today, and that will still be true in the future.
A second point these examples highlight is that the human is not an ideal
decision-maker, let alone a perfect decision-maker. Take the example of the USS
Vincennes mentioned above. According to some reports, the Aegis Weapon System
on the cruiser recorded that the Iranian aircraft was squawking a civilian transponder
code and climbing away from the Vincennes at the time the weapon was fired. Under threat and forced to make a decision in a very short time, the crew of the Vincennes understandably was not able to fully process all the available information. In his book “The Fighters,” C.J. Chivers describes a young US Navy pilot early in the war in Afghanistan launching a precision-guided weapon, knowing at the time that something felt wrong about the weapon’s target but not being sure enough to hold off on its release. For the rest of his career, that pilot was haunted by that decision, never knowing whether it was right but wishing he could change it. Used properly, AI technology can
lead to better decision making and should lead to reductions in errors that result in
collateral damage and unnecessary civilian casualties.
The last point we would like to make today is that AI technology is not primarily focused
on the “pointy end of the spear,” directly making decisions to launch and point weapons.
Far from it. Most of the applications envisioned for AI in the Department of Defense today
revolve around other parts of operations, around better maintenance for aircraft, around
fusing data from different sources, around finding “signals” in high volumes of data, and
around making better strategic decisions.
These applications not only do not directly put lives at risk, but could actually serve to better protect civilian populations, as well as our warfighters, even while dramatically improving our warfighting capabilities.
In closing, it is important to remember that there are three ethical commitments we
must balance in any set of principles to be developed. We have an ethical responsibility to
minimize harm to civilians in any military operation. We also have an ethical responsibility
to our fellow citizens to find ways to use AI to enhance their security, whether that’s helping deter or defeat a North Korean nuclear weapon launch, finding a terrorist cell before it develops a dirty bomb, or preventing nation-state cyber attacks on our power
grid. And above all, we have an ethical commitment to our Soldiers, Sailors, Airmen
and Marines, who put their lives at risk for all of us, to find ways to protect them and to
provide them with the absolute best capabilities our nation can produce. AI can be a
positive enabler for all of these commitments.
Thank you for your time.

©2019 The MITRE Corporation. All rights reserved. Approved for Public Release. Distribution unlimited. Case number 19-0168