CHRISTOPHER A. MOUTON, CALEB LUCAS, ELLA GUEST
The Operational Risks of AI in Large-Scale Biological Attacks
Results of a Red-Team Study
The rapid advancement of artificial intelligence (AI) technologies and applications has far-reaching implications across multiple domains, including in the potential development of a biological weapon. This potential application of AI raises particular concerns because it is accessible to nonstate actors and individuals. The speed at which AI technologies are evolving often surpasses the capacity of government regulatory oversight, leading to a potential gap in existing policies and regulations.
The coronavirus disease 2019 (COVID-19) pandemic serves as an example of the damaging impact that even a moderate pandemic can have on global systems.[1] Exacerbating the risk is the economic imbalance between offense and defense in biotechnology. For instance, the marginal cost for a university laboratory to resurrect a dangerous virus similar to smallpox can be as little as $100,000,[2] while developing a complex vaccine against such a virus can cost over $1 billion.[3] Previous attempts to weaponize biological agents—such as that of the apocalyptic cult Aum Shinrikyo, which attacked the Tokyo subway with botulinum toxin—failed because of a lack of understanding of the bacterium.[4] However, there is concern
KEY FINDINGS

- Our research involving multiple LLMs indicates that biological weapon attack planning currently lies beyond their capability frontier as assistive tools. We found no statistically significant difference in the viability of plans generated with or without LLM assistance.
- Our research did not measure the distance between the existing LLM capability frontier and the knowledge needed for biological weapon attack planning. Given the rapid evolution of AI, it is prudent to monitor future developments in LLM technology and the potential risks associated with its application to biological weapon attack planning.
- Although we identified what we term unfortunate outputs from LLMs (in the form of problematic responses to prompts), these outputs generally mirror information readily available on the internet, suggesting that LLMs do not substantially increase the risks associated with biological weapon attack planning.
- To enhance possible future research, we would aim to increase the sensitivity of our tests by expanding the number of LLMs tested, involving more researchers, and removing unhelpful sources of variability in the testing process. Those efforts will help ensure a more accurate assessment of potential risks and offer a proactive way to manage the evolving measure-countermeasure dynamic.
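The first finding above turns on a statistical comparison: viability scores for attack plans drafted with LLM assistance versus without it showed no statistically significant difference. The excerpt does not specify which test or which scores were used, so the sketch below is purely illustrative; the score values, the group labels, and the choice of a two-sided Mann-Whitney U test are assumptions made for illustration, not the study's actual data or method.

```python
# Illustrative sketch only: compares two groups of hypothetical plan
# viability scores, mirroring the kind of test implied by the finding of
# "no statistically significant difference" with vs. without LLM assistance.
from scipy.stats import mannwhitneyu

# Hypothetical viability scores (e.g., on a low-to-high scale) for plans
# drafted by cells with internet access only vs. internet plus an LLM.
internet_only = [2.0, 3.5, 1.5, 4.0, 2.5, 3.0]
internet_plus_llm = [3.0, 2.5, 4.0, 1.5, 3.5, 2.0]

# Two-sided test of whether the two groups' score distributions differ.
stat, p_value = mannwhitneyu(internet_only, internet_plus_llm,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```

With hypothetical data like this, a p-value above the chosen significance level (for example, 0.05) would correspond to the "no statistically significant difference" wording in the finding; the study's own conclusion rests on its actual data and test, which the excerpt does not reproduce.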