CHRISTOPHER A. MOUTON, CALEB LUCAS, ELLA GUEST
The Operational Risks
of AI in Large-Scale
Biological Attacks
Results of a Red-Team Study
The rapid advancement of artificial intelligence (AI) technologies and applications has far-reaching implications across multiple domains, including in the potential development of a biological weapon. This potential application of AI raises particular concerns because it is accessible to nonstate actors and individuals. The speed at which AI technologies are evolving often surpasses the capacity of government regulatory oversight, leading to a potential gap in existing policies and regulations.
The coronavirus disease 2019 (COVID-19) pandemic serves as an example of the damaging impact that even a moderate pandemic can have on global systems.[1] Exacerbating the risk is the economic imbalance between offense and defense in biotechnology. For instance, the marginal cost for a university laboratory to resurrect a dangerous virus similar to smallpox can be as little as $100,000,[2] while developing a complex vaccine against such a virus can cost over $1 billion.[3]
Previous attempts to weaponize biological agents, such as the apocalyptic cult Aum Shinrikyo's attempted releases of botulinum toxin in Tokyo, failed because of a lack of understanding of the bacterium.[4] However, there is concern that rapidly evolving AI tools could help close such knowledge gaps for future attackers.
KEY FINDINGS
■ Our research involving multiple LLMs indicates that biological weapon attack planning currently lies beyond their capability frontier as assistive tools. We found no statistically significant difference in the viability of plans generated with or without LLM assistance (an illustrative sketch of such a comparison follows these findings).
■ Our research did not measure the distance between the existing LLM capability frontier and the knowledge needed for biological weapon attack planning. Given the rapid evolution of AI, it is prudent to monitor future developments in LLM technology and the potential risks associated with its application to biological weapon attack planning.
■ Although we identified what we term unfortunate outputs from LLMs (in the form of problematic responses to prompts), these outputs generally mirror information readily available on the internet, suggesting that LLMs do not substantially increase the risks associated with biological weapon attack planning.
■ To enhance possible future research, we would aim to increase the sensitivity of our tests by expanding the number of LLMs tested, involving more researchers, and removing unhelpful sources of variability in the testing process. Those efforts will help ensure a more accurate assessment of potential risks and offer a proactive way to manage the evolving measure-countermeasure dynamic.
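As a concrete illustration of the kind of comparison behind the first finding, the following is a minimal sketch, not the study's actual analysis: it assumes hypothetical viability scores for red-team cells working with and without LLM access, and it applies a nonparametric Mann-Whitney U test (via scipy.stats) to check for a difference between the two groups. The scores, group sizes, and choice of test are all illustrative assumptions.

```python
# Illustrative sketch of testing for a difference in plan viability
# between groups. All scores below are hypothetical; they are not data
# from the study, and the study's own statistical method and scoring
# scale may differ.
from scipy import stats

# Hypothetical viability scores (e.g., on a 1-9 scale) for plans drafted
# by cells with LLM assistance versus internet-only access.
llm_assisted = [2.0, 1.5, 2.5, 3.0, 2.0, 1.5]
internet_only = [1.5, 2.0, 2.5, 1.0, 2.0, 2.5]

# Mann-Whitney U is a reasonable nonparametric choice for small samples
# of ordinal scores.
u_stat, p_value = stats.mannwhitneyu(
    llm_assisted, internet_only, alternative="two-sided"
)

print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
# A p-value above a conventional threshold (e.g., 0.05) is consistent
# with finding no statistically significant difference between groups.
```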