AMERICAN ENTERPRISE INSTITUTE
AI and Gray-Zone Aggression:
Risks and Opportunities
July 2023
In mid-May 2023, Sam Altman, the CEO of ChatGPT-
creator OpenAI, told a congressional committee that
one of his “greatest concerns” was the potential for artificial intelligence (AI) to persuade voters and provide “interactive disinformation” in the 2024 US election campaign.1
One might ask why, given these concerns, Altman and OpenAI decided to release their technology at all. But even if they had not, AI-aided falsehoods in upcoming election campaigns are an alarming prospect, considering how much damage Russian disinformation caused during the 2016 election campaign.
Generative AI, the category to which ChatGPT and
other chatbots belong, refers to algorithms often called
large language models (LLMs), which are capable of generating new and credible content, including text, images, and (to a lesser extent) video and audio, from training
data. During the early months of 2023, such AI caught
the public’s attention with the arrival of ChatGPT, a
chatbot that composes prose as elegant and informative
as that written by humans. The skyrocketing popularity
of the tool, which reached 100 million active monthly
users within two months of its launch, on November 30,
2022, helped people realize that any written text can now be the work of a machine and that readers are mostly not in a position to establish whether a written work’s author is a human or a machine.2
Like human-written prose, chatbot writings can include inaccuracies and falsehoods that most people lack the skills to detect.3
AI, though, can help scale such falsehoods: Bots will write quickly, elegantly, and at high volume using the information they have been fed, and that information can be poisoned in different ways. Such poisoning can result from poorly produced, low-quality datasets fed to the system or from data inbreeding, in which models are trained on synthetic data created not by humans but by other generative AIs.
Elisabeth Braw
Key Points
• Generative artificial intelligence (AI), which sows confusion among the public by generating sophisticated and credible-seeming text, audio, and imagery, poses a considerable threat to societal discourse, especially since hostile powers can use it in their disinformation.
• Western countries’ legislators are struggling to keep pace with generative AI’s rapid advance.
In June 2023, the EU became the first jurisdiction to pass legislation aimed at limiting generative
AI’s harm.
• At the same time, AI can be useful in detecting gray-zone aggression, which can appear anywhere, anytime, in any shape. Today, countries targeted by gray-zone aggression struggle to identify it at an early stage because doing so relies primarily on monitoring by humans.