Deep Fakes and National Security
Updated June 3, 2022
“Deep fakes”—a term that first emerged in 2017 to describe
realistic photo, audio, video, and other forgeries generated
with artificial intelligence (AI) technologies—could present
a variety of national security challenges in the years to
come. As these technologies continue to mature, they could
hold significant implications for congressional oversight,
U.S. defense authorizations and appropriations, and the
regulation of social media platforms.
How Are Deep Fakes Created?
Though definitions vary, deep fakes are most commonly
described as forgeries created using techniques in machine
learning (ML)—a subfield of AI—especially generative
adversarial networks (GANs). In the GAN process, two ML
systems called neural networks are trained in competition
with each other. The first network, or the generator, is
tasked with creating counterfeit data—such as photos, audio
recordings, or video footage—that replicate the properties
of the original data set. The second network, or the
discriminator, is tasked with identifying the counterfeit
data. Based on the results of each iteration, the generator
network adjusts to create increasingly realistic data. The
networks continue to compete—often for thousands or
millions of iterations—until the generator improves its
performance such that the discriminator can no longer
distinguish between real and counterfeit data.
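To make the adversarial training loop concrete, the following sketch implements a toy GAN in Python using the PyTorch library. This is an illustrative choice not drawn from this report: production deep fake systems use far larger networks and train on images, audio, or video rather than the simple two-dimensional data used here.

    import torch
    import torch.nn as nn

    # Toy illustration of the GAN process described above (a sketch,
    # not a deep fake system): the generator learns to mimic "real"
    # data (here, samples from a shifted Gaussian) while the
    # discriminator learns to tell real from counterfeit.
    torch.manual_seed(0)

    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(5000):  # real systems may run millions of iterations
        real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])
        fake = generator(torch.randn(64, 8))

        # Discriminator step: learn to label real data 1, counterfeit 0.
        opt_d.zero_grad()
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
                  loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
        d_loss.backward()
        opt_d.step()

        # Generator step: adjust so the discriminator labels fakes as real.
        opt_g.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        opt_g.step()

In this toy setting, the generator's output gradually drifts toward the statistics of the "real" distribution, mirroring the iterative improvement described above: training stops, in principle, when the discriminator can no longer do better than chance.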
Though media manipulation is not a new phenomenon, the
use of AI to generate deep fakes is causing concern because
the results are increasingly realistic, rapidly created, and
cheaply made with freely available software and the ability
to rent processing power through cloud computing. Thus,
even unskilled operators could download the requisite
software tools and, using publicly available data, create
increasingly convincing counterfeit content.
How Could Deep Fakes Be Used?
Deep fake technology has been popularized for
entertainment purposes—for example, social media users
inserting the actor Nicolas Cage into movies in which he
did not originally appear and a museum generating an
interactive exhibit featuring the artist Salvador Dalí. Deep fake
technologies have also been used for beneficial purposes.
For example, medical researchers have reported using
GANs to synthesize fake medical images to train disease
detection algorithms for rare diseases and to minimize
patient privacy concerns.
Deep fakes could, however, be used for nefarious purposes.
State adversaries or politically motivated individuals could
release falsified videos of elected officials or other public
figures making incendiary comments or behaving
inappropriately. Doing so could, in turn, erode public trust,
negatively affect public discourse, or even sway an election.
Indeed, the U.S. intelligence community concluded that
Russia engaged in extensive influence operations during the
2016 presidential election to “undermine public faith in the
U.S. democratic process, denigrate Secretary Clinton, and
harm her electability and potential presidency.” Likewise,
in March 2022, Ukrainian President Volodymyr Zelensky
announced that a video posted to social media—in which he
appeared to direct Ukrainian soldiers to surrender to
Russian forces—was a deep fake. While experts noted that
this deep fake was not particularly sophisticated, in the
future, convincing audio or video forgeries could
potentially strengthen malicious influence operations.
Deep fakes could also be used to embarrass or blackmail
elected officials or individuals with access to classified
information. Already there is evidence that foreign
intelligence operatives have used deep fake photos to create
fake social media accounts from which they have attempted
to recruit sources. Some analysts have suggested that deep
fakes could similarly be used to generate inflammatory
content—such as convincing video of U.S. military
personnel engaged in war crimes—intended to radicalize
populations, recruit terrorists, or incite violence. Section
589F of the FY2021 National Defense Authorization Act
(P.L. 116-283) directs the Secretary of Defense to conduct
an intelligence assessment of the threat posed by deep fakes
to servicemembers and their families, including an
assessment of the maturity of the technology and how it
might be used to conduct information operations.
In addition, deep fakes could produce an effect that
professors Danielle Keats Citron and Robert Chesney have
termed the “Liar’s Dividend”: the notion that
individuals could successfully deny the authenticity of
genuine content—particularly if it depicts inappropriate or
criminal behavior—by claiming that the content is a deep
fake. Citron and Chesney suggest that the Liar’s Dividend
could become more powerful as deep fake technology
proliferates and public knowledge of the technology grows.
Some reports indicate that such tactics have already been
used for political purposes. For example, political
opponents of Gabon President Ali Bongo asserted that a
video intended to demonstrate his good health and mental
competency was a deep fake, later citing it as part of the
justification for an attempted coup. Outside experts were
unable to determine the video’s authenticity, but one expert
noted, “in some ways it doesn’t matter if [the video is] a
fake… It can be used to just undermine credibility and cast
doubt.”
How Can Deep Fakes Be Detected?
Today, deep fakes can often be detected without specialized
detection tools. However, the sophistication of the