CRS Report IF11333
https://crsreports.congress.gov
Updated June 3, 2022
Deep Fakes and National Security
“Deep fakes,” a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies, could present a variety of national security challenges in the years to
come. As these technologies continue to mature, they could
hold significant implications for congressional oversight,
U.S. defense authorizations and appropriations, and the
regulation of social media platforms.
How Are Deep Fakes Created?
Though definitions vary, deep fakes are most commonly described as forgeries created using techniques in machine learning (ML), a subfield of AI, and especially generative adversarial networks (GANs). In the GAN process, two ML
systems called neural networks are trained in competition
with each other. The first network, or the generator, is
tasked with creating counterfeit data (such as photos, audio recordings, or video footage) that replicate the properties
of the original data set. The second network, or the
discriminator, is tasked with identifying the counterfeit
data. Based on the results of each iteration, the generator
network adjusts to create increasingly realistic data. The
networks continue to compete, often for thousands or millions of iterations, until the generator improves its
performance such that the discriminator can no longer
distinguish between real and counterfeit data.
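To make the adversarial loop described above concrete, the following minimal sketch implements a toy GAN in Python with PyTorch. It is illustrative only: the one-dimensional stand-in "data," the network sizes, the learning rates, and the iteration count are all hypothetical choices for demonstration, not a description of any real deep fake pipeline.

```python
# Minimal GAN sketch: a generator and a discriminator trained in
# competition, as described above. All settings are illustrative.
import torch
import torch.nn as nn

LATENT_DIM = 8  # size of the random noise the generator starts from

# Generator: maps random noise to counterfeit samples.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: scores samples as real (near 1) or counterfeit (near 0).
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for the "original data set": samples from N(4, 1).
    return torch.randn(n, 1) + 4.0

for step in range(5000):  # "often for thousands ... of iterations"
    # Train the discriminator to separate real from counterfeit data.
    real = real_batch()
    fake = generator(torch.randn(64, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to produce data the discriminator labels real.
    fake = generator(torch.randn(64, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Training ends when the discriminator can no longer reliably
# distinguish the two: its output on generated samples nears 0.5.
```

Real deep fake generators operate on images, audio, or video rather than scalar values, but the competitive structure of the training loop is the same.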
Though media manipulation is not a new phenomenon, the
use of AI to generate deep fakes is causing concern because
the results are increasingly realistic, rapidly created, and
cheaply made with freely available software and the ability
to rent processing power through cloud computing. Thus,
even unskilled operators could download the requisite
software tools and, using publicly available data, create
increasingly convincing counterfeit content.
How Could Deep Fakes Be Used?
Deep fake technology has been popularized for
entertainment purposes: for example, social media users have inserted the actor Nicolas Cage into movies in which he did not originally appear, and a museum has generated an interactive exhibit featuring the artist Salvador Dalí. Deep fake
technologies have also been used for beneficial purposes.
For example, medical researchers have reported using
GANs to synthesize fake medical images to train disease
detection algorithms for rare diseases and to minimize
patient privacy concerns.
Deep fakes could, however, be used for nefarious purposes.
State adversaries or politically motivated individuals could
release falsified videos of elected officials or other public
figures making incendiary comments or behaving
inappropriately. Doing so could, in turn, erode public trust,
negatively affect public discourse, or even sway an election.
Indeed, the U.S. intelligence community concluded that
Russia engaged in extensive influence operations during the
2016 presidential election to “undermine public faith in the
U.S. democratic process, denigrate Secretary Clinton, and
harm her electability and potential presidency.” Likewise,
in March 2022, Ukrainian President Volodymyr Zelensky
announced that a video posted to social media, in which he appeared to direct Ukrainian soldiers to surrender to Russian forces, was a deep fake. While experts noted that
this deep fake was not particularly sophisticated, in the
future, convincing audio or video forgeries could
potentially strengthen malicious influence operations.
Deep fakes could also be used to embarrass or blackmail
elected officials or individuals with access to classified
information. Already there is evidence that foreign
intelligence operatives have used deep fake photos to create
fake social media accounts from which they have attempted
to recruit sources. Some analysts have suggested that deep
fakes could similarly be used to generate inflammatory
content, such as convincing video of U.S. military personnel engaged in war crimes, intended to radicalize
populations, recruit terrorists, or incite violence. Section
589F of the FY2021 National Defense Authorization Act
(P.L. 116-283) directs the Secretary of Defense to conduct
an intelligence assessment of the threat posed by deep fakes
to servicemembers and their families, including an
assessment of the maturity of the technology and how it
might be used to conduct information operations.
In addition, deep fakes could produce an effect that
professors Danielle Keats Citron and Robert Chesney have
termed the “Liar’s Dividend”: the notion that individuals could successfully deny the authenticity of genuine content, particularly if it depicts inappropriate or criminal behavior, by claiming that the content is a deep
fake. Citron and Chesney suggest that the Liar’s Dividend
could become more powerful as deep fake technology
proliferates and public knowledge of the technology grows.
Some reports indicate that such tactics have already been
used for political purposes. For example, political
opponents of Gabonese President Ali Bongo asserted that a
video intended to demonstrate his good health and mental
competency was a deep fake, later citing it as part of the
justification for an attempted coup. Outside experts were
unable to determine the video’s authenticity, but one expert
noted, “in some ways it doesn’t matter if [the video is] a
fake… It can be used to just undermine credibility and cast
doubt.”
How Can Deep Fakes Be Detected?
Today, deep fakes can often be detected without specialized detection tools. However, the sophistication of the underlying technology is advancing rapidly, and unaided human detection of convincing forgeries is likely to become increasingly difficult.