AMERICAN ENTERPRISE INSTITUTE
AI and Gray-Zone Aggression: Risks and Opportunities

Elisabeth Braw

July 2023

In mid-May 2023, Sam Altman, the CEO of ChatGPT-creator OpenAI, told a congressional committee that one of his “greatest concerns” was the potential for artificial intelligence (AI) to persuade voters and provide “interactive disinformation” in the 2024 US election campaign.[1] One might ask why, given these concerns, Altman and OpenAI decided to release their technology. But even if they hadn’t, considering how much damage Russian disinformation caused during the 2016 election campaign, AI-aided falsehoods in upcoming election campaigns are an alarming prospect.

Generative AI, the category to which ChatGPT and other chatbots belong, refers to algorithms often called large language models (LLMs), which are capable of generating new and credible content, including text, images, and (to a lesser extent) video and audio, from training data. During the early months of 2023, such AI caught the public’s attention with the arrival of ChatGPT, a chatbot that composes prose as elegant and informative as that written by humans. The skyrocketing popularity of the tool, which reached 100 million monthly active users within two months of its launch on November 30, 2022, helped people realize that any written text can now be the work of a robot and that the reader is mostly not in a position to establish whether a written work’s author is a human or a machine.[2]

Like human-written prose, chatbot writings can include inaccuracies and falsehoods that most people lack the skills to detect.[3] AI, though, can help scale such falsehoods: Bots will write quickly, elegantly, and at high volume using the information they have been fed, which can be poisoned in different ways. Such poisoning can result from poorly produced, low-quality datasets fed to the system or from data inbreeding, when models are trained on synthetic data created not by humans but by other generative AIs.

Key Points

• Generative artificial intelligence (AI), which causes confusion among the public through its generation of sophisticated and credible-seeming text, audio, and imagery, poses a considerable threat to societal discourse, especially since it can be used in hostile powers’ disinformation.

• Western countries’ legislators are struggling to keep pace with generative AI’s rapid advance. In June 2023, the EU became the first jurisdiction to pass legislation aimed at limiting generative AI’s harm.

• At the same time, AI can be useful in detecting gray-zone aggression, which can appear anywhere, anytime, in any shape. Today, countries targeted by gray-zone aggression struggle to identify it at an early stage because doing so primarily involves monitoring by humans.