On the Promotion of Safe and Socially Beneficial Artificial Intelligence
Seth D. Baum
Global Catastrophic Risk Institute
http://sethbaum.com * http://gcrinstitute.org
AI & Society, 32(4):543-551 (2017), doi:10.1007/s00146-016-0677-0
This version dated 10 October 2017.
Abstract
This paper discusses means for promoting artificial intelligence (AI) that is designed to be safe
and beneficial for society (or simply “beneficial AI”). The promotion of beneficial AI is a social
challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently,
the AI field is focused mainly on building AIs that are more capable, with little regard to social
impacts. Two types of measures are available for encouraging the AI field to shift more towards
building beneficial AI. Extrinsic measures impose constraints or incentives on AI researchers to
induce them to pursue beneficial AI even if they do not want to. Intrinsic measures encourage AI
researchers to want to pursue beneficial AI. Prior research focuses on extrinsic measures, but
intrinsic measures are at least as important. Indeed, intrinsic factors can determine the success of
extrinsic measures. Efforts to promote beneficial AI must consider intrinsic factors by studying
the social psychology of AI research communities.
1. Introduction
The challenge of building technologies that are safe and beneficial for society is really two
challenges in one. There is the technical challenge of developing safe and beneficial technology
designs, and there is the social challenge of ensuring that such designs are used. The two
challenges are interrelated. Motivating technologists to pursue safe and beneficial designs is
itself a social challenge. Furthermore, motivating people to use safe and beneficial designs is
made easier when the designs also have other attractive features such as low cost and ease of use;
creating these features is a technical challenge.
This paper is concerned with the social challenge. Specifically, the paper examines a range of
approaches to motivating technologists to pursue safe and beneficial technology designs. The
paper focuses on artificial intelligence (AI) technologies, including both near-term AI and the
proposed future “strong” or “superintelligent” AI that some posit could bring extreme social
benefits or harms depending on its design. Much of the paper’s discussion also applies to other
technologies.
That AI has significant social impacts is now beyond question. AI is already being used in
finance, medicine, the military, transportation, and a range of other critical sectors. The impact is
likely to grow over time as new technologies are adopted, such as autonomous vehicles and
lethal autonomous weapons (unless the latter are banned or heavily restricted). The prospects for
strong AI are controversial; this paper takes the position that the stakes are sufficiently high that
strong AI warrants careful attention even if the probability of achieving it appears to be low. Regardless,
while the paper is motivated in part by the risk of strong AI, the insights are more general.¹
For brevity, the paper uses the term “beneficial AI” to refer to AI that is safe and beneficial
for society. It also uses the term “promoting beneficial AI” to refer to efforts to encourage
¹ For perspectives on near-term AI impacts, see e.g. Lin et al. (2011). For strong AI, see e.g. Eden et al. (2013).