Original translation: 龙腾网 http://www.ltaaa.com. Translator: 星际圣母. Please credit the source when reposting.
Forum thread: http://www.ltaaa.com/bbs/thread-479999-1-1.html




A remote-controlled robot equipped with a machine gun under development by the United States Marine Corps.


Tech leaders, including Elon Musk and the three co-founders of Google’s AI subsidiary DeepMind, have signed a pledge promising to not develop “lethal autonomous weapons.”


It’s the latest move from an unofficial and global coalition of researchers and executives that’s opposed to the propagation of such technology. The pledge warns that weapon systems that use AI to “[select] and [engage] targets without human intervention” pose moral and pragmatic threats. Morally, the signatories argue, the decision to take a human life “should never be delegated to a machine.” On the pragmatic front, they say that the spread of such weaponry would be “dangerously destabilizing for every country and individual.”


The pledge was published today at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, and it was organized by the Future of Life Institute, a research institute that aims to “mitigate existential risk” to humanity. The institute has previously helped issue letters from some of the same individuals, calling on the United Nations to consider new regulations for what are known as lethal autonomous weapons, or LAWS. This, however, is the first time those involved have pledged individually to not develop such technology.




So far, attempts to muster support for the international regulation of autonomous weapons have been ineffectual. Campaigners have suggested that LAWS should be subject to restrictions, similar to those placed on chemical weapons and landmines. But note that it’s incredibly difficult to draw a line between what does and does not constitute an autonomous system. For example, a gun turret could target individuals but not fire on them, with a human “in the loop” simply rubber-stamping its decisions.

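To see why a human "in the loop" can be a formality, consider a minimal sketch of such a supervision gate (Python; every name here is a hypothetical illustration, not drawn from any real system). The turret selects targets on its own, and the only thing separating it from a fully autonomous weapon is one approval call; if that call always comes back yes, the supervision exists in name only:

    from dataclasses import dataclass

    @dataclass
    class Target:
        track_id: int
        confidence: float  # classifier's confidence that this is a valid target

    def request_human_approval(target: Target) -> bool:
        # In a genuinely supervised system, an operator would review the
        # track and decide. Reduced to an automatic or reflexive "yes",
        # the human is rubber-stamping and the system is autonomous in
        # all but name.
        return True  # the rubber stamp: approval without meaningful review

    def engagement_loop(targets: list) -> list:
        # Targets are selected automatically; firing waits on "approval".
        engaged = []
        for t in targets:
            if t.confidence > 0.9 and request_human_approval(t):
                engaged.append(t.track_id)  # stand-in for an engagement order
        return engaged

    print(engagement_loop([Target(1, 0.95), Target(2, 0.42)]))  # prints [1]

A rule that merely requires a human approval step would permit exactly this architecture, which is why campaigners struggle to define where autonomy begins.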

They also point out that enforcing such laws would be a huge challenge, as the technology to develop AI weaponry is already widespread. Additionally, the countries most involved in developing this technology (like the US and China) have no real incentive not to do so.




AI is already being developed to analyze video footage from military drones.


Paul Scharre, a military analyst who has written a book on the future of warfare and AI, told The Verge that the pledge was unlikely to have an effect on international policy, and that such documents did not do a good enough job of teasing out the intricacies of this debate. “What seems to be lacking is sustained engagement from AI researchers in explaining to policymakers why they are concerned about autonomous weapons,” said Scharre.


He added that most governments were in agreement with the pledge’s main promise — that individuals should not develop AI systems that target individuals — and that the “cat is already out of the bag” on military AI used for defense. “At least 30 nations have supervised autonomous weapons used to defend against rocket and missile attack,” said Scharre. “The real debate is in the middle space, which the press release is somewhat ambiguous on.”


However, while international regulations might not be coming anytime soon, recent events have shown that collective activism like today’s pledge can make a difference. Google, for example, was rocked by employee protests after it was revealed that the company was helping develop non-lethal AI drone tools for the Pentagon. Weeks later, it published new research guidelines, promising not to develop AI weapon systems. A threatened boycott of South Korea’s KAIST university had similar results, with KAIST’s president promising not to develop military AI “counter to human dignity including autonomous weapons lacking meaningful human control.”


In both cases, it’s reasonable to point out that the organizations involved are not stopping themselves from developing military AI tools with other, non-lethal uses. But a promise not to put a computer solely in charge of killing is better than no promise at all.
