Researchers are exploring how they might create robots endowed with their own sense of morality (Photo: Shutterstock)

A group of researchers from Tufts University, Brown University and the Rensselaer Polytechnic Institute are collaborating with the US Navy in a multi-year effort to explore how they might create robots endowed with their own sense of morality. If they are successful, they will create an artificial intelligence able to autonomously assess a difficult situation and then make complex ethical decisions that can override the rigid instructions it was given.

Seventy-two years ago, science fiction writer Isaac Asimov introduced "three laws of robotics" that could guide the moral compass of a highly advanced artificial intelligence. Sadly, given that today's most advanced AIs are still rather brittle and clueless about the world around them, one could argue that we are nowhere near building robots that are even able to understand these rules, let alone apply them.

A team of researchers led by Prof. Matthias Scheutz at Tufts University is tackling this very difficult problem by trying to break down human moral competence into its basic components, developing a framework for human moral reasoning. Later on, the team will attempt to model this framework in an algorithm that could be embedded in an artificial intelligence. The infrastructure would allow the robot to override its instructions in the face of new evidence, and justify its actions to the humans who control it.

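The article doesn't show what such a framework would look like in code, but the two behaviors it names (overriding an instruction and justifying the override to a human) can be sketched roughly. Everything in the snippet below, including the MoralRule structure, the decide() function, and the field names, is a hypothetical illustration, not the team's actual design:

```python
# A minimal sketch of the override-and-justify behavior described above.
# The MoralRule representation, decide() function, and all names here are
# hypothetical illustrations, not the team's actual framework.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class MoralRule:
    """One basic component of moral competence: when the rule applies,
    what it demands instead, and a justification the robot can report."""
    name: str
    applies: Callable[[dict], bool]  # does the rule fire in this situation?
    required_action: str             # what to do instead of the standing order
    justification: str               # human-readable reason for the override


def decide(situation: dict, standing_order: str,
           rules: list[MoralRule]) -> tuple[str, Optional[str]]:
    """Carry out the standing order unless a moral rule overrides it.

    Returns the chosen action and, on an override, the justification
    the robot would offer to the humans who control it.
    """
    for rule in rules:
        if rule.applies(situation):
            return rule.required_action, rule.justification
    return standing_order, None
```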

"Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree," says Scheutz. "The question is whether machines – or any other artificial system, for that matter – can emulate and exercise these abilities."

For instance, a robot medic could be ordered to transport urgently needed medication to a nearby facility, and encounter a person in critical condition along the way. The robot's "moral compass" would allow it to assess the situation and autonomously decide whether it should stop and assist the person or carry on with its original mission.

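To make the scenario concrete, here is how it might play out using the hypothetical MoralRule and decide() sketch above; the severity scale, threshold, and field names are likewise invented for illustration:

```python
# The medic scenario, wired into the hypothetical sketch from earlier.
# Nothing in this example comes from the actual Navy project.
medic_rules = [
    MoralRule(
        name="aid_critical_casualty",
        applies=lambda s: s["casualty_severity"] >= 8,  # invented threshold
        required_action="stop_and_assist",
        justification="Encountered a person in critical condition en route; "
                      "rendering aid before resuming the delivery.",
    ),
]

situation = {"casualty_severity": 9}
action, why = decide(situation, standing_order="deliver_medication",
                     rules=medic_rules)
print(action)  # -> stop_and_assist
print(why)     # -> Encountered a person in critical condition en route; ...
```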

If Asimov's novels have taught us anything, it's that no rigid, pre-programmed set of rules can account for every possible scenario, as something unforeseeable is bound to happen sooner or later. Scheutz and colleagues agree, and have devised a two-step process to tackle the problem.

In their vision, all of the robot's decisions would first go through a preliminary ethical check using a system similar to those in the most advanced question-answering AIs, such as IBM's Watson. If more help is needed, then the robot will rely on the system that Scheutz and colleagues are developing, which tries to model the complexity of human morality.

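Roughly, the control flow they describe could look like the sketch below. The function names and the stand-in logic of each stage are assumptions; Watson appears in the article only as an analogy for the kind of system behind the first stage, not as an actual component:

```python
# A sketch of the two-step ethical check described above: a fast preliminary
# screen first, escalating to a fuller model of human moral reasoning only
# when the first stage cannot settle the question. Both stages are stand-ins.
from enum import Enum


class Verdict(Enum):
    PERMITTED = "permitted"
    FORBIDDEN = "forbidden"
    UNSURE = "unsure"


def preliminary_check(action: str, situation: dict) -> Verdict:
    """Stage 1: a cheap screen, in the spirit of a question-answering AI.
    This placeholder only recognizes actions already settled either way."""
    if action in situation.get("clearly_forbidden", set()):
        return Verdict.FORBIDDEN
    if action in situation.get("clearly_permitted", set()):
        return Verdict.PERMITTED
    return Verdict.UNSURE


def full_moral_reasoning(action: str, situation: dict) -> Verdict:
    """Stage 2: stand-in for the richer model of human morality the team
    is developing; this placeholder simply errs on the side of caution."""
    return Verdict.FORBIDDEN


def ethical_gate(action: str, situation: dict) -> Verdict:
    """Every decision passes the preliminary check first and escalates
    only when that check returns UNSURE."""
    verdict = preliminary_check(action, situation)
    if verdict is Verdict.UNSURE:
        verdict = full_moral_reasoning(action, situation)
    return verdict


print(ethical_gate("deliver_medication",
                   {"clearly_permitted": {"deliver_medication"}}))
# -> Verdict.PERMITTED
```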

As the project is being developed in collaboration with the US Navy, the technology could find its first application in medical robots designed to assist soldiers on the battlefield.
