Top 6 Ethical Issues in Artificial Intelligence

2018-01-08

英语世界 (English World), 2017, Issue 9
Keywords: Elon Musk; machines

By Julia Bossmann; translated by Zhou Zhen

Optimizing logistics, detecting fraud, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives for the better. As these systems become more capable, our world becomes more efficient and consequently richer.

[2] Tech giants such as Alphabet1, Amazon, Facebook, IBM and Microsoft—as well as individuals like Stephen Hawking and Elon Musk2—believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence. In many ways, this is just as much a new frontier for ethics and risk assessment as it is for emerging technology. So which issues and conversations keep AI experts up at night?

1 Alphabet Inc. is a holding company based in California. It was formed from the restructuring of Google, which became its largest subsidiary.
2 Elon Musk is the CEO and lead designer of SpaceX, and is known for co-founding Tesla and PayPal.


1. Unemployment. What happens after the end of jobs?

[3] The hierarchy of labour is concerned primarily with automation. As we’ve invented ways to automate jobs, we could create room for people to assume more complex roles, moving from the physical work that dominated the preindustrial globe to the cognitive labour that characterizes strategic and administrative work in our globalized society.

[4] Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade? But on the other hand, if we consider the lower risk of accidents, self-driving trucks seem like an ethical choice. The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries.

[5] This is where we come to the question of how we are going to spend our time. Most people still rely on selling their time to have enough income to sustain themselves and their families. We can only hope that this opportunity will enable people to find meaning in non-labour activities, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.

[6] If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.

2. Racist robots. How do we eliminate AI bias?

[7] Though artificial intelligence is capable of a speed and capacity of processing that’s far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders when it comes to artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.

[8] We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.
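The kind of bias described above can at least be measured. The sketch below uses invented numbers, not data from any real system, to illustrate one common check: comparing the rate of favourable model outputs across demographic groups (a "demographic parity" gap).

```python
# Hypothetical illustration of measuring group disparity in a model's outputs.
# The predictions and group labels below are made up; only the metric matters.

def positive_rate(predictions, groups, group):
    """Fraction of members of `group` that the model labels positive."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

# Toy predictions (1 = favourable outcome) for two demographic groups.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rate_a = positive_rate(preds, groups, "a")  # 4/5 = 0.8
rate_b = positive_rate(preds, groups, "b")  # 1/5 = 0.2
gap = abs(rate_a - rate_b)                  # demographic-parity gap of ~0.6
print(f"group a: {rate_a:.1f}, group b: {rate_b:.1f}, gap: {gap:.1f}")
```

A large gap does not by itself prove unfairness, but it is the kind of signal that prompts a closer audit of the training data and the model.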


3. Security. How do we keep AI safe from adversaries?

[9] The more powerful a technology becomes, the more it can be used for nefarious reasons as well as good. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously. Because these fights won’t be fought on the battleground only, cybersecurity will become even more important. After all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.

4. Evil genies. How do we protect against unintended consequences?

[10] It’s not just adversaries we have to worry about. What if artificial intelligence itself turned against us? This doesn’t mean by turning “evil” in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a “genie in a bottle3” that can fulfill wishes, but with terrible unforeseen consequences.

3 In English, “let the genie out of the bottle” is itself a metaphor for allowing something harmful to happen that cannot then be stopped.

[11] In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer—by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended it.


5. Singularity. How do we stay in control of a complex intelligent system?

[12] The reason humans are on top of the food chain is not down to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.

[13] This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.

6. Robot rights. How do we define the humane treatment of AI?

[14] While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.
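The dog-training analogy can be made concrete. The sketch below is a deliberately simplified illustration of reward-driven learning; the actions, reward values, learning rate and exploration rate are all invented for the example and bear no relation to any production system.

```python
import random

# Minimal sketch of "virtual reward" reinforcement, in the spirit of the
# dog-training analogy above. All numbers here are arbitrary choices.
random.seed(0)

values = {"sit": 0.0, "bark": 0.0}   # the agent's estimate of each action's worth
reward = {"sit": 1.0, "bark": 0.0}   # virtual reward handed out by the "trainer"
alpha = 0.5                          # learning rate

for _ in range(20):
    # Explore occasionally; otherwise pick the currently best-valued action.
    if random.random() < 0.2:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # Nudge the estimate toward the observed reward.
    values[action] += alpha * (reward[action] - values[action])

# After training, the rewarded behaviour dominates.
print(values["sit"] > values["bark"])  # True
```

The loop is the whole analogy: behaviour that earns reward is tried more often, which earns more reward, until the rewarded action is strongly preferred.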

[15] Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?
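The generational scheme described above can be sketched in a few lines. Everything here (the target genome, population size, mutation rate, generation count) is an arbitrary choice for illustration; real genetic algorithms vary widely in their details.

```python
import random

# Toy genetic algorithm matching the description above: many instances are
# created at once, only the fittest "survive" and combine, and the
# unsuccessful instances are deleted. All parameters are invented.
random.seed(1)

TARGET = [1] * 8  # the genome we would like evolution to rediscover

def fitness(genome):
    # Number of positions that match the target (0..8).
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Combine two survivors at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    # Occasionally flip a bit.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
initial_best = max(fitness(p) for p in population)

for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # the unsuccessful half is "deleted"
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(10)]
    population = survivors + children

best = max(fitness(p) for p in population)
print(initial_best, "->", best)  # best fitness never decreases: survivors persist
```

Because the top half is carried over unchanged, the best fitness is monotone non-decreasing, and with these settings it typically climbs toward a perfect match over the thirty generations.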

[16] Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines?

[17] Some ethical questions are about mitigating suffering, some about risking negative outcomes. While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us. ■

