How Will AI Change the World


TED-Ed

Since the commercial breakout of artificial intelligence in 2017, and especially since the release of ChatGPT, there has been a great deal of discussion both inside and outside the industry. The following piece, excerpted from a TED talk, may help us take a more measured view of AI.
In the coming years, artificial intelligence (AI) is probably going to change your life, and likely the entire world. But people have a hard time agreeing on exactly how. There’s a big difference between asking a human to do something and giving that as the objective to an AI system. When you ask a human to get you a cup of coffee, you don’t mean this should be their life’s mission, and nothing else in the universe matters.
And the problem with the way we build AI systems now is that we give them a fixed objective. The algorithms require us to specify everything in the objective. And if you say, “Can we fix the acidification of the oceans?” the AI system might answer, “Yeah, you could have a catalytic reaction that does that extremely efficiently, but it consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours.”
So, how do we avoid this problem? You might say, okay, well, just be more careful about specifying the objective: don’t forget the atmospheric oxygen. And then, of course, some side effect of the reaction in the ocean poisons all the fish. Okay, well, I mean, don’t kill the fish either. And then, well, what about the seaweed? Don’t do anything that’s going to cause all the seaweed to die. And on and on and on.
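The objective-patching loop above can be sketched in a few lines. This is a toy illustration, not anything from the talk: the plan names, effects, and weights are all invented. The structural point is that a fixed-objective optimizer scores plans only on the terms the objective names, so any harm the objective omits is invisible to it, and each patch just moves the omission somewhere else.

```python
# Hypothetical plans and their effects, invented for illustration.
plans = {
    "catalytic_reaction":     {"acidification_fixed": 1.0, "oxygen_consumed": 0.25},
    "slow_mineral_buffering": {"acidification_fixed": 0.6, "oxygen_consumed": 0.0},
}

def score(effects, objective):
    # A plan is judged ONLY on the effects the objective mentions;
    # everything else contributes nothing, good or bad.
    return sum(w * effects.get(key, 0.0) for key, w in objective.items())

# Objective v1: just fix acidification. The oxygen cost is invisible,
# so the catastrophic plan wins.
objective_v1 = {"acidification_fixed": 1.0}
best_v1 = max(plans, key=lambda p: score(plans[p], objective_v1))

# Objective v2: patch in an oxygen penalty. Now the safer plan wins,
# but the fish, the seaweed, and everything else we forgot to list
# are still invisible, and the list of patches never ends.
objective_v2 = {"acidification_fixed": 1.0, "oxygen_consumed": -10.0}
best_v2 = max(plans, key=lambda p: score(plans[p], objective_v2))
```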
And the reason that we don’t have to do that with humans is that humans often know that they don’t know all the things that we care about. For example, if you ask a human to get you a cup of coffee, and you happen to be in the Hotel George Sand in Paris, where the coffee is 13 euros a cup, it’s entirely reasonable to come back and say, “Well, it’s 13 euros, are you sure you want it? Or I could go next door and get one?” And it’s a perfectly normal thing for a person to do. Another example is asking, “I’m going to repaint your house; is it okay if I take off the drainpipes and then put them back?” We don’t think of this as a terribly sophisticated capability, but AI systems don’t have it, because the way we build them now, they have to know the full objective. If we build systems that know that they don’t know what the objective is, then they start to exhibit these behaviors, like asking permission before getting rid of all the oxygen in the atmosphere.
In all these senses, control over the AI system comes from the machine’s uncertainty about what the true objective is. It’s when you build machines that believe with certainty that they have the objective that you get this sort of psychopathic behavior. And I think we see the same thing in humans.
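The contrast between the certain machine and the uncertain one can be made concrete with a minimal, invented decision rule (the action names, values, probabilities, and threshold below are all assumptions for illustration, not anything from the talk): the agent holds several hypotheses about what the true objective is, and if its expected-best action would be disastrous under a hypothesis it still assigns nonzero probability, it asks the human instead of acting.

```python
def choose(actions, probs, ask_cost=0.1):
    """actions: {name: [value under each hypothesis about the true objective]}
    probs: prior probability the agent assigns to each hypothesis."""
    def expected(a):
        return sum(p * v for p, v in zip(probs, actions[a]))
    best = max(actions, key=expected)
    # Worst case over hypotheses the agent still considers possible:
    # if the expected-best action could be disastrous, defer to the human.
    worst = min(v for p, v in zip(probs, actions[best]) if p > 0)
    if worst < -ask_cost:
        return "ask_permission"
    return best

# Hypothesis 1: the objective is just "fix acidification".
# Hypothesis 2: humans also need the atmospheric oxygen.
probs = [0.9, 0.1]
actions = {
    "remove_oxygen": [+10.0, -50.0],  # great under h1, disastrous under h2
    "do_nothing":    [0.0, 0.0],
}
```

With the prior `[0.9, 0.1]` the agent asks permission before removing the oxygen, even though that action looks best on expectation. Collapse the prior to `[1.0, 0.0]` — a machine that believes with certainty it has the objective — and the same rule removes the oxygen without asking.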
There’s an interesting story that E.M. Forster wrote, where everyone is entirely machine-dependent. The story is really about the fact that if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it. You can see “WALL-E” actually as a modern version, where everyone is enfeebled and infantilized by the machine, and that hasn’t been possible up to now.
We put a lot of our civilization into books, but the books can’t run it for us. And so we always have to teach the next generation. If you work it out, it’s about a trillion person-years of teaching and learning, an unbroken chain that goes back tens of thousands of generations. What happens if that chain breaks?
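The trillion figure holds up as a back-of-envelope estimate, under round assumed numbers: a common demographic estimate of roughly 100 billion humans ever born, and an assumed ten years of teaching and learning per person (neither number is from the talk itself).

```python
humans_ever_lived = 100e9      # ~100 billion, a common demographic estimate
learning_years_each = 10       # assumed average years of teaching/learning per person
person_years = humans_ever_lived * learning_years_each
# about a trillion person-years in total
```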
I think that’s something we have to understand as AI moves forward. The actual date of arrival of general-purpose AI is not something you’re going to be able to pinpoint; it isn’t a single day. It’s also not the case that it’s all or nothing. The impact is going to be increasing: every advance in AI significantly expands the range of tasks it can do.
So in that sense, I think most experts say that by the end of the century, we’re very, very likely to have general-purpose AI. The median estimate is somewhere around 2045. I’m a little more on the conservative side; I think the problem is harder than we think.
I like what John McCarthy, who was one of the founders of AI, said when he was asked this question: somewhere between 5 and 500 years. And we’re going to need, I think, several Einsteins to make it happen.
