2016地大考博英语专业英语翻译真题
2016 China University of Geosciences Doctoral Entrance Exam: Specialized English Translation Past Paper
Where Computers Defeat Humans, and Where They Can’t
AlphaGo是怎么学会下围棋的
ALPHAGO, the artificial intelligence system built by the Google subsidiary DeepMind, has just defeated the human champion, Lee Se-dol, four games to one in the tournament of the strategy game of Go. Why does this matter? After all, computers surpassed humans in chess in 1997, when IBM’s Deep Blue beat Garry Kasparov. So why is AlphaGo’s victory significant?
由Google的子公司DeepMind创建的人工智能系统AlphaGo,刚刚在一场围棋比赛中以四比一的成绩战胜了人类冠军李世石(Lee Se-dol)。此事有何重大意义?毕竟在1997年IBM深蓝(Deep Blue)击败加里·卡斯帕罗夫(Garry Kasparov)后,电脑已经在国际象棋上超越了人类。为什么要对AlphaGo的胜利大惊小怪呢?
Like chess, Go is a hugely complex strategy game in which chance and luck play no role. Two players take turns placing white or black stones on a 19-by-19 grid; when stones are surrounded on all four sides by those of the other color they are removed from the board, and the player with more stones remaining at the game’s end wins.
和国际象棋一样,围棋也是一种高度复杂的策略性游戏,不可能靠巧合和运气取胜。两名棋手轮番将黑色或白色的棋子落在纵横19道线的网格棋盘上;一旦棋子的四面被另一色棋子包围,就要从棋盘上提走,最终在棋盘上留下棋子多的那一方获胜。
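To make the capture rule concrete for readers who have not played Go, here is a minimal Python sketch. It is not part of the exam passage, and the board representation and function names are my own illustration; it uses the standard formulation of the rule described above, namely that a connected group of stones is removed once it no longer touches any empty point.

```python
# Minimal sketch of the Go capture rule described above (illustrative only, not from the exam text).
# The board is a 19x19 grid of characters: '.' is empty, 'B' and 'W' are black and white stones.
# A connected group of stones is captured (removed) when it has no adjacent empty point left.

SIZE = 19

def neighbors(r, c):
    """Yield the on-board points directly above, below, left and right of (r, c)."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < SIZE and 0 <= nc < SIZE:
            yield nr, nc

def group_and_liberties(board, r, c):
    """Return the connected group containing (r, c) and whether it still touches an empty point."""
    color = board[r][c]
    group, seen, has_liberty = [], {(r, c)}, False
    stack = [(r, c)]
    while stack:
        cr, cc = stack.pop()
        group.append((cr, cc))
        for nr, nc in neighbors(cr, cc):
            if board[nr][nc] == '.':
                has_liberty = True
            elif board[nr][nc] == color and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return group, has_liberty

def remove_captured(board, color):
    """Remove every group of `color` stones that has been left with no empty neighboring point."""
    checked = set()
    for r in range(SIZE):
        for c in range(SIZE):
            if board[r][c] == color and (r, c) not in checked:
                group, alive = group_and_liberties(board, r, c)
                checked.update(group)
                if not alive:
                    for gr, gc in group:
                        board[gr][gc] = '.'
```

In a full game loop, `remove_captured(board, opponent_color)` would be called right after each stone is placed, which is how the "surrounded stones are removed" rule is usually implemented.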
Unlike the case with chess, however, no human can explain how to play Go at the highest levels. The top players, it turns out, can’t fully access their own knowledge about how they’re able to perform so well. This self-ignorance is common to many human abilities, from driving a car in traffic to recognizing a face. This strange state of affairs was beautifully summarized by the philosopher and scientist Michael Polanyi, who said, “We know more than we can tell.” It’s a phenomenon that has come to be known as “Polanyi’s Paradox.”
然而和国际象棋不一样的是,没有人能解释顶尖水平的围棋是怎么下的。我们发现,顶级棋手本人也无法解释他们为什么下得那么好。人类的许多能力中存在这样的不自知,从在车流中驾驶汽车,到辨识一张面孔。对于这一怪象,哲学家、科学家迈克尔·波兰尼(Michael Polanyi)有精彩的概括,他说,“我们知道的,比我们可言说的多。”这种现象后来就被称为“波兰尼悖论”。
Polanyi’s Paradox hasn’t prevented us from using computers to accomplish complicated tasks, like processing payrolls, optimizing flight schedules, routing telephone calls and calculating taxes. But as anyone who’s written a traditional computer program can tell you, automating these activities has required painstaking precision to explain exactly what the computer is supposed to do.
波兰尼悖论并没有阻止我们用电脑完成一些复杂的工作,比如处理工资单、优化航班安排、转送电话信号和计算税单。然而,任何一个写过传统电脑程序的人都会告诉你,要想将这些事务自动化,必须极度缜密地向电脑解释要它做什么。
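As a small, entirely invented illustration of what "painstaking precision" means in traditional rule-based programming, the toy tax calculator below spells out every rule by hand. The bracket thresholds and rates are hypothetical, chosen only to show that nothing is left for the computer to work out on its own.

```python
# A toy illustration of traditional rule-based programming: every rule must be written out
# explicitly by the programmer. The brackets and rates below are invented for illustration only.

def toy_tax(income: float) -> float:
    """Compute tax owed under a made-up three-bracket schedule."""
    brackets = [                 # (upper limit of bracket, rate), purely illustrative numbers
        (10_000, 0.00),
        (40_000, 0.10),
        (float("inf"), 0.25),
    ]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

print(toy_tax(55_000))  # 3000.0 + 3750.0 = 6750.0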
This approach to programming computers is severely limited; it can’t be used in the many domains, like Go, where we know more than we can tell, or other tasks like recognizing common objects in photos, translating between human languages and diagnosing diseases — all tasks where the rules-based approach to programming has failed badly over the years.
这样的电脑编程方式是有很大局限的；在很多领域无法应用，比如我们知道但不可言说的围棋，或者对照片中寻常物品的识别、人类语言间的转译和疾病的诊断等——多年来，基于规则的编程方法在这些事务上几无建树。
Deep Blue achieved its superhuman performance almost by sheer computing power: It was fed millions of examples of chess games so it could sift among the possibilities to determine the optimal move. The problem is that there are many more possible Go games than there are atoms in the universe, so even the fastest computers can’t simulate a meaningful fraction of them. To make matters worse, it’s usually far from clear which possible moves to even start exploring.
“深蓝”几乎全凭强大的计算力实现了超人表现:它吸收了数百万份棋局实例,在可能选项中搜索最佳的走法。问题是围棋的可能走法比宇宙间的原子数还多,即使最快的电脑也只能模拟微不足道的一小部分。更糟的是,我们甚至说不清该从哪一步入手进行探索。
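The "more games than atoms" claim can be sanity-checked with rough arithmetic: even counting only board positions (each of the 361 intersections being empty, black, or white, ignoring legality), the count already dwarfs the commonly cited estimate of roughly 10^80 atoms in the observable universe, and the number of possible games (sequences of moves) is larger still. The snippet below is just that back-of-the-envelope calculation, not a rigorous count.

```python
# Back-of-the-envelope check of the claim above: a crude upper bound on Go board positions
# (each of the 361 points is empty, black, or white) versus the commonly cited estimate of
# about 10^80 atoms in the observable universe.

import math

positions_upper_bound = 3 ** (19 * 19)   # 3^361, ignoring legality constraints
print(f"3^361 is about 10^{math.log10(positions_upper_bound):.0f}")                       # about 10^172
print(f"that is about 10^{math.log10(positions_upper_bound) - 80:.0f} times the atom estimate")
```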
What changed? The AlphaGo victories vividly illustrate the power of a new approach in which instead of trying to program smart strategies into a computer, we instead build systems that can learn winning strategies almost entirely on their own, by seeing examples of successes and failures.
这次有什么不同?AlphaGo的胜利清晰地呈现了一种新方法的威力,这种方法并不是将聪明的策略编入电脑中,而是建造了一个能学习制胜策略的系统,系统在几乎完全自主的情况下,通过观看胜负实例来学习。
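As a heavily simplified illustration of "learning winning strategies from examples of successes and failures", the sketch below fits a toy position evaluator from (features, win/loss) pairs by gradient descent. This is not AlphaGo's actual method, which combined deep neural networks with Monte Carlo tree search; the features, data, and model here are invented purely to show the idea of learning from outcomes rather than from hand-coded rules.

```python
# A deliberately tiny sketch of "learning from examples of successes and failures":
# a logistic-regression evaluator whose weights are fitted from (position features, game won?)
# pairs by gradient descent. Features and data are made up for illustration.

import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, n_features, lr=0.1, epochs=200):
    """examples: list of (feature_vector, outcome) with outcome 1.0 = win, 0.0 = loss."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for features, outcome in examples:
            pred = sigmoid(sum(wi * xi for wi, xi in zip(w, features)))
            err = outcome - pred                      # push the prediction toward the observed result
            w = [wi + lr * err * xi for wi, xi in zip(w, features)]
    return w

# Toy data: two made-up features per position (say, stone-count difference and a territory estimate).
random.seed(0)
data  = [([random.uniform(-1, 1), random.uniform(-1, 1)], 0.0) for _ in range(50)]   # lost games
data += [([random.uniform(0, 2),  random.uniform(0, 2)],  1.0) for _ in range(50)]   # won games

weights = train(data, n_features=2)
print("learned weights:", weights)
```

The point of the sketch is the workflow, not the model: no one tells the program what a good position looks like; the weights are adjusted only in response to wins and losses.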
Since these systems don’t rely on human knowledge about the task at hand, they’re not limited by the fact that we know more than we can tell.