计算机专业外文资料翻译----微机发展简史


附录外文文献及翻译
Progress in computers
The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.
These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.
As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term logic family came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.
In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognized as an activity in its own right. I remember that we had some difficulty in organizing a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.
Consolidation in the 1960s
By the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.
Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.
Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops. This led to a range of chips known as the 7400 series. The gates and flip flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.
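To make this concrete, here is a minimal Python sketch (my own illustration, not from the original article) of what those 7400-series building blocks did: a quad 2-input NAND chip provides independent gates, and cross-coupling two of them with off-chip wiring yields an SR latch, the simplest flip-flop of the kind mentioned above.

```python
# Illustrative model of 7400-series building blocks. The functions and the
# settling loop are inventions for exposition, not a model of a real chip.

def nand(a: int, b: int) -> int:
    """One gate of a 7400-style quad 2-input NAND chip."""
    return 0 if (a and b) else 1

def sr_latch(s_n: int, r_n: int, q: int) -> int:
    """Two cross-coupled NAND gates wired off-chip; s_n/r_n are
    active-low set/reset. Returns the new stored bit given old state q."""
    q_bar = nand(r_n, q)
    for _ in range(4):          # iterate until the feedback loop settles
        q = nand(s_n, q_bar)
        q_bar = nand(r_n, q)
    return q

q = 0
q = sr_latch(0, 1, q)    # pulse set low   -> latch stores 1
assert q == 1
q = sr_latch(1, 1, q)    # both high       -> latch holds its state
assert q == 1
q = sr_latch(1, 0, q)    # pulse reset low -> latch stores 0
assert q == 0
```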
These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organization, a business or a university was able to have a minicomputer for each major department.
Before long minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.
The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry "going the other way". As time goes on people get more for their money, not less.
Research in Computer Hardware
The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.
The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks. Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.
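As a rough illustration of why that reliability mattered (a toy model of my own, with invented numbers, not a description of the actual Ring hardware), consider a frame whose bits are regenerated by every station it passes:

```python
# Toy model: a frame circulating a ring is re-amplified and regenerated at
# every station, so even a tiny per-repeater error rate is fatal.
import random

def repeater(bits, error_rate):
    """One station regenerates each bit, corrupting it with some probability."""
    return [b ^ (random.random() < error_rate) for b in bits]

def circulate(bits, stations, laps, error_rate):
    for _ in range(stations * laps):   # the frame crosses every station per lap
        bits = repeater(bits, error_rate)
    return bits

random.seed(1)
frame = [1, 0, 1, 1, 0, 0, 1, 0]
# Perfectly reliable logic: the frame survives 100 laps of a 10-station ring.
assert circulate(frame, stations=10, laps=100, error_rate=0.0) == frame
# A 0.1% error per regeneration (1000 regenerations) almost surely corrupts it.
print(circulate(frame, stations=10, laps=100, error_rate=0.001))
```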
The RISC Movement and Its Aftermath
Early computers had simple instruction sets. As time went on, designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upon the designer's intuition.
In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instruction Set Computer.
Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.
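Since pipelining is central to the RISC story, a short sketch may help (my own illustration; the four stage names are a textbook simplification, not tied to any particular machine). With S stages and N instructions, overlapped execution takes S + N - 1 cycles rather than the S * N cycles of strictly serial execution:

```python
# Sketch of pipelined overlap: instruction i occupies stage s in cycle i + s.

STAGES = ["fetch", "decode", "execute", "write-back"]

def pipeline_schedule(n_instructions: int) -> dict:
    """Map each cycle to the (instruction, stage) pairs active in it."""
    schedule: dict = {}
    for i in range(n_instructions):
        for s, stage in enumerate(STAGES):
            schedule.setdefault(i + s, []).append((i, stage))
    return schedule

n = 6
for cycle, work in sorted(pipeline_schedule(n).items()):
    print(f"cycle {cycle}:", ", ".join(f"I{i}:{st}" for i, st in work))

print("serial cycles   :", len(STAGES) * n)      # 24
print("pipelined cycles:", len(STAGES) + n - 1)  # 9
```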
The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.
Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of computer farm.
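The flavour of such pre-implementation estimates can be shown in a few lines (a deliberately crude sketch; the workload mix and cycle counts below are invented for illustration, not measurements from any real design):

```python
# Crude performance estimate from an instruction mix, in the spirit of
# simulating a design before building it. All numbers are invented.

def estimated_cpi(mix: dict, cycles: dict) -> float:
    """Weighted average cycles-per-instruction for a workload mix."""
    return sum(freq * cycles[kind] for kind, freq in mix.items())

workload = {"alu": 0.5, "load_store": 0.3, "branch": 0.2}

risc_design = {"alu": 1, "load_store": 1.4, "branch": 1.6}  # simple, uniform ops
cisc_design = {"alu": 2, "load_store": 4, "branch": 3}      # richer, slower ops

print("RISC estimated CPI:", estimated_cpi(workload, risc_design))  # 1.24
print("CISC estimated CPI:", estimated_cpi(workload, cisc_design))  # 2.8
```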
The x86 Instruction Set
Little is now heard of pre-RISC instruction sets with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.
This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high-level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.
There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed. In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.
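The idea can be caricatured in a few lines (an illustration of the principle only; the micro-op names and the toy syntax are invented, and real x86 decoders are vastly more elaborate): a register-memory instruction is cracked into a load micro-op plus a register-to-register micro-op before reaching the internal RISC-style core.

```python
# Toy decoder: crack an x86-style register-memory instruction into
# RISC-like micro-ops. Syntax and micro-op names are invented.

def crack(instruction: str) -> list:
    op, dst, src = instruction.replace(",", "").split()
    if src.startswith("["):                   # memory operand: split out the load
        addr = src.strip("[]")
        return [
            f"uop.load tmp0, ({addr})",       # memory access as its own micro-op
            f"uop.{op} {dst}, {dst}, tmp0",   # pure register-to-register operation
        ]
    return [f"uop.{op} {dst}, {dst}, {src}"]  # register form maps one-to-one

for line in ["add eax, [ebx]", "add eax, ecx"]:
    print(f"{line:18} -> {crack(line)}")
```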
The IA-64 Instruction Set
Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognized need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.
Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.
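A sketch of what predication means in practice may be useful here (my own simplified model: the predicate names and encoding are invented, and real IA-64 semantics are far richer). Each operation carries a predicate-register field, and the hardware discards the operation when its predicate is false, so a short branch turns into straight-line code:

```python
# Toy model of predicated execution: every operation names a predicate
# register and retires only if that predicate is true.

def execute(predicates: dict, registers: dict, program: list) -> dict:
    for pred, dst, fn in program:
        if predicates[pred]:            # false predicate: the op is squashed
            registers[dst] = fn(registers)
    return registers

regs = {"a": 7, "b": 3, "max": 0}
preds = {"p0": True}                    # p0 conventionally hardwired true
preds["p1"] = regs["a"] >= regs["b"]    # cmp.ge p1 <- a, b
preds["p2"] = not preds["p1"]           # complementary predicate

program = [                             # both arms issued, only one retires
    ("p1", "max", lambda r: r["a"]),    # (p1) max = a
    ("p2", "max", lambda r: r["b"]),    # (p2) max = b
]
print(execute(preds, regs, program))    # {'a': 7, 'b': 3, 'max': 7}
```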
In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.
Chips for the latest IA-64 processor, namely, the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.
Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.
Shortage of Electrons
Although shortage of electrons has not so far appeared as an obvious limitation, in the long term it may become so. Perhaps this is where the exploitation of non-conventional CMOS will lead us. However, some interesting work has been done, notably by Haroon Ahmed and his team working in the Cavendish Laboratory, on the direct development of structures in which a single electron more or less makes the difference between a zero and a one. However, very little progress has been made towards practical devices that could lead to the construction of a computer. Even with exceptionally good luck, many tens of years must inevitably elapse before a working computer based on single electron effects can be contemplated.
微机发展简史
第一批存储程序式计算机在1950年前后开始投入运行。我们在剑桥建造的那台,即电子延迟存储自动计算机(EDSAC),于1949年夏天首次投入使用。

这些早期的实验性计算机,是由像我这样有着各种不同背景的人建造的。

我们在电子工程方面都有丰富的经验,并且深信这些经验会对我们大有裨益。

事实证明的确如此,尽管我们也有一些新东西要学。

其中最重要的一点是:瞬态必须得到正确的处理;在电视机荧屏上只会引起一次无害闪光的东西,在计算机中却可能导致严重的错误。

就计算电路而言,我们发现可供选择的方案多得让人目不暇接。

举例来说,我们可以像在EDSAC中那样用真空二极管做门电路,也可以使用在两个栅极上加控制信号的五极管,后一种方案在其他地方被广泛采用。这类选择一直存在,于是“逻辑系列”这个术语开始流行起来。

在计算机领域工作过的人都会记得TTL、ECL和CMOS。其中,CMOS如今已占据主导地位。

在最初的几年,IEE(电气工程师学会)仍然由动力工程占据主导地位。

为了让无线电工程连同迅速发展的电子学(在IEE被称为“弱电工程”)被正式承认为一门独立的学科,我们不得不进行多次重大的斗争。

我记得我们在组织一次会议时遇到了一些困难,因为动力工程师们做事的方式与我们的不同。

让人略感恼火的是,所有IEE发表的论文都被要求以一段冗长的早期实践综述开头,而在根本没有早期实践可言时,这是很难做到的。

60年代的巩固阶段
到50年代末60年代初,英雄式的开拓阶段已经结束,计算机领域开始真正发展起来。

世界上计算机的数量已大大增加,而且它们比最早的那些可靠得多。

高级语言的最初尝试和第一批操作系统,都可以追溯到那些年。

实验性的分时系统开始出现,计算机图形学最终也随之而来。

最重要的是,晶体管开始取代真空管。

这一变化对当时的工程师们是个艰巨的挑战。

他们必须忘掉自己熟悉的电路知识,从头开始。

只能说,他们出色地经受住了这一挑战,这一转变进行得再顺利不过了。

小规模集成电路和小型机
很快,在一个硅片上可以放不止一个晶体管,由此集成电路诞生了。

随着时间的推移,集成度达到了足够的水平,一块芯片可以容纳足够多的晶体管,构成少量的逻辑门或触发器。

这导致了被称为7400系列的一系列芯片的出现。

这些门和触发器彼此独立,各自有自己的引脚。

它们可以通过片外布线连接起来,构成一台计算机或其他任何东西。

这些芯片为制造一种新的计算机提供了可能。

它被称为小型机。

它比大型机逊色一些,但仍然非常强大,而且价格要便宜得多。

企业或大学不必为整个机构配置一台昂贵的大型机,而是可以为每个主要部门配备一台小型机。

不久,小型机开始普及,功能也日益强大。全世界都渴望获得计算能力;工业界长期无法以合理的成本大规模供应计算能力,这曾让人非常沮丧。

小型机彻底改变了这一局面。

计算成本的下降并非始于小型机;它一直都是如此。

这就是我在摘要中提到计算机工业中的通货膨胀“反其道而行”时想表达的意思。

随着时间的推移,人们花钱买到的东西越来越多,而不是越来越少。

计算机硬件的研究
我所描述的那个时代,对于计算机硬件研究而言是一段美好的时光。

7400系列的用户可以在逻辑门和触发器的层次上工作,而整体集成度已足以提供远高于分立晶体管的可靠性。

无论在大学还是在其他地方,研究者都可以构造出丰富的想象力所能设想的任何数字设备。

在计算机实验室,我们建造了剑桥CAP,一台带有精巧的权能(capability)逻辑的全尺寸小型机。

直到70年代中期,7400系列仍然大行其道,并被用于建造Cambridge Ring,一个开创性的宽带局域网。

Cambridge Ring设计研究的发表,恰好在以太网(Ethernet)公布之前。

在这两种系统出现之前,人们大多满足于基于电传打字机的本地网络。

环形网需要很高的可靠性,因为脉冲在环中循环传递时,必须不断地被放大和再生。

正是7400系列芯片提供的高可靠性,给了我们着手Cambridge Ring项目所需的勇气。

精简指令计算机的诞生
早期的计算机指令集都很简单。随着时间的推移,商用机器的设计者们不断添加他们认为能提高性能的特性。

很少有人做对比测量;总的来说,特性的取舍主要依赖设计者的直觉。

1980年,注定要改变这一切的RISC运动在世界上兴起。

这场运动以Patterson和Ditzel发表的一篇题为《精简指令集计算机的理由》(The Case for the Reduced Instruction Set Computer)的论文拉开序幕。

除了造就一个引人注目的缩略词之外,这个标题几乎没有传达出伴随RISC运动而来的关于指令集设计的深刻见解,特别是它如何促进了流水线(pipelining)技术的应用。

流水线是一种让若干条指令可以同时处于处理器内不同执行阶段的机制。

流水线并不是新事物,但对小型计算机来说是新的。

RISC运动极大地受益于当时刚刚出现的一类方法:无需实际实现,就能估计一个计算机设计可望达到的性能。

我指的是利用现有的强大计算机来模拟新的设计。

借助模拟,RISC的倡导者们能够相当有把握地预言:采用相同电路工艺的优秀RISC设计,其性能将超过最好的传统计算机。这一预言最终在实践中得到了证实。

模拟技术进展迅速,很快被计算机设计者普遍采用。

其结果是,计算机设计变得更像一门科学,而不那么像一门艺术了。

今天,设计者们希望有满满一屋子的计算机来做仿真,而不只是一台;他们给这样一屋子计算机起了个动听的名字:计算机农场(computer farm)。

x86指令集
如今,前RISC时代的指令集已很少被人提起,只有一个重要的例外,那就是Intel 8086及其后裔,统称为x86。

x86已成为占主导地位的指令集。

而那些当初曾取得相当成功的RISC指令集,如今不得不为生存而苦苦挣扎。

x86的这种统治地位,让我这样来自计算机领域研究阵营(包括学术界和工业界)的人感到失望。

毫无疑问,商业因素对x86的存续起了很大作用,但除此之外还有别的原因。

无论我们这些研究导向的人多么希望并非如此,高级语言至今仍未完全消除机器码的使用。

我们需要不断提醒自己:在能够做到的情况下,与先前用法保持严格的二进制兼容有很多可取之处。

然而,如果Intel生产优秀RISC芯片的那次重大尝试更成功一些,情况也许会有所不同。

我指的是i860(不是i960,那是另一回事)。

从许多方面来说,i860都是一款卓越的芯片,但它的软件接口不适合用在工作站上。

在x86指令集这场看似轻松的胜利背后,有一个耐人寻味的转折。

事实证明,像过去那样直接实现x86指令集,已不可能跟上RISC处理器持续增长的速度。

于是,设计者们借鉴了RISC的做法:虽然表面上看不出来,现代的x86处理器芯片内部其实隐藏着一个带有自己内部RISC编码的RISC风格处理器。

送入的x86代码经过适当的加工后,被转换成这种内部编码,再交给这个RISC处理器去完成关键的执行工作。

在对RISC运动的上述总结中,我主要以Hennessy和Patterson关于计算机设计的著作的最新版本作为依据;具体请参见《计算机体系结构》(Computer Architecture)第三版,2003年,第146页、第151-154页、第157-158页。
IA-64指令集
前些时候,Intel和Hewlett-Packard推出了IA-64指令集。

这主要是为了满足公认的对64位地址空间的需求。

在这一点上,它跟随了MIPS R4000和Alpha设计者的脚步。

然而,人们本以为Intel会强调与x86的兼容性;令人费解的是,他们恰恰反其道而行之。

此外,IA-64的设计中内建了一种称为谓词执行(predication)的特性,这使它与所有其他指令集产生了重大的不兼容。

特别是,每条指令需要附加6个比特。

这打破了指令字长与信息含量之间的传统平衡,并显著改变了编译器编写者的任务。

尽管IA-64是个全新的指令集,但Intel发表了一个令人困惑的声明:基于IA-64的芯片将与早期的x86芯片保持兼容。

很难弄懂它所指的是什么。

最新的IA-64处理器,即Itanium,其芯片看来配有专门的兼容性硬件。即便如此,x86代码的运行速度仍然很慢。

由于上述复杂因素,实现IA-64所需的芯片比实现较传统的指令集所需的芯片更大,这也就意味着更高的成本。

无论如何,这是公认的看法;Gordon Moore最近访问剑桥、为Betty and Gordon Moore图书馆揭幕时,也把它作为一般性原则重申了一遍。

不过,我听说在Intel内部,人们对这件事的看法有所不同。这一点我无法理解。

但我非常乐于承认,在半导体工业的经济学方面,我完全是个外行。

电子的不足
尽管到目前为止,电子的短缺还没有成为一个明显的限制,但从长远来看,它可能会成为限制。

也许这正是对非常规CMOS的开发利用将把我们引向的地方。

不过,已经有一些有意义的工作,特别是Haroon Ahmed及其在Cavendish实验室工作的团队所做的:直接研制一种让单个电子差不多就决定0与1之区别的结构。

然而,在有望用于构造计算机的实用器件方面,进展还微乎其微。

即使运气特别好,在能够设想出一台基于单电子效应的可工作计算机之前,也必然要经过数十年的时间。
