English Translation Exercises
(English gaokao breakthrough)

Test 1
1. Before the clock was invented, people usually judged the time by the position of the sun. (invention)
2. When the news came that the seven kidnapped hostages had been set free, people felt greatly relieved. (appositive clause)
3. The more you practise yoga, the better you will be at coping with stress. (The more…)
4. Only through diligent study can you discover the weak points in your learning. (Only…)
5. Busy as he is, he has taken the trouble to buy food and cook for the lonely elderly for ten years. (…as)

Test 2
1. Quality-oriented education should encourage students to take the initiative in the process of learning. (encourage)
2. Although I regret some of the things I have done, they cannot compare with what you have done. (compare)
3. Washing hands frequently is a good personal habit, which can effectively prevent many infectious diseases from spreading. (spread)
4. Neither of the two books holds the view that the danger of nuclear war is increasing.
5. The thought of having to take the exam again depressed him so much that he had to cancel the birthday party scheduled for the weekend. (schedule)

Test 3
1. Based on the new definition of a planet, Pluto is no longer a planet. (base)
2. Like other languages in the world, Chinese keeps changing with the times. (keep)
3. Contrary to their wishes, they could not but give up developing the new software because of a lack of funds. (can not but)
4. It is no coincidence that a large number of violent crimes are committed under the influence of alcohol. (coincidence)
5. After ten years of hard work, the disabled man finally became a lawyer and realised his childhood dream. (After)

Test 4
1. Many parents find it hard to communicate with their own children. (find)
2. He began to realise what an important role he had played in making the decision at that time. (role)
3. We should make sure that these newly invented products meet the requirements of environmental protection. (make sure)
4. It is reported that the losses caused by accidents in our country amount to tens of billions of yuan every year. (cause)
5. Although several years have passed, the torch relay of the Beijing 2008 Olympic Games left many deep memories with people all over the world and created new dreams. (leave)

Test 5
1. Only when he was seriously ill did he stay in bed. (Only)
2. He knows nothing about his new colleague except that he used to be an online writer.
Economist bilingual reading: Supercomputing — Deeper thought

Science and technology | Supercomputing | Deeper thought

The world has a new fastest computer, thanks to video games

The ultimate games machine

SPEED fanatics that they are, computer nerds like to check the website of Top500, a collaboration between German and American computer scientists that keeps tabs on which of the world's supercomputers is the fastest. On November 12th the website released its latest list, and unveiled a new champion. The computer in question is called Titan, and it lives at Oak Ridge National Laboratory, in Tennessee. It took first place from another American machine, IBM's Sequoia, which is housed at Lawrence Livermore National Laboratory, in California.
[Text for recitation] Recently, Google's AlphaGo defeated a human Go champion in a series of matches, and there has arisen a fear that artificial intelligence will become better than us and will come to dominate humanity. For me, I don't think we should fear that. On one hand, artificial intelligence is actually a product of human wisdom. Without its designed program, it cannot function automatically. AlphaGo's success mainly comes from its thousands of algorithms concerning Go. It is a success of calculation rather than of AI itself. On the other hand, the success of AI stimulates the development of human wisdom. Years ago, "Deep Blue" and "Deeper Blue" defeated the human chess world champion. Later on, humans won over them more and more times. Maybe because humans do not give up easily, they develop more machines to challenge themselves. People like to set goals for themselves. During this process of defeat and success, humans learn more about their own potential. From the above statements, I have reasons to say we shouldn't fear AI. On the contrary, we should develop it further to enrich our lives and to test our abilities.
匆匆 (Haste / Transient Days) by Zhu Ziqing: English translations of the essay
匆匆朱自清[1]燕子去了,有再来的时候;杨柳枯了,有再青的时候;桃花谢了,有再开的时候。
但是,聪明的,你告诉我,我们的日子为什么一去不复返呢?——是有人偷了他们罢:那是谁?又藏在何处呢?是他们自己逃走了罢:现在又到了那里呢?[2]我不知道他们给了我多少日子;但我的手确乎是渐渐空虚了。
在默默里算着,八千多日子已经从我手中溜去;像针尖上一滴水滴在大海里,我的日子滴在时间的流里,没有声音,也没有影子。
我不禁头涔涔而泪潸潸了。
[3]去的尽管去了,来的尽管来着;去来的中间,又怎样地匆匆呢?早上我起来的时候,小屋里射进两三方斜斜的太阳。
太阳他有脚啊,轻轻悄悄地挪移了;我也茫茫然跟着旋转。
于是——洗手的时候,日子从水盆里过去;吃饭的时候,日子从饭碗里过去;默默时,便从凝然的双眼前过去。
我觉察他去的匆匆了,伸出手遮挽时,他又从遮挽着的手边过去,天黑时,我躺在床上,他便伶伶俐俐地从我身上跨过,从我脚边飞去了。
等我睁开眼和太阳再见,这算又溜走了一日。
我掩着面叹息。
但是新来的日子的影儿又开始在叹息里闪过了。
[4]在逃去如飞的日子里,在千门万户的世界里的我能做些什么呢?只有徘徊罢了,只有匆匆罢了;在八千多日的匆匆里,除徘徊外,又剩些什么呢?过去的日子如轻烟,被微风吹散了,如薄雾,被初阳蒸融了;我留着些什么痕迹呢?我何曾留着像游丝样的痕迹呢?我赤裸裸来到这世界,转眼间也将赤裸裸的回去罢?但不能平的,为什么偏要白白走这一遭啊?

[5]你聪明的,告诉我,我们的日子为什么一去不复返呢?

(Written on March 18, 1922. From Zhu Ziqing, 《踪迹》 (Traces). Shanghai: Yadong Tushuguan, 1924: 68-70.)

[Translation 1] Haste

[1] The swallows may go, but they will return another day; the willows may wither, but they will turn green again; the peach blossoms may fade and fall, but they will bloom again. You who are wiser than I, tell me, then: why is it that the days, once gone, never again return? Are they stolen by someone? Then, by whom? And where are they hidden? Or do they run away by themselves? Then, where are they now?

[2] I do not know how many days I've been given, yet slowly but surely my supply is diminishing. Counting silently to myself, I can see that more than 8,000 of them have already slipped through my fingers, each like a drop of water on the head of a pin, falling into the ocean. My days are disappearing into the stream of time, noiselessly and without a trace; uncontrollably, my sweat and tears stream down.

[3] What's gone is gone, and what is coming cannot be halted. From what is gone to what is yet to come, why must it pass so quickly? In the morning when I get up there are two or three rays of sunlight slanting into my small room. The sun, does it have feet? Stealthily it moves along, as I too, unknowingly, follow its progress. Then as I wash up the day passes through my washbasin, and at breakfast through my rice bowl. When I am standing still and quiet my eyes carefully follow its progress past me. I can sense that it is hurrying along, and when I stretch out my hands to cover and hold it, it soon emerges from under my hands and moves along. At night, as I lie on my bed, agilely it strides across my body and flies past my feet. And when I open my eyes to greet the sun again, another day has slipped by. I bury my face in my hands and heave a sigh.
But the shadow of the new day begins darting by, even in the midst of my sighing.

[4] During these fleeting days what can I, only one among so many, accomplish? Nothing more than to pace irresolutely, nothing more than to hurry along. In these more than 8,000 days of hurrying what have I to show but some irresolute wanderings? The days that are gone are like smoke that has been dissipated by a breeze, like thin mists that have been burned off under the onslaught of the morning sun. What mark will I leave behind? Will the trace I leave behind be so much as a gossamer thread? Naked I came into this world, and in a twinkling still naked I will leave it. But what I cannot accept is: why should I make this journey in vain?

[5] You who are wiser than I, please tell me why it is that once gone, our days never return. (481 words)

(Translated by Howard Goldblatt. Joseph S. M. Lau & Howard Goldblatt (eds.), The Columbia Anthology of Modern Chinese Literature. New York: Columbia University Press, 1995: 625-626)

[About the translator] Howard Goldblatt, Research Professor of Chinese at the University of Notre Dame, USA, has taught modern Chinese literature and culture for more than a quarter of a century. He obtained his BA from Long Beach State College in 1961, his MA from San Francisco State University in 1971, and his PhD from Indiana University in 1974. As the foremost translator of modern and contemporary Chinese literature in the West, he has published English translations of over 40 volumes of Chinese fiction, including Mo Yan's Red Sorghum, as well as several memoirs and a volume of poetry in translation. Goldblatt was awarded the Translation Center Robert Payne Award (1985) and "Translation of the Year" (1999), given by the American Translators Association.
He is also the founder and editor of the scholarly journal Modern Chinese Literature, and has contributed essays and articles to The Washington Post, The Times of London, TIME Magazine, World Literature Today, and The Los Angeles Times.

[Translation 2] Transient Days

[1] If swallows go away, they will come back again. If willows wither, they will turn green again. If peach blossoms fade, they will flower again. But, tell me, you the wise, why should our days go by never to return? Perhaps they have been stolen by someone. But who could it be and where could he hide them? Perhaps they have just run away by themselves. But where could they be at the present moment?

[2] I don't know how many days I am entitled to altogether, but my quota of them is undoubtedly wearing away. Counting up silently, I find that more than 8,000 days have already slipped away through my fingers. Like a drop of water falling off a needle point into the ocean, my days are quietly dripping into the stream of time without leaving a trace. At the thought of this, sweat oozes from my forehead and tears trickle down my cheeks.

[3] What is gone is gone, what is to come keeps coming. How swift is the transition in between! When I get up in the morning, the slanting sun casts two or three squarish patches of light into my small room. The sun has feet too, edging away softly and stealthily. And, without knowing it, I am already caught in its revolution. Thus the day flows away through the sink when I wash my hands; vanishes in the rice bowl when I have my meal; passes away quietly before the fixed gaze of my eyes when I am lost in reverie. Aware of its fleeting presence, I reach out for it only to find it brushing past my outstretched hands. In the evening, when I lie on my bed, it nimbly strides over my body and flits past my feet. By the time I open my eyes to meet the sun again, another day is already gone. I heave a sigh, my head buried in my hands.
But, in the midst of my sighs, a new day is flashing past.

[4] Living in this world with its fleeting days and teeming millions, what can I do but waver and wander and live a transient life? What have I been doing during the 8,000 fleeting days except wavering and wandering? The bygone days, like wisps of smoke, have been dispersed by gentle winds, and, like thin mists, have been evaporated by the rising sun. What traces have I left behind? No, nothing, not even gossamer-like traces. I have come to this world stark naked, and in the twinkling of an eye, I am to go back as stark naked as ever. However, I am taking it very much to heart: why should I be made to pass through this world for nothing at all?

[5] O you the wise, would you tell me please: why should our days go by never to return? (475 words)

(Translated by Zhang Peiji. Selected Modern Chinese Essays in English Translation (Chinese-English). Shanghai: Shanghai Foreign Language Education Press, 1999: 75-77)

[About the translator] Zhang Peiji graduated from the English Department of St. John's University, Shanghai. He worked as an English-language reporter for 《上海自由西报》 and as a special contributor to The China Critic (《中国评论周报》, in English), then served as an English interpreter at the International Military Tribunal for the Far East in Tokyo, and returned to China after studying English literature at Indiana University in the United States.
[Translation] Donald Knuth on The Art of Computer Programming (Part 1)

Professor D. E. Knuth is the author of the highly respected multi-volume work The Art of Computer Programming and of dozens of highly acclaimed computer science papers. In June 2011, after finishing a book tour and lecture series in the UK, Professor Knuth talked to BCS editor Justin Richards about his life and work.

Original link: Elliot. Translated by He Yiqin.

You are best known for The Art of Computer Programming. In 1999 the series was named by the journal American Scientist as one of the twelve most important physical-science monographs of the 20th century. How did the series originally come about, and what do you make of American Scientist's assessment?

The books began around the 1960s. In those days, because there were no good resources, everybody was reinventing things that already existed. I had always loved writing; I worked on newspapers and magazines at school and thought of myself as a writer. I realised that someone needed to record all the good ideas that had been published but were being forgotten. This goes back to the very early days, when probably fewer than a thousand people were seriously studying computing. I didn't see it as something that was going to change the world, but I still thought this cool material deserved to be properly organised. At the time I wondered who else would be suited to write such books. Everyone I could think of would most likely focus only on their own field of research. Of all the people I knew, I was the only one who had not invented anything myself, so I imagined I could act, from a neutral position, as a spokesman for them. Frankly, that was the initial motivation: I thought the need existed. There was also another natural reason for me to write such books: I wanted to try to combine the different ideas of many people. I would see that person A analysed his method A in one way, while person B analysed the competing method B in another way. So I would analyse method A in B's way and method B in A's way. The embryonic form of the book thus grew simply out of analysing this material. I soon realised that some of the scientific methods I was combining were, in the education I had received, never allowed to appear together. Yet time and again I saw that only this way of thinking could frame the problems correctly.
Unit 3 Computers: Text translation

Section A — Who am I?

Over time I have been changed quite a lot. I began as a calculating machine in France in 1642. Although I was young, I could simplify difficult sums. I developed very slowly, and it took nearly two hundred years before Charles Babbage made me into an analytical machine. After an operator programmed me with punched cards, I could "think" logically and produce an answer quicker than any person. At that time it was regarded as a technological revolution and the start of my "artificial intelligence". In 1936 my real father, Alan Turing, wrote a book about how I could be made into a "universal machine" to solve any difficult mathematical problem. From then on I grew rapidly in both size and brainpower. By the 1940s I had grown as large as a room, and I wondered if I would grow any larger. However, this reality also worried my designers. As time went by, I was made smaller. Since the 1970s I have been used in offices and homes, first as a personal computer and later as a laptop. These changes only became possible as my memory improved. At first it was stored in tubes, then on transistors, and later on very small chips. As a result I totally changed my shape. As I have grown older, I have also grown smaller. Over time my memory has developed so much that, like an elephant, I never forget anything I have been told! And my memory became so large that even I couldn't believe it! But I was always so lonely standing there by myself, until in the early 1960s I was given a family connected by a network. I was able to share my knowledge with others through the World Wide Web. Since the 1970s many new applications have been found for me. I have become very important in communication, finance and trade. I have also been put into robots, used to make mobile phones, and used to help with medical operations. I have even been put into space rockets and sent to explore the Moon and Mars. Anyhow, my goal is to provide humans with a life of high quality. I am now filled with happiness, because I am a devoted friend and helper of the human race!
A Java/Jini Framework Supporting Stream Parallel Computations

M. Danelutto, P. Dazzi

Published in Parallel Computing: Current & Future Issues of High-End Computing, Proceedings of the International Conference ParCo 2005, G. R. Joubert, W. E. Nagel, F. J. Peters, O. Plata, P. Tirado, E. Zapata (Editors), John von Neumann Institute for Computing, Jülich, NIC Series, Vol. 33, ISBN 3-00-017352-8, pp. 681-688, 2006.

© 2006 by John von Neumann Institute for Computing. Permission to make digital or hard copies of portions of this work for personal or classroom use is granted provided that the copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise requires prior specific permission by the publisher mentioned above. http://www.fz-juelich.de/nic-series/volume33

A Java/Jini framework supporting stream parallel computations

M. Danelutto (a) & P. Dazzi (b, c)
(a) Dept. of Computer Science, University of Pisa, Italy
(b) ISTI/CNR, Pisa, Italy
(c) IMT, Lucca Institute for Advanced Studies, Italy

JJPF (the Java/Jini Parallel Framework) is a framework that can run stream parallel applications on several parallel/distributed architectures. JJPF is, in fact, a distributed execution server. It uses JINI to recruit the computational resources needed to compute parallel applications. Parallel applications can be run on JJPF provided they exploit parallelism according to an arbitrary nesting of task farm and pipeline skeletons/patterns. JJPF achieves almost perfect, fully automatic load balancing in the execution of such applications. It also transparently handles any number of node and network faults. Scalability and efficiency results are shown on workstation networks, both with a synthetic (embarrassingly parallel) image processing application and with a real (not embarrassingly parallel) page ranking application.

1. Introduction

It is generally assessed that real parallel applications usually exploit parallelism according to a limited, well-known set of patterns (or
skeletons) [9,21,20,7]. With the advent of grids [13,14] and large cluster architectures [25], some of the parallelism exploitation patterns originally proposed in the skeleton framework have been extensively used to implement high performance, parallel grid applications. Indeed, very often parallel applications are programmed by hand, exploiting typical grid middleware or operating system/distributed framework mechanisms, without even stating that they owe most of the techniques used to exploit parallelism to algorithmic skeletons or parallel design patterns. An example of a parallelism exploitation pattern that is very often used both in grids and in more traditional distributed frameworks is the task farm. In a task farm, a set, or a stream, of independent tasks is computed to obtain a set of results. A single program or function is used to compute all the results out of the input tasks. This parallelism exploitation pattern is also referred to as embarrassingly parallel computation [27]. All the parameter sweeping applications, that is, those applications that "try" input data sets to find out the best one with respect to some measure function, can easily be programmed to exploit parallelism according to the task farm pattern. Also, most of the grid applications that can be programmed using tools such as Condor [11] (basically a batch job scheduler) can be programmed as task farm instances. Another well-known and widely used parallelism exploitation pattern is the pipeline. In a pipeline, a set of input tasks is processed by a set of stages. Each stage computes a result out of the result provided by the previous stage and delivers it to the immediately following stage. Task farm and pipeline parallelism exploitation patterns are often referred to as stream parallel (or task parallel) patterns/skeletons [23,20]. We already demonstrated that arbitrary compositions of pipeline and task farm patterns can be efficiently implemented, with respect to service time, using their normal form, that is
transforming the original skeleton tree/composition into a program that is basically a task farm with sequential workers [3]. Overall, this allows us to conclude that if we succeed in providing a distributed environment that efficiently exploits task parallel computations, it will be very useful for implementing different applications in really different applicative and hardware contexts. Our group has already published several works related to the implementation of this kind of programming environment [7,4,12,8] and is currently involved in a large national project (the Italian FIRB project GRID.it [17]) aimed at designing and implementing a prototype, high performance, structured programming environment [26,2,1]. In this work, we discuss a parallel programming framework (JJPF) built on top of plain Java/Jini that can run stream parallel applications on several parallel/distributed architectures, ranging from tightly coupled workstation clusters to generic workstation networks and grids. The framework directly inherits from Lithium and muskel, two skeleton based programming environments we previously developed at our Department [4,12]. Both Lithium and muskel exploit plain Java RMI technology to distribute computations across nodes, and rely on NFS (the network file system) to distribute user code to the remote processing elements. JJPF, instead, is fully implemented on top of Jini/Java and relies on either Jini/Jeri class loaders or on a brand new, hybrid class loader package to distribute code across the remote processing nodes involved in a stream parallel application computation. JJPF exploits the stream parallel structure of the application in such a way that several distinct goals can be achieved: a) load balancing across the computing elements participating in the computation; b) automatic discovery and recruiting of processing elements available to participate in the computation of stream parallel applications, exploiting standard Jini mechanisms; c) automatic substitution of faulty processing
elements by fresh ones (if any). Therefore stream parallel application computations resist both node and network faults. The programmer does not need to add a single line of code to his application to deal with faulty nodes/network, nor does he have to take any other kind of action to take advantage of this feature. JJPF has been tested using both synthetic and real applications, on both production workstation networks and on a blade cluster, with very nice and encouraging results, as described in Section 3.

2. JJPF

JJPF has been designed to provide programmers with a user-friendly environment supporting the efficient execution of stream parallel applications on a network of workstations, exploiting plain, state of the art Java technology. Overall, JJPF provides a distributed server implementing a stream parallel application computation service. Programmers must write their applications in such a way that they just exploit an arbitrary composition of task farm and pipeline patterns. Task-farm-only applications are directly executed by the distributed server, while applications exploiting compositions of task farm and pipeline patterns are first processed, in a completely automatic way, to get their normal form [3]; the normal form is then submitted to the distributed server for execution. Using JJPF, programmers can express a parallel computation exploiting the task farm pattern simply using the following code:

    BasicClient cm = new BasicClient(program, null, input, output);
    cm.compute();

provided that input (output) is a Collection of input (output) tasks and program is an array hosting the worker code of the farm. The worker code is a Class object relative to the user worker code. Such code must implement a ProcessoIf interface. The interface requires the presence of methods to provide the input task data (void setData(Object task)), to retrieve the result data (Object getData()), and to compute the result out of the task data (void run()). This single line of code actually defines the parallel computation to be executed, starts its
execution and terminates when the parallel execution has actually terminated. The JJPF basic architecture uses two components: clients, that is, the user programs, and services, that is, distributed server instances that actually compute results out of input task data to execute client programs. Figure 1 sketches the structure of these two components.

[Figure 1. Simplified state diagram for the generic JJPF client (left) and service (right)]

The client component basically recruits available services and forks a control thread for each one of them. The control thread, in turn, fetches uncomputed task items from the task repository, delivers them to the remote service, and retrieves the computed results, storing them in the result repository. Service recruiting is performed exploiting JINI. A lookup service is found first, using the standard JINI API; then it is queried for available services. Each service descriptor obtained from the lookup is passed to a distinct control thread. On the other hand, the service registers with the JINI lookup and waits for incoming client calls. Once a call is received, it assumes it has been recruited by that client, un-registers from the lookup, and starts serving task computation requests from the client. This means that clients actually lock the recruited services for their exclusive usage. Therefore, in order to use JJPF on a workstation network, the following steps have to be performed: a) JINI has to be installed and configured (this has to be done once and for all, of course); b) JJPF services have to be started on the machines that will eventually be used to run the JJPF distributed server (this also is to be done once and for all); and c) a JJPF client such as the one sketched above has to be prepared, compiled and run on the user workstation. Nothing else is needed. The key concept in JJPF is that service discovery is automatically performed in the client run time support.
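The control-thread scheduling just described, with each thread pulling uncomputed tasks from a shared repository and storing results back, is what yields JJPF's automatic load balancing: a faster service simply gets served more tasks. The sketch below mimics that demand-driven farm locally with plain Java threads. The Worker interface and the Doubler class are illustrative stand-ins (modelled on, but not identical to, the ProcessoIf contract and BasicClient API described above); none of this is actual JJPF code.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical stand-in for JJPF's worker contract: the runtime injects a
// task (setData), runs the worker (run), then collects the result (getData).
interface Worker {
    void setData(Object task);
    void run();
    Object getData();
}

// A trivial doubling worker, used only for illustration.
class Doubler implements Worker {
    private int task, result;
    public void setData(Object t) { task = (Integer) t; }
    public void run() { result = task * 2; }
    public Object getData() { return result; }
}

public class MiniFarm {
    // Demand-driven task farm: each "control thread" repeatedly fetches an
    // uncomputed task from the shared repository, runs its worker on it, and
    // stores the result. Faster workers naturally fetch more tasks, which is
    // the load-balancing effect the paper describes.
    static List<Object> compute(Class<? extends Worker> w, Collection<?> input,
                                int nWorkers) throws Exception {
        Queue<Object> tasks = new ConcurrentLinkedQueue<>(input);    // task repository
        Queue<Object> results = new ConcurrentLinkedQueue<>();       // result repository
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < nWorkers; i++) {
            Worker worker = w.getDeclaredConstructor().newInstance(); // one worker per thread
            Thread t = new Thread(() -> {
                Object task;
                while ((task = tasks.poll()) != null) {
                    worker.setData(task);
                    worker.run();
                    results.add(worker.getData());
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();
        return new ArrayList<>(results);
    }

    public static void main(String[] args) throws Exception {
        List<Object> out = compute(Doubler.class, List.of(1, 2, 3, 4), 2);
        System.out.println(out.size()); // one result per input task
    }
}
```

In the real framework each control thread talks to a remote JINI-recruited service rather than to a local thread, but the fetch/deliver/store loop is the same.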
Not a single line of code dealing with service discovery or recruiting has to be provided by application programmers. Both these mechanisms rely on the JINI technology. This means that all the power of this technology is exploited, but also that some of its limitations are inherited. In particular, we worried about the fact that JINI discovery mechanisms cannot pass through firewalls, therefore impairing JJPF usability in grid or large distributed architecture contexts. Indeed, the JINI technology is perfectly suitable for workstation clusters within local area networks. JJPF uses two distinct mechanisms to recruit services for clients. It directly asks the Lookup Service for the Service Ids of the available services, i.e. of the nodes currently running the JJPF generic service object, but it also registers with the Lookup Service observer objects that will eventually advise the client of new services becoming available, in such a way that they can be recruited. When implementing JJPF we had to face the problem of making user code (the code computing a result out of a single task) available to the remote services. JJPF achieves automatic load balancing among the recruited services, due to the scheduling adopted in the control threads managing the remote services. Each control thread fetches tasks to be delivered to the remote nodes from a centralized, synchronized task repository. JJPF also automatically handles faults in service nodes. That is, it takes care of the tasks assigned to a service node in such a way that, in case the node does not respond any more, they can be rescheduled to other service nodes, possibly recruited on the fly after detecting the service node fault. This is only possible because of the kind of parallel applications we take into account and support in JJPF, that is, stream parallel computations. In this case, there are natural descheduling points that can be chosen to restart the computation of one of the input tasks in case of failure of a service node. A trivial one is the start of the computation of the task. Provided that a copy of the task data is kept on the client side (in the control thread, possibly), the task can be rescheduled as soon as the control thread understands that the corresponding service node is dead/non-responding. This is the choice we actually implemented in JJPF, inheriting the design from muskel [12].

[Figure 2. Scalability of JJPF: scalability on a production workstation network, image processing application with standard class loader and with hybrid class loader (left); same application with (stateful) and without (stateless) access to a shared variable (right)]

3. Experiments

In order to test JJPF features and scalability, we used two kinds of applications. Most of the simple scalability measures were actually performed using a synthetic image processing application. The application just filtered all the images of an image set, applying a sort of blur filter. This synthetic application mimics real applications that are used, for example, to pre-process images coming from satellites, telescopes, etc., just before storing them to disk for further, real processing. The image filtering application is actually an embarrassingly parallel application, perfectly matching the task farm pattern. After verifying the scalability and efficiency results of JJPF with the synthetic application, we decided to use a complete application. As there are several people in our department working on web applications, we thought to exploit the available knowledge to develop a page ranking application. The goal was to have a real application at hand that can be used to confirm the
scalability and efficiency results achieved with the synthetic case study. The page rank application we developed works with an approximate algorithm. In general, the rank vector x is iteratively computed in such a way that x(k) = A·x(k−1) until ||x(k) − x(k−1)|| < ε [16]. In the approximate algorithm, a pre-processing phase distributes the vector x and the matrix A across a set of services, in such a way that each service can compute a part of the new intermediate rank vector. Once this has been computed, the result is exchanged with the other services in such a way that a new iteration (or group of iterations) can be computed. The approximate algorithm does not compute the exact page ranking, of course, but the approximation introduced does not impair the effectiveness of the algorithm itself. Most page ranking algorithms rely on similar approximation techniques [24,22].

[Figure 3. Effect of discovering/recruiting new resources to (left) or dismissing (faulty) resources from (right) the current computation]

Using these two applications, we ran a set of experiments using JJPF to test the feasibility and the efficiency of our approach. We used two distinct kinds of distributed architectures: a network of "production" Linux workstations and a cluster of blade PCs, also running Linux. The production workstation network was a highly dynamic environment. These workstations are dual boot (Debian Linux and Windows XP), Pentium IV class machines used by the students for their class work. They are often rebooted to switch the operating system, and therefore you cannot assume that they stay constantly up and running. Moreover, the users (the students) usually run a variety of tasks on
these machines, ranging from web browsers to huge compilation and execution tests. The blade machines, on the other side, are based on RLX Pentium III blades, with 3 Fast Ethernet networks interconnecting the blades arranged in a single chassis. We have total control of the blade cluster and therefore could run the tests on "dedicated" nodes. First of all, we tested the scalability of our distributed computation server. We used the synthetic image filtering application to process a stream of input images. The results are shown in Figure 2. In the left part of the figure, the completion times achieved when the standard class loader mechanism was exploited are shown. In the right part, the completion time achieved using our hybrid class loader mechanism is shown. In both cases, we plot the ideal completion time (that is, the time spent to compute all the filtered images sequentially, divided by the number of processing elements actually used), the measured completion time, and the time actually spent in the computation of the filtered images. Scalability is actually achieved. In all cases, the efficiency was above 90%. Then we measured the efficiency of the recruiting and dismissing mechanisms of JJPF. Therefore we set up two experiments. In the first one, a number of workstations are initially recruited, and further workstations are recruited after half of the tasks (filtered images) have already been computed (see Figure 3, left). In the second one, after initially recruiting a number of workstations, half of them are lost (verified faulty) after the computation of half of the tasks (filtered images) (see Figure 3, right). In both cases, the time spent computing half of the tasks with half of the workers available took double the time taken to compute the other half of the tasks, as expected. The #T numbers in the plots refer to the number of tasks computed in that segment of the plot line. As an example, in the left plot of Figure 3 the #T:400 indicates that, using the initially recruited
machines, we computed 400 tasks (filtered images) before actually starting to recruit other machines. When all the additional machines were recruited, and just before starting to dismiss machines, we computed a further #T:381 tasks.

[Figure 4. Scalability of JJPF (page rank application): Fast Ethernet vs. Gbit Ethernet (left); small vs. high link number per page (right)]

In these experiments, new computing nodes/services are made available for recruitment by running the JJPF run time on new machines, and faulty nodes are emulated either by stopping the JJPF support or by unplugging network cables from the switch. Both the experiments whose results are plotted in Figures 2 and 3 were performed using the network of production workstations. Eventually, we ran an experiment with a real application code, the page rank algorithm described above. This experiment was performed using the blade cluster. We achieved comfortable results, although the scalability measured is not equal to the one achieved with the synthetic, embarrassingly parallel application. The point here is that we need to exchange data among the workers participating in the computation to take care of the approximation algorithm used in the page rank. The right part of Figure 4 plots the completion times of the page rank application run on a set of 400K pages (therefore a fairly small set of pages), with each page holding a reasonable, but fairly small, number of links to other pages, as well as the completion times achieved using another set with the same number of pages but with pages holding a rather larger number of links. This was to point out the effect of computation grain on scalability. In the former case, we compute less before actually starting
exchanging the data needed to compute the page rank approximation. In the latter, we compute more. Therefore we pay a smaller (percentage) overhead in the latter case, and scalability turns out to be better. In the left part of Figure 4, we point out the differences achieved when using a Gbit Ethernet interconnection between blades instead of a plain Fast Ethernet, 100 Mbit interconnection. The network shift improved the completion times, although it did not change the shapes of the completion time curves significantly. Overall, we can state that JJPF demonstrated the expected scalability results as well as its ability to dynamically handle new computational resources, when available, and to safely dismiss nodes (without actually losing any kind of data) in case they stop working.

4. Related work

Our previous full Java, structured, parallel programming environment muskel already provides automatic discovery of computational resources in the context of a distributed workstation network. muskel was based on plain Java RMI technology, however, and the discovery was simply implemented using multicast datagrams and proper discovery threads. The muskel environment also introduces the concept of an application manager that binds computational resource discovery with autonomic application control, in such a way that optimal resource allocation can be dynamically
tolerance features such as those provided by JJPF. The group of Françoise André is currently trying to address the problem of dynamically varying the computational resources assigned to the execution of an SPMD program [6]. This is not actually the same problem we addressed with JJPF, but the techniques used to devise the exact number of resources to be recruited to compute a parallel program are interesting and can be reused in the JJPF framework to recruit the right number of service nodes among those available. Our group is also introducing dynamicity handling techniques in the ASSIST environment developed within the GRID.it project [5]. Such techniques are partially derived from the muskel/JJPF experience. The kind of task parallel computations natively supported by JJPF is very close to the one supported by Condor. Condor is a "specialized workload management system for compute-intensive jobs" [11] and, "like other full-featured batch systems, it provides a job queuing mechanism, scheduling policy, priority schema, resource monitoring and resource management". However, Condor is actually a batch system; that is, it is not a programming environment, nor is it able to provide (as JJPF does through skeletons and normal form) support for other, different parallelism exploitation patterns/skeletons. Several papers are related to the PageRank algorithm: Haveliwala [18] explores memory-efficient computation; in [19], Kamvar et al. discuss some methods for accelerating PageRank calculation; and in [15], Gleich, Zhukov and Berkhin demonstrate that linear system iterations converge faster than the simple power method and are less sensitive to changes in teleportation. Rungsawang and Manaskasemsak in [24] and [22] evaluate the performance supplied by an approximated PageRank computation on a cluster of workstations using a low-level peer-to-peer MPI implementation.

5. Conclusions

We described JJPF, a new distributed server supporting the execution of stream parallel applications on workstation networks. The
framework exploits plain Java technology, using JINI to address resource discovery. JJPF supports the execution of stream parallel computations using a set of remote service nodes, that is, nodes that basically provide a sort of Java interpreter capable of computing generic, user-defined tasks implementing a known interface. Service nodes are discovered and recruited automatically to support user applications. Fault tolerance features have been included in the framework such that the execution of a parallel program can transparently resist node or network faults. Load balancing is guaranteed across the recruited computational resources, even in the case of resources with fairly different computing capabilities. To our knowledge, these features are not present in other distributed parallel programming environments.

References
[1] M. Aldinucci, S. Campa, P. Ciullo, M. Coppola, S. Magini, P. Pesciullesi, L. Potiti, R. Ravazzolo, M. Torquati, M. Vanneschi, and C. Zoccolo. The Implementation of ASSIST, an Environment for Parallel and Distributed Programming. In Proc. of Intl. Conference Euro-Par 2003: Parallel and Distributed Computing, number 2790 in LNCS. Springer, 2003.
[2] M. Aldinucci, S. Campa, M. Coppola, M. Danelutto, D. Laforenza, D. Puppin, L. Scarponi, and M. Vanneschi. Components for High-Performance Grid Programming in GRID.it. In Component models and systems for Grid applications, CoreGRID. Springer, 2005.
[3] M. Aldinucci and M. Danelutto. Stream parallel skeleton optimisations. In Proc. of the IASTED International Conference Parallel and Distributed Computing and Systems, pages 955-962. IASTED/ACTA Press, November 1999. Boston, USA.
[4] M. Aldinucci, M. Danelutto, and P. Teti. An advanced environment supporting structured parallel programming in Java. Future Generation Computer Systems, 19(5):611-626, 2003. Elsevier Science.
[5] M. Aldinucci, A. Petrocelli, E. Pistoletti, M. Torquati, M. Vanneschi, L. Veraldi, and C. Zoccolo. Dynamic reconfiguration of grid-aware applications in ASSIST. In 11th Intl. Euro-Par 2005: Parallel and Distributed Computing, LNCS. Springer Verlag, 2005.
[6] F. André, J. Buisson, and J. L. Pazat. Dynamic adaptation of Parallel Codes: Toward Self-Adaptable Components. In Component models and systems for Grid applications, CoreGRID. Springer, 2005.
[7] B. Bacci, M. Danelutto, S. Pelagatti, and M. Vanneschi. SkIE: a heterogeneous environment for HPC applications. Parallel Computing, 25:1827-1852, Dec. 1999.
[8] R. Baraglia, M. Danelutto, D. Laforenza, S. Orlando, P. Palmerini, R. Perego, P. Pesciullesi, and M. Vanneschi. AssistConf: A Grid Configuration Tool for the ASSIST Parallel Programming Environment. In 11th Euromicro Conf. on Parallel, Distributed and Network-Based Processing, pp. 193-200. IEEE, 2003.
[9] M. Cole. Bringing Skeletons out of the Closet: A Pragmatic Manifesto for Skeletal Parallel Programming. Parallel Computing, 30(3):389-406, 2004.
[10] M. Cole and A. Benoit. The eSkel home page, 2005. /abenoit1/eSkel/.
[11] Condor community. The CONDOR home page, 2005. /condor/.
[12] M. Danelutto. QoS in parallel programming through application managers. In Proceedings of the 13th Euromicro Conference on Parallel, Distributed and Network-based processing. IEEE, 2005. Lugano.
[13] I. Foster and C. Kesselman (Editors). The Grid 2: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, December 2003.
[14] GGF community. The Global Grid Forum home page, 2005.
[15] D. Gleich, L. Zhukov, and P. Berkhin. Fast Parallel PageRank: A Linear System Approach. Technical report, Yahoo research lab, 2004.
[16] Google community. The Google home page, 2005. /technology/.
[17] Grid.it community. The GRID.it home page, 2005. http://www.grid.it.
[18] Taher Haveliwala. Efficient Computation of PageRank. TR 1999-31, Stanford Univ., USA, 1999.
[19] S. Kamvar, T. Haveliwala, C. Manning, and G. Golub. Extrapolation methods for accelerating PageRank computations. Proceedings of the Twelfth Int'l WWW Conference, 2003.
[20] H. Kuchen. A Skeleton Library. In Euro-Par 2002, Parallel Processing, number 2400 in LNCS, pages 620-629. Springer Verlag, August 2002.
[21] S. MacDonald, J. Anvik, S. Bromling, J. Schaeffer, D. Szafron, and K. Tan. From patterns to frameworks to parallel programs. Parallel Computing, 28(12), 2002.
[22] Bundit Manaskasemsak and Arnon Rungsawang. Parallel PageRank Computation on a Gigabit PC Cluster. In AINA (1), pages 273-277, 2004.
[23] S. Pelagatti. Structured Development of Parallel Programs. Taylor & Francis, 1998.
[24] Arnon Rungsawang and Bundit Manaskasemsak. PageRank Computation Using PC Cluster. In PVM/MPI, pages 152-159, 2003.
[25] Univ. of Tennessee, Mannheim and NERSC/LBNL. The Top500 home page, 2005.
[26] M. Vanneschi. The Programming Model of ASSIST, an Environment for Parallel and Distributed Portable Applications. Parallel Computing, 12, December 2002.
[27] B. Wilkinson and M. Allen. Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers. Prentice Hall, 1999.
GSL: the 2,284 high-frequency English words
[Columns from the flattened word-frequency table: occurrence counts (339 down to 302) and word ranks (92-138 and 327-373)]
direct car law industry important girl god several matter usual rather per often kind among white reason action return foot care simple within love human along appear doctor believe speak active student month drive concern best door hope example inform body ever least probable understand reach effect
[Notes] New Core College English (Edition B) Reading and Writing Course 3, Unit 1 text translation

Unit 1 Main Reading: Machines with minds of their own. Left to their own devices, some machines can learn to become smarter, and in some of the most brain-intensive tasks they may even surpass humans.
Can humans build machines that evolve to improve themselves and invent solutions beyond what people imagine? Using computational brute force, computers can now play a passable game of chess.
In 1997, an IBM supercomputer named Deep Blue defeated Kasparov.
The world champion said the experience was as hard as facing a top human challenger.
Alan Turing, the mathematical genius behind Britain's wartime Enigma code-breaking effort, set out standards for artificial intelligence in the 1950s, and Deep Blue's behavior met at least one of them.
Deep Blue's success, however, did not impress the artificial intelligence community, because the machine's feat lay merely in computing faster than any previous computer.
Its enormous processing power let it look as many as 30 moves ahead, and its clever programming could work out which of millions of possible moves would strengthen its position.
But in itself, all that Deep Blue could do, and did brilliantly, was mathematics.
It could not devise its own strategy for the game of chess.
But what if Deep Blue were given the ability to evolve, using trial-and-error experience to learn to improve itself? A new technology called "evolvable hardware" is attempting to do just that.
Like Deep Blue, evolvable hardware searches for a solution by trying billions of different possibilities.
The difference is that, unlike Deep Blue, evolvable hardware constantly adjusts and refines its search algorithm, the sequence of logical steps needed to find a solution.
Each time, it selects the best candidates and tries them.
Moreover, it does all of this automatically, not by following pre-programmed instructions.
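The trial-and-error loop the passage describes (generate a variant, score it, keep the better one, repeat) can be sketched in a few lines of Python. This is only an illustrative toy that evolves a bit string toward a simple fitness goal; real evolvable hardware applies the same select-and-retry idea to reconfigurable circuits:

```python
import random

def fitness(bits):
    """Toy objective: count the 1s; the goal is a string of all 1s."""
    return sum(bits)

random.seed(0)  # deterministic for the example
best = [random.randint(0, 1) for _ in range(20)]  # random starting design
for _ in range(1000):
    child = best[:]                              # copy the current best design
    child[random.randrange(len(child))] ^= 1     # random mutation (a "trial")
    if fitness(child) >= fitness(best):
        best = child                             # keep it if no worse (selection)

assert fitness(best) == 20  # the loop reliably evolves the perfect string
```

Nothing in the loop says *how* to reach the goal; only the scoring function steers it, which is the sense in which such systems work without pre-programmed instructions.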
Conventional wisdom long held that a machine's capability is limited by its creator's imagination.
But over the past few years, pioneers of evolvable hardware have succeeded in building devices that adjust themselves and perform better.
In some cases, the resulting machines have even exceeded the abilities of their creators.
In circuit design, for example, evolvable hardware has found creative solutions to problems that had stumped humans for decades.
Evolvable hardware first of all requires hardware that can be reconfigured.
If a device cannot change its shape or the way it does things, it cannot evolve.
Computing the Uncertainty of Geometric Primitives and Transformations
Computer Science English Vocabulary (compiled)

1. Algorithm (算法): An algorithm is a well-defined sequence of steps or rules for solving a problem.
It plays a key role in computer science and is used to solve all kinds of computational problems.
2. Programming (编程): Programming is the process of creating computer programs using a programming language.
It involves writing code, debugging programs, and optimizing them.
3. Data Structure (数据结构): A data structure is a way of organizing and storing data in a computer.
Common data structures include arrays, linked lists, stacks, and queues.
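As a minimal illustration of two of the structures just listed, here are a stack (last-in, first-out) and a queue (first-in, first-out) sketched in Python:

```python
from collections import deque

# Stack: last-in, first-out (LIFO), modeled with a plain Python list.
stack = []
stack.append(1)
stack.append(2)
stack.append(3)
assert stack.pop() == 3        # the most recently pushed item leaves first

# Queue: first-in, first-out (FIFO), modeled with collections.deque.
queue = deque()
queue.append("a")
queue.append("b")
assert queue.popleft() == "a"  # the earliest item leaves first
```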
4. Network (网络): A network is a system that connects multiple computers so that they can communicate and share resources.
Common network types include local area networks (LAN), wide area networks (WAN), and the Internet.
5. Database (数据库): A database is a system for storing and managing data.
It provides convenient data access and management facilities and is used in all kinds of applications.
6. Operating System (操作系统): An operating system is the core software of a computer system; it manages and controls the computer's hardware and software resources.
Common operating systems include Windows, Mac OS, and Linux.
7. Compiler (编译器): A compiler is a tool that translates code written in a high-level programming language into machine code.
It turns the source code written by a programmer into an instruction set the computer can execute.
8. Virtual Reality (虚拟现实): Virtual reality is a computer-generated simulated environment that users can interact with.
Common virtual reality technologies include head-mounted displays and handheld controllers.
9. Artificial Intelligence (人工智能): Artificial intelligence is a branch of computer science that studies how to make computers simulate and carry out intelligent behavior.
It covers fields such as machine learning, natural language processing, and expert systems.
10. Encryption (加密): Encryption is the process of converting information into ciphertext to protect the security and privacy of data.
Common cryptographic algorithms include AES, RSA, and SHA.
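A small example of the SHA family just mentioned, using Python's standard hashlib module. Note that SHA-256 is a one-way hash used for integrity checking, not reversible encryption like AES or RSA:

```python
import hashlib

# SHA-256 produces a fixed-size 256-bit digest, written as 64 hex characters.
digest = hashlib.sha256(b"hello").hexdigest()
assert len(digest) == 64
# Hashing is deterministic: the same input always gives the same digest...
assert digest == hashlib.sha256(b"hello").hexdigest()
# ...while any change to the input changes the digest.
assert digest != hashlib.sha256(b"hello!").hexdigest()
```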
11. Cloud Computing (云计算): Cloud computing is a way of delivering computing resources and services over the Internet.
It enables on-demand access, flexible scaling, and resource sharing.
English: John von Neumann, a father of the computer
Along with Edward Teller and Stanislaw Ulam, von Neumann worked out key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb.
----
John Von Neumann (约翰.冯.诺依曼)
The eldest of three brothers, von Neumann was born Neumann János Lajos in Budapest, Hungary, to a wealthy Jewish family. His father was a lawyer who worked in a bank.
János, nicknamed "Jancsi" (Johnny), was a child prodigy who showed an aptitude for languages, memorization, and mathematics.
Although he attended school at the grade level appropriate to his age, his father hired private tutors to give him advanced instruction in the areas in which he had displayed an aptitude. He received his Ph.D. in mathematics from Pázmány Péter University in Budapest at the age of 22.
Computer English reference answers, Unit 1 [3 pages]

Unit One: Computer History
Section One: Warming Up
1. Bill Gates  2. John Vincent Atanasoff  3. Steve Paul Jobs  4. Douglas C. Engelbart
Section Two: Real World
Find information
Task I. 1. The Imitation Game.  2. It is a biography film about Alan Turing.  3. It is an apple with one bite missing.  4. Yes, he did.  5. He committed suicide.
Task II. 1. F  2. T  3. F  4. F  5. T
Words Building
Task I. 1. B  2. A  3. C  4. A  5. D
Task II. 1. imitate  2. admirable  3. prefer  4. commitment  5. suicidal
Task III. 1. D  2. F  3. B  4. G  5. A  6. H  7. J  8. E  9. C  10. I
Cheer up Your Ears
Task I. 1. introduced  2. biography; honest  3. character  4. admired; achievement  5. decode  6. endure; suicide  7. tragic  8. memory
Task II. 1. computer  2. documents  3. difference  4. memory  5. quality  6. concerned  7. screen  8. take up  9. amount  10. pleasure
Task III. 1. C  2. A  3. B  4. D  5. A
Table Talk
1. hard disc  2. capacity  3. RAM  4. out-of-date  5. keep track of
Section Three: Brighten Your Eyes
The History of Computers: Nothing epitomizes modern life better than the computer.
Common English Terms for C++ Informatics Olympiad Contests

In C++ informatics olympiad (competitive programming) contests, commonly used English vocabulary falls mainly into the following areas:

1. Basic concepts:
- Algorithm (算法)
- Data structure (数据结构)
- Programming language (编程语言)
- C++ (C++ 编程语言)
- Object-oriented (面向对象)
- Function (函数)
- Variable (变量)
- Constants (常量)
- Loops (循环)
- Conditional statements (条件语句)
- Operators (运算符)
- Control structures (控制结构)
- Memory management (内存管理)

2. Common algorithms and data structures:
- Sorting algorithms (排序算法)
- Searching algorithms (搜索算法)
- Graph algorithms (图算法)
- Tree algorithms (树算法)
- Dynamic programming (动态规划)
- Backtracking (回溯)
- Brute force (暴力破解)
- Divide and conquer (分治)
- Greedy algorithms (贪心算法)
- Integer array (整数数组)
- Linked list (链表)
- Stack (栈)
- Queue (队列)
- Tree (树)
- Graph (图)

3. Programming practice:
- Code optimization (代码优化)
- Debugging (调试)
- Testing (测试)
- Time complexity (时间复杂度)
- Space complexity (空间复杂度)
- Input/output (输入/输出)
- File handling (文件处理)
- Console output (控制台输出)

4. Contest-related:
- IOI (International Olympiad in Informatics)
- NOI (National Olympiad in Informatics)
- ACM-ICPC (ACM International Collegiate Programming Contest)
- Codeforces
- LeetCode
- HackerRank

These terms are widely used in the informatics olympiad community; mastering them makes communication between contestants more efficient and also helps improve programming ability and contest results.
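As a small illustration of one technique from the list above, here is a binary search, sketched in Python for brevity (contest code would typically be C++, but the logic is identical); it runs in O(log n) time on a sorted array:

```python
def binary_search(a, target):
    """Index of target in the sorted list a, or -1 if absent. O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2     # probe the middle element
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1         # discard the left half
        else:
            hi = mid - 1         # discard the right half
    return -1

assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
```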
Multi-objective optimization using genetic algorithms A tutorial
least one objective function j. A solution is said to be Pareto optimal if it is not dominated by any other solution in the solution space. A Pareto optimal solution cannot be improved with respect to any objective without worsening at least one other objective. The set of all feasible
Multi-Objective Optimization Using Genetic Algorithms: A Tutorial
Abdullah Konak1, David W. Coit2, Alice E. Smith3
1 Information Sciences and Technology, Penn State Berks-Lehigh Valley
2 Department of Industrial and Systems Engineering, Rutgers University
3 Department of Industrial and Systems Engineering, Auburn University
objective is possible with methods such as utility theory, the weighted sum method, etc., but the problem lies in the correct selection of the weights or utility functions to characterize the decision-maker's preferences. In practice, it can be very difficult to precisely and accurately select these weights, even for someone very familiar with the problem domain. Unfortunately, small perturbations in the weights can lead to very different solutions. For this reason and others, decision-makers often prefer a set of promising solutions given the multiple objectives. The second general approach is to determine an entire Pareto optimal solution set or a representative subset. A Pareto optimal set is a set of solutions that are nondominated with respect to each other. While moving from one Pareto solution to another, there is always a certain amount of sacrifice in one objective to achieve a certain amount of gain in the other. Pareto optimal solution sets are often preferred to single solutions because they can be practical when considering real-life problems, since the final solution of the decision maker is always a trade-off between crucial parameters. Pareto optimal sets can be of varied sizes, but the size of the Pareto set increases with the number of objectives.

2. Multi-Objective Optimization Formulation

A multi-objective decision problem is defined as follows: given an n-dimensional decision variable vector x = {x1, ..., xn} in the solution space X, find a vector x* that minimizes a given set of K objective functions z(x*) = {z1(x*), ..., zK(x*)}. The solution space X is generally restricted by a series of constraints, such as gj(x*) = bj for j = 1, ..., m, and bounds on the decision variables. In many real-life problems, objectives under consideration conflict with each other. Hence, optimizing x with respect to a single objective often results in unacceptable results with respect to the other objectives.
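The notions of dominance and nondominated (Pareto) sets used throughout this tutorial are easy to state in code. A minimal Python sketch, assuming all K objectives are minimized:

```python
def dominates(x, y):
    """True if objective vector x dominates y (all objectives minimized):
    x is no worse than y in every objective and strictly better in one."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def pareto_front(points):
    """Naive O(n^2) filter returning the nondominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (3, 3) is dominated by (2, 2); the remaining points trade off the objectives.
assert pareto_front([(1, 5), (2, 2), (4, 1), (3, 3)]) == [(1, 5), (2, 2), (4, 1)]
```

Moving along the returned front, any improvement in one objective costs something in the other, which is exactly the trade-off property of Pareto optimal sets described above.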
Therefore, a perfect multi-objective solution that simultaneously optimizes each objective function is almost impossible. A reasonable solution to a multiobjective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. If all objective functions are for minimization, a feasible solution x is said to dominate another feasible solution y ( x
Donald Knuth and the concepts of algorithms and data structures

In the course of the development of computer software, Knuth studied compilers, attribute grammars, and algorithms in depth, put forward the concepts of algorithms and data structures, wrote the classic The Art of Computer Programming, and invented a typesetting system still widely used by mathematical journals around the world.
Knuth is an important founder of computer software design.
高德纳 (Gao Dena) is the Chinese name of Donald E. Knuth, whose name is otherwise transliterated as 唐纳德·克努思.
Unlike the stereotype of the stern, rigid scientist, however, he is a man of warm temperament.
In 1977 he asked for help and chose the Chinese name 高德纳, had a seal carved with it, and then used it widely in computer communications and on computer networks.
His intention was to convey that "the world has become smaller."
Knuth was born on January 10, 1938, in Milwaukee, Wisconsin, USA.
His father was a teacher at a Lutheran church school and also played the organ at church services on Sundays.
Knuth's talent showed very early.
In his first year of middle school, he entered a contest organized by the candy manufacturer Ziegler.
Contestants had to form as many words as possible from the letters of the brand phrase "Ziegler's Giant Bar"; whoever formed the most won.
Knuth feigned illness and took two weeks off school, finally producing 4,500 words, about 2,000 more than the contest judges themselves had found.
His talent for pattern recognition won a television set for the school and a candy bar for every classmate.
In 1956, Knuth graduated from Milwaukee Lutheran High School and entered the Case Institute of Technology in Cleveland, Ohio.
Because the institute offered him a physics scholarship, he gave up his earlier ambition of becoming a musician.
As if to reward him for choosing the right path, an IBM 650 went into operation in the institute's computing center just as he entered the university.
The IBM 650 was not only one of the most important early computers; it was also an ancestor of the personal computer.
Before the IBM 650, computers were so expensive that the great majority of programmers never got into the machine room to operate one directly.
They worked in a frustrating, hands-off batch environment: you handed in a deck of punched cards holding your program and data, a dedicated operator ran the job for you, and you picked up the results the next day.
On the Case Institute's IBM 650, however, every student could be allotted 15 minutes of machine time.
Highlight Removal from Single Image
Pesal Koirala, Markku Hauta-Kasari, and Jussi Parkkinen
Department of Computer Science and Statistics, University of Joensuu, Finland
pkoirala@cs.joensuu.fi

Abstract. A highlight removal method for a single image, requiring no knowledge of the illuminant, is presented. The method is based on Principal Component Analysis (PCA), histogram equalization, and a second-order polynomial transformation. The proposed method needs neither color segmentation nor normalization of the image by the illuminant. The method has been tested on different types of images: images with or without texture, and images taken under different, unknown lighting environments. The results show the feasibility of the method. Implementation of the method is straightforward and computationally fast.

Keywords: PCA, Histogram equalization, Dichromatic reflection model, Polynomial transformation.

1 Introduction

The aim of this study is to remove the highlight from a single color image without prior illumination information and without clustering of colors. We tested our method on a variety of images: with and without texture, with one or two colors, multicolored, taken in different environments, and downloaded directly from the web. The proposed method is based on principal component analysis and histogram equalization of the first principal component. The first principal component refers to the principal component corresponding to the largest eigenvalue. The second principal component is selected or rejected depending on the fidelity ratio of the corresponding eigenvalues, since in most cases it was found that the second PC carries some part of the specular component. However, some specular component still remains in the first PC. The effect of the specular component in the first PC is diffused by applying histogram equalization. By setting a threshold on the fidelity ratio of the second largest eigenvalue, it can be determined whether the second PC is part of the specular component or not. Finally, the image is reconstructed using the histogram-equalized first PC and the other PCs selected according to the threshold. However, the reconstructed image shifts the color values. The original colors are recovered by applying a second-order polynomial transformation, with the basis function calculated from the detected diffuse part, to the reconstructed image. The workflow of the method is shown in Fig. 5.

J. Blanc-Talon et al. (Eds.): ACIVS 2009, LNCS 5807, pp. 176-187, 2009. © Springer-Verlag Berlin Heidelberg 2009

1.1 Previous Work

There is a lot of previous work in this field. Shafer [9] proposed the dichromatic reflection model for color images. In that method the diffuse and specular components were separated based on the parallelogram distribution of colors in RGB space. The method, extended by Klinker et al. [6], showed a T-shaped color distribution containing reflectance and illumination color vectors. The experiment was performed in a lab environment, and it is very difficult to extract the T-shaped cluster for real images due to noise. A polarization filter was used to separate reflection components from gray-level images [2]. The polarization-filter method was extended to RGB color images to find the highlight, relying on the principle that for dielectric materials the specular component is polarized while the diffuse component is not [8]. This method is also suitable for multicolored texture images but needs an additional polarization filter, which may not be practical in all cases. Tan et al. [1] proposed a highlight removal method that requires neither color segmentation nor a polarization filter.
Tan's method is also suitable for multicolored texture images. It proposed the specular-free (SF) image, which is devoid of specular effects but retains the exact geometric information; there is, however, a shift in color values. By using logarithmic differentiation between the specular-free image and the input image, highlight-free pixels were successfully detected. The specular component of each pixel was then removed locally, involving a maximum of only two pixels. Similarly, Shen et al. [5] proposed a chromaticity-based specularity removal method for a single image, even without any local interaction between neighboring pixels and without color segmentation. This method is based on solving the least-squares problem of the dichromatic reflection model at the single-pixel level. However, both methods [1][5] require prior information about the illumination. The illumination can be approximated by an existing color constancy algorithm [4]; the accuracy of the results of both methods then depends on the accuracy of the estimated illuminant. An illumination-constrained inpainting method was described in [14]. It is based on the assumption that the illumination color is uniform throughout the highlight; it is suitable for textured surfaces but computationally slow. A highlight removal method for spectral images was proposed in [3]. That method does not require illuminant estimation; it uses a mixture model of probabilistic PCA to detect the highlight-affected and diffuse parts of the image. Finally, the detected highlight part, mapped across the first eigenvector of the diffuse part, is used to remove the highlight during reconstruction by PCA. A specular-free spectral image was obtained by using Orthogonal Subspace Projection [10].
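Several of the single-image methods above start from a specular-free image. A minimal sketch of the idea (after Tan et al. [1]): subtract the per-pixel minimum of the RGB values from every channel, which removes the achromatic specular offset but shifts the colors. This toy version uses nested lists of (R, G, B) tuples rather than real image arrays:

```python
def specular_free(img):
    """Specular-free image: subtract the per-pixel minimum of the RGB
    values from every channel (after Tan et al.; an illustrative sketch,
    not the authors' implementation)."""
    return [[tuple(c - min(px) for c in px) for px in row] for row in img]

# A 1x2 toy image: a bright highlight pixel next to a darker diffuse pixel.
img = [[(200, 180, 150), (90, 60, 30)]]
assert specular_free(img) == [[(50, 30, 0), (60, 30, 0)]]
# The specular offset is gone, but absolute color values are shifted,
# which is the color shift these methods must correct afterwards.
```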
In that method the projector maps the radiance spectrum R to the subspace orthogonal to the illumination spectrum.

2 Highlight Detection

The highlight-affected area of a given image is determined at the single-pixel level, based on the difference between the original image and the modified specular-free (MSF) image [5]. The MSF image is calculated by adding the mean of the minimum RGB color values of the original image to the specular-free (SF) image, as shown in Eq. (2). The SF image is calculated by subtracting, at each pixel, the minimum of the RGB color values, as shown in Eq. (1):

SF_i(x,y) = I_i(x,y) - min(I_1(x,y), I_2(x,y), I_3(x,y))    (1)
MSF_i(x,y) = SF_i(x,y) + I_min    (2)

In Eq. (1) and Eq. (2), SF and MSF are the specular-free and modified specular-free images, and I_i(x,y) is the value of the i-th color channel at pixel position (x,y) of the original image I. The subscript i ranges from 1 to 3, corresponding to the red, green, and blue channels. A threshold on the difference between the original and MSF images classifies each pixel as highlight or highlight-free:

pixel = highlight if d_i(x,y) > th for all i, diffuse otherwise, where d_i(x,y) = I_i(x,y) - MSF_i(x,y).

The average value of the minimum color channel has been used as the threshold [5]. However, setting an accurate threshold is a challenging task. In this experiment the threshold was set after visual assessment of the classification results for each image. The result of highlight detection for the pear image is shown in Fig. 1. The threshold values used are listed in Table 1.

[Fig. 1. Highlight detection: (a) original image; (b) black is the diffuse part, white is the highlight part]

3 Highlight Removal

3.1 Dichromatic Reflection Model

Shafer et al. [9] described the dichromatic reflection model for the reflectance of dielectric objects. The model suggests that the reflection can be represented by a linear combination of diffuse and specular components. Based on this model, the intensity of the image for the single-illuminant case is defined as in Eq. (3):

I(x,y) = K_d(x,y) Σ_ω R(λ,x,y) S(λ) q(λ) + K_s(x,y) Σ_ω S(λ) q(λ)    (3)

where I(x,y) is the color vector of red, green, and blue at pixel position (x,y), q is the camera sensor sensitivity for each color, and K_d and K_s are the weighting factors for diffuse and specular reflection, respectively. The weighting factors depend on the geometric structure of the surface. R(λ) is the diffuse reflectance at pixel position (x,y), and S(λ) is the spectral power distribution of the illuminant, which is independent of the surface geometry. λ denotes each wavelength within the visible range ω, and Σ_ω denotes summation over the visible range. The specular reflectance is not modeled separately, since the specular reflection is assumed to equal the spectral power distribution of the light source. Eq. (3) can be rewritten in the simpler form of Eq. (4):

I(x,y) = K_d(x,y) I_d(x,y) + K_s(x,y) I_s(x,y)    (4)

where I_d(x,y) = Σ_ω R(λ,x,y) S(λ) q(λ) and I_s(x,y) = Σ_ω S(λ) q(λ). I_s(x,y) can simply be written I_s, since it is independent of the surface geometry. As a result, the specular component is a scaled copy of the illuminant color.

3.2 Principal Component Analysis

Our goal is to remove the specular part (highlight) from the image without using illuminant information. We use Principal Component Analysis [11] to describe the image as a linear combination of components. An RGB image can be represented by three principal components weighted by three eigenvectors, since it has three color channels. The eigenvectors are calculated from the correlation matrix of the given image and sorted in descending order of eigenvalue. The representation of the image by principal components is shown in Eq. (5):

I = V P    (5)

where I is the given image in matrix form of size L × N, L is the number of color channels (here 3, for RGB), and N is the
number of pixels in the image. Similarly, V = [V_1 V_2 V_3] is the L × L matrix of the L principal component vectors, each of size L × 1, and P = [P_1; P_2; P_3] is the matrix of principal components, of size L × N (the semicolon separates rows); each principal component has size 1 × N. Without loss of generality, Eq. (5) can be rewritten as Eq. (6):

I = V_1 P_1 + V_2 P_2 + V_3 P_3    (6)

[Fig. 2. Highlight removal: (a) original image; (b) result without histogram equalization; (c) result with histogram equalization]

where V_i and P_i are the principal component vectors and the corresponding principal components, respectively. We assume that one of the principal components may contain part of the specular reflection. In our experiments we found that in most cases the second principal component, corresponding to the second largest eigenvalue, contains part or all of the specular reflection. However, this is not always true. A threshold should be set on the fidelity ratio of the second largest eigenvalue to detect whether the second principal component contains the specular component:

second PC = diffuse component if f_2 > th, specular component otherwise,

where f_2 is the fidelity ratio of the second largest eigenvalue and th is the threshold; here we set th = 2. The fidelity ratio of the j-th eigenvalue is calculated as the ratio of the j-th eigenvalue to the sum of all eigenvalues, as shown in Eq. (7):

f_j = σ_j / (Σ_{i=1}^{L} σ_i) × 100    (7)

where σ_j is the j-th eigenvalue. If the second principal component is detected as specular, it is not used in the reconstruction process; otherwise it is used. Table 1 lists the fidelity ratio of each eigenvalue and the principal components selected accordingly for the reconstruction process for different samples.
Nevertheless, in experiments it was found that the first principal component still contains some specular part. To diffuse the specular effect remaining in the first principal component, histogram equalization [12] was applied to the first principal component normalized between 0 and 1. Accordingly, Eq. (6) is rewritten as Eq. (8):

Ĩ = V_1 P_h + V_2 P_2 + V_3 P_3    (8)

where P_h is the histogram-equalized first principal component and Ĩ is the reconstructed image. The middle term on the right-hand side is included in or removed from the reconstruction according to the fidelity ratio of the second largest eigenvalue. The effect of histogram equalization on the final result is illustrated in Fig. 2.

3.3 Polynomial Transformation

The image Ĩ reconstructed by PCA is free of highlights, but its colors are shifted. The original colors of the highlight-free image are obtained using a second-order polynomial transformation. The main problem is to find the weight function for this transformation. The weight function is calculated from the transformation between the detected diffuse part of the original image and the corresponding part of the reconstructed image, as shown in Eq. (9):

I_d = W M̃_d    (9)

In Eq. (9), I_d is the diffuse-detected part of the original image; its size is L × N, where L is the number of channels (3 for an RGB image) and N is the number of diffuse-detected pixels. M̃_d is the second-order polynomial extension of the corresponding diffuse-detected part Ĩ_d of the reconstructed image Ĩ. The diffuse-detected part is I_d = [R G B]^T, where R, G, and B are the vectors of red, green, and blue values, and M̃_d = [R G B R∗G R∗B G∗B R∗R G∗G B∗B]^T, where []^T denotes the matrix transpose. As a result, the basis function W has size 3 × 9 and M̃_d has size 9 × N. The basis function W can be calculated by a least-squares pseudo-inverse, as shown in Eq. (10):

W = I_d [M̃_d^T M̃_d]^{-1} M̃_d^T    (10)

where []^{-1} is the inverse of
a matrix. The calculated basis function W is applied to the whole reconstructed image Ĩ, as shown in Eq. (11), to obtain the accurate diffuse image D. In Eq. (11), M̃ is the second-order polynomial extension of the reconstructed image Ĩ:

D = W M̃    (11)

Figs. 3 and 4 show the result of the polynomial transformation applied to the PCA-reconstructed image. The same polynomial method can be applied directly to a specular-free image to recover the original color; in that case Ĩ is the specular-free image calculated as described in [1][5].

Table 1. Fidelity ratio analysis. f1, f2 and f3 are the fidelities of the first to third largest eigenvalues. The fifth column shows the PCs selected in the reconstruction process. Threshold is the threshold value for specular detection.

Image       | f1      | f2      | f3     | PC selected      | Threshold
Pear        | 98.697  | 1.225   | 0.078  | 1st and 3rd      | 0.1
Face        | 99.552  | 0.392   | 0.057  | 1st and 3rd      | 0.3
Yellow ball | 98.049  | 1.248   | 0.7032 | 1st              | 0.3
Green ball  | 99.606  | 0.380   | 0.015  | 1st              | 0.1
Hat         | 83.6973 | 12.9819 | 3.321  | 1st, 2nd and 3rd | 0.3
Fish        | 90.6494 | 7.4925  | 1.8581 | 1st, 2nd and 3rd | 0.3

[Fig. 3. Highlight removal: (a) original image; (b) reconstruction by the first PC; (c) by the histogram-equalized first PC; (d) by the second PC; (e) by the third PC; (f) by the histogram-equalized first PC plus the second and third PCs; (g) image after the second-order polynomial transformation. Reconstructions by the second and third PC have been scaled for visualization.]

[Fig. 4. Highlight removal: (a) original image; (b) reconstruction by the first PC; (c) by the histogram-equalized first PC; (d) by the second PC; (e) by the third PC; (f) by the histogram-equalized first PC plus the third PC; (g) image after the second-order polynomial transformation. Reconstructions by the second and third PC have been scaled for visualization.]

The result then depends on the
specular-free image. Sometimes a specular-free image calculated without normalization retains a gray or black highlight in the specular-detected region even after the achromatic color is removed; in that case the accurate diffuse color cannot be recovered. However, this does not hamper highlight detection itself, so in that case the PCA-based method is a good solution.

4 Results and Discussion

The dichromatic reflection model describes the reflection of a surface as a linear combination of specular and diffuse reflection components [9]. The scaling factors of the diffuse and specular reflections depend on the geometric properties of the surface; therefore the components should be separated pixel-wise. Using multiple images of the same surface is the easy solution, but in practice it is often not feasible, since in most cases we must remove the highlight or specular part from a given single image.

[Fig. 5. Block diagram describing the method to remove highlights from an image]
[Fig. 6. Face image with highlight, and image reconstructed without highlight]

A polarizing filter might be a solution, but
In the case of an unknown light source, the chromaticity of the light source can be estimated by a color constancy algorithm, but the result then depends on the estimated value, and correct estimation of the illuminant color from a single image is itself a persistent problem. In this research we employed linear principal components to remove highlights, since the dichromatic reflection model is a linear relationship between the diffuse and specular components. A three-channel color image can be separated into three components by PCA; here we assume that one of the three components may contain the highlight.

Fig. 7. Hat image with highlight, and the image reconstructed without highlight.

Fig. 8. Green ball image with highlight, and the image reconstructed without highlight.

Fig. 9. Pear image with highlight, and the image reconstructed without highlight.

Fig. 10. Yellow ball image with highlight, and the image reconstructed without highlight.

Fig. 11. Fish image with highlight, and the image reconstructed without highlight.

Table 2. Average S-CIELAB color difference ΔE between the diffuse-detected part of the original image and the final image after the polynomial transformation. D is the standard deviation of the S-CIELAB color difference.

        Pear   Face   Yellow ball  Green ball  Hat    Fish
ΔE      0.954  0.354  2.802        4.036       0.415  0.445
D       2.877  0.801  5.526        4.529       1.275  1.673

After separating the image into three components, we found that in most cases the principal component with the second-highest variance captures some of the highlight effect of the image. However, this is not true for all images: Fig. 3 and Fig. 4 show cases without and with highlight in the second principal component, respectively. We tested more than twenty different images, from uniformly colored to multicolored textured ones, and found that by setting a threshold on the fidelity ratio corresponding to the second-largest eigenvalue, it can be detected whether the second
principal component carries highlight. We obtained feasible results with the threshold value set to 2. The eigenvalue analysis for some sample images is listed in Table 1; a second principal component whose fidelity ratio is below the threshold is not included in the reconstruction process. In most cases, however, the first principal component was found to still carry some of the specular part in addition to most of the diffuse part. To spread out the specular effect, histogram equalization was applied to the first principal component. The reconstruction obtained with the method described above is free of highlight, but its color is shifted. The original color was therefore estimated using a basis function calculated from a second-order polynomial transformation between the diffuse-detected part of the reconstructed image and the original image. The final result, obtained by applying this polynomial transformation to the image reconstructed from the histogram-equalized first principal component and the other selected components, is promising, as shown in Figs. 6-11. To our knowledge there is no ready-made evaluation method, so we applied the spatial extension of the CIELAB color difference formula, S-CIELAB [13], to the diffuse-detected parts of the original image and the final diffuse image. The average S-CIELAB color difference and its standard deviation are shown in Table 2.

5 Conclusions

A highlight removal technique based on Principal Component Analysis was proposed. In the proposed method, histogram equalization is applied to the first principal component of the image. In the experiments it was found that in most cases the second principal component carries a large portion of the highlight.
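The pipeline described above (PCA on the three colour channels, histogram equalization of the first component, and inclusion of the second component only when its fidelity ratio exceeds the threshold of 2) can be sketched in NumPy as follows. This is our own simplified reconstruction, not the authors' code: the function names, the rescaling of the equalized component back to its original range, and the decision to always keep the third component are assumptions.

```python
import numpy as np

def pca_components(img):
    """PCA of an (H, W, 3) image treated as H*W three-channel samples.
    Returns the channel mean, the eigenvectors (columns, sorted by
    descending eigenvalue) and the fidelity ratios
    f_i = 100 * lambda_i / sum_j lambda_j (cf. Table 1)."""
    X = img.reshape(-1, 3).astype(float)
    mean = X.mean(axis=0)
    cov = np.cov((X - mean).T)                # (3, 3) channel covariance
    vals, vecs = np.linalg.eigh(cov)          # eigh returns ascending order
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    return mean, vecs, 100.0 * vals / vals.sum()

def equalize(values, bins=256):
    """Histogram equalization of a 1-D array, returning values in [0, 1]."""
    hist, edges = np.histogram(values, bins=bins)
    cdf = hist.cumsum() / values.size
    idx = np.clip(np.searchsorted(edges[1:], values), 0, bins - 1)
    return cdf[idx]

def remove_highlight(img, f2_threshold=2.0):
    """Equalize the first PC, keep the second PC only if its fidelity
    ratio exceeds the threshold (2 in the paper), and reconstruct."""
    mean, vecs, fidelity = pca_components(img)
    scores = (img.reshape(-1, 3).astype(float) - mean) @ vecs
    s0 = scores[:, 0]
    # Spread the specular peak of the first PC, then map back to its range.
    scores[:, 0] = s0.min() + equalize(s0) * (s0.max() - s0.min())
    if fidelity[1] <= f2_threshold:
        scores[:, 1] = 0.0                    # second PC rejected
    out = scores @ vecs.T + mean
    return out.reshape(img.shape), fidelity
```

The fidelity ratios always sum to 100 and are returned in descending order, so the threshold test above reads directly off the second entry, mirroring the f2 column of Table 1.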
However, this was not true in all cases, so the second principal component was included only if the fidelity ratio of the second-largest eigenvalue exceeded the threshold value, and was rejected otherwise. The histogram equalization of the first principal component shifts the color of the reconstructed image, although the reconstruction is free of highlight. The color was therefore corrected with a second-order polynomial transformation whose weights were obtained from the mapping between the pixels of the reconstructed image and those of the original image at the highlight-free pixels of the original. The accuracy of the result was evaluated with the S-CIELAB color difference formula in the highlight-free area. The method was tested on different types of images without considering camera noise or illuminant type, and the results obtained are quite promising.

References

1. Tan, R.T., Ikeuchi, K.: Separating Reflection Components of Textured Surfaces Using a Single Image. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 178–193 (2005)
2. Wolff, L.B., Boult, T.: Constraining object features using a polarization reflectance model. IEEE Transactions on Pattern Analysis and Machine Intelligence 13(7), 635–657 (1991)
3. Bochko, V., Parkkinen, J.: Highlight Analysis Using a Mixture Model of Probabilistic PCA. In: Proceedings of the 4th WSEAS International Conference on Signal Processing, Robotics and Automation, Salzburg, Austria, Article No. 15 (2005). ISBN 960-8457-09-2
4. Lee, H.-C.: Method for computing the scene-illuminant chromaticity from specular highlights. J. Opt. Soc. Am. A 3, 1694–1699 (1986)
5. Shen, H.-L., Zhang, H.-G., Shao, S.-J., Xin, J.H.: Chromaticity-based separation of reflection components in a single image. Pattern Recognition 41, 2461–2469 (2008)
6. Klinker, G.J., Shafer, S.A., Kanade, T.: The Measurement of Highlights in Color Images. Int'l J. Computer Vision 2, 7–32 (1990)
7. Schluns, K., Koschan, A.: Global and Local Highlight Analysis in Color Images. In: Proceedings of CGIP 2000, First
International Conference on Color in Graphics and Image Processing, Saint-Etienne, France, October 1–4, pp. 300–304 (2000)
8. Nayar, S.K., Fang, X.S., Boult, T.: Separation of reflection components using color and polarization. Int'l J. Computer Vision 21, 163–186 (1997)
9. Shafer, S.A.: Using color to separate reflection components. Color Res. App. 10, 210–218 (1985)
10. Fu, Z., Tan, R.T., Caelli, T.M.: Specular Free Spectral Imaging Using Orthogonal Subspace Projection. In: Proceedings of the 18th International Conference on Pattern Recognition (2006)
11. Jolliffe, I.T.: Principal Component Analysis. Springer Series in Statistics. Springer, Heidelberg (2002)
12. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn. Prentice-Hall, Englewood Cliffs (2002)
13. Zhang, X., Wandell, B.: A Spatial Extension of CIELAB for Digital Color Image Reproduction, /~brian/scielab/scielab3/scielab3.pdf
14. Tan, P., Lin, S., Quan, L., Shum, H.-Y.: Highlight Removal by Illumination-Constrained Inpainting. In: Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV 2003), vol. 2 (2003)
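The colour correction of Eq. (11) above is an ordinary least-squares fit, which can be sketched in NumPy as below. The ten-term basis and the function names are our assumption of what "second-order polynomial extension" means here (the paper does not list its terms), and the code applies the weights as M̃W rather than WM̃ purely for row-major convenience.

```python
import numpy as np

def poly_expand(rgb):
    """Second-order polynomial extension of an (N, 3) RGB array:
    per pixel [R, G, B, R^2, G^2, B^2, RG, RB, GB, 1], an assumed
    ten-term basis for the extension M~ of Eq. (11)."""
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([R, G, B, R*R, G*G, B*B, R*G, R*B, G*B,
                     np.ones_like(R)], axis=1)

def fit_color_correction(reconstructed, original, diffuse_mask):
    """Least-squares weights mapping the polynomial extension of the
    reconstructed pixels to the original colours, fitted only on pixels
    marked diffuse (highlight-free) by the detection step."""
    M = poly_expand(reconstructed[diffuse_mask])            # (n, 10)
    W, *_ = np.linalg.lstsq(M, original[diffuse_mask], rcond=None)
    return W                                                # (10, 3)

def apply_color_correction(reconstructed, W):
    """Eq. (11): the fitted weights applied to every pixel."""
    return poly_expand(reconstructed) @ W
```

Fitting only on diffuse-detected pixels keeps the highlight region from biasing the colour mapping; the same W is then applied to the whole (flattened) reconstructed image.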
An English Composition: The Father of the Computer
As one of the most influential figures in modern technology, John Vincent Atanasoff is widely regarded as the father of the computer. Born in 1903 in Hamilton, New York, Atanasoff developed a passion for mathematics at an early age. He went on to study electrical engineering at the University of Florida, where he earned his bachelor's degree; he later took a master's degree at Iowa State College and a doctorate at the University of Wisconsin.

In the early 1930s, Atanasoff began to explore the idea of building a machine that could perform complex mathematical calculations. He believed that such a machine would revolutionize the fields of science and engineering, and he set out to design and build the first electronic digital computer.

Atanasoff's computer was based on the idea of using binary digits to represent information, a radical departure from the decimal system used in most calculating machines of the time. His machine used vacuum tubes to perform calculations and was capable of carrying out basic arithmetic operations.

Despite the success of his early prototypes, Atanasoff faced numerous challenges in developing a fully functional computer. He struggled to secure funding for his project and faced stiff competition from other researchers working on similar machines.

Atanasoff persevered nonetheless, and by 1942 his machine, now known as the Atanasoff-Berry Computer, was working. It could perform complex calculations in a fraction of the time a human would need for the same work.

Atanasoff's computer marked a major milestone in the history of computing and paved the way for the development of modern digital computers. His work laid the foundation for the first electronic computers, and his contributions to the field continue to be felt today.

In recognition of his contributions to computing, Atanasoff was awarded numerous honors throughout his career.
He received the National Medal of Technology in 1990 in recognition of his pioneering work on electronic computing.

In conclusion, John Vincent Atanasoff is widely regarded as the father of the computer, and his contributions to the field of computing continue to be felt today. His work laid the foundation for the development of modern digital computers, and his legacy will continue to inspire future generations of computer scientists and engineers.
North American Latin Exam: Practice Paper

Part I. Multiple Choice (2 points each, 20 points total)

1. Which of the following is the Latin word for "book"?
A. Liber  B. Aqua  C. Puer  D. Domus

2. How is "I love you" expressed in Latin?
A. Amo vos  B. Amo te  C. Amo nos  D. Amo tibi

3. Which sentence correctly expresses "I am learning Latin"?
A. Ego discere linguam Latinam.
B. Ego discendo linguam Latinam.
C. Ego discendo Latine.
D. Ego discere Latine.

4. The Latin verb for "to write" is:
A. Scribo  B. Lego  C. Audio  D. Video

5. Which of the following is the Latin word for "time"?
A. Tempus  B. Mundus  C. Civitas  D. Pax

Part II. Fill in the Blanks (1 point each, 10 points total)

6. The Latin word for "sun" is ______.
7. "We are going to the market" can be expressed in Latin as ______.
8. "He is not at home" can be translated into Latin as ______.
9. The Latin expression for "They are eating" is ______.
10. The Latin translation of "She came yesterday" is ______.
Part III. Translation (5 points each, 30 points total)

11. Translate the following Latin sentences into English:
- "Venite ad me, omnes qui laboratis et onerati estis."
- "Ego vos reficiam."

12. Translate the following English sentences into Latin:
- "I am a teacher."
- "We are going to the library."

Part IV. Reading Comprehension (3 points each, 20 points total)

Read the following Latin text and answer the questions:

Caesar, vir magnus, in Gallia bellum gerebat. Multas res mirabiles fecit. In uno tempore, cum magnam victoriam cepisset, ad suos milites orationem habuit. "Gratias agite," inquit, "Deo, qui nos ad hanc victoriam perduxit."

13. Quae res Caesar in Gallia gessit?
14. Quid est mirabile quod Caesar fecit?
15. Quis est Deus quem Caesar memoravit?
16. Quid Caesar militibus suis dixit post victoriam?

Part V. Writing (20 points)

17. Scribite brevem narrationem de aliquo eventu in historia Romana. Utimini formis perfecti temporis et imperfecti temporis.

Note: this paper is for practice only; it is not official examination material.