Languagepoints_U2_forS
Getting Started with the Python Programming Language: Guru99 Tutorial Guide
1) What is Python? What are the benefits of using Python?
Python is a programming language with objects, modules, threads, exceptions and automatic memory management. Its benefits are that it is simple and easy to learn, portable, extensible, has built-in data structures, and is open source.

2) What is PEP 8?
PEP 8 is a coding convention, a set of recommendations about how to write Python code so that it is more readable.

3) What is pickling and unpickling?
The pickle module accepts any Python object, converts it into a string representation and dumps it into a file using the dump function; this process is called pickling. The reverse process of retrieving the original Python objects from the stored string representation is called unpickling.

4) How is Python interpreted?
Python is an interpreted language: a Python program runs directly from the source code. The source code written by the programmer is converted into an intermediate language, which is in turn translated into the machine language that is executed.

5) How is memory managed in Python?
• Python memory is managed by the Python private heap space. All Python objects and data structures are located in a private heap. The programmer does not have access to this private heap; the interpreter takes care of it.
• The allocation of heap space for Python objects is done by the Python memory manager. The core API gives the programmer access to some tools for coding.
• Python also has a built-in garbage collector, which recycles all the unused memory, frees it and makes it available to the heap space.

6) What tools help to find bugs or perform static analysis?
PyChecker is a static analysis tool that detects bugs in Python source code and warns about the style and complexity of the code. Pylint is another tool that verifies whether a module meets the coding standard.

7) What are Python decorators?
A Python decorator is a specific change to the Python syntax that lets us alter functions easily, by wrapping one function in another.

8) What is the difference between list and tuple?
The difference between a list and a tuple is that a list is mutable while a tuple is not. A tuple can be hashed, for example as a key for a dictionary.

9) How are arguments passed: by value or by reference?
Everything in Python is an object and all variables hold references to objects. The references are what get passed, so you cannot rebind the caller's variable from inside a function; however, you can change the object itself if it is mutable.

10) What are dict and list comprehensions?
They are syntax constructions that ease the creation of a dictionary or a list from an existing iterable.

11) What built-in types does Python provide?
Python's built-in types are either mutable or immutable.
Mutable built-in types: lists, sets, dictionaries.
Immutable built-in types: strings, tuples, numbers.

12) What is a namespace in Python?
In Python, every name that is introduced has a place where it lives and can be looked up. This is known as a namespace. It is like a box in which a variable name is mapped to the object placed there. Whenever the variable is searched for, this box is searched to get the corresponding object.
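A minimal sketch of pickling and unpickling from question 3; the file name data.pkl is only an example.

import pickle

data = {"language": "Python", "topics": ["lists", "tuples", "dicts"]}

# Pickling: dump() converts the object and writes it to a file
with open("data.pkl", "wb") as f:
    pickle.dump(data, f)

# Unpickling: load() rebuilds an equivalent object from the stored representation
with open("data.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == data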
13) What is lambda in Python?
It is a single-expression anonymous function, often used as an inline function.

14) Why do lambda forms in Python not have statements?
A lambda form does not contain statements because it is used to build a new function object and return it at runtime; its body is limited to a single expression.

15) What is pass in Python?
pass is a no-operation Python statement; in other words, it is a placeholder in a compound statement where a blank should be left and nothing needs to be written.

16) What are iterators in Python?
In Python, iterators are used to iterate over a group of elements in containers such as lists.

17) What is unittest in Python?
The unit testing framework in Python is known as unittest. It supports sharing of setup code, test automation, shutdown code for tests, aggregation of tests into collections, and so on.

18) What is slicing in Python?
Slicing is a mechanism to select a range of items from sequence types such as lists, tuples and strings.

19) What are generators in Python?
Generators are a way of implementing iterators. A generator looks like a normal function except that it produces its values with the yield expression.

20) What is a docstring in Python?
A Python documentation string is known as a docstring; it is a way of documenting Python functions, modules and classes.

21) How can you copy an object in Python?
To copy an object in Python, you can use copy.copy() or copy.deepcopy() for the general case. You cannot copy all objects, but most of them.

22) What is a negative index in Python?
Python sequences can be indexed with positive and negative numbers. For positive indexes, 0 is the first index, 1 is the second index and so forth. For negative indexes, (-1) is the last index and (-2) is the second last index and so forth.

23) How can you convert a number to a string?
To convert a number into a string, use the built-in function str(). If you want an octal or hexadecimal representation, use the built-in function oct() or hex().

24) What is the difference between xrange and range?
In Python 2, xrange returns an xrange object while range returns a list. xrange uses the same amount of memory no matter what the range size is, because it produces values lazily instead of building the whole list.

25) What are a module and a package in Python?
In Python, a module is the way to structure a program. Each Python program file is a module, which can import other modules and use their objects and attributes. A folder of Python programs is a package of modules. A package can contain modules or subfolders.

26) What are the rules for local and global variables in Python?
Local variables: if a variable is assigned a new value anywhere within the function's body, it is assumed to be local.
Global variables: variables that are only referenced inside a function are implicitly global.

27) How can you share global variables across modules?
To share global variables across modules within a single program, create a special module (for example config). Import the config module in all modules of your application. The module will then be available as a global variable across modules.
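A short illustration of slicing, negative indexes and a generator (questions 18, 19 and 22 above); the numbers are arbitrary.

def squares(n):
    # Generator: yields one value at a time instead of building a full list
    for i in range(n):
        yield i * i

values = list(squares(6))       # [0, 1, 4, 9, 16, 25]
print(values[1:4])              # slicing -> [1, 4, 9]
print(values[-1], values[-2])   # negative indexes -> 25 16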
28) How can you make a Python script executable on Unix?
To make a Python script executable on Unix, you need to do two things:
• the script file's mode must be executable, and
• the first line must begin with the shebang #! followed by the interpreter path (for example #!/usr/local/bin/python).

29) How do you delete a file in Python?
By using os.remove(filename) or os.unlink(filename).

30) How can you generate random numbers in Python?
To generate random numbers in Python, import the random module and call one of its functions, for example import random followed by random.random(). This returns a random floating-point number in the range [0, 1).

31) How can you access a module written in Python from C?
You can access a module written in Python from C with the following call: module = PyImport_ImportModule("<modulename>");

32) What is the use of the // operator in Python?
It is the floor division operator, which divides two operands and keeps only the digits before the decimal point in the quotient. For instance, 10 // 5 = 2 and 10.0 // 5.0 = 2.0.

33) Mention five benefits of using Python.
• Python comprises a huge standard library for most Internet platforms, such as email and HTML.
• Python does not require explicit memory management, as the interpreter itself allocates memory for new variables and frees them automatically.
• It provides easy readability thanks to its clean syntax.
• It is easy to learn for beginners.
• Its built-in data types save programming time and the effort of declaring variables.

34) What is the use of the split function in Python?
The split function breaks a string into shorter strings using the defined separator. It returns a list of all the words present in the string.

35) What is Flask and what are its benefits?
Flask is a web micro-framework for Python based on "Werkzeug, Jinja2 and good intentions", and it is BSD licensed. Werkzeug and Jinja2 are two of its dependencies. Being a micro-framework, it has little or no dependency on external libraries, which makes the framework light, with few dependencies to update and fewer security bugs.

36) What is the difference between Django, Pyramid and Flask?
Flask is a "microframework" built primarily for small applications with simpler requirements; for extra functionality you have to use external libraries, and it is ready to use out of the box. Pyramid is built for larger applications. It provides flexibility and lets developers use the right tools for their project: they can choose the database, URL structure, templating style and more. Pyramid is heavily configurable. Like Pyramid, Django can also be used for larger applications; it includes an ORM.

37) What is Flask-WTF and what are its features?
Flask-WTF offers simple integration with WTForms. Its features include:
• integration with WTForms
• secure forms with a CSRF token
• global CSRF protection
• internationalization integration
• reCAPTCHA support
• file upload that works with Flask-Uploads

38) What is the common way for a Flask script to work?
The common way for a Flask script to work is that it is given either the import path for your application or the path to a Python file.

39) How can you access sessions in Flask?
A session basically allows you to remember information from one request to another. In Flask, a session uses a signed cookie, so the user can look at the session contents. The user can modify the session only if they have the secret key Flask.secret_key.
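A quick sketch of the split function, the random module and the // operator from questions 30, 32 and 34 above; the strings and numbers are arbitrary.

import random

text = "Python is simple and powerful"
print(text.split(" "))    # ['Python', 'is', 'simple', 'and', 'powerful']

print(random.random())    # a random float in the range [0, 1)

print(10 // 3)            # floor division -> 3
print(10.0 // 3.0)        # -> 3.0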
40) Is Flask an MVC model and, if yes, give an example showing the MVC pattern for your application.
Basically, Flask is a minimalistic framework that behaves much like an MVC framework, so the MVC pattern fits Flask well. Consider the following example:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World"

app.run(debug=True)

In this code:
• the configuration part is: from flask import Flask and app = Flask(__name__)
• the view part is: @app.route("/") together with def hello(): return "Hello World"
• while the model, or main, part is: app.run(debug=True)

41) Explain database connections in Python Flask.
Flask supports database-powered applications (RDBMS). Such a system requires creating a schema, which means piping the schema.sql file into the sqlite3 command, so you need the sqlite3 command installed in order to create or initialize the database in Flask. Flask offers three hooks around each request for database work:
• before_request(): called before a request; passes no arguments
• after_request(): called after a request; is passed the response that will be sent to the client
• teardown_request(): called when an exception is raised and a response is not guaranteed. It is called after the response has been constructed; it is not allowed to modify the request, and its return values are ignored.

42) You have multiple Memcached servers running with Python, and one of the Memcached servers that holds your data fails. Will the client ever try to get key data from that failed server?
The data in the failed server won't get removed, but there is a provision for automatic failover, which you can configure for multiple nodes. Failover can be triggered by any kind of socket or Memcached-server-level error, not by normal client errors such as adding an existing key.

43) How can you minimize Memcached server outages in your Python development?
• When one instance fails, several of them go down, which puts a much larger load on the database server when the lost data is reloaded as clients make requests. To avoid this, write your code to minimize cache stampedes; that will leave a minimal impact.
• Another way is to bring up an instance of Memcached on a new machine using the lost machine's IP address.
• Code is another option for minimizing server outages, as it gives you the liberty to change the Memcached server list with minimal work.
• Setting a timeout value is another option that some Memcached clients implement for server outages. When your Memcached server goes down, the client will keep trying to send a request until the timeout limit is reached.

44) What is the dogpile effect? How can you prevent it?
The dogpile effect refers to the event in which a cache expires and a website is hit by multiple client requests at the same time. This effect can be prevented by using a semaphore lock: when the value expires, the first process acquires the lock and starts generating the new value.
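The three request hooks from question 41 can be sketched as follows; this is a minimal illustration, and the "database connection" stored on g is just a placeholder object, not a real connection.

from flask import Flask, g

app = Flask(__name__)

@app.before_request
def open_db():
    # Called before each request; typically opens a database connection
    g.db = object()  # placeholder for a real connection

@app.after_request
def add_headers(response):
    # Called after each request; receives the response that will be sent
    return response

@app.teardown_request
def close_db(exception):
    # Called even if an exception was raised; its return value is ignored
    g.pop("db", None)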
45) How should Memcached not be used in your Python project?
• A common misuse of Memcached is to use it as a data store rather than as a cache.
• Never use Memcached as the only source of the information you need to run your application; data should always be available through another source as well.
• Memcached is just a key/value store: it cannot perform queries over the data or iterate over the contents to extract information.
• Memcached does not offer any form of security, either encryption or authentication.
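To make the "cache, not data store" point concrete, here is a small cache-aside sketch; a plain dict stands in for a Memcached client, and load_from_database is a hypothetical stand-in for the authoritative data source.

cache = {}  # stand-in for a Memcached client

def load_from_database(key):
    # Hypothetical source of truth; the cache must never be the only copy
    return {"user:1": "Alice", "user:2": "Bob"}.get(key)

def get_value(key):
    value = cache.get(key)
    if value is None:                      # cache miss, expiry, or a lost server
        value = load_from_database(key)    # the data is still available elsewhere
        cache[key] = value                 # repopulate the cache for next time
    return value

print(get_value("user:1"))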
(2019 edition) Senior Two English language points (1)
Exploiting Similarities among Languages for Machine Translation - google word2vec
Exploiting Similarities among Languages for Machine TranslationTomas Mikolov Google Inc.Mountain Viewtmikolov@ Quoc V .Le Google Inc.Mountain View qvl@ Ilya Sutskever Google Inc.Mountain View ilyasu@AbstractDictionaries and phrase tables are the basis of modern statistical machine translation sys-tems.This paper develops a method that can automate the process of generating and ex-tending dictionaries and phrase tables.Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping be-tween languages from small bilingual data.It uses distributed representation of words and learns a linear mapping between vector spaces of languages.Despite its simplicity,our method is surprisingly effective:we can achieve almost 90%precision@5for transla-tion of words between English and Spanish.This method makes little assumption about the languages,so it can be used to extend and re-fine dictionaries and translation tables for any language pairs.1IntroductionStatistical machine translation systems have been developed for years and became very successful in practice.These systems rely on dictionaries and phrase tables which require much efforts to generate and their performance is still far behind the perfor-mance of human expert translators.In this paper,we propose a technique for machine translation that can automate the process of generating dictionaries and phrase tables.Our method is based on distributed representations and it has the potential to be com-plementary to the mainstream techniques that rely mainly on the raw counts.Our study found that it is possible to infer missing dictionary entries using distributed representations of words and phrases.We achieve this by learning a linear projection between vector spaces that rep-resent each language.The method consists of two simple steps.First,we build monolingual models of languages using large amounts of text.Next,we use a small bilingual dictionary to learn a linear projec-tion between the languages.At the test time,we can translate any word that has been seen in the mono-lingual corpora by projecting its vector representa-tion from the source language space to the target language space.Once we obtain the vector in the target language space,we output the most similar word vector as the translation.The representations of languages are learned using the distributed Skip-gram or Continuous Bag-of-Words (CBOW)models recently proposed by (Mikolov et al.,2013a).These models learn word representations using a simple neural network archi-tecture that aims to predict the neighbors of a word.Because of its simplicity,the Skip-gram and CBOW models can be trained on a large amount of text data:our parallelized implementation can learn a model from billions of words in hours.1Figure 1gives simple visualization to illustrate why and how our method works.In Figure 1,we visualize the vectors for numbers and animals in En-glish and Spanish,and it can be easily seen that these concepts have similar geometric arrangements.The reason is that as all common languages share con-cepts that are grounded in the real world (such as1The code for training these models is available at https:///p/word2vec/a r X i v :1309.4168v 1 [c s .C L ] 17 S e p 2013Figure1:Distributed word vector representations of numbers and animals in English(left)and Spanish(right).Thefive vectors in each language were projected down to two dimensions using PCA,and then manually rotated to accentuate their similarity.It can be seen 
that these concepts have similar geometric arrangements in both spaces,suggesting that it is possible to learn an accurate linear mapping from one space to another.This is the key idea behind our method of translation.that cat is an animal smaller than a dog),there is often a strong similarity between the vector spaces. The similarity of geometric arrangments in vector spaces is the key reason why our method works well. Our proposed approach is complementary to the existing methods that use similarity of word mor-phology between related languages or exact context matches to infer the possible translations(Koehn and Knight,2002;Haghighi et al.,2008;Koehn and Knight,2000).Although we found that mor-phological features(e.g.,edit distance between word spellings)can improve performance for related lan-guages(such as English to Spanish),our method is useful for translation between languages that are substantially different(such as English to Czech or English to Chinese).Another advantage of our method is that it pro-vides a translation score for every word pair,which can be used in multiple ways.For example,we can augment the existing phrase tables with more candi-date translations,orfilter out errors from the trans-lation tables and known dictionaries.2The Skip-gram and ContinuousBag-of-Words ModelsDistributed representations for words were proposed in(Rumelhart et al.,1986)and have become ex-tremely successful.The main advantage is that the representations of similar words are close in the vec-tor space,which makes generalization to novel pat-terns easier and model estimation more robust.Suc-cessful follow-up work includes applications to sta-tistical language modeling(Elman,1990;Bengio et al.,2003;Mikolov,2012),and to various other NLP tasks such as word representation learning,named entity recognition,disambiguation,parsing,and tag-Figure2:Graphical representation of the CBOW model and Skip-gram model.In the CBOW model,the distributed representations of context(or surrounding words)are combined to predict the word in the middle.In the Skip-gram model,the distributed representation of the input word is used to predict the context.ging(Collobert and Weston,2008;Turian et al., 2010;Socher et al.,2011;Socher et al.,2013;Col-lobert et al.,2011;Huang et al.,2012;Mikolov et al.,2013a).It was recently shown that the distributed repre-sentations of words capture surprisingly many lin-guistic regularities,and that there are many types of similarities among words that can be expressed as linear translations(Mikolov et al.,2013c).For ex-ample,vector operations“king”-“man”+“woman”results in a vector that is close to“queen”.Two particular models for learning word repre-sentations that can be efficiently trained on large amounts of text data are Skip-gram and CBOW models introduced in(Mikolov et al.,2013a).In the CBOW model,the training objective of the CBOW model is to combine the representations of surround-ing words to predict the word in the middle.The model architectures of these two methods are shown in Figure2.Similarly,in the Skip-gram model,the training objective is to learn word vector representa-tions that are good at predicting its context in the same sentence(Mikolov et al.,2013a).It is un-like traditional neural network based language mod-els(Bengio et al.,2003;Mnih and Hinton,2008; Mikolov et al.,2010),where the objective is to pre-dict the next word given the context of several pre-ceding words.Due to their low computational com-plexity,the Skip-gram and CBOW models can be trained on a 
large corpus in a short time (billions of words in hours). In practice, Skip-gram gives better word representations when the monolingual data is small. CBOW however is faster and more suitable for larger datasets (Mikolov et al., 2013a). They also tend to learn very similar representations for languages. Due to their similarity in terms of model architecture, the rest of the section will focus on describing the Skip-gram model.

More formally, given a sequence of training words w_1, w_2, w_3, ..., w_T, the objective of the Skip-gram model is to maximize the average log probability

\frac{1}{T} \sum_{t=1}^{T} \sum_{j=-k}^{k} \log p(w_{t+j} \mid w_t)    (1)

where k is the size of the training window (which can be a function of the center word w_t). The inner summation goes from -k to k to compute the log probability of correctly predicting the word w_{t+j} given the word in the middle w_t. The outer summation goes over all words in the training corpus.

In the Skip-gram model, every word w is associated with two learnable parameter vectors, u_w and v_w. They are the "input" and "output" vectors of w respectively. The probability of correctly predicting the word w_i given the word w_j is defined as

p(w_i \mid w_j) = \frac{\exp(u_{w_i}^{\top} v_{w_j})}{\sum_{l=1}^{V} \exp(u_l^{\top} v_{w_j})}    (2)

where V is the number of words in the vocabulary. This formulation is expensive because the cost of computing \nabla \log p(w_i \mid w_j) is proportional to the number of words in the vocabulary V (which can easily be in the order of millions). An efficient alternative to the full softmax is the hierarchical softmax (Morin and Bengio, 2005), which greatly reduces the complexity of computing \log p(w_i \mid w_j) (to roughly logarithmic in the vocabulary size).

The Skip-gram and CBOW models are typically trained using stochastic gradient descent. The gradient is computed using the backpropagation rule (Rumelhart et al., 1986).

When trained on a large dataset, these models capture a substantial amount of semantic information.
As mentioned before, closely related words have similar vector representations, e.g., school and university, lake and river. This is because school and university appear in similar contexts, so that during training the vector representations of these words are pushed to be close to each other.

More interestingly, the vectors capture relationships between concepts via linear operations. For example, vector(France) - vector(Paris) is similar to vector(Italy) - vector(Rome).

3 Linear Relationships Between Languages

As we visualized the word vectors using PCA, we noticed that the vector representations of similar words in different languages were related by a linear transformation. For instance, Figure 1 shows that the word vectors for English numbers one to five and the corresponding Spanish words uno to cinco have similar geometric arrangements. The relationship between vector spaces that represent these two languages can thus possibly be captured by linear mapping (namely, a rotation and scaling).

Thus, if we know the translation of one and four from English to Spanish, we can learn the transformation matrix that can help us to translate even the other numbers to Spanish.

4 Translation Matrix

Suppose we are given a set of word pairs and their associated vector representations \{x_i, z_i\}_{i=1}^{n}, where x_i \in R^{d_1} is the distributed representation of word i in the source language, and z_i \in R^{d_2} is the vector representation of its translation. It is our goal to find a transformation matrix W such that W x_i approximates z_i. In practice, W can be learned by the following optimization problem

\min_W \sum_{i=1}^{n} \| W x_i - z_i \|^2    (3)

which we solve with stochastic gradient descent.

At prediction time, for any given new word and its continuous vector representation x, we can map it to the other language space by computing z = W x. Then we find the word whose representation is closest to z in the target language space, using cosine similarity as the distance metric.

Despite its simplicity, this linear transformation method worked well in our experiments, better than nearest neighbor and as well as neural network classifiers. The following experiments will demonstrate its effectiveness.

5 Experiments on WMT11 Datasets

In this section, we describe the results of our translation method on the publicly available WMT11 datasets. We also describe two baseline techniques: one based on the edit distance between words, and the other based on similarity of word co-occurrences that uses word counts. The next section presents results on a larger dataset, with size up to 25 billion words.

In the above section, we described two methods, Skip-gram and CBOW, which have similar architectures and perform similarly. In terms of speed, CBOW is usually faster and for that reason, we used it in the following experiments.[2]

5.1 Setup Description

The datasets in our experiments are WMT11 text data from the website.[3] Using these corpora, we built monolingual data sets for English, Spanish and Czech languages. We performed these steps:
• Tokenization of text using scripts from the website
• Duplicate sentences were removed
• Numeric values were rewritten as a single token
• Special characters were removed (such as !?,:¡)

Additionally, we formed short phrases of words using a technique described in (Mikolov et al., 2013b). The idea is that words that co-occur more frequently than expected by their unigram probability are likely an atomic unit. This allows us to represent short phrases such as "ice cream" with single tokens, without blowing up the vocabulary size as it would happen if we considered all bigrams as phrases.
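As an aside, the translation-matrix procedure of Section 4 can be sketched in a few lines of numpy. This is an illustration only, not the authors' code: it uses random toy vectors and a closed-form least-squares solve instead of the stochastic gradient descent used in the paper, and here W acts on the right (x @ W) rather than on the left as in the formula above.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 300))   # source-language vectors x_i (one per row)
Z = rng.normal(size=(5000, 300))   # vectors z_i of their translations

# Fit W so that X @ W approximates Z
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

def translate(x, target_matrix):
    # Map a source vector into the target space, then return the index of the
    # closest target word under cosine similarity
    z = x @ W
    sims = (target_matrix @ z) / (np.linalg.norm(target_matrix, axis=1) * np.linalg.norm(z))
    return int(np.argmax(sims))

print(translate(X[0], Z))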
Importantly,as we want to test if our work can provide non-obvious translations of words,we dis-carded named entities by removing the words con-taining uppercase letters from our monolingual data. The named entities can either be kept unchanged,or translated using simpler techniques,for example us-ing the edit distance(Koehn and Knight,2002).The statistics for the obtained corpora are reported in Ta-ble1.To obtain dictionaries between languages,we used the most frequent words from the monolingual source datasets,and translated these words using on-line Google Translate(GT).As mentioned pre-viously,we also used short phrases as the dictionary entries.As not all words that GT produces are in our vocabularies that we built from the monolingual WMT11data,we report the vocabulary coverage in each experiment.For the calculation of translation 2It should be noted that the following experiments deal mainly with frequent words.The Skip-gram,although slower to train than CBOW,is preferable architecture if one is inter-ested in high quality represenations for the infrequent words.3/wmt11/training-monolingual.tgz Table1:The sizes of the monolingual training datasets from WMT11.The vocabularies consist of the words that occurred at leastfive times in the corpus.Language Training tokens Vocabulary size English575M127KSpanish84M107KCzech155M505K precision,we discarded word pairs that cannot be translated due to missing vocabulary entries.To measure the accuracy,we use the most fre-quent5K words from the source language and their translations given GT as the training data for learn-ing the Translation Matrix.The subsequent1K words in the source language and their translations are used as a test set.Because our approach is very good at generating many translation candidates,we report the top5accuracy in addition to the top1ac-curacy.It should be noted that the top1accuracy is highly underestimated,as synonym translations are counted as mistakes-we count only exact match asa successful translation.5.2Baseline TechniquesWe used two simple baselines for the further ex-periments,similar to those previously described in(Haghighi et al.,2008).Thefirst is using simi-larity of the morphological structure of words,and is based on the edit distance between words in dif-ferent languages.The second baseline uses similarity of word co-occurrences,and is thus more similar to our neural network based approach.We follow these steps:•Form count-based word vectors with dimen-sionality equal to the size of the dictionary•Count occurrence of in-dictionary words withina short window(up to10words)for each testword in the source language,and each word in the target language•Using the dictionary,map the word count vec-tors from the source language to the target lan-guage•For each test word,search for the most similar vector in the target languageTable2:Accuracy of the word translation methods using the WMT11datasets.The Edit Distance uses morphological structure of words tofind the translation.The Word Co-occurrence technique based on counts uses similarity of contexts in which words appear,which is related to our proposed technique that uses continuous representations of words and a Translation Matrix between two languages.Translation Edit Distance Word Co-occurrence Translation Matrix ED+TM Coverage P@1P@5P@1P@5P@1P@5P@1P@5En→Sp13%24%19%30%33%51%43%60%92.9% Sp→En18%27%20%30%35%52%44%62%92.9% En→Cz5%9%9%17%27%47%29%50%90.5% Cz→En7%11%11%20%23%42%25%45%90.5%Additionally,the word count vectors are normalized in the following 
procedure.First we remove the bias that is introduced by the different size of the training sets in both languages,by dividing the counts by the ratio of the data set sizes.For example,if we have ten times more data for the source language than for the target language,we will divide the counts in the source language by ten.Next,we apply the log func-tion to the counts and normalize each word count vector to have a unit length(L2norm).The weakness of this technique is the computa-tional complexity during the translation-the size of the word count vectors increases linearly with the size of the dictionary,which makes the translation expensive.Moreover,this approach ignores all the words that are not in the known dictionary when forming the count vectors.5.3Results with WMT11DataIn Table2,we report the performance of several approaches for translating single words and short phrases.Because the Edit Distance and our Trans-lation Matrix approach are fundamentally different, we can improve performance by using a weighted combination of similarity scores given by both tech-niques.As can be seen in Table2,the Edit Distance worked well for languages with related spellings (English and Spanish),and was less useful for more distant language pairs,such as English and Czech. To train the distributed Skip-gram model,we used the hyper-parameters recommended in(Mikolov et al.,2013a):the window size is10and the dimen-sionality of the word vectors is in the hundreds. We observed that the word vectors trained on the source language should be several times(around 2x–4x)larger than the word vectors trained on the target language.For example,the best perfor-mance on English to Spanish translation was ob-tained with800-dimensional English word vectors and200-dimensional Spanish vectors.However, for the opposite direction,the best accuracy was achieved with800-dimensional Spanish vectors and 300-dimensional English vectors.6Large Scale ExperimentsIn this section,we scale up our techniques to larger datasets to show how performance improves with more training data.For these experiments,we used large English and Spanish corpora that have sev-eral billion words(Google News datasets).We per-formed the same data cleaning and pre-processing as for the WMT11experiments.Figure3shows how the performance improves as the amount of mono-lingual data increases.Again,we used the most fre-quent5K words from the source language for con-structing the dictionary using Google Translate,and the next1K words for test.Our approach can also be successfully used for translating infrequent words:in Figure4,we show the translation accuracy on the less frequent words. 
Although the translation accuracy decreases as the test set becomes harder,for the words ranked15K–19K in the vocabulary,Precision@5is still reason-ably high,around60%.It is surprising that we can sometimes translate even words that are quite un-related to those in the known dictionary.We will present examples that demonstrate the translation quality in the Section7.We also performed the same experiment where we translate the words at ranks15K–19K using theFigure3:The Precision at1and5as the size of the mono-lingual training sets increase(EN→ES).models trained on the small WMT11datasets.The performance loss in this case was greater–the Presi-cion@5was only25%.This means that the models have to be trained on large monolingual datasets in order to accurately translate infrequent words.6.1Using Distances as Confidence Measure Sometimes it is useful to have higher accuracy at the expense of coverage.Here we show that the distance between the computed vector and the closest word vector can be used as a confidence measure.If we apply the Translation Matrix to a word vector in En-glish and obtain a vector in the Spanish word space that is not close to vector of any Spanish word,we can assume that the translation is likely to be inac-curate.More formally,we define the confidence score as max i∈V cos(W x,z i),and if this value is smaller than a threshold,the translation is skipped.In Ta-ble3we show how this approach is related to the Table3:Accuracy of our method using various confi-dence thresholds(EN→ES,large corpora).Threshold Coverage P@1P@50.092.5%53%75%0.578.4%59%82%0.654.0%71%90%0.717.0%78%91%Figure4:Accuracies of translation as the word frequency decreases.Here,we measure the accuracy of the transla-tion on disjoint sets of2000words sorted by frequency, starting from rank5K and continuing to19K.In all cases, the linear transformation was trained on the5K most fre-quent words and their translations.EN→ES. translation accuracy.For example,we can translate approximately half of the words from the test set(on EN→ES with frequency ranks5K–6K)with a very high Precision@5metric(around90%).By adding the edit distance to our scores,we can further im-prove the accuracy,especially for Precision@1as is shown in Table4.These observations can be crucial in the future work,as we can see that high-quality translations are possible for some subset of the vocabulary.The idea can be applied also in the opposite way:instead of searching for very good translations of missing entries in the dictionaries,we can detect a subset of the existing dictionary that is likely to be ambiguous or incorrect.Table4:Accuracy of the combination of our method with the edit distance for various confidence thresholds. The confidence scores differ from the previous table since they include the edit distance(EN→ES,large corpora).Threshold Coverage P@1P@50.092.5%58%77%0.477.6%66%84%0.555.0%75%91%0.625.3%85%93%7ExamplesThis section provides various examples of our trans-lation method.7.1Spanish to English Example Translations To better understand the behavior of our translation system,we show a number of example translations from Spanish to English in Table5.Interestingly, many of the mistakes are somewhat meaningful and are semantically related to the correct translation. 
These examples were produced by the translation matrix alone,without using the edit distance simi-larity.Table5:Examples of translations of out-of-dictionary words from Spanish to English.The three most likely translations are shown.The examples were chosen at ran-dom from words at ranks5K–6K.The word representa-tions were trained on the large corpora.Spanish word Computed English DictionaryTranslations Entry emociones emotions emotionsemotionfeelingsprotegida wetland protectedundevelopableprotectedimperio dictatorship empireimperialismtyrannydeterminante crucial determinantkeyimportantpreparada prepared preparedreadypreparemillas kilometers mileskilometresmileshablamos talking talktalkedtalkdestacaron highlighted highlightedemphasizedemphasised Table6:Examples of translations from English to Span-ish with high confidence.The models were trained on the large corpora.English word Computed Spanish DictionaryTranslation Entrypets mascotas mascotas mines minas minas unacceptable inaceptable inaceptable prayers oraciones rezo shortstop shortstop campocorto interaction interacci´o n interacci´o n ultra ultra muybeneficial beneficioso beneficioso beds camas camas connectivity conectividad conectividad transform transformar transformar motivation motivaci´o n motivaci´o n 7.2High Confidence TranslationsIn Table6,we show the translations from English to Spanish with high confidence(score above0.5).We used both edit distance and translation matrix.As can be seen,the quality is very high,around75% for Precision@1as reported in Table4.7.3Detection of Dictionary ErrorsA potential use of our system is the correction of dic-tionary errors.To demonstrate this use case,we have trained the translation matrix using20K dictionary entries for translation between English and Czech. Next,we computed the distance between the trans-lation given our system and the existing dictionary entry.Thus,we evaluate the translation confidences on words from the training set.In Table7,we list examples where the distance between the original dictionary entry and the out-put of the system was large.We chose the exam-ples manually,so this demonstration is highly rmally,the entries in the existing dic-tionaries were about the same or more accurate than our system in about85%of the cases,and in the re-maining15%our system provided better translation. Possible future extension of our approach would be to train the translation matrix on all dictionary en-tries except the one for which we calculate the score.Table7:Examples of translations where the dictionary entry and the output from our system strongly disagree. 
These examples were chosen manually to demonstrate that it might be possible to automaticallyfind incorrect or ambiguous dictionary entries.The vectors were trained on the large corpora.English word Computed Czech DictionaryTranslation Entry saidˇr ekl uveden´y(said)(listed)will m˚uˇz e v˚u le(can)(testament) did udˇe lal ano(did)(yes)hit zas´a hl hit(hit)-must mus´ımoˇs t(must)(cider) current st´a vaj´ıc´ıproud(current)(stream) shot vystˇr elil shot(shot)-minutes minut z´a pis(minutes)(enrollment) latest nejnovˇe jˇs´ıposledn´ı(newest)(last) blacksˇc ernoˇs iˇc ern´a(black people)(black color) hub centrum hub(center)-minus minus bez(minus)(without) retiring odejde uzavˇr en´y(leave)(closed) grown pˇe stuje dospˇe l´y(grow)(adult) agents agenti prostˇr edky(agents)(resources) 7.4Translation between distant languagepairs:English and VietnameseThe previous experiments involved languages that have a good one-to-one correspondence between words.To show that our technique is not limited by this assumption,we performed experiments on Vietnamese,where the concept of a word is differ-ent than in English.For training the monolingual Skip-gram model of Vietnamese,we used large amount of Google News data with billions of words.We performed the same data cleaning steps as for the previous languages,and additionally automatically extracted a large number of phrases using the technique de-scribed in(Mikolov et al.,2013b).This way,we ob-tained about1.3B training Vietnamese phrases that are related to English words and short phrases.In Table8,we summarize the achieved results.Table8:The accuracy of our translation method between English and Vietnamese.The edit distance technique did not provide significant improvements.Although the ac-curacy seems low for the EN→VN direction,this is in part due to the large number of synonyms in the VN model.Threshold Coverage P@1P@5En→Vn87.8%10%30%Vn→En87.8%24%40%8ConclusionIn this paper,we demonstrated the potential of dis-tributed representations for machine -ing large amounts of monolingual data and a small starting dictionary,we can successfully learn mean-ingful translations for individual words and short phrases.We demonstrated that this approach works well even for pairs of languages that are not closely related,such as English and Czech,and even English and Vietnamese.In particular,our work can be used to enrich and improve existing dictionaries and phrase tables, which would in turn lead to improvement of the current state-of-the-art machine translation systems. Application to low resource domains is another very interesting topic for future research.Clearly,there is still much to be explored.References[Bengio et al.2003]Yoshua Bengio,Rejean Ducharme, Pascal Vincent,and Christian Jauvin.2003.A neural probabilistic language model.In Journal of Machine Learning Research,pages1137–1155.。
Language points
L21. In the heart of Toronto is the Canadian National Tower, which is often called the CN Tower for short. L34. Smaller in size, but just as famous, is the city of Vancouver in the province of British Columbia on the Pacific coast. When the predicative is placed at the beginning of the sentence, full subject-verb inversion is used; "In the heart of Toronto" and "Smaller in size, but just as famous" serve as predicatives in these sentences.
No.1 Middle School ranks among the best schools in Yiyang. (Zhengyuan School ranks among the best schools in Hengyang.) Pattern: rank sb/sth among the + superlative adjective + plural noun.
We rank him among the best tennis players.
A consist with B
A is consistent with / agrees with B
The report does not consist with the fact.
A be short of B
= lack…
A lacks B (be short of is more common than lack when we talk about objects and materials)
He is short of money / apples / sugar. The company is short of hands. in short = in brief
M7U2 language points (usable)
for sb’ benefit =for the benefit of sb.
为了某人的利益
He devoted his whole life to doing research for the benefit of mankind.
4. Nearly 3,500 years ago, people chewed on leaves or drank a kind of tea made from leaves possessing a special chemical to reduce body pains and fever.(L9-11)
The man proved_______________
turn out
beneficial adj. having a good or helpful effect on ...
be beneficial to ... = be of benefit to ... (be good for ...)
It is said that yoga is very good for health: It is said Yoga is of great benefit to / beneficial to human health.
prove is used as a linking verb, usually followed by a noun, an adjective, or the infinitive (to be); to be can be omitted, i.e.
prove (to be) + n./adj. Wang Fei's comeback concerts proved to be a great success: Wang Fei's comeback tours proved (to be) a complete/great success. The problem proved to be much more difficult than we had supposed.
Will the infected patients benefit from the new medicine (gain benefit from this medicine)?
language points
Words: possess contain relief vital beneficial potential abnormal astonish application unable approval accelerate
possess vt. 1. to have or own; to be in possession of; to occupy. The country possesses rich mineral deposits.
language_points
[Extension] adj. explosive; n. explosion
• Unemployment became an explosive issue (an explosive issue).
• the theory of population explosion (the theory of population explosion)
② Some measures should be taken to ____ the soil ____. A. prevent; to wash away B. stop; being washed away C. keep; to wash away D. keep; washing away (Answer: B)
3. in time: ① eventually; sooner or later (= finally / sooner or later): I'll come back in time (I will come back eventually). ② early enough; not late (= not late): She will be back in time to prepare dinner. be in time for sth.
7. (L.30) They multiplied and filled the oceans and seas with oxygen. multiply: v. ① to multiply (in arithmetic): 2 multiplied by 4 is 8. ② to increase (= increase): Mosquitoes multiply rapidly in summer. Our homework has multiplied since we came to high school.
be in time to do sth.
urwtest parameter description
urwtest is a program that tests the functionality of the Unicode Regular Expressions (URE) library. The library is used by many programs to perform complex text-processing tasks, such as finding and replacing text, searching for patterns, and validating input.

The urwtest program can be used to test the following aspects of the URE library:
Character classes: character classes are used to match characters that have certain properties, such as being a letter, a digit, or a whitespace character.
Anchors: anchors are used to match characters at the beginning or end of a string, or at the beginning or end of a line.
Quantifiers: quantifiers are used to match characters that occur a certain number of times.
Grouping: grouping is used to group characters together so that they can be treated as a single unit.
Backreferences: backreferences are used to match characters that have been previously matched.

The urwtest program tests the URE library when given a regular expression and a string to match. The program then outputs whether the regular expression matches the string.

The urwtest program has a number of options that can be used to control its behavior. These options include:
-v: verbose output. This option causes urwtest to output more information about the regular expression and the string being matched.
-i: case-insensitive matching. This option causes urwtest to ignore the case of the characters in the regular expression and the string being matched.
-m: multiline matching. This option causes urwtest to treat the string being matched as a multiline string.
-s: dotall matching. This option causes urwtest to treat the dot (.) character in the regular expression as matching any character, including newline characters.
-x: extended syntax. This option causes urwtest to allow the use of whitespace characters and comments in the regular expression.

The urwtest program can be a useful tool for testing the functionality of the URE library. It can be used to verify that regular expressions are working as expected, and to troubleshoot problems with regular expressions.

Here is an example of how to invoke the urwtest program:
$ urwtest 'abc' 'abc'
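For comparison, the same kinds of switches exist as flags in Python's standard re module; the following sketch is an illustration in Python, not part of urwtest itself.

import re

# -i, -m, -s and -x roughly correspond to IGNORECASE, MULTILINE, DOTALL and VERBOSE
pattern = re.compile(r"""
    ^(\w+)      # grouping + character class + quantifier, anchored to line start
    \s+
    \1$         # backreference to group 1, anchored to line end
    """, re.IGNORECASE | re.MULTILINE | re.DOTALL | re.VERBOSE)

print(bool(pattern.search("Hello hello")))   # True: the word repeats, case ignored
print(bool(pattern.search("Hello world")))   # False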
Language points
3. unbearable a. too unpleasant or painful to be endured; intolerable
The heat outside is unbearable. He is unbearable when he's in a bad temper. I find his rudeness unbearable.
The uncertainty is unbearable! Loneliness in a gloomy raining day may be unbearable to him.
She's a very vital sort of person (a person full of energy). The heart performs a vital bodily function (a function essential to keeping the body alive).
The government saw the introduction of new technology as vital.
A breathless audience (an audience holding its breath).
The children are breathless as they watch the tightrope act.
6. circumstance n. circumstances; conditions; events. Circumstances forced us to change our plans. He was forced by the circumstances to do this.
4. block out: vt. to shut out; to mask off (in drafting or photography, to cover part of a negative so that light cannot pass through)
Block out this unimportant detail at the top of your picture. That wall blocks out all the light.
U2 P2 Language points 1
address their concerns: deal with and remove their worries
address
to think about a problem or a situation and decide how you are going to deal with it; to try to solve; to handle; to deal with
① How to address the issues? ② Your essay does not address the real issues. ③ That's not the right way to hold a pair of scissors. ④ This is the entrance to the park.
9. After you have thought it through, explain your actions and feelings calmly, listen carefully, and address their concerns.
Fill in the blanks with the correct form of result in or result from. resulted in
resulted from
resulted from
4. When it all gets too much, your parents are often the first targets of your anger.
break through: to make an important new discovery; to achieve a breakthrough
live through: to experience (a disaster or other hardship) and survive it; look through: to look at quickly; to browse
M2U2 language points
Wildlife Protection
Language points
1. Elephants need large living spaces, so it's difficult for them to adapt to the changes.
Important phrases and sentences of the text
P13. 1. the protection of Tibetan antelopes
2. endangered wildlife
3. ways to care for wildlife
II. Complete the sentences
12. Though aware of the harm of the smog, they were still exposed to it casually.
13. On average, American firms remain the most productive in the world.
3. Only when we learn to exist in harmony with nature can we stop being a threat to wildlife and to our planet.
In English, to show emphasis, when "only + adverbial (adverb / prepositional phrase / clause)" is placed at the beginning of a sentence, the main clause takes partial inversion, i.e. the auxiliary verb, modal verb or linking verb of the main clause is moved in front of the subject.
language points
suggest: to propose; to indicate
1. My father suggested that we go (go) for a walk after supper.
2. I suggest spending (spend) more time practicing English.
3. Miss Lu's suggestion that we spend (spend) more time practicing English is reasonable.
4. His pale face suggested that he was (be) ill.
satisfaction n.
1. The government should make great efforts to satisfy people's needs.
2. Miss Lu was satisfied with our answer.
3. I heard this news with great satisfaction.
4. It seemed a very satisfying / satisfactory arrangement.
• 2. What satisfied the parents was that their little daughter had sympathy for the homeless.
• To the parents' satisfaction, their little daughter feels/has sympathy for the homeless.
declare vt. to announce; to state; to declare
1) declare sth.: to announce sth. They will declare the results of the election soon. 2) declare sb./sth. (to be) n./adj.: to declare sb./sth. to be ... The judge declared him (to be) the winner of the competition. I declared this conference (to be) open. 3) declare + (that) clause: to claim; to assert. She declared (that) she was right. 4) declare for / against. sympathy n. sympathy; fellow feeling; compassion. express sympathy for: to offer sympathy or condolences for; feel/have sympathy for sb./sth.: to feel sympathy for sb./sth.
Language_Points_a
More examples: He did tell me about it. I do feel sorry for him. A little knowledge does seem to be a dangerous thing.
Language Point 15: "I disagree. If I did agree, I certainly wouldn't look at my feet or at the ceiling. I'd keep my eye on the lion!" (Para. 11) Meaning: I don't agree (that entering a room full of people is like going into a lion's cage). Suppose I agreed with such a thought, I would not look at my feet or at the ceiling but look carefully at the people in the room.
Language Point 2: "Consciously or unconsciously, we show our true feelings with our eyes, faces, bodies and attitudes, causing a chain of reactions, ranging from comfort to fear." (Para. 1) Meaning: Whether we are aware or unaware of it, we use our eyes, faces, bodies, and attitudes to express our feelings. This causes a sequence of various responses such as comfort and fear.
LANGUAGE POINTS
as well: used in affirmative sentences, placed at the end; too: used in affirmative sentences, placed at the end and set off by a comma; either: used in negative sentences, placed at the end and set off by a comma; also: placed after special (auxiliary) verbs and before main verbs.
I am also a student. = I am a student, too. Lucy didn't go to the party, either. Tony speaks Japanese as well. She has knowledge and experience as well.
Language points
Antecedent
Non-restrictive attributive clause
1.Pausanias, who was a Greek writer 2,000
years ago, has come on a magical journey to find out about the present-day Olympic Games.
come on a journey: to make a journey = be on a journey = start on a journey = go on a journey. My father is away on a journey. find out about sth.: to find out the facts about sth. The police are trying hard to find out about the accident.
not only ... but (also) ... means "not only ... but also ...". When this structure connects two parallel subjects, the verb agrees in number with the nearer subject. If not only is placed at the beginning of the sentence, the clause it introduces takes partial inversion. He not only said it, but also did it.
Oxford English M4U2 Language points
3. At the 2004 Athens Olympic Games, Liu Xiang excited people all over Asia when he became the first Asian to win the gold medal in the men's 110-meter hurdles. (P23)
It is I who am wrong.
I remember it was he who took my car.
The emphatic sentence pattern introduced by it:
It is (was) + the emphasized part + that (who, whom) + the rest of the sentence. If the emphasized part refers to a person, that can be replaced by who or whom.
1. It is the car that I am looking for! (emphasizing the object "the car") 2. It was yesterday that my father bought it. (emphasizing the time adverbial "yesterday") 3. Where was it that you lost your car? (emphasizing the place word "where")
country.
contribute vt. to contribute; to donate; to submit (an article). She contributed two articles to the
magazine.
contribute to…
to play a part in; to help bring about; to lead to
Exercise contributes to better health.
[Hint] From the meaning of the sentence we can judge that "passing the exam" is an action that has already happened and is being inferred with confidence, so C is the correct choice.
should have done sth.: ought to have done something but did not;
could / might have done: expresses a weaker, less certain inference.
unit 2 language points
The water ran over the edge of the basin. The speaker ran over his notes before the lecture. The truck nearly ran over the man crossing the street. time v. The bomb was timed to explode during the rush hour. You've timed your holiday cleverly---the weather's at its best. You've timed your arrival beautifully. (You have come at just the right time.)
interfere in/with: Constant interruptions interfered with my work. We've always held that no country should interfere in the internal affairs of other countries. distract: The phone kept ringing and distracting my attention. I hope Mary's CD player won't distract her from her study.
as if / as though: He orders me as if I were his servant. We feel as if we had witnessed the whole thing. It seems as if the meeting would never end. It looks as if it's going to rain. ignore: I said hello to her, but she ignored me completely. The driver ignored the danger sign and lost control of the bus.
Unit 2 (Programming Language)
In this case, the file is stderr, the standard error file. The standard error file is a special UNIX file that serves as the channel for error messages.
It is usually connected to the console of the computer system, so this is a good way to display error messages from your programs
This line begins the definition of the function. It tells us the type of the return value, the name of the function, and the list of arguments used by the function.
The arguments and their types are enclosed in brackets, each pair separated by commas
This function also demonstrates a new feature.
This is a variant on the printf statement: fprintf sends its output into a file.
In this case, the file is stderr. stderr is a special UNIX file which serves as the channel for error messages
The body of the function is bounded by a set of curly brackets. Any variables declared here will be treated as local unless specifically declared as static or extern types
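For comparison only (the passage above describes C code), the same convention in Python: error messages go to the standard error stream rather than to standard output.

import sys

def report_error(message):
    # stderr is the conventional channel for error messages, kept separate
    # from the program's normal output on stdout
    print(message, file=sys.stderr)

report_error("error: something went wrong")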
U3 Vocabulary and Sentence Patterns
8) The war devastated the economy and saddled the country with a huge foreign debt.
Language focus Language learning
Step 2 Fill in the blanks with the above words.
Odyssey is a life phase that frequently occurs between adolescence and adulthood. During this decade, 20-somethings go to school and take breaks from school. They live with friends and they live at home. They fall in and out of love. They try one career and then try another. Parents grow increasingly anxious. They understand the necessity of a transition between student life and adult life. But they see that this induction to adulthood is delayed. People who were born before the 1960s or 1970s tend to define adulthood by accomplishment and emphasize stability. This is not the case with the young people nowadays.
Language Points (UNIT 2, BOOK 4, New Standard College English (NSCE))
Part A Words and Phrases (Active Reading 1)
I. WORDS TO KNOW (and IN FOCUS)
mysterious a. strange and not known about or understood -- magic
wonderland n. an imaginary world that exists in fairy tales -- fairyland
observer n. someone who sees or notices something
neglect vt. fail to take care of properly or to give enough attention to
normally ad. in the way that usually happens or that one usually does; in the usual or conventional way
interact vi. communicate or spend time together
wrath n. means the same as anger (fml) -- rage
dialect n. a form of a language that is spoken in a particular area
enchant vt. cause one to have feelings of great delight or pleasure
exclude vt. deliberately not use or consider sth; decide or prove that sth is wrong and not worth considering -- rule out
confront vt. to deal with a difficult situation; be face to face with
influential a. able to influence the way other people think or behave
monk n. a member of a male religious community that is usually separated from the outside world
priest n. a member of the Christian clergy in the Catholic, Anglican, or Orthodox church
cite vt. quote or mention something, especially as an example or proof of what one is saying
poetic a. very beautiful and expressing emotions in a sensitive or moving way; relating to poetry
foster vt. to help sth to develop over a period of time; promote; officially take a child into one's family for a period of time, without becoming the child's legal parents
likewise ad. the same way or in a similar way
induce vt. to cause a state or situation; induce someone to do: persuade or influence someone to do
await vt. wait for; happen or come to sb in the future
sensation n. a physical feeling; the ability to feel things physically, especially through the sense of touch
withstand vt. survive or not give in to -- strive against
disconcerting a. making one feel anxious, confused, or embarrassed
vista n. a beautiful view from a high place -- outlook, perspective
mystical a. involving spiritual powers and influences that most people do not understand
meaninglessness n. pointlessness
passivity n. the state of being passive -- passiveness
coin vt. coin a word or a phrase: to say it first; the other side of the coin: a different aspect of a situation; n. a small piece of metal which is used as money
insistently ad. persistently; insistent a. keeping insisting that a particular thing should be done or is the case
desperately ad. profoundly; desperate a. wanting or needing sth very much; be desperate for / desperate to do; desperation n.
WORDS IN FOCUS
1. mysterious a. strange and not known about or understood -- magic
e.g.: A mysterious illness confined him to bed for over a month. As for his job -- well, he was very mysterious about it. (He was very secretive about his work.)
Cf. mysterious: strange, arousing wonder and curiosity, and hard or impossible to explain or understand; mystical: relating to mysticism -- spiritual realities or meanings that reason cannot grasp and the senses cannot reach; hard to understand, mysterious.
2. neglect v. fail to take care of properly -- ignore; fail to give enough attention to -- omit; n. neglect, negligence; a. neglected
e.g.: Feed plants and they grow, neglect them and they suffer.
He'd given too much to his career, worked long hours, neglected her.
The town's old quayside is collapsing after years of neglect. The fact that she is not coming today makes her grandmother feel lonely and neglected.
4. interact vi. communicate or spend time together; interact with: communicate with; affect each other; exchange information
e.g.: While the other children interacted and played together, Ted ignored them. Can you find new, simplified ways of interacting with a computer? You have to understand how cells interact.
5. exclude v. deliberately not use or consider sth; decide or prove that sth is wrong and not worth considering -- rule out; exclude sb from: prevent sb from entering some place or taking part in an activity
e.g.: In some schools, Christmas carols are being modified to exclude any reference to Christ. But we would exclude them from our smaller dictionaries. I do not reject these things, but too much pressure does take away much of the fun.
6. confront v. to deal with a difficult situation; be face to face with
e.g.: If we do not confront and overcome these internal fears and doubts, if we protect ourselves too much, then we cease to grow.