Literature Review Translations (Part II)


Literature Review on English Translation


The field of English-Chinese translation has been a subject of extensive research and study for decades. As the world becomes increasingly interconnected, the demand for accurate and effective translation services has grown significantly. This literature review aims to provide an overview of the current state of research in this field, highlighting key themes, methodologies, and areas for further exploration.

One of the primary focuses of English-Chinese translation research has been the linguistic and cultural challenges inherent in the process. Scholars have examined the differences in grammatical structures, idioms, and cultural references between the two languages, and how these disparities can affect the quality and accuracy of translations. Studies have explored strategies for navigating these challenges, such as the use of machine translation, the role of human translators, and the importance of cultural awareness and adaptation.

Another area of research has been the impact of technological advancements on the translation industry. The rise of machine translation, computer-assisted translation tools, and cloud-based platforms has transformed the way translation services are delivered. Researchers have investigated the strengths and limitations of these technologies, as well as the implications for the role of human translators in the future. Additionally, studies have examined the ethical considerations surrounding the use of artificial intelligence in translation, such as issues of data privacy, bias, and the potential displacement of human workers.

The quality and evaluation of English-Chinese translations have also been the subject of extensive research. Scholars have developed frameworks and methodologies for assessing the accuracy, fluency, and overall effectiveness of translations, taking into account factors such as linguistic equivalence, cultural appropriateness, and the intended purpose of the translation. These evaluation methods have been applied to a wide range of translation contexts, including literary works, technical documents, and business communications.

Another area of focus is the training and professional development of translators. Researchers have explored the skills, knowledge, and competencies required for effective translation, and have investigated best practices for educating and training translators. This includes the use of translation-specific curricula, the integration of technology into the learning process, and the importance of ongoing professional development and certification.

In addition to these core areas of research, scholars have also examined the role of English-Chinese translation in various domains, such as international business, education, and healthcare. These studies have explored the unique challenges and considerations that arise in these specialized contexts, and have provided insights into the strategies and approaches that can lead to successful translations.

Despite the significant progress that has been made in the field, many areas still require further research and exploration. For example, the impact of globalization and the increasing use of English as a lingua franca on translation practices is an emerging area of interest. Additionally, the ethical implications of translation, such as the potential for misrepresentation or the perpetuation of cultural biases, warrant deeper examination.

Furthermore, the field would benefit from more interdisciplinary collaboration, drawing on insights from fields such as linguistics, cognitive science, and cultural studies. By fostering a more holistic and multifaceted approach to translation research, scholars can gain a deeper understanding of the complexities and nuances involved in the process.

In conclusion, the field of English-Chinese translation has been the subject of extensive research and study, yielding valuable insights into the linguistic, cultural, and technological challenges involved in the translation process. As the demand for translation services continues to grow, it is crucial that researchers and practitioners work together to advance the field, develop innovative solutions, and ensure the highest quality of translation services. By doing so, they can contribute to greater cross-cultural understanding and effective communication in an increasingly globalized world.

Literature Review and Foreign-Language Literature Translation


Huazhong University of Science and Technology, Wenhua College
Graduation Design (Thesis) Foreign-Language Literature Translation (for undergraduate students)

Topic: PLC based control system for the music fountain
Student name: 周训方    Student number: 060108011117
Department: Information Department    Major and class: Automation Class 1, 2006
Advisor: 张晓丹    Title or degree: Teaching Assistant
Date: 20__ (year) __ (month) __ (day)

Foreign-language literature translation (about 1,000 Chinese characters): [At least 5 main readings are required. Bibliographic information must be appended after the translation, including: author, book title (or paper title), publisher (or journal name), publication date (or issue number), and page numbers.

Attach the translated foreign-language source material (for printed sources, include photocopies of the cover, back cover, table of contents, and the translated portion; for web sources, give the URL and the original text).]

Excerpt of the English original:

The Central Processing Unit (CPU) is the brain of a PLC controller. The CPU itself is usually a microcontroller. Previously these were 8-bit microcontrollers such as the 8051; now they are 16- and 32-bit microcontrollers. An unspoken rule is that you will mostly find Hitachi and Fujitsu microcontrollers in PLC controllers by Japanese makers, Siemens in European controllers, and Motorola microcontrollers in American ones. The CPU also takes care of communication, interconnectedness among the other parts of the PLC controller, program execution, memory operation, overseeing inputs, and setting up outputs. PLC controllers have complex routines for memory checkup to ensure that PLC memory has not been damaged (the memory checkup is done for safety reasons). Generally speaking, the CPU unit performs a great number of check-ups of the PLC controller itself so that eventual errors are discovered early. You can simply look at any PLC controller and see several indicators in the form of light diodes for error signaling.

System memory (today mostly implemented in FLASH technology) is used by a PLC for the process control system. Aside from the operating system, it also contains the user program translated from a ladder diagram to binary form. FLASH memory contents can be changed only when the user program is being changed. Earlier PLC controllers had EPROM memory instead of FLASH memory, which had to be erased with a UV lamp and programmed on programmers; with the use of FLASH technology this process was greatly shortened. Reprogramming the program memory is done through a serial cable in a program for application development.

User memory is divided into blocks having special functions. Some parts of the memory are used for storing input and output status.
The real status of an input is stored either as "1" or as "0" in a specific memory bit; each input or output has one corresponding bit in memory. Other parts of the memory are used to store the contents of variables used in the user program. For example, a timer value or a counter value would be stored in this part of the memory.

A PLC controller can be reprogrammed through a computer (the usual way), but also through manual programmers (consoles). In practice this means that every PLC controller can be programmed through a computer if you have the software needed for programming. Today's portable computers are ideal for reprogramming a PLC controller in the factory itself, which is of great importance to industry. Once a system is corrected, it is also important to read the right program into the PLC again. It is also good to check from time to time whether the program in the PLC has not changed. This helps to avoid hazardous situations in factory rooms (some automakers have established communication networks which regularly check the programs in PLC controllers to ensure execution only of good programs).

Almost every program for programming a PLC controller possesses various useful options, such as forced switching on and off of the system inputs/outputs (I/O lines), program follow-up in real time, and documenting the diagram. This documenting is necessary to understand and define failures and malfunctions. The programmer can add remarks, names of input or output devices, and comments that can be useful when finding errors or during system maintenance. Adding comments and remarks enables any technician (and not just the person who developed the system) to understand the ladder diagram right away. Comments and remarks can even quote precise part numbers if replacements are needed. This speeds up the repair of any problems that come up due to bad parts.
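The excerpt above notes that each input or output is mapped to one bit in PLC user memory. As a minimal, vendor-neutral sketch (the function names are hypothetical, not drawn from any real PLC instruction set), the input image table can be modeled as booleans packed into bytes:

```python
# Illustrative model of a PLC input image table: each physical input
# occupies one bit in user memory, as described in the excerpt above.

def pack_inputs(states):
    """Pack a list of boolean input states into bytes, LSB first."""
    data = bytearray((len(states) + 7) // 8)
    for i, on in enumerate(states):
        if on:
            data[i // 8] |= 1 << (i % 8)
    return bytes(data)

def read_input(image, i):
    """Return the stored status (1 or 0) of input number i."""
    return (image[i // 8] >> (i % 8)) & 1

# Eight hypothetical digital inputs; inputs 0, 3, and 5 are ON.
image = pack_inputs([True, False, False, True, False, True, False, False])
print(read_input(image, 0))  # input 0 is ON, prints 1
print(read_input(image, 1))  # input 1 is OFF, prints 0
```

The same packing applies to the output image table; a real controller simply refreshes these bytes on every scan cycle.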
The old way was that the person who developed a system kept the program protected, so nobody aside from that person could understand how it was done. A correctly documented ladder diagram allows any technician to understand thoroughly how the system functions.

Electrical supply is used to bring electrical energy to the central processing unit. Most PLC controllers work at either 24 VDC or 220 VAC. On some PLC controllers you will find the electrical supply as a separate module; those are usually the bigger PLC controllers, while small and medium series already contain the supply module. The user has to determine how much current to take from the I/O module to ensure that the electrical supply provides the appropriate amount of current, since different types of modules use different amounts of electrical current. This electrical supply is usually not used to power external inputs or outputs. The user has to provide separate supplies for the PLC controller inputs, because this ensures a so-called "pure" supply for the PLC controller, meaning a supply that the industrial environment cannot affect damagingly. Some of the smaller PLC controllers supply their inputs with voltage from a small supply source already incorporated into the PLC.

Chinese translation (excerpt): Structurally, PLCs are divided into two types: fixed and modular (module-based).
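The memory check-up routines mentioned at the start of this excerpt can be illustrated with a toy integrity check. Real controllers use vendor-specific CRC schemes; the simple additive checksum and the function names below are illustrative assumptions only:

```python
# Sketch of a program-memory integrity check, as described in the
# excerpt: the CPU verifies that the stored user program has not been
# corrupted. Real PLCs use vendor-specific CRCs; this additive
# checksum is purely for illustration.

def checksum(program: bytes) -> int:
    """Additive 16-bit checksum over the stored user program."""
    return sum(program) & 0xFFFF

def memory_ok(program: bytes, stored_checksum: int) -> bool:
    """True if the program image still matches its recorded checksum."""
    return checksum(program) == stored_checksum

program = bytes([0x12, 0x34, 0x56])   # user program as written to FLASH
recorded = checksum(program)          # recorded alongside the program

assert memory_ok(program, recorded)                    # intact memory passes
assert not memory_ok(bytes([0x12, 0x34, 0x57]), recorded)  # a flipped byte is caught
```

A controller would run such a check at power-up and periodically during operation, lighting an error diode when the check fails.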

English Format for a Literature Review


The Impact of Artificial Intelligence on Healthcare

Introduction

Artificial intelligence (AI) has revolutionized various industries, and healthcare is no exception. With its ability to analyze vast amounts of data, AI has the potential to transform healthcare delivery, improve patient outcomes, and enhance the efficiency of healthcare systems. This article aims to provide a comprehensive review of the impact of AI on healthcare, covering various aspects such as diagnosis, treatment, patient monitoring, and healthcare management.

1. AI in Diagnosis

AI has shown great promise in improving the accuracy and efficiency of medical diagnosis. Machine learning algorithms can analyze medical images, such as X-rays, CT scans, and MRIs, to detect abnormalities and assist radiologists in making accurate diagnoses. For example, a deep learning algorithm developed by researchers at Stanford University achieved a level of accuracy comparable to human dermatologists in identifying skin cancer from images. AI-powered diagnostic tools can help reduce diagnostic errors, speed up the diagnosis process, and enable early detection of diseases.

2. AI in Treatment
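The image-classification idea behind the diagnostic tools described above can be illustrated with a toy sketch. Real systems use deep networks trained on large labeled image datasets; the nearest-centroid classifier, the two-feature lesion summaries, and all names below are synthetic and purely illustrative:

```python
# Toy sketch of image-based classification: assign a new feature
# vector to the class whose training centroid is nearest. The
# "lesion features" here are invented for illustration only.

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Return the label whose centroid is closest to feature vector x."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical two-feature image summaries: (asymmetry, border irregularity)
benign = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25)]
malignant = [(0.8, 0.9), (0.9, 0.7), (0.85, 0.8)]
centroids = {"benign": centroid(benign), "malignant": centroid(malignant)}

print(classify((0.2, 0.2), centroids))   # lands near the benign cluster
print(classify((0.9, 0.8), centroids))   # lands near the malignant cluster
```

The deep-learning systems cited in the text replace the hand-picked features with representations learned from millions of images, but the decision step is the same in spirit: map an image to a feature vector, then to the most likely class.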

Peanut Sheller Literature Review and Foreign-Language Translation


Zhengzhou Institute of Science and Technology, Graduation Thesis (Design) Literature Review
Topic: Design of a Peanut Shelling Machine    Topic category: Graduation design
Department: Mechanical Engineering    Class: Mechanical Manufacturing 2010, Class x
Student: xxx    Advisor: xxx    Completion date: March 10, 2014

Design of a Peanut Shelling Machine

Abstract: Since China joined the WTO, the development, promotion, and application of peanut shelling machinery at home and abroad have steadily increased. In view of the strengths and weaknesses of existing peanut shelling machines, future development should draw on the experience gained in production applications and continually improve their functions, sustaining the current favorable momentum.

Keywords: housing; frame; process

1. Current state of development of peanut shelling machines

Since 1963, when the former Eighth Ministry of Machine Building issued a research project on peanut shelling machines, several dozen models have appeared in China.

Simple, inexpensive machines that perform only the shelling function, intended mainly for small household use, are widely applied in some regions of China, while larger machines that can shell, separate, clean, and grade are common in enterprises that process peanuts in large volumes.

Many models are available domestically, such as the 6HB-6, 6HB-20, and 6HB-2 peanut shellers (technical parameters are given in the attached table); their working efficiency is 20 to 60 times or more that of manual shelling.

The 6HB-180 peanut sheller adopts a three-roller mixed shelling structure and can perform secondary shelling.

As China's peanut industry continues to adjust and peanut output grows year by year, the degree of mechanized shelling will rise substantially, and peanut shelling machinery has broad prospects for development.

There are many shelling principles, and hence many different types of peanut shelling machines have emerged.

The shelling component is the key working part of a peanut sheller; its technical level determines important economic indicators of the machine's operation, such as the kernel breakage rate, the one-pass shelling rate, and the production efficiency.

In current production and sales, the kernel breakage rate is the indicator of greatest concern.

Before the 1980s, peanut shelling machines generally had breakage rates above 8%, sometimes above 15%.

Kernels processed this way could only be used for oil pressing; they could not be used as seed, nor did they meet export standards.

Exploring new shelling principles and developing new shelling components to reduce the breakage rate thus became an important research topic for peanut shelling machinery.

From the early 1960s, closed rasp-bar drum, grid concave-plate peanut shellers began to appear in China.

Literature Review Translation [Latest]


Welcome to Greater Testing Concepts, the organization behind the Mandelbrot competitions, which now provides quality mathematics resources to teachers and students across the United States and around the world. We believe that problem solving is one of the most effective means of exploring the world of mathematics, and competition provides an exciting and motivating setting for doing so.

Greater Testing Concepts offers two competitions at the high school level. The first is the Mandelbrot Competition, which follows a short-answer format and takes place in five rounds over the course of a school year. The questions and topics are designed so that students across the country with a standard background can work at full stretch alongside the best students. The second is the Mandelbrot Team Play, which emphasizes mathematical writing skills and effective teamwork. Held in three rounds over the winter, it is intended to help advanced students prepare for events such as the USAMO.

For a fuller description of the competitions and an overview of the site, visit the introduction page. Dates for this year appear on the information page, and answers to your questions can be found on the FAQ page. The resources page provides sample contests and other preparation materials. Registration for this school year has closed, but it will reopen in early July 2012. Finally, as you browse the site, do not hesitate to send us comments or questions via the contact page.

Announcements

November 17, 2011: The site is now accepting registrations for the 2012 Team Play from schools already registered for the Mandelbrot Competition. School coordinators can register by logging into their accounts through the contest materials page.

October 30, 2011: Registration for the 2011-12 Mandelbrot Competition has closed. Good luck to all students in the upcoming rounds!

June 1, 2011: Last summer we announced several significant changes to the way the Mandelbrot Competition takes place, including a move to five geographic regions and a new electronic delivery option. Look for details about these changes during the registration process.

Introduction

About the authors • The origins of Mandelbrot

This page serves as an overview of the Mandelbrot competitions. You can familiarize yourself with their general format, learn about the authors of the contests, or read more of the story behind the founding of Greater Testing Concepts.

Overview

The purpose of Greater Testing Concepts is to provide a challenging, engaging mathematical experience that is both competitive and educational.

Graduation Thesis: Literature Review Translation Template


Graduation Design (Thesis) Translation
Title: (write the translation title)    Department:    Class:    Student number:    Student name:    Advisor:    200X (year), X (month)

Research on Risk-Oriented Auditing (translation title)
Pierce, B., Sweeney, B.

In recent years, the risks facing the auditing profession have grown steadily. A series of accounting fraud cases have emerged at home and abroad, such as Enron and Xerox in the United States and 原野, 琼民源, and 红光 in China. Auditors have consequently been involved in thousands of cases, with damages that are hard to estimate; some even say the profession has entered an era of "litigation explosion". A number of accounting firms have gone bankrupt or fallen into difficulty because of litigation. This has made auditors keenly aware that, in a fiercely competitive market economy, the business risk facing the profession is growing ever larger, and the profession must find ways to reduce it.

To this end, Western countries led by the United States began large-scale revisions of their auditing standards in the 1980s. Out of this practice emerged an audit model centered on risk assessment, which has since been widely applied.

1. The social basis for the emergence of risk-oriented auditing

(1) A high-risk audit practice environment is the direct cause of risk-oriented auditing.

With the failure of many enterprises, users of audited statements have come to equate business failure with audit failure. They believe that when an enterprise is on the verge of bankruptcy, the certified public accountant auditing its financial statements should issue an early warning, and so the gap between auditors and public expectations keeps widening. A growing expectation gap indicates growing public demand for auditing. To narrow the gap, the profession must find ways to take the initiative and meet this demand, which means assuming certain legal responsibilities, namely a greater responsibility for detecting errors and preventing fraud. As enterprises grow in scale and their business becomes more complex and computerized, accounting and auditing work becomes more complicated; a treacherous business environment necessarily means a severe, high-risk audit practice environment. A new audit technique, risk-oriented auditing, is therefore urgently needed.

(2) A strict legal environment is the external driving force behind risk-oriented auditing.

Modern society is to some extent a contract economy: various contracts define relations between people, the law protects both parties to a contract, and all disputes are resolved by legal means. Securities law is increasingly conscious of its responsibility to protect investors' interests. Thus when investors' interests are harmed, when an investee enterprise goes bankrupt, when investors are left unable to invest, or when creditors have no hope of recovering their debts, they are very likely to sue the certified public accountant.

Literature Review: The English Subtitling of Crouching Tiger, Hidden Dragon from the Perspective of Cultural Translation Theory


Literature Review for Film Subtitle Translation of Crouching Tiger, Hidden Dragon under Culture Translation Theory
Zhang Fengyi, Foreign Language Department

Subtitling can be defined as the process of providing synchronized captions for film and television dialogue. Subtitles, sometimes referred to as captions, are transcriptions of film or TV dialogue, presented simultaneously on the screen. A subtitle is a printed statement or fragment of dialogue appearing on the screen between the scenes of a silent motion picture, or appearing as a translation at the bottom of the screen during the scenes of a motion picture or television show in a foreign language.

In recent years, film subtitle translation has received wide attention. This part provides a relatively comprehensive review of theories on the translation of Chinese film subtitles into English. Many scholars and experts have noticed the cultural factors that affect film subtitle translation, and so have attached much importance to cultural factors in the process of translating film subtitles.

At the turn of the nineteenth century, the study of cultural anthropology suggested that linguistic barriers were insuperable and that language was entirely the product of culture. For the translation of cultural factors, "if attempted to translate it at all, translation must be as literal as possible" (Newmark 45). Eugene A. Nida, a very famous American translation theorist, emphasizes the "closest natural equivalence" and "naturalness of expression". That requires that "the translated versions should be idiomatic, natural, and easy to be accepted by readers, which tend to language is just a tool to communicate. The more readers understand the better. In this way the culture of the source language will surrender to the smooth" (Nida 118). On that view, the cultural factors hidden in the original text will be ignored. However, he also says: "translation consists in reproducing in the receptor language the closest natural equivalent of the source language, first in terms of meaning and secondly in terms of style" (Nida and Taber 12). Cultural meaning is an important part of Chinese film subtitling; it cannot be ignored or depreciated, and it needs to be translated.

In Quality Control of Subtitles: Review or Preview?, Heulwen James illustrates the different expectations of different clients, namely the scriptwriter, the producer, the broadcaster, and the viewer. Of all clients, viewers and their expectations are the most important. James claims that subtitles should be correct, clear, and credible. In order to achieve these aims, subtitles need to respect the guidelines provided by subtitling conventions and principles. The conventions include time-coding, duration of subtitles, shot cuts, formatting, etc., and the principles of subtitling include reduction of the original dialogue, simplification of language, character portrayal, cultural adaptation, and so on (James, 2001: 152).

Friedrich Schleiermacher, a German translation theorist, first put forward the theoretical concepts of domestication and foreignization. However, Lawrence Venuti first uses "domestication" and "foreignization" in his book The Translator's Invisibility. He holds that the translator makes a choice during the translating process, concerning the values, beliefs, and representations preexisting in the target language. He defines foreignization and domestication as follows: "Schleiermacher allowed the translator to choose between a domesticating method, an ethnocentric reduction of the foreign text to target-language cultural values, bringing the author back home, and a foreignizing method, an ethnodeviant pressure on those values to register the linguistic and cultural difference of the foreign text, sending the reader abroad" (Venuti 20). The prevalence of domestication may cause narcissism and ethnocentrism in the country of the target language. If Chinese film subtitles are translated like that, the Chinese culture hidden in them will not be delivered to foreign audiences. Therefore, foreignization seems more suitable for Chinese film subtitle translation and becomes the better principle to obey.

Though subtitle translation is a new research field, it has been studied systematically at home. When it comes to film subtitle translation, there is no doubt that cultural elements should be taken into consideration. "Translation is not only an activity of lingual exchange, information transfer, but also a kind of cultural communication between different countries and nations. In translation research and practice, source language and target language have cultural similarities and differences, so cultural factors are always the first thing to be considered" (Song 352). Hence, in Chinese film subtitle translation, the cultural element, the key representative of Chinese culture, cannot be ignored in the process of translation.

On the constraints of film subtitle translation, Qian Shaochang points out that film and television language differs from written literary language in five features: it is listened to rather than read, comprehensive, instantaneous, popular, and unable to carry annotations. A well-translated subtitle can not only attract more audiences but also highlight and enhance the spirit and identity of Chinese culture.

Li Yunxing (2001: 30-32) is the first professor to discuss the strategies of subtitle translation in Chinese Translators Journal. He analyzes the features of subtitles in terms of time-space constraints, informative function, and cultural factors, and then puts forward corresponding strategies for translation, with concrete examples for illustration.

In Linguistic Dimensions of Subtitling: Perspectives from Taiwan, Sheng-jie Chen (2004) investigates the linguistic dimension of the subtitling of English films into Chinese. He indicates that several factors affect the quality of subtitling, including film pirating, uncontrollable outsourced projects, economic factors, and linguistic factors. The linguistic dimensions discussed consist of the following sub-topics: brevity and clarity, double-lined subtitling, omission, punctuation, structural discrepancy, and swearwords. In the Chinese subtitles, swearwords are toned down, some punctuation marks and inessential information are omitted, and, despite the source-language register, literary Chinese is used for brevity.

Chapman Chen (2004), in his On the Hong Kong Chinese Subtitling of English Swearwords, illustrates how American English swearwords are under-translated in Hong Kong, and explains why they are inadequately translated there in terms of patronage, illocutionary strategies, and socio-linguistics. In order to help the audience become more involved in the film-watching experience, he suggests that English swearwords should be subtitled with their Cantonese dynamic equivalents, as Cantonese is the mother tongue of most Hong Kong people. Furthermore, when subtitling English swearwords, attention must be paid to certain subtle but important linguistic, psychosexual, and religious differences between Western and Chinese cultures.

Long Qianhong (2006) analyzes the Chinese film In the Mood for Love and considers its English subtitles a success linguistically, culturally, and technically. They are succinct and lucid, and convey the most relevant information to the intended audience in the most effective way. The strategies used for subtitling this film are mainly domestication and foreignization, chosen according to the cultural information and the space and time constraints.

Film subtitle translation not only means a conversion process of language, but also concerns the transfer of cultures. It is obvious that film subtitle translation is not a simple two-way street between two languages. One of the main difficulties comes from the fact that some words in the original language do not always have counterparts in the target language. How to deal with cultural differences during the process of translation is therefore a meaningful task for study.

Bassnett, Susan & Lefevere, André. Constructing Cultures: Essays on Literary Translation. Shanghai: Shanghai Foreign Language Education Press, 2001.
Nida, Eugene A. Toward a Science of Translating. Leiden: E. J. Brill, 1964.
James, Heulwen. "Quality Control of Subtitles: Review or Preview?" In Gambier, Yves & Henrik Gottlieb (eds.), (Multi)Media Translation: Concepts, Practices and Research. Amsterdam/Philadelphia: John Benjamins Publishing Company, 2001, 151.
Ma, Huijuan. A Study on Nida's Translation Theory. Beijing: Foreign Language Teaching and Research Press, 2003.
Nida, Eugene A. & C. Taber. The Theory and Practice of Translation. Leiden: Brill, 1969.
Nida, Eugene A. Language, Culture and Translating. Shanghai: Shanghai Foreign Language Education Press, 1993.
李运兴. 字幕翻译的策略[J]. 中国翻译, 2001(4): 30-32.
龙千红. 《花样年华》的英文字幕翻译策略研究[J]. 西安外国语学院学报, 2006.
钱绍昌. 影视翻译:翻译园地中愈来愈重要的领域[J]. 中国翻译, 2001(1): 61-65.
孙致礼. 中国文学翻译:从归化趋向异化[J]. 中国翻译, 2002(1): 40-44.
王度庐. 卧虎藏龙[M]. 北岳文艺出版社, 2015.
许渊冲. 翻译的艺术[M]. 北京:中国对外翻译出版公司, 1984.

Literature Review and English Translation


Design and Development of a Teaching Interaction System

Abstract: As humanity enters the twenty-first century, computer network technology and information technology are developing rapidly, and the shock of this development has quickly reached the world of education. In recent years, with the development of quality-oriented education and the informatization of education, information technology education has become an important topic in China's current reform of basic education. Online teaching, combining pictures, text, sound, and video, has been accepted by teachers, is widely applied in classroom teaching, and has had a profound influence [1], [2].

Online teaching is a new form of education that relies on the well-developed Internet, provides Web Service based support and management of the teaching process, separates teaching from learning, and offers a student-centered environment for autonomous learning, interactive Q&A, and discussion, so as to expand the scale of education.

Online teaching has attracted more and more attention for its distinctive approach, and the corresponding online education platforms have grown increasingly diverse. Yet today's many platforms all have their shortcomings. This paper holds that, by absorbing their strengths, discarding their weaknesses, and making technical improvements, such a system will have broad prospects and be welcomed by users.

Keywords: network technology; online teaching; Web Service

1. Introduction

1.1 Main purpose

The old way of teaching inevitably has many unsatisfactory aspects. For example, during lectures students may fail to grasp certain knowledge points well, and the classroom's weakening hold on students' attention leads to distraction. A teaching interaction system offers, to a certain extent, solutions to these classroom problems.

1.2 Related concepts and scope of the review

Formally, the most basic form of interaction is a process in which one party sends a signal or action to the other, and the other gives corresponding feedback. The feedback can be a signal response or an action response. This is the simplest, most basic round of interaction. Complex interaction occurs when, after one party initiates, both parties continually give feedback to each other's signals and actions, producing multiple rounds of information or action exchange [3]. What qualifies both parties as participants in an interaction is that each must be able to respond to the information or actions the other sends [4].

In summary, a teaching interaction system integrates video, audio, and data communication; supports rich multimedia teaching and the production and management of courseware; and can realize multiple teaching modes such as interaction, live broadcast, and on-demand playback. It is an interactive online learning platform for learning management and knowledge management, aimed at education authorities, schools, training institutions, and enterprises.
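The definition of interaction given above (one party sends a signal, the other returns feedback, possibly over several rounds) can be sketched in a minimal model. All names here are illustrative and not part of any real teaching platform:

```python
# Minimal model of the interaction defined above: an initiator sends
# signals, a responder returns feedback, and each signal/feedback pair
# forms one round. Multiple rounds make a "complex" interaction.

def run_interaction(initiator_signals, responder):
    """Run a multi-round interaction and return its transcript."""
    transcript = []
    for signal in initiator_signals:
        feedback = responder(signal)           # responder's ability to react
        transcript.append((signal, feedback))  # one complete round
    return transcript

# A hypothetical responder: a tutor that acknowledges each question.
tutor = lambda question: "feedback on: " + question

rounds = run_interaction(["Q1", "Q2"], tutor)
print(len(rounds))    # two complete rounds took place
print(rounds[0][1])   # the feedback returned in the first round
```

The "qualification" condition from the text maps directly onto the model: a party without a working `responder` function cannot complete a round, and thus cannot participate in interaction.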

English Literature Review for a Translation Report


Introduction:

In recent years, there has been a growing interest in the study of [topic]. Several research articles have been published, investigating various aspects of this field. This literature review aims to provide a comprehensive overview of the current research in [topic], highlighting the key findings and methodologies employed.

Methodology:

To conduct this literature review, an extensive search was carried out across various academic databases, including PubMed, Google Scholar, and Web of Science. The search strategy involved using a combination of relevant keywords and search terms, such as [keywords]. The inclusion criteria for selecting articles included relevance to the topic, publication in peer-reviewed journals, and availability of full-text articles in English.

Literature Review:

1. Study 1: [Title]
The first study by [Author] explored [specific aspect of topic]. The researchers conducted [brief description of research methodology], and their findings revealed [summary of key findings]. This study highlights the importance of [topic] in [relevant field] and suggests that [implications of findings].

2. Study 2: [Title]
In the second study by [Author], the focus was on [specific aspect of topic]. The researchers adopted [brief description of research methodology], and their results indicated [summary of key findings]. This study contributes to the understanding of [topic] by providing insights into [implications of findings].

3. Study 3: [Title]
Another study conducted by [Author] examined [specific aspect of topic]. The research design involved [brief description of research methodology]. The findings demonstrated [summary of key findings]. This study adds to the existing literature by [implications of findings].

4. Study 4: [Title]
[Author] conducted a study investigating [specific aspect of topic]. The researchers employed [brief description of research methodology], and the results indicated [summary of key findings]. This study offers valuable insights into [implications of findings] and suggests further avenues for research in [topic].

Discussion:

The reviewed studies collectively contribute to the understanding of [topic]. They provide evidence regarding [aspects of topic], highlighting [implications of findings]. However, it is important to note that there are still gaps in the current knowledge, suggesting the need for further research in [topic].

Conclusion:

In conclusion, this literature review provides an overview of the recent research conducted in the field of [topic]. The reviewed studies contribute to the existing literature by offering valuable insights into [aspects of topic]. Based on the findings, it is evident that [implications of findings]. Future research should focus on addressing the existing gaps and expanding the knowledge base in [topic].

References:

1. [List of references cited in the literature review]
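The inclusion screening described in the Methodology section of this template (relevance, peer review, English full text) amounts to a simple filter over candidate records. The sketch below illustrates that screening step; the record fields and titles are hypothetical placeholders:

```python
# Sketch of the inclusion screening from the Methodology section:
# keep only articles that are relevant to the topic, peer-reviewed,
# and available in English. All field names and records are invented.

def meets_inclusion_criteria(article):
    """Apply the template's three inclusion criteria to one record."""
    return (article["relevant"]
            and article["peer_reviewed"]
            and article["full_text_language"] == "English")

articles = [
    {"title": "Study 1", "relevant": True,  "peer_reviewed": True,  "full_text_language": "English"},
    {"title": "Study 2", "relevant": True,  "peer_reviewed": False, "full_text_language": "English"},
    {"title": "Study 3", "relevant": False, "peer_reviewed": True,  "full_text_language": "English"},
]

included = [a["title"] for a in articles if meets_inclusion_criteria(a)]
print(included)  # only the record meeting all three criteria survives
```

In practice the same screening is usually done by hand against database search results, but stating the criteria as an explicit predicate, as here, helps keep the selection reproducible.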

Literature Review (Sample Literature Review for a Translation-Practice Thesis)


On Style Equivalence in the Chinese Translation of E. B. White's Essays from the Perspective of Functional Equivalence Theory
The Style Equivalence in the Translation of Essays by E. B. White Based on the Theory of Functional Equivalence

No one can deny the difficulties of literary prose translation from English to Chinese. The essay, which can generally be seen as literary prose, is so varied in form, content, and style that it is hard to single out its translation as a whole for evaluation. Discussions surrounding the translation of essays never cease. Scholars at home and abroad have done a great body of research on it, and some stand out for their original and comprehensive achievements. Let us examine some notable theories of essay translation that they have built.

First comes Hilaire Belloc. He points out that the essence of translating is the resurrection of an alien thing in a native body, which echoes the idea of "reaching the acme of perfection" proposed by Mr. Qian Zhongshu in his work on the translations of Lin Shu, and he laid down six general rules for prose translation, which give relatively clear guidance for the translation of prose texts.

Then Burton Raffel argues, in his book The Art of Translating Prose, that a strict translation of prose should reveal the inner structure of the original syntax. In his opinion, the syntactic structures of prose represent the style of the author, and "the style is the man". He further puts forward that only when the syntactic structures of the original message are kept or retained can the style of the original be successfully reproduced or transposed. He takes translation as an art rather than a science, and views prose translation largely from the perspective of stylistics.

As for domestic scholars on essay translation, Professor Gao Jin holds that tone and style are to a large extent translatable, and gives definitions for the translatability of language in general and in particular cases. If the essence of the thought and ideas of the original is fully grasped, the tone and style of the author are likely to be retained.

Then there is Liu Shicong with his "artistic flavor" theory. According to Professor Liu, "artistic flavor" contains textual atmosphere, sound and rhythm, and individualized artistic recreation. He reaches a deep level of prose translation with the recreation of artistic flavor at its very core, though his theory is hard to put into operation and has yet to stand the test of time.

Among all these theories, functional equivalence theory is of the highest importance. Functional equivalence, originally called dynamic equivalence and raised by Dr. Eugene A. Nida as "the closest natural equivalence" to the source language text, is taken as a better and relatively operable way to evaluate and handle problems in translation that traditional translation theory cannot manage well. Before the theory appeared, there was no practical method of keeping a balance between literal translation and free translation. Though it is not addressed strictly to prose translation, it still offers much guidance for the translation of essays.

This paper tries to analyse the equivalence of style in the translation of essays based on functional equivalence theory, taking some essays by E. B. White as examples.

A Sample English Literature Review


Title: English Literature Review: A Comprehensive Perspective

In the realm of academic research, the literature review serves as a critical component, particularly in the field of English literature. This paper aims to provide an extensive overview of the significant developments and trends within the discipline, drawing upon a diverse range of sources and perspectives.

Firstly, it is essential to recognize the evolving nature of English literature, which has been shaped by various historical, cultural, and societal influences. The early works of Shakespeare, for instance, have been extensively analyzed for their thematic depth and linguistic intricacies. Modern scholars continue to delve into these classics, offering fresh interpretations that resonate with contemporary audiences.

Moreover, the emergence of new literary genres and movements has significantly broadened the scope of English literature. Postmodernism, for example, has challenged traditional narrative structures and perspectives, introducing elements of ambiguity and fragmentation. This trend has been explored in numerous studies, highlighting the diverse ways in which authors have responded to and shaped the postmodern era.

Furthermore, the intersection of English literature with other disciplines, such as psychology, sociology, and anthropology, has opened up new avenues for research. The exploration of character psychology in literary texts, or the analysis of societal norms and values reflected in literature, are just a few examples of this interdisciplinary approach.

In terms of methodologies, the literature review has also undergone significant transformations. With the advent of digital technologies and online databases, scholars now have access to vast repositories of information, enabling them to conduct more comprehensive and rigorous reviews. However, the challenge lies in effectively synthesizing and evaluating this vast amount of data.

One notable trend in recent years has been the increasing focus on global perspectives in English literature. With the growth of international literary movements and the rise of multiculturalism, scholars are now more inclined to explore the global dimensions of literary works. This approach not only broadens our understanding of English literature but also promotes cross-cultural understanding and exchange.

Moreover, the impact of gender and race on English literature has also been a topic of increasing interest. The examination of how gender roles and racial identities are represented and constructed in literary texts has provided valuable insights into the complex intersections of identity, power, and representation.

In conclusion, the literature review in English literature is a dynamic and evolving field that continues to shape our understanding of the discipline. By exploring diverse themes, genres, and methodologies, scholars are able to delve deeper into the rich tapestry of English literature, revealing new meanings and perspectives that resonate with our contemporary world.

Literature Review: English Translation

Literature Review English Translation

A literature review is a process of reviewing the available literature related to a particular subject or topic. It is a comprehensive summary and evaluation of the existing research that has been conducted on the topic, and it may be organized in different ways, such as thematically, methodologically, or chronologically.

The purpose of a literature review is to provide an overview of the existing body of research on a given topic. It serves to identify gaps in the existing literature, assess the quality of research findings, and point out areas for future research. It is also used to establish the theoretical framework for a research project, and can inform decision-making or policy development.

When translating a literature review from English into another language, it is important to consider the target audience and the context of the translation. The translator should have a good understanding of both the source language and the target language, as well as the relevant cultural, historical, and political contexts.

In addition to a good understanding of the source language, the translator must have in-depth knowledge of the subject matter discussed in the literature review. This includes understanding the terminology used in the original text, as well as the key concepts and theories it discusses. The translator must also be familiar with the writing style and conventions of the target language, and should ensure that the translated text follows those conventions.

When translating a literature review, the translator should pay attention to the structure and organization of the original text. If the original text was divided into sections or subsections, these should be preserved in the translation. The translator should also pay close attention to the language of the original text and ensure that the translated text is written in the same style.

The translator should also ensure that all citations and references in the original text are accurately translated. This includes making sure that any foreign names or terms are correctly transliterated, and that any links to online sources remain valid after the translation. Finally, the translator should make sure that the translated text reads fluently and logically, and that the ideas expressed in the original text are faithfully rendered in the translated version.

A Literature Review of Translation Process Theory

A Literature Review of Translation Process Theory
North China Electric Power University, Graduate English Class 1222, XXX

Translation is an activity with a long history, and reflection on translation has an equally long history, arising almost as soon as translation itself appeared.

For a long historical period, translation was considered a matter for translators alone. Translators, in turn, devoted their energy mainly to the practice of translation, so discussions of translation stayed at the level of operational technique, wrestling with whether to translate literally or freely.

As human society developed and progressed, thinking about translation gradually deepened. Over the past few centuries, scholars have drawn on their own fields of research to study translation from different angles, and have put forward many theories of deep significance.

The author surveyed 25 highly representative Chinese and foreign theorists of the translation process from recent centuries, 3 of them Chinese and 22 foreign. Following the classification commonly used by experts and scholars, the wide range of theories these theorists have proposed can be sorted into schools, including the philological school, the linguistic school, the functionalist school, the cognitive school, the descriptive school, the cultural school, the philosophically oriented school, the empirical research school, the artistic school, and the hermeneutic school.

The Philological School

The philological school has three representative figures: Dryden, Tytler, and Savory.

Dryden: a threefold division of poetry translation. John Dryden (1631–1700), named Poet Laureate in England, was an important critic and dramatist of the English classical period. In his essay on the three types of translation, he proposed three terms for poetry translation: metaphrase (word-for-word translation), paraphrase, and imitation.

Tytler: three principles of translation. A. F. Tytler was a British translator. In 1790 he published the "Essay on the Principles of Translation", setting out his famous three principles: first, the translation should fully reproduce the ideas of the original work; second, the style and manner of the translation should be of the same character as that of the original; third, the translation should be as fluent as the original.

Savory: translation should convey the style of the work. Discussing the art of translation, Savory observed: "Style is the essential characteristic of a work, the product of the author's personality and of his emotions at the time. Every passage of a text displays the author's style to some degree. What is true of the author is also true of the translator. The author's style, whether formed naturally or borrowed through imitation, determines his choice of words... the translator can only make a selection among the words available to him."

Cold Chain Logistics: Foreign Literature Translation and Review

(This document contains a Chinese–English parallel text, i.e., the English original and a Chinese translation.)

Abstract

Quality control and monitoring of perishable goods during transportation and delivery services is an increasing concern for producers, suppliers, transport decision makers and consumers. The major challenge is to ensure a continuous 'cold chain' from producer to consumer in order to guarantee the prime condition of goods. In this framework, the suitability of the ZigBee protocol for monitoring refrigerated transportation has been proposed by several authors. However, to date no experimental work had been performed under real conditions. Thus, the main objective of our experiment was to test wireless sensor motes based on the ZigBee/IEEE 802.15.4 protocol during a real shipment. The experiment was conducted in a refrigerated truck traveling through two countries (Spain and France), a journey of 1,051 kilometers. The paper illustrates the great potential of this type of mote, providing information about several parameters such as temperature, relative humidity, door openings and truck stops. Psychrometric charts have also been developed to improve knowledge about water loss and condensation on the product during shipments.

1. Introduction

Perishable food products such as vegetables, fruit, meat or fish require refrigerated transportation. For all these products, temperature (T) is the most important factor for extending shelf life, so it is essential to ensure that temperatures along the cold chain are adequate. However, local temperature deviations can be present in almost any transport situation. Reports from the literature indicate gradients of 5 °C or more, when deviations of only a few degrees can lead to spoiled goods and thousands of euros in damages. A recent study shows that refrigerated shipments rise above the optimum temperature in 30% of trips from the supplier to the distribution centre, and in 15% of trips from the distribution centre to the stores. Roy et al.
analyzed the supply of fresh tomato in Japan and quantified product losses of 5% during transportation and distribution. Thermal variations during transoceanic shipments have also been studied. The results showed significant temperature variability, both spatially across the width of the container and temporally along the trip, and that it was out of specification more than 30% of the time. In those experiments, monitoring was achieved by installing hundreds of wired sensors in a single container, which makes this system architecture commercially unfeasible.

Transport is often done by refrigerated road vehicles and containers equipped with embedded cooling systems. In such environments, temperatures rise very quickly if a reefer unit fails. Commercial systems are presently available for monitoring containers and trucks, but they do not give complete information about the cargo, because they typically measure only temperature, and at just one point.

Apart from temperature, water loss is one of the main causes of deterioration that reduces the marketability of perishable food products. Transpiration is the loss of moisture from living tissues, and most weight loss of stored fruit is caused by this process. Relative humidity (RH), the temperature of the product, the temperature of the surrounding atmosphere, and air velocity all affect the amount of water lost from food commodities. Free water or condensation is also a problem, as it encourages microbial infection and growth, and it can reduce the strength of packaging materials.

The parties involved need better quality assurance methods to satisfy customer demands and to create a competitive point of difference. Successful transport in food logistics calls for automated and efficient monitoring and control of shipments.
The challenge is to ensure a continuous 'cold chain' from producer to consumer in order to guarantee the prime condition of goods. The use of wireless sensors in refrigerated vehicles was proposed by Qingshan et al. as a new way of monitoring. Specialized WSN (Wireless Sensor Network) monitoring devices promise to revolutionize the shipping and handling of a wide range of perishable products, giving suppliers and distributors continuous and accurate readings throughout the distribution process. In this framework, ZigBee was developed as a very promising WSN protocol due to its low energy consumption and advanced network capabilities. Its potential for monitoring the cold chain has been addressed by several authors, but only through theoretical approaches, without real experimentation. For this reason, real experimentation aimed at exploring the limits of this technology was a priority in our work.

The main objective of this project is to explore the potential of wireless ZigBee/IEEE 802.15.4 motes for application in commercial refrigerated shipments by road. A secondary objective was to improve knowledge about the conditions that affect perishable food products during transportation, through the study of relevant parameters such as temperature, relative humidity, light, shock and psychrometric properties.

2. Materials and Methods

2.1. ZigBee Motes

Four ZigBee/IEEE 802.15.4 motes (transmitters) and one base station (receiver) were used, all manufactured by Crossbow. The motes consist of a microcontroller board (Micaz) together with an independent transducer board (MTS400) attached by means of a 52-pin connector. The Micaz mote hosts an Atmel ATMEGA103/128L CPU running the Tiny Operating System (TinyOS), which enables it to execute programs developed in the nesC language. The Micaz has a Chipcon CC2420 radio device: 2.4 GHz, 250 kbps, IEEE 802.15.4.
Power is supplied by two AA lithium batteries. The transducer board hosts a variety of sensors: temperature and relative humidity (Sensirion SHT11), temperature and barometric pressure (Intersema MS5534B), light intensity (TAOS TSL2550D) and a two-axis accelerometer (ADXL202JE). A laptop computer is used as the receiver and communicates with the nodes through a Micaz mounted on the MIB520 ZigBee/USB gateway board.

Each Sensirion SHT11 is individually calibrated in a precision humidity chamber, and the calibration coefficients are used internally during measurements to calibrate the signals from the sensors. The accuracies for T and RH are ±0.5 °C (at 25 °C) and ±3.5% respectively.

The Intersema MS5534B is an SMD-hybrid device that includes a piezoresistive pressure sensor and an ADC interface IC. It provides a 16-bit data word from a pressure- and temperature-dependent voltage (−40 to +125 °C). Additionally, the module contains six readable coefficients for highly accurate software calibration of the sensor.

The TSL2550 is a digital-output light sensor with a two-wire SMBus serial interface. It combines two photodiodes and an analog-to-digital converter (ADC) on a single CMOS integrated circuit to provide light measurements over a 12-bit dynamic range. The ADXL202E measures accelerations with a full-scale range of ±2 g; it can measure both dynamic acceleration (e.g., vibration) and static acceleration (e.g., gravity).

2.2. Experimental Set-Up

The experiment was conducted in a refrigerated truck traveling for 23 h 41 m 21 s from Murcia (Spain) to Avignon (France), a distance of 1,051 km. The truck transported approximately 14,000 kg of lettuce (var. Little Gem) in 28 pallets of 1,000 × 1,200 mm. The lettuce was packed in cardboard boxes with openings for air circulation. The length of the semi-trailer was 15 m, with a Carrier Vector 1800 refrigeration unit mounted to the front of the semi-trailer.
For this shipment the set point was 0 °C. The truck was outfitted with the wireless system, covering different heights and distances from the cooling equipment, which was at the front of the semi-trailer. Four motes were mounted with the cargo (see Figure 1): mote 1 was at the bottom of the pallets at the front of the semi-trailer, mote 2 was in the middle of the semi-trailer, mote 3 was at the top of a pallet at the rear, and mote 4 was located as shown in Figure 1, about a third of the distance between the front and the rear of the trailer. Motes 1, 2 and 3 were inside the boxes beside the lettuce. The program installed in the motes collects data from all the sensors at a fixed sample rate (7.2 s), with each transmission referred to as a "packet". The RF power of the Micaz can be set from −24 dBm to 0 dBm. During the experiment, the RF power was set to the maximum, 0 dBm (approximately 1 mW).

2.3. Data Analysis

A specialized MATLAB program was developed to assess the percentage of lost packets (%) in transmission, by computing the number of multiple sending failures for a given sample rate (SR). A multiple failure of m messages occurs whenever the elapsed time between two messages lies between 1.5 × m × SR and 2.5 × m × SR. For example, with a sample rate of 11 s, a single failure (m = 1) occurs whenever the time period between consecutive packets is longer than 16.5 s (1.5 × 1 × 11) and shorter than 27.5 s (2.5 × 1 × 11). The total number of lost packets is computed from the frequency of each failure type. Accordingly, the total percentage of lost packets is calculated as the ratio between the total number of lost packets and the number of sent packets.

The standard error (SE) associated with the ratio of lost packets is computed from a binomial distribution, as expressed in Equation 1, where n is the total number of packets sent and p is the ratio of lost packets in the experiment:

SE = sqrt(p(1 − p)/n)    (1)

2.4. Analysis of Variance

Factorial analysis of variance (ANOVA) was performed to evaluate the effect of the type of sensor on the registered measurements, including temperature (measured by both the Sensirion and the Intersema), RH, barometric pressure, light intensity and acceleration module. ANOVA allows partitioning of the observed variance into components due to different explanatory variables. The STATISTICA software (StatSoft, Inc.) was used for this purpose [14]. Fisher's F ratio compares the variance within sample groups ("inherent variance") with the variance between groups (factors). We use this statistic to determine which factor has more influence on the variability of the measurements.

2.5. Psychrometric Data

Psychrometry studies the thermodynamic properties of moist air and the use of these properties to analyze conditions and processes involving moist air. Psychrometric charts give a graphical representation of the relationship between temperature, RH and water vapor pressure in moist air. They can be used to detect water loss from, and condensation on, the product.

In our study, the ASAE standard D271.2 was used for computing the psychrometric properties of air. Equations 2–5 and Table 1 enable the calculation of all psychrometric data of air whenever two independent psychrometric properties of an air-water vapour mixture are known in addition to the atmospheric pressure, where Ps stands for saturation vapor pressure (Pa), T is the temperature (K), Pv is the vapor pressure (Pa), H is the absolute humidity (g/kg dry air), Patm is atmospheric pressure (Pa), and A, B, C, D, E, F, G and R are a series of coefficients used to compute Ps according to Equation 3.

3. Results and Discussion

3.1. Reliability of Transmission

Signal propagation through the lettuce led to absorption of radio signals, resulting in great attenuation of RF signal strength and link quality at the receiver. During the experiment, only motes 3 and 4 were able to transmit to the coordinator.
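The lost-packet accounting described in Section 2.3 can be sketched in a few lines. This is an illustrative reimplementation in Python rather than the authors' MATLAB program; the gap-classification rule and the binomial standard error of Equation 1 follow the description above, and the function names are our own.

```python
import math

def failures_in_gap(gap, sr):
    """Classify a gap between consecutive packets as m lost messages.

    A multiple failure of m messages is assumed when the elapsed time
    lies between 1.5*m*sr and 2.5*m*sr (the rule stated in Section 2.3).
    Returns 0 when the gap matches no failure interval.
    """
    m = 1
    while 1.5 * m * sr <= gap:          # intervals only begin above this bound
        if gap <= 2.5 * m * sr:
            return m
        m += 1
    return 0

def loss_ratio(timestamps, sr):
    """Ratio of lost packets and its binomial standard error (Equation 1)."""
    lost = sum(failures_in_gap(t1 - t0, sr)
               for t0, t1 in zip(timestamps, timestamps[1:]))
    sent = len(timestamps) + lost        # received packets + inferred losses
    p = lost / sent
    se = math.sqrt(p * (1 - p) / sent)
    return p, se
```

For example, with SR = 11 s, a 22 s gap between consecutive packets counts as one lost packet (16.5 < 22 < 27.5) and a 40 s gap as two (33 < 40 < 55).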
No signals were received from mote 1, at the bottom of the first pallet, or from mote 2, in the middle of the pallet. Mote 3 was closer to the coordinator than mote 4, but mote 3 was surrounded by lettuce, which blocks the RF signal, whereas between mote 4 and the coordinator there was free space for transmission. Thus, the maximum ratio of lost packets found was 100%, for two of the motes, and the minimum was 4.5% ± 0.1%, for mote 4.

Similar ratios have been reported by several authors who performed experiments with WSNs under real conditions, for example in monitoring vineyards. Baggio and Haneveld, after one year of experimentation in a potato field using motes operating in the 868/916 MHz band, reported that 98% of data packets were lost; during the second year, however, the total amount of data gathered was 51%, a clear improvement. Ipema et al. monitored cows with Crossbow motes and found that the base station directly received less than 50% of the temperature measurements stored in the mote buffer. Nadimi et al., who also monitored cows with this type of mote, found packet loss rates of about 25% for wireless sensor data from cows in a pasture, even though the distance to the receiver (gateway) was less than 12.5 m.

Radio propagation can be influenced by two main factors: the properties of the propagation media and the heterogeneous properties of the devices. In a commercial shipment, if the motes are embedded within the cargo, a significant portion of the Fresnel zone is obstructed. This is a big challenge in our application. Changing the motes' locations, for example the one at the bottom of the pallets (mote 1, at the front of the semi-trailer) or the one in the middle of the compartment (mote 2), might have yielded better data reception rates but would have resulted in a loss of spatial information near the floor or at mid-height.
The sensors should be as close as possible to the products transported; otherwise the measurements would not give precise information. Thus, one solution, if the same motes are to be used, could be to include intermediate motes that allow peer-to-peer communication to the base station. Another solution could be to use lower frequencies; however, this is not possible with ZigBee, because the only radio frequency band available for ZigBee worldwide is the 2.4 GHz band. The other ISM (Industrial, Scientific and Medical) bands (868 MHz and 915 MHz) differ between the USA and Europe. Other options include developing motes with more RF power that can achieve longer radio ranges.

Transmission could also be improved by optimizing antenna orientation, shape and configuration. The standard antenna mounted on the Micaz is a 3 cm long 1/2-wavelength dipole antenna. Communications could be enhanced using ceramic collinear antennas, or with a simple reflecting screen supplementing the primary antenna, which can provide a 9 dB improvement. Link asymmetry and an irregular radio range can be caused by the antenna position: in a real environment, the radiation pattern at the antenna is neither circular nor spherical. Radio irregularity affects mote performance and degrades the ability to maintain connections to other nodes in the network. However, in our experiment the Micaz motes were deployed in their best position according to a recent study. Another issue is the received signal strength indicator (RSSI), the radio's report of the strength of the signal it receives from the transmitting unit; it should be recorded in further experiments in order to detect network problems and estimate radio link quality.

The sample rates configured in the motes were very short in order to get the maximum amount of data about the ambient conditions.
In practice, a reduction in the recording and transmission sampling frequency should be configured in order to extend battery life. According to Thiemjarus and Yang, this also provides opportunities for data reduction at the mote level. It is expected that future wireless sensor motes will have on-board features to analyze recorded data and detect certain deviations; the level of a deviation would then determine whether the recording or transmitting frequency should be adapted.

One important feature of the motes comes from the miniaturized sensors mounted on them, which, in a small space (2.5 × 5 × 5 cm), provide data not just on temperature but also on RH, acceleration and light, in line with the proposal of Wang and Li. Those variables were also measured and analyzed.

3.2. Transport Conditions

For the analysis of temperature conditions, the average value of the two sensors mounted on each mote is considered. The set point of the transport trailer's cooling system was 0 °C, but the average temperature registered during the shipment was 5.33 °C, with a maximum of 8.52 °C and a minimum of −3.0 °C. On average, 98% of the time the temperature was outside the industry-recommended range (set point ± 0.5 °C).

Figure 2 shows the temperature fluctuations registered during the shipment, where four different markers are used, corresponding to the two temperature sensors per mote. There are large differences between the temperatures recorded by the two sensors on the same mote, even though individual calibration curves were used. The SHT11 consistently measures higher temperatures than the Intersema; this behaviour could be due to the SHT11's closer location to the microcontroller, causing sensor self-heating effects.

In other studies, for example Tanner and Amos, it was observed that the cargo was within the industry-recommended temperature interval for approximately 58% of the shipment duration. Rodriguez-Bermejo et al. compared two different cooling modes in a 20′ reefer container.
For modulated cooling, the percentage of time within the recommendation ranged between 44% and 52% of the shipment duration, whereas for off/on control cooling it ranged between 9.6% and 0%. In those experiments, lower percentages of time within the industry-recommended intervals were found for high temperature set points.

The analysis of variance of the temperature data shows that the variability in temperature depended both on the type of sensor and on the mote used; the interaction between these two factors also has an impact on the temperature measurements. The critical value of F at the 95% probability level is much lower than the observed values of F, which means that the null hypothesis is false. The mote is the factor with the most influence on the variability of the measurements (highest Fisher's F); this seems to be due to the location of the node. Mote 4 is closer to the cooling equipment, which results in lower temperature measurements.

The node is a very significant factor in the registered measurements. For RH, pressure, light and acceleration, node location has a great influence on data variability; it has more impact on the measured RH than on the other variables. Inside the semi-trailer, RH ranged from 55 to 95% (see Figure 3). The optimal RH for lettuce is 95%. Humidity was always higher at mote 4 (at the top middle of the semi-trailer; average RH 74.9%) than at mote 3 (located at the rear; average RH 62.1%).
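The condensation reasoning above (RH near the lettuce versus the cold set point) can be sketched with standard psychrometric relations. One caveat: the paper computes saturation vapor pressure from the ASAE D271.2 coefficients (Equations 2–5), which are not reproduced in this excerpt, so the sketch below substitutes the common Magnus approximation; the numbers it produces are illustrative, and the function names are our own.

```python
import math

def saturation_vp(t_c):
    """Saturation vapor pressure Ps (Pa) at air temperature t_c (deg C).

    Magnus approximation -- an assumption; the paper derives Ps from
    the ASAE standard D271.2 coefficients instead.
    """
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def vapor_pressure(t_c, rh):
    """Actual vapor pressure Pv (Pa) from temperature and RH (%)."""
    return rh / 100.0 * saturation_vp(t_c)

def dew_point(t_c, rh):
    """Dew-point temperature (deg C), inverting the Magnus formula."""
    gamma = math.log(rh / 100.0) + 17.625 * t_c / (t_c + 243.04)
    return 243.04 * gamma / (17.625 - gamma)

def condensation_risk(air_t, rh, surface_t):
    """Water condenses on any surface colder than the dew point."""
    return surface_t < dew_point(air_t, rh)
```

With the shipment averages reported above (5.33 °C air temperature and 74.9% RH at mote 4), the dew point works out to roughly 1.2 °C, so produce surfaces chilled toward the 0 °C set point would be at risk of condensation.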

Literature Review and English Translation

English Translation

Extreme ASP.NET 1.1: Web Deployment Projects

When ASP.NET was first released, Web programming was more difficult because you needed IIS to serve your ASP.NET pages. Later, ASP.NET 2.0 and Visual Studio® 2005 made everything easier by introducing the Web site model of development. Instead of creating a new project inside Visual Studio, the Web site model lets you point to a directory and start writing pages and code. Furthermore, you can quickly test your site with the built-in development server, which hosts ASP.NET in a local process and obviates the need to install IIS to begin developing.

The beauty of the Web site model is that you can develop your Web application without thinking about packaging and deployment. Need another class? Add a .cs file to the App_Code directory and start writing. Want to store localizable strings in a resource file? Add a .resx file to the App_GlobalResources directory and type in the strings. Everything just works; you don't have to think about the compilation and deployment aspects at all.

When you are ready to deploy, you have several options. The simplest choice is to copy your files to a live server and let everything be compiled on demand (as it was in your test environment). The second option is to use the aspnet_compiler.exe utility to precompile the application into a binary release, which leaves you nothing but a collection of assemblies, static content, and configuration files to push to the server. The third option is to again use aspnet_compiler.exe, but to create an updatable binary deployment in which your .as*x files remain intact (and modifiable) and all of your code files are compiled into binary assemblies.

This seems to cover every possible scenario, leaving the developer to focus simply on writing the Web application, with packaging and deployment decisions to be made later, when the application is actually deployed.
There was a fair amount of backlash against this model, however, especially from developers who were used to their Web projects being real projects, specified in real project files, that let you inject pre- and post-build functions, exclude files from the build process, move between debug and release builds with a command-line switch, and so on. In response, Microsoft quickly introduced the Web Application Project, or WAP, initially released as an add-in to Visual Studio 2005 and now included in Visual Studio 2005 Service Pack 1, available for download from /vstudio/support/vs2005sp1.

WAP provides an alternative to the Web site model that is much closer to the Visual Studio .NET 2003 Web project model. The new WAP model compiles all of the source code files during the build process and generates a single assembly in the local /bin directory for deployment. WAP also makes it much easier to incrementally adopt the new partial-class codebehind model introduced in ASP.NET 2.0, because you can now open a Visual Studio .NET 2003 project and only your .sln and .csproj (or .vbproj) files will be modified during the conversion. You can then convert each file and its codebehind class to the new partial-class model independently of any other file in the project (by right-clicking on the file in Solution Explorer and selecting Convert to Web Application), or just leave them using the old model. This is in contrast to converting a Visual Studio .NET 2003 Web project to the Web site model, which converts all files at once and does not support incremental adoption.

Finally, there is a new project type called Web Deployment Projects (the main topic of this column), which introduces myriad additional deployment options for both Web site projects and Web Application Projects.
Web Deployment Projects fill the remaining holes in the deployment options for both Web site apps and Web Application Projects, and make it possible to implement practically any deployment scenario in a simple and extensible way. To understand exactly what this new project type adds, let's first review what we had before Web Deployment Projects were available.

When you build an application using the Web site model, you have the option of precompiling the site for deployment. You can access the precompilation utility through the Build | Publish menu in Visual Studio 2005 or directly through the command-line utility aspnet_compiler.exe. Figure 1 shows the interface to this tool exposed by Visual Studio.

The first decision you have to make when using the publish utility is whether you want your .as*x files to be updatable once deployed (use the "Allow this precompiled site to be updatable" option, or the -u switch in the aspnet_compiler.exe command-line utility). This decision hinges on whether you want to be able to make minor changes to your pages once deployed without having to go through the entire deployment process again. You may, in fact, want to explicitly disallow any modifications to the deployed pages and require that all modifications go through the standard deployment (and hopefully testing) process, in which case publishing the site as not updatable is the proper choice.

When a site is published as not updatable, it is possible to completely remove all .as*x files and publish only binary assemblies (plus configuration files and static content). However, without the physical files in place, it is impossible for ASP.NET to tell which classes to use for which endpoint requests.
For example, if a request comes into your application for Page1.aspx and you have used non-updatable binary deployment, there very well may not be any Page1.aspx file on disk, and there is nothing in the existing configuration files to indicate which class in the collection of assemblies deployed to the /bin directory should actually be the handler for this request. To remedy this, the compilation process also generates a collection of .compiled files that contain endpoint-to-type mapping and file dependency information in a simple XML format, and these files must be published along with the binary assemblies in the /bin directory of the deployed site. As an example, if you did have a page named Page1.aspx in your application, the aspnet_compiler.exe utility would generate a .compiled file for Page1.aspx (with a hash code in its name that varies) containing the following XML:

<?xml version="1.0" encoding="utf-8"?>
<preserve resultType="3"
          virtualPath="/SampleWebSite/Page1.aspx"
          hash="8a8da6c5a" filehash="42c4a74221152888"
          flags="110000" assembly="App_Web_aq9bt8mj"
          type="ASP.page1_aspx">
  <filedeps>
    <filedep name="/SampleWebSite/Page1.aspx" />
    <filedep name="/SampleWebSite/Page1.aspx.cs" />
  </filedeps>
</preserve>

The other major decision you have to make when publishing a Web site with this utility is the granularity of the packaging of the generated assemblies. You can either create a separate assembly for each directory in your site or create a separate assembly for each compilable file in your site (.aspx, .ascx, .asax, and so on) by checking the "Use fixed naming and single page assemblies" option (-fixednames in the aspnet_compiler.exe command-line utility). This decision is not as obvious as you might think, as each option has its own potential issues. If you elect not to use the -fixednames option, then every time you publish your application a completely new set of assemblies will be generated, with completely different names from the ones published earlier.
This means that deployment is trickier because you must take care to delete all of the previously published assemblies on the live server before deploying the new assemblies, or you'll generate redundant class definition errors on the next request. Using the -fixednames option resolves this problem, as each file will correspond to a distinctly named assembly that will not change from one compilation to the next. If you have a large site, however, generating a separate assembly for each page, control, and Master Page can easily mean managing the publication of hundreds of assemblies. It is this problem of assembly granularity in deployment that Web Deployment Projects solve in a much more satisfying way, as you will see.

You can also introduce assembly signing into the compilation process to create strong-named, versioned assemblies, suitable for deployment in the Global Assembly Cache (GAC) if needed. You can mark the generated assemblies with the assembly-level attribute AllowPartiallyTrustedCallers using the -aptca option, which would be necessary if you did deploy any assemblies to the GAC and were running at a low or medium trust level. (Keep in mind that this attribute should only be applied to assemblies that have been shown not to expose any security vulnerabilities, as using it with a vulnerability could expose you to a luring attack.)

One other detail about publishing your site: if you do elect to use Web Application Projects instead of the Web site model, the Build | Publish dialog box will look quite different, as shown in Figure 2. Web Application Projects assume that you want to publish the application as updatable .as*x files and precompiled source files (the same model used in development), so the binary-only deployment options are not available.
This utility is really closer in nature to the Copy Web Site utility available with Web sites than it is to the Publish Web Site utility, since it involves copying the files produced by the standard build process. Technically you are not restricted from using binary-only (non-updatable) deployment, even if you are using Web Application Projects. If you think about it, the output of the build of a WAP is a valid Web site, which you can then pass through the aspnet_compiler.exe utility to create a binary deployment. You just can't invoke it from the Visual Studio 2005 interface, which, fortunately, Web Deployment Projects rectify.

So what's missing from the existing compilation and deployment options presented so far? Primarily two things: the ability to control the naming of assemblies, especially for deployment purposes, and the ability to consolidate all of the output assemblies into a single assembly for simplified deployment. Web Deployment Projects solve both of these problems. Perhaps even more significantly, however, they also tie up a lot of loose ends in the deployment story that existed with Web site applications and Web Application Projects.

At their core, Web Deployment Projects (available for download at /aa336619.aspx) represent just another type of project you add to your solution. Like all Visual Studio project files, Web Deployment Projects are MSBuild scripts that can be compiled directly in the IDE or run from the command line. Instead of specifying a collection of source code files to compile, however, Web Deployment Projects contain build commands to compile and package Web sites (or Web Application Projects). This means that they will invoke the aspnet_compiler.exe utility (among others) to create a deployment of a particular Web application.
Web Deployment Projects are shipped as a Visual Studio add-in package that includes an easy-to-use menu item for injecting new projects and a complete set of property pages to control all of the available settings. To add one to an existing application, right-click on an existing Web site (or Web Application Project) and select the Add Web Deployment Project item, as shown in Figure 3. This will add a new .wdproj file containing an MSBuild script to your solution, which will generate a deployment of the application you created it from.

Once the Web Deployment Project is part of your solution, you can access the property pages of the project file to control exactly what the project does for you, as shown in Figure 4. The default setting for a new deployment project is to deploy the application in updatable mode, with all the .as*x files intact and the source files compiled into a single assembly deployed in the top-level /bin directory. These deployment projects work the same regardless of whether the source application is using the Web site model or the Web Application Project model, which means that you can now select either development model without impacting your deployment options. One of the most significant features of Web Deployment Projects is the ability to configure the deployment to be all binary (not updatable) in the form of a single assembly, the name of which you can choose. Using this model of deployment means that you can update your entire site merely by pushing a single assembly to the /bin directory of your live site, without concerning yourself with deleting existing assemblies prior to deploying or dealing with a partially deployed site causing errors.
It is still necessary to deploy the .compiled files for the endpoint mappings, but these files only change when you add, delete, or move pages in your site. Web Deployment Projects provide flexibility in deployment and let you make packaging and deployment decisions independently of how you actually built your Web applications. This independence between development and deployment was partially achieved in the original release of ASP.NET 2.0 with the aspnet_compiler.exe utility, but never fully realized because of the constraints imposed when performing the deployment. With Web Deployment Projects, the separation between development and deployment is now complete, and your decision about how to build your applications will no longer impact your deployment choices.

Merging Assemblies

Much of what Web Deployment Projects provide is just a repackaging of existing utilities exposed via MSBuild tasks and a new interface, but there are also a couple of completely new features included. The most intriguing is the ability to merge assemblies. When you install Web Deployment Projects, you will find an executable called aspnet_merge.exe in the installation directory. This executable is capable of taking the multi-assembly output of a precompiled site and merging the assemblies into one. This is the utility that is incorporated into your build script if you select the merge option in a Web Deployment Project. As an example of what is possible with this utility, consider the output of a precompiled Web site, run without the updatable switch, shown in Figure 5. The source application for this output contained two subdirectories, a top-level global.asax file, a class defined in App_Code, and a user control. The end result of the compilation is five different assemblies and a collection of .compiled files.
If you run the aspnet_merge.exe utility on this directory with the -o switch to request a single assembly output, shown at the bottom of Figure 5, the result is a much more manageable single assembly named whatever you specify.

Although the aspnet_merge.exe utility and the corresponding MSBuild task that ship with Web Deployment Projects are new, the underlying technology for merging assemblies has actually been around since the Microsoft® .NET Framework 1.1, in the form of a utility made available from Microsoft Research called ILMerge, the latest version of which is available for download from /~mbarnett/ILMerge.aspx. This utility is directly incorporated into aspnet_merge.exe and does all the heavy lifting involved with merging assemblies. If you think about it, the merging of assemblies is a rather complicated task. You need to take into consideration signing, versioning, and other assembly-level attributes, embedded resources, and XML documentation, as well as manage the details of clashing type names, and so on. The ILMerge utility manages all of these details for you, with switches to control various decisions about the process. It also gives you the ability to transform .exe assemblies into .dll assemblies for packaging purposes. As an example, suppose you have three assemblies, a.dll, b.dll, and c.exe, which you would like to merge into a single library assembly. As long as there were no conflicts in type names, the following command line would generate a new library, d.dll, with all of the types defined in a.dll, b.dll, and c.exe:

ilmerge.exe /t:library /ndebug /out:d.dll a.dll b.dll c.exe

Pluggable Configuration Files

The other completely new feature that comes with Web Deployment Projects is the ability to create pluggable configuration files. It is a common problem when deploying Web applications to find a way to manage the differences in your configuration files between development and deployment.
For example, you may have a local test database for running your site, another database used by a staging server, and yet another used by the live server. If you are storing your connection strings in web.config (typically in the connectionStrings section), then you need some way of modifying those strings when the application is pushed out to a staging server or to a production machine. Web Deployment Projects offer a clean solution to this problem with a new MSBuild task called ReplaceConfigSections.

This task allows you to specify independent files that store the contents of a particular configuration section based on solution configurations. For example, you might create a debugconnectionstrings.config file to store the debug version of your connectionStrings configuration section that looked like this:

<connectionStrings>
  <add connectionString="server=localhost;database=sales;trusted_connection=yes" name="sales_dsn"/>
</connectionStrings>

Similarly, you would then create separate files for each of the solution configurations defined (release, stage, and so on) and populate them with the desired connection strings for their respective deployment environments. For the release configuration, you might name the file releaseconnectionstrings.config and populate it as follows:

<connectionStrings>
  <add connectionString="server=livedbserver;database=sales;trusted_connection=yes" name="sales_dsn"/>
</connectionStrings>

Next, you would configure the MSBuild script added by Web Deployment Projects to describe which configuration sections in the main web.config file should be replaced, and the source files that will supply the content for the replacement. You could modify the script by hand, but there is a nice interface exposed through the property pages of the build script in Visual Studio that will do it for you, as Figure 6 shows.
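Under the covers, those property-page settings are simply recorded as markup in the .wdproj file. The fragment below sketches what that generated MSBuild markup might look like for the debug configuration; the element names here are assumptions based on the feature as described, so verify them against the script your own project produces:

```xml
<!-- Hypothetical fragment of a .wdproj file (Debug configuration) -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <!-- Turn on web.config section replacement for this configuration -->
  <EnableWebConfigReplacement>true</EnableWebConfigReplacement>
</PropertyGroup>
<ItemGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <!-- Replace the connectionStrings section with the contents of this file -->
  <WebConfigReplacementFiles Include="debugconnectionstrings.config">
    <Section>connectionStrings</Section>
  </WebConfigReplacementFiles>
</ItemGroup>
```

Because the mapping lives in the project file, the per-environment .config fragments can sit next to it under source control and travel with the solution.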
In this case, you are setting the properties for the debug solution configuration, so check the Enable Web.config file section replacement option and specify the section to be replaced along with the file whose contents will replace it. You would use this same dialog page to set the configuration replacement for the Release solution configuration (and any others you had defined) with the corresponding files.

When you then run the build script, the ReplaceConfigSections task extracts the contents from any associated config files and replaces the contents of the corresponding configuration section, creating a new web.config file that is pushed to the deployment directory. This configuration file replacement feature means that you can maintain configuration differences between deployment environments in a manageable way, with text files that can be versioned under source control, and you don't have to resort to that sticky note reminding you to change the connection string when you deploy. It should be emphasized that this feature works with any section of the configuration file, even custom sections, so if you have differences in other configuration sections (for example, appSettings), you can easily specify those differences with this build task as well.

Creating Reusable User Controls

There is an interesting side application of Web Deployment Projects that solves a problem that has plagued developers for years: how to create reusable user controls to share across applications. User controls are fundamentally just composite custom controls whose child controls are laid out in an .ascx file. The ability to use the designer for laying out controls and adding handlers is a huge benefit for most developers, since it feels almost identical to building a page, except that the resulting .ascx file can be included as a control in any page. The disadvantage has always been that you need the physical .ascx file in the application's directory to actually use it.
Techniques for making .ascx controls shareable across applications are available, but they usually involve chores like creating shared virtual directories between applications or harvesting temporary assemblies generated by ASP.NET at request time, and they've never been satisfactory.

The introduction of the aspnet_compiler.exe utility in version 2.0 brought us much closer to a decent solution. With the compiler, you can create a Web site consisting of only user controls and publish the site in non-updatable mode, using the compiler to generate reusable assemblies. Once you have the resulting assembly (or assemblies), you can deploy them to any Web application and reference the user control just as you would a custom control (not by using the src attribute as you would for .ascx files). The only disadvantage to this technique is that you either have to accept the randomly named assembly produced by the compilation process or select the -fixednames option in the compiler to generate a fixed-named assembly for each user control in the site (not a single assembly for the entire collection).

Web Deployment Projects provide the final step to create truly reusable user control assemblies. You can take the same Web site consisting exclusively of user controls and add a Web Deployment Project to create a single output assembly with the name of your choice. It's even straightforward to create a signed assembly to deploy to the GAC for sharing controls across multiple applications without redeploying the assembly in each /bin directory.

Conclusion

The release of Web Deployment Projects completes the set of tools for deploying ASP.NET applications in a very satisfying way. It is now possible to deploy your applications in any manner ranging from all source to all binary, with complete control over the generation, packaging, and naming of the binary assemblies.
In addition, Web Deployment Projects provide a solution for replacing sections of your configuration files based on your target build, and they solve the problem of distributing reusable user controls. Anyone who is building and deploying ASP.NET applications will undoubtedly find some aspect of Web Deployment Projects compelling enough to begin using them today.

2.1 Client-Side Web Service Calls with AJAX Extensions

Since its inception, ASP.NET has fundamentally been a server-side technology. There were certainly places where ASP.NET would generate client-side JavaScript, most notably in the validation controls and more recently with the Web Part infrastructure, but it was rarely more than a simple translation of server-side properties into client-side behavior; you as the developer didn't have to think about interacting with the client until you received the next POST request. Developers needing to build more interactive pages with client-side JavaScript and DHTML were left to do it on their own, with some help from the ASP.NET 2.0 script callbacks feature. This has changed completely in the last year.

At the Microsoft Professional Developer's Conference in September 2005, Microsoft unveiled a new add-on to ASP.NET, code-named "Atlas," which was focused entirely on leveraging client-side JavaScript, DHTML, and the XMLHttpRequest object. The goal was to aid developers in creating more interactive AJAX-enabled Web applications. This framework, which has since been renamed with the official titles of the Microsoft® AJAX Library and the ASP.NET 2.0 AJAX Extensions, provides a number of compelling features ranging from client-side data binding to DHTML animations and behaviors to sophisticated interception of client postbacks using an UpdatePanel. Underlying many of these features is the ability to retrieve data from the server asynchronously, in a form that is easy to parse and interact with from client-side JavaScript calls.
The topic for this month's column is this new and incredibly useful ability to call server-side Web services from client-side JavaScript in an ASP.NET 2.0 AJAX Extensions-enabled page.

Calling Web Services with AJAX

If you have ever consumed a Web service in the Microsoft .NET Framework, either by creating a proxy using the wsdl.exe utility or by using the Add Web Reference feature of Visual Studio®, you are accustomed to working with .NET types to call Web services. In fact, invoking a Web service method through a .NET proxy is exactly like calling methods on any other class. The proxy takes care of preparing the XML based on the parameters you pass, and it carefully translates the XML response it receives into the .NET type specified by the proxy method. The ease with which developers can use the .NET Framework to consume Web service endpoints is incredibly enabling, and is one of the pillars that make service-oriented applications feasible today.

The ASP.NET 2.0 AJAX Extensions enable this exact same experience of seamless proxy generation for Web services for client-side JavaScript that will run in the browser. You can author an .asmx file hosted on your server and make calls to methods on that service through a client-side JavaScript class. For example, Figure 1 shows a simple .asmx service that implements a faux stock quote retrieval (with random data).

In addition to the standard .asmx Web service attributes, this service is adorned with the ScriptService attribute, which makes it available to JavaScript clients as well.
If this .asmx file is deployed in an AJAX-enabled Web application, you can invoke methods of the service from JavaScript by adding a ServiceReference to the ScriptManager control in your .aspx file (this control is added automatically to your default.aspx page when you create a Web site in Visual Studio using the AJAX-enabled Web site template):

<asp:ScriptManager ID="_scriptManager" runat="server">
  <Services>
    <asp:ServiceReference Path="StockQuoteService.asmx" />
  </Services>
</asp:ScriptManager>

Now from any client-side JavaScript routine, you can use the MsdnMagazine.StockQuoteService class to call any methods on the service. Because the underlying mechanism for invocation is intrinsically asynchronous, there are no synchronous methods available. Instead, each proxy method takes one extra parameter (beyond the standard input parameters): a reference to another client-side JavaScript function that will be called asynchronously when the method completes. The example page shown in Figure 2 uses client-side JavaScript to print the result of calling the stock quote Web service to a label (span) on the page.

If something goes wrong with a client-side Web service call, you definitely want to let the client know, so it's usually wise to pass in another method that can be invoked if an error, abort, or timeout occurs. For example, you might change the OnLookup method shown previously as follows, and add an additional OnError method to display any problems:

function OnLookup()
{
    var stb = document.getElementById("_symbolTextBox");
    MsdnMagazine.StockQuoteService.GetStockQuote(stb.value, OnLookupComplete, OnError);
}

function OnError(result)
{
    alert("Error: " + result.get_message());
}

This way, if the Web service call fails, you will notify the client with an alert box.
You can also include a userContext parameter with any Web service calls made from the client. This is an arbitrary string passed in as the last parameter to the Web method, and it will be propagated to both the success and failure methods as an additional parameter. In this case, it might make sense to pass the actual symbol of the stock requested as the userContext so you can display it in the OnLookupComplete method:

function OnLookup()
{
    var stb = document.getElementById("_symbolTextBox");
    MsdnMagazine.StockQuoteService.GetStockQuote(
        stb.value, OnLookupComplete, OnError, stb.value);
}

Graduate Academic English Translation 4

Writing a Literature Review

A literature review is a very important part of a research project. It may be a self-contained review or a part of the introduction to an academic essay. In either case, its purpose is to demonstrate a clear understanding of the topic being studied. Namely, your literature review must tell what has been done on the topic, what different scholars have said about their own research, what major findings have been published, and what controversial areas exist. A good literature review can enhance the credibility of your research by indicating that your present study is based on a thorough and critical knowledge of what has been done in the field. Before writing, to be critical, you must ask questions like these about each book or article you read:

Chinese Literature Review [Template]

I. Research Background: Translation, as an act of information conversion and communication, plays a crucial role in cross-cultural communication.

The noted Italian translator Matteo Ricci (1552), a pioneer of cultural exchange between China and the West, translated Euclid's Elements (《几何原本》), bringing much advanced scientific knowledge and philosophical thought to the China of his day.

The Swedish sinologist Göran Malmqvist (1924), one of the 18 lifetime judges of the Nobel Prize in Literature, once said: "Without translation there would be no world literature." This shows how important translation is to cross-cultural communication.

As a discipline, translation can be divided into written translation and interpreting.

Written translation has a long history, whereas interpreting, as an emerging discipline, has developed rapidly since it appeared in the 1950s.

Especially since the start of the 21st century, with the development of the global economy and China's accession to the World Trade Organization, world integration has deepened and interpreting activity has become ever more frequent; under these new circumstances the importance of interpreting research has grown increasingly prominent. Over the past few decades interpreting has become a research focus for translation scholars and has produced a large body of results, among them the Paris School's Interpretive Theory, whose core hypothesis of "deverbalization" offers important guidance for interpreting practice.

At the same time, studying the guidance the Interpretive Theory offers for conference interpreting, and extending it to university classroom teaching to raise the quality of interpreter training, is both feasible and practical.

II. Research Status and Remaining Gaps: The Interpretive School holds that "interpreting is the basic form of translation (Lederer, 1990) and should therefore be the primary object of translation studies."

Accordingly, interpreting, and conference interpreting in particular, has long been a focus of attention for researchers in China and abroad.

Relevant research and views from abroad are considered first.

Western interpreting research is most systematic in its treatment of conference interpreting (Xiao Xiaoyan, 2002), and its development shows four distinct stages: an initial research stage from the 1950s to the early 1960s; an experimental-psychology stage from the 1960s to the early 1970s; a practitioner-research stage from the early 1970s to the mid-1980s; and a stage of vigorous growth beginning in the late 1980s.

Interpreting research has centered on five major themes, namely training, language issues, cognition, quality, and professional practice, and has produced four influential research perspectives: the information-processing paradigm, the Interpretive Theory, neurophysiological research on interpreting, and interdisciplinary empirical research on interpreting.

Sample English Literature Review

The following materials on writing an English literature review are provided for reference.

How to Write a Literature Review?

I. The Definition of Literature Review

A literature review is one of the important genres of academic writing. Based on the author's collation, induction, analysis, and comparison of the literature, it synthesizes, summarizes, and comments on the historical background of a topic, previous work, the current state of research, focal points of controversy, and prospects for development. By reading a literature review, researchers can spend relatively little time acquiring systematic and specific information about a topic, and can understand its research status, open problems, and future directions.

II. The Purposes of Literature Review and Its Components

A. The Purposes: On the one hand, it helps you broaden the view and perspective of the topic for your graduation thesis. On the other hand, it helps you narrow down the topic and arrive at a focused research question.

B. Its Components: There are six parts in a complete literature review: title and author, abstract and key words, introduction, review, conclusion, and references.

III. Classification of Source Materials

How can we locate the materials relevant to our topics better and faster? Basically, all source materials may be classified into four major kinds of sources.

A: Background sources: basic information which can usually be found in dictionaries and encyclopedias compiled by major scholars or founders of the field. Three very good and commonly recommended encyclopedias are the "encyclopedia ABC," namely, Encyclopedia Americana, Encyclopedia Britannica, and Collier's Encyclopedia. There are also more specialized reference works, such as The Encyclopedia of Language and Linguistics for linguistics and TEFL studies. Moreover, you may also find encyclopedias on the Web.

B: Primary sources: those providing direct evidence, such as works of scholars of the field, biographies or autobiographies, memoirs, speeches, lectures, diaries, collections of letters, interviews, case studies, approaches, etc. Primary sources come in various shapes and sizes, and often you have to do a little research about a source to make sure you have correctly identified it. When a first search yields too few results, try searching by a broader topic; when a search yields too many results, refine it by narrowing down your search.

C: Secondary sources: those providing indirect evidence, such as research articles or papers, book reviews, essays, journal articles by experts in a given field, studies on authors or writers and their works, etc. Secondary sources will inform most of your writing in college. You will often be asked to research your topic using primary sources, but secondary sources will tell you which primary sources you should use and will help you interpret those primary sources. To use them well, however, you need to think critically about them. There are two parts of a source that you need to analyze: the text itself and the argument within the text.

D: Web sources: sources or information from websites. The Web serves as an excellent resource for your materials. However, you need to select and evaluate Web sources with special care, for very often Web sources lack quality control. You may start with search engines, such as Google, Yahoo, Ask, Excite, etc. It's a good idea to try more than one search engine, since each locates sources in its own way. When using websites for information, be sure to check the authorship and sponsorship. If both are unclear, be critical when you use the information. The currency of website information should also be taken into account: don't use information that is too dated for your purpose.

IV. Major Strategies of Selecting Materials for a Literature Review

A. Choose primary sources rather than secondary sources. If you have two sources, one of them summarizing or explaining a work and the other the work itself, choose the work itself. Never attempt to write a paper on a topic without reading the original source.

B. Choose sources that give a variety of viewpoints on your thesis. Remember that good argument essays take counterarguments into account. Do not reject a source because it makes an argument against your thesis.

C. Choose sources that cover the topic in depth. Probably most books on Communicative Language Teaching mention William Littlewood, but if this is your topic, you will find that few sources cover it in depth. Choose those.

D. Choose sources written by acknowledged experts. If you have a choice between an article on Task-Based Teaching written by a freelance journalist and one written by a recognized expert like David Nunan, choose the article by the expert.

E. Choose the most current sources. If your topic involves a current issue, a social problem, or a development in a scientific field, it is essential to find the latest possible information. If all the books on these topics are rather old, you probably need to look for information in periodicals.

V. Writing a Literature Review

A. When you review related literature, the major review focuses should be: 1. the prevailing and current theories which underlie the research problem; 2. the main controversies about the issue and about the problem; 3. the major findings in the area, by whom and when; 4. the studies which can be considered the better ones, and why; 5. description of the types of research studies which can provide the basis for the current theories and controversies; 6. criticism of the work in the area.

B. When you write a literature review, the two principles to follow are: 1. review the sources that are most relevant to your thesis; 2. write your review as clearly and objectively as you can.

C. Some tips for writing the review: 1. define key terms or concepts clearly and in a way relevant to your topic; 2. discuss the least-related references to your question first and the most-related references last; 3. conclude your review with a brief summary; 4. start writing your review early.

VI. Detailed Tips and Cautions for the Main Parts of a Literature Review

Introduction: the introduction opens the body of the literature review and mainly covers two things: first, stating the problem; second, introducing the scope and content of the review.

Sample Literature Review (English Translation)

Literature Review Example

Introduction: This literature review explores the concept of mindfulness and its relationship with mental health. Mindfulness is commonly defined as the awareness that arises from paying attention, on purpose, to the present moment, without judgment. In recent years, there has been a surge of interest in mindfulness as a tool for improving mental health. The aim of this review is to provide an overview of the current literature on mindfulness and mental health, and to highlight some of the key findings from this research.

Method: To conduct this literature review, a comprehensive search of electronic databases including PubMed, CINAHL, and PsycINFO was undertaken. The search terms included "mindfulness", "mental health", "anxiety", "depression", "stress", "psychological distress" and "well-being". The articles included in this review were limited to those published in peer-reviewed journals between the years 2000 and 2020, in English.

Results: The results of this literature review indicate that mindfulness-based interventions are effective in reducing symptoms of anxiety, depression, and psychological distress. Studies have found that mindfulness training can lead to significant improvements in individual well-being, reduced stress, and increased positive affect. One study found that mindfulness meditation was more effective than cognitive-behavioral therapy (CBT) in reducing symptoms of depression and anxiety in individuals with major depressive disorder. Other studies have found that mindfulness-based interventions can improve cognitive functioning, attention, and emotion regulation.

Discussion: The findings of this literature review suggest that mindfulness-based interventions hold promise as effective tools for improving mental health. The results indicate that mindfulness training can provide individuals with the tools they need to manage symptoms of anxiety, depression, and psychological distress.
Given the effectiveness of mindfulness-based interventions, it appears that these practices have the potential to become an important addition to conventional treatments for mental health disorders.

Conclusion: In conclusion, this literature review highlights the importance of mindfulness-based interventions in improving mental health. The results of this review suggest that mindfulness training can lead to significant improvements in individual well-being, reduced stress, and increased positive affect. Further research is needed to elucidate the underlying mechanisms by which mindfulness-based interventions work. Nonetheless, the practical applications of mindfulness practices in treating anxiety, depression, and psychological distress suggest that these interventions have the potential to become an important component of mental health treatments.

Literature Review and Foreign-Language Translation
II. Development Trends

In recent years, government transformation and the repositioning of the government's role have become focal topics in Chinese academia, while at the same time frequent social problems have posed a sterner test for governance. Public governance theory, which originated in the West, offers a new point of reference for the reform of government functions in discussions of the relationship between government and society, and it will exert a deep influence on how the government's role is defined now and for some time to come.

III. Existing Problems

Public governance theory has not developed in China for long; although many scholars have studied and expounded it, broad social recognition has yet to be achieved. China is currently at a critical stage of social transformation, with a clearly widening gap between rich and poor and increasingly prominent contradictions among the people; although the government has been seeking good solutions to these problems, the results have been modest. Public governance theory can provide a theoretical reference for repositioning the government's role during the transition period, but how to combine the two concretely remains the crux of the problem; this paper seeks the point where they connect and offers suggestions.

Abstract: In discussions of public administration, the concept of governance is used ever more commonly, but the meaning of the term is not always clear. A growing body of European literature that can be summarized as "governance without government" stresses the importance of networks, partnerships, and markets (especially international markets). This body of literature is related to the New Public Management; however, it has many distinctive elements. The article discusses its strengths and weaknesses and its applicability in American public administration. Beijing: Social Sciences Academic Press, 2000.

Governance and Good Governance (《治理与善治》) was the first translated volume on the question of governance published in China; its greatest value lies in introducing international thinking on governance to China for the first time, providing a useful reference for anyone interested in studying governance. The book collects essays by several representative figures in contemporary Western governance theory, which express, from different disciplines and different countries, differing views on governance and good governance. The cases appended at the end of the book are award-winning projects from national good-governance competitions, through which the good-governance practice of different countries can be vividly seen.

Hebei Normal University Undergraduate Thesis (Design) Translated Article

Foreign-language works consulted and translated:
Governance Without Government? Rethinking Public Administration

Negatively charged clay minerals can be modified by cation-saturation (homoionization) techniques, which strengthen bridging between the negatively charged sites on bacterial cell walls and the clay mineral. In addition, the major elements in clay minerals may assist bond formation with functional groups on bacterial cells; bonds of the C-O-Na-Si type, for example, may exist in bacteria-kaolinite mixtures.

The uptake of PAHs and volatile organic compounds by clay minerals is closely related to the polarity of the target compound and to the clay's micropores. For example, phenanthrene, a strictly nonpolar PAH, can penetrate into the hydrophobically dominated sites of clay minerals. In aqueous media, the lack of favorable thermodynamic conditions means that PAHs are poorly sorbed by clays of widely differing charge characteristics (for example, phenanthrene on montmorillonite). Nevertheless, when clays are conditioned with divalent cations, those cations effectively reduce the repulsion of the electric double layer and thereby increase the affinity between organic pollutants and the clay. Under the influence of such cations, PAH molecules can be captured more effectively in clay micropores through capillary condensation, the mechanism being the formation of PAH semi-crystals through strong cross-linking.

Studies show that soft cations with abundant free valence electrons achieve much stronger sorption than hard cations lacking valence electrons. Thus, when cations are placed in swelling clays, the π electrons of the PAH aromatic rings can bond with the cations. Such bonding is nonetheless constrained by the π-electron-donating ability of the hydrocarbon's aromatic rings and by the type of cation embedded on the clay surface. Substituents on the aromatic ring likewise affect clay sorption of hydrocarbons; for hexafluorobenzene and similar nonpolar compounds, for instance, the substituent effect is determined by electron withdrawal rather than by electron-directing groups. On the other hand, in both natural and engineered clays, the behavior of many polar volatile organic compounds is charge-dominated, as in the cation-dipole interactions that occur in cation-saturated clays. Even so, experimental evidence on the specific reactions and the extent of these mechanisms remains scarce, and these interactions make the presence and behavior of PAH- and VOC-degrading microorganisms all the more elusive.

Interactions among clay minerals, hydrocarbons, and microorganisms are jointly influenced by the solid, biological, and liquid phases. Consequently, besides classical DLVO theory (which accounts for the combined influence of van der Waals forces and double-layer interactions), both extended DLVO theory and non-DLVO theories can explain the mechanisms of clay-microbe interaction. Extended DLVO theory incorporates Lewis acid-base interactions (entropic contributions, hydrogen bonding and, by extension, hydration pressure, and hydrophobic interactions), whereas non-DLVO theories involve conformational changes outside the cell surface.

Mechanisms aside, the extent and efficiency of pollutant degradation are determined by the pollutants' recalcitrance, which is controlled by their sorption, desorption, and diffusion in the clay minerals. In other words, the availability of pollutants to microbial degradation pathways remains a contentious topic while the surface degradation mechanisms of hydrocarbons on clay minerals are still being worked out.

Three sorption phenomena can occur in the interplay of clay minerals, microorganisms, and pollutants: clay minerals sorb pollutants; clay minerals sorb microorganisms; and microorganisms sorb pollutants. All of these sorbed phases make up microbially formed biofilms, which in turn promote the formation of microenvironments that break pollutants down. Studies show that microorganisms readily degrade diquat sorbed on kaolinite surfaces, whereas the same compound lodged deep within montmorillonite is not effectively degraded. High-molecular-weight PAHs and volatile organics, particularly in aged soils, are hard to degrade unless dissolution and diffusion bring about their desorption.

A two-phase pattern is therefore expected in the degradation of organic pollutants. This biphasic behavior, proceeding from fast to slow, runs through the entire desorption of organic pollutants from soil. Readily bioavailable organic fractions are easily obtained from the rapid phase, and the reaction order of these fractions during biodegradation depends mainly on the desorption rate. Microorganisms themselves can sometimes influence desorption; Pseudomonas putida, for example, is more effective than Alcaligenes at increasing the availability of organoclays and promoting the desorption of naphthalene from sorbent surfaces. For materials such as soils, however, which often contain organoclay mixtures, degradation of organic pollutants, especially hydrophobic compounds, can be achieved in concert with another colloid possessing a marked thermodynamic affinity for them. Organic carbon content is therefore another key determinant in clay-mediated pollutant degradation.

Pollutants may be broken down or else locked within clay micropores, unless cation-π bonding or n-π electron donor-acceptor interactions dominate along microbially accessible pathways (such as clay surfaces). Because of their relatively large size, bacteria cannot enter the finest pores of clay minerals, so part of the sequestered material is excluded from microbial reactions. One available route for desorbing pollutants into usable form is through surfactants or biosurfactants. These substances lower the interfacial tension between phases so that the various compounds become fully available to biodegradation, and, as noted in Section 4 above, extracellular enzymes can play a key role in the catabolic degradation of entrapped pollutants.

Surfactant Application and Toxicity

Surfactants are naturally amphiphilic molecules and may exert both stimulatory and inhibitory effects on microorganisms; quaternary ammonium surfactants, when present in free form, are the more seriously toxic to soil microbes. Studies have nevertheless shown that soils and clay minerals can effectively immobilize toxicants and greatly reduce their toxicity. In clay-assisted environmental remediation, these surfactants are often used as clay modifiers, particularly for converting hydrophilic clays into hydrophobic products, that is, organoclays.

Some organoclays are toxic to natural microbial communities because of the synthetic surfactants used to prepare them; bentonites modified with hexadecyl- or octadecyltrimethylammonium bromide, for example, weaken dehydrogenase activity, inhibit nitrification, and damage soil microbes and other biota. Surfactant type, application rate, and the physicochemical properties of the soil itself (pH, water content, clay content, and so on) all strongly influence the toxic effect on microbial communities; bentonite modified with Arquad 2HT-75, for instance, is markedly less toxic than HDTMA- or ODTMA-modified bentonites. Molecular probing techniques have shown that certain organoclays (such as bentonites modified with alkyl quaternary ammonium or 2-hydroxyethyl methylammonium) exert no inhibition at all on the native soil biota. Large differences in whether toxicity arises also occur across soil types and across functional microbial genes, and the mechanisms await further study. Because organoclays prepared from certain organic molecules are now widely investigated as potential antibacterial media, the debate this raises is growing sharper.

Thus, although surfactants can raise the bioavailability of organic pollutants into a range suitable for degradation on modified clay minerals, the molecules themselves carry potential microbial toxicity. That toxicity depends largely on molecular structure, dosage, mode of application (free or clay-bound), and the application medium (aqueous suspension or soil), among other factors. The use of surfactants therefore still requires finer and more rigorous study to determine the biomass conditions under which modified-clay-assisted microbial degradation works best. In this respect, biosurfactants represent a greener route that cuts the toxic effects arising from the use of synthetic surfactants.

Applications of Biosurfactants

In recent years, the use of biosurfactants to promote microbial degradation of pollutants has grown steadily; their mechanism of action is still chiefly to increase the amount of substrate available for biodegradation, and their high efficiency and low (even zero) toxicity relative to synthetic surfactants make them highly attractive. During hydrocarbon biodegradation, most surfactant-producing bacteria form natural surfactants such as rhamnolipids, glycolipids, lipids, and emulsan (an extracellular lipopolysaccharide produced by the metabolism of strain RAG-1). Through rhamnolipid production, microorganisms can effectively degrade a broad spectrum of PAHs (such as naphthalene, fluorene, phenanthrene, and pyrene) and many volatile organic pollutants. Microbially secreted biosurfactant components desorb polycyclic aromatic hydrocarbons and other hydrocarbons from soil surfaces by reducing solid-liquid interfacial tension and increasing the aqueous solubility of the PAHs, an effect that is especially pronounced when pollutant concentrations are high, even exceeding the critical micelle concentration.

To push the biodegradation of clay-bound PAHs/VOCs further, a compatible combination in which degrading bacteria work in concert with surfactant-producing bacteria may overcome the barrier posed by pollutants held in the internal micropores of clay minerals, forming a nearly ideal coupled two-microbe system. As shown in Table 3, (bio)surfactants can function whether or not micelles form: by desorbing pollutants from the clay and increasing their solubility, they ultimately bring about the transfer of pollutants to the bioreactive zone. In micellar form, however, synthetic surfactants in particular may form a barrier that limits PAH degradation. Very differently, biosurfactants commonly occur in non-micellar form and can markedly raise bioavailability, achieving biodegradation through the mechanism of direct microbial uptake of the pollutant. Even so, biosurfactants desorbed from soil may impose self-limitation during the degradation process. Clay content and the content of iron oxides in the clay strongly affect rhamnolipid desorption: kaolinitic clays sorb relatively less biosurfactant than montmorillonitic clays and promote the desorption of large amounts of metals from the soil.

Still, the mechanism of the clay-hydrocarbon-microorganism three-phase system in the presence of biosurfactants is not yet well understood. In-situ production of biosurfactant-bearing clay sorbents in aqueous suspensions and in soil systems is untrodden ground that urgently needs rigorous exploration. This matters all the more at a time when biosurfactants are expected to replace chemical surfactants for hydrocarbon degradation in real-world environmental remediation.

Clay-microbe interactions have both positive and negative effects on the removal of organic pollutants. During microbial degradation of pollution, externally introduced bacteria often struggle to grow efficiently, for roughly three reasons: the biological toxicity of the pollutants themselves in heavily contaminated areas; competitive inhibition by indigenous microbes; and other harsh environmental factors. Under such conditions, clay sorbents can immobilize part of the pollutant load, blunting its immediate toxicity to the introduced microbes and creating a favorable environment for them to proliferate. Moreover, engineered clay sorbents can lower the toxicity of heavy metals and metalloids to microbes in mixed-pollution settings, allowing the microbes to exert their organic-pollutant-degrading function.

Many reports confirm that clay substrates support biofilms that enhance pollutant biodegradation. It has been hypothesized that metabolites produced during degradation can migrate by sorption between clay mineral surfaces and microbial cell surfaces, allowing degradation to continue. The bacteria involved can defend themselves against threats from predatory amoebae, nematodes, and flagellates through a special clay structure formed in the process.

Clay minerals have also been reported to hinder the bioavailability of organic pollutants. That hindrance, however, can be overcome by the thoroughly green expedient of introducing biosurfactant-producing or extracellular-enzyme-producing bacteria into the system. Extracellular enzymes, which can take part in the degradation of certain compounds, can likewise be sorbed by soil particles, a process that limits degradation efficiency. This occurs more readily in mesoporous alumina or silica materials with a unimodal pore-size distribution; by contrast, minerals with multimodal pore-size distributions are less prone to such enzyme masking. Instead, the enzymes can penetrate the clay's micropore structure and promote hydrocarbon degradation. Studies have shown that bacterially produced extracellular enzymes can effectively penetrate the interlayer space of CTMA-modified montmorillonite and degrade the organophosphate insecticide (fenamiphos) held within it. Yet although both the advantages and the drawbacks of modified-clay biodegradation have been set out, the conditions underlying eventual success cannot be generalized; they depend on weighing, site by site and as a whole, the linked variables, down to the influence of any single component of the system.

Economic Analysis and Application Prospects of Clay-Mediated Biodegradation

Biodegradation by microorganisms is an economically feasible and environmentally friendly option in environmental remediation, and current research is devoted to raising the rate of the biodegradation process. Clay-mediated microbes can raise the biodegradation rate of environmental pollutants and promise maximum output for minimum input. Clays and modified clay minerals have in fact proved to be extremely cheap super-sorbents for a great many environmental pollutants. Clay deposits occur widely around the world (with reports from the United States, China, India, Australia, Brazil, the Czech Republic, Germany, and elsewhere), distributed along coastal flats and in seabed sediment beds. These locations thus become economical, natural sources of sorbents and of microbial carriers.
