English Literature and Translation
Original Foreign Literature and Translation

Sodium Polyacrylate: Also known as super-absorbent or "SAP" (super absorbent polymer); Kimberly-Clark used to call it SAM (super absorbent material). It is typically used in fine granular form (like table salt). It helps improve capacity for better retention in a disposable diaper, allowing the product to be thinner with improved performance and less usage of pine fluff pulp. The molecular structure of the polyacrylate has sodium carboxylate groups hanging off the main chain. When it comes in contact with water, the sodium detaches itself, leaving only carboxyl ions. Being negatively charged, these ions repel one another. The polymer also has cross-links, which effectively lead to a three-dimensional structure. It has a high molecular weight of more than a million; thus, instead of dissolving, it solidifies into a gel. The hydrogen in the water (H-O-H) is trapped by the acrylate due to the atomic bonds associated with the polarity forces between the atoms. Electrolytes in the liquid, such as salt minerals (urine contains 0.9% minerals), reduce polarity, thereby affecting superabsorbent properties, especially the superabsorbent capacity for liquid retention. This is the main reason why diapers containing SAP should never be tested with plain water. Linear molecular configurations have less total capacity than non-linear molecules but, on the other hand, retention of liquid in a linear molecule is higher than in a non-linear molecule, due to improved polarity. SAP, the superabsorbent, can be designed to absorb higher amounts of liquid (with less retention) or to have very high retention (but lower capacity). In addition, a surface cross-linker can be added to the superabsorbent particle to help it move liquids while it is saturated. This helps avoid the formation of "gel blocks", the phenomenon that describes the impossibility of moving liquids once a SAP particle gets saturated.

History of Super Absorbent Polymer Chemistry

Until the 1980s, water-absorbing materials were cellulosic or fiber-based products. Choices were tissue paper, cotton, sponge, and fluff pulp. The water retention capacity of these types of materials is only 20 times their weight, at most. In the early 1960s, the United States Department of Agriculture (USDA) was conducting work on materials to improve water conservation in soils. They developed a resin based on the grafting of acrylonitrile polymer onto the backbone of starch molecules (i.e. starch-grafting). The hydrolyzed product of this starch-acrylonitrile co-polymer gave water absorption greater than 400 times its weight, and the gel did not release liquid water the way that fiber-based absorbents do. The polymer came to be known as "Super Slurper". The USDA gave the technical know-how to several USA companies for further development of the basic technology.
A wide range of grafting combinations was attempted, including work with acrylic acid, acrylamide and polyvinyl alcohol (PVA). Since Japanese companies were excluded by the USDA, they started independent research using starch, carboxymethyl cellulose (CMC), acrylic acid, polyvinyl alcohol (PVA) and isobutylene maleic anhydride (IMA). Early global participants in the development of super absorbent chemistry included Dow Chemical, Hercules, General Mills Chemical, DuPont, National Starch & Chemical, Enka (Akzo), Sanyo Chemical, Sumitomo Chemical, Kao, Nihon Starch and Japan Exlan.

In the early 1970s, super absorbent polymer was used commercially for the first time, not for soil amendment applications as originally intended, but for disposable hygienic products. The first product markets were feminine sanitary napkins and adult incontinence products. In 1978, Parke-Davis (d.b.a. Professional Medical Products) used super absorbent polymers in sanitary napkins. Super absorbent polymer was first used in Europe in a baby diaper in 1982, when Schickedanz and Beghin-Say added the material to the absorbent core. Shortly thereafter, UniCharm introduced super absorbent baby diapers in Japan, while Procter & Gamble and Kimberly-Clark in the USA began to use the material.

The development of super absorbent technology and performance has been largely led by demands in the disposable hygiene segment. Strides in absorption performance have allowed the development of the ultra-thin baby diaper, which uses a fraction of the materials, particularly fluff pulp, which earlier disposable diapers consumed. Over the years, technology has progressed so that there is little if any starch-grafted super absorbent polymer used in disposable hygienic products. These super absorbents typically are cross-linked acrylic homo-polymers (usually sodium neutralized). Super absorbents used in soil amendment applications tend to be cross-linked acrylic-acrylamide co-polymers (usually potassium neutralized).

Besides granular super absorbent polymers, ARCO Chemical developed a super absorbent fiber technology in the early 1990s. This technology was eventually sold to Camelot Absorbents. There are super absorbent fibers commercially available today. While significantly more expensive than the granular polymers, the super absorbent fibers offer technical advantages in certain niche markets including cable wrap, medical devices and food packaging.

Sodium polyacrylate, also known as waterlock, is a polymer with the chemical formula [-CH2-CH(COONa)-]n, widely used in consumer products. It has the ability to absorb as much as 200 to 300 times its mass in water. Acrylate polymers generally are considered to possess an anionic charge. While sodium-neutralized polyacrylates are the most common form used in industry, there are also other salts available, including potassium, lithium and ammonium.

Applications

Acrylates and acrylic chemistry have a wide variety of industrial uses that include:
∙ Sequestering agents in detergents. (By binding hard-water elements such as calcium and magnesium, the surfactants in detergents work more efficiently.)
∙ Thickening agents
∙ Coatings
∙ Fake snow
∙ Super absorbent polymers. These cross-linked acrylic polymers are referred to as "Super Absorbents" and "Water Crystals", and are used in baby diapers. Copolymer versions are used in agriculture and other specialty absorbent applications.

The origins of super absorbent polymer chemistry trace back to the early 1960s when the U.S.
Department of Agriculture developed the first super absorbent polymer materials. This chemical is featured in the Maximum Absorbency Garment used by NASA.

Translation: Sodium Polyacrylate

Sodium polyacrylate, also known as a super absorbent or a super absorbent polymer (SAP); Kimberly-Clark used to call it SAM, i.e. super absorbent material.
English Literature and Translation (Computer Science)
NET-BASED TASK MANAGEMENT SYSTEM
Hector Garcia-Molina, Jeffrey D. Ullman, Jennifer Widom

ABSTRACT

In a net-based collaborative design environment, design resources become more and more varied and complex. Besides common information management systems, design resources can be organized in connection with design activities. A set of activities and resources linked by logic relations can form a task. A task has at least one objective and can be broken down into smaller ones, so a design project can be separated into many subtasks forming a hierarchical structure. The Task Management System (TMS) is designed to break down these tasks and assign certain resources to its task nodes. As a result of decomposition, all design resources and activities can be managed via this system.

KEY WORDS: Collaborative Design, Task Management System (TMS), Task Decomposition, Information Management System

1 Introduction

Along with the rapid upgrade of requests for advanced design methods, more and more design tools have appeared to support new design methods and forms. Design in a web environment with multiple partners involved requires a more powerful and efficient management system. Design partners can be located anywhere over the net with their own organizations. They can be mutually independent experts or teams of tens of employees. This article discusses a task management system (TMS) which manages design activities and resources by breaking down design objectives and re-organizing design resources in connection with the activities. Compared with common information management systems (IMSs), like product data management systems and document management systems, TMS can manage the whole design process. It has two tiers, which makes it much more flexible in structure. The lower tier consists of traditional common IMSs, and the upper one fulfills logic activity management by controlling a tree-like structure, allocating design resources and making decisions about how to carry out a design project. Its functioning paradigm varies in different projects depending on the project's scale and purpose. As a result of this structure, TMS can separate its data model from its logic model. This can bring about structure optimization and efficiency improvement, especially in a large-scale project.

2 Task Management in a Net-Based Collaborative Design Environment

2.1 Evolution of the Design Environment

During a net-based collaborative design process, designers transform their working environment from a single PC desktop to a LAN, and even extend it to a WAN. Each design partner can be a single expert or a combination of many teams covering several subjects, even if they are far away from each other geographically. In the net-based collaborative design environment, people at every terminal of the net can exchange information interactively with each other and send data to authorized roles via their design tools. The Co Design Space is such an environment, providing a set of these tools to help design partners communicate and obtain design information. Co Design Space aims at improving the efficiency of collaborative work, helping enterprises increase their sensitivity to markets and optimize the configuration of resources.

2.2 Management of Resources and Activities in a Net-Based Collaborative Environment

The expansion of the design environment also raises a new problem: how to organize the resources and design activities in that environment. As the number of design partners increases, resources increase in direct proportion.
But relations between resources increase in square ratio. Organizing these resources and their relations needs an integrated management system which can recognize them and provide them to designers when they are needed. One solution is to use a special information management system (IMS). An IMS can provide databases, file systems and in/out interfaces to manage a given resource. For example, there are several IMS tools in Co Design Space, such as the Product Data Management System, the Document Management System and so on. These systems can provide the special information which design users want. But the structure of design activities is much more complicated than these IMSs can manage, because even a simple design project may involve different design resources such as documents, drafts and equipment. Beyond product data and documents, design activities also need the support of organizations in design processes. This article puts forward a new design system which attempts to integrate different resources into the related design activities: the task management system (TMS).

3 Task Breakdown Model

3.1 Basis of Task Breakdown

When people set out to accomplish a project, they usually separate it into a sequence of tasks and finish them one by one. Each design project can be regarded as an aggregate of activities, roles and data. Here we define a task as a set of activities and resources that also has at least one objective. Because large tasks can be separated into small ones, if we separate a project target into several lower-level objectives, we say that the project is broken down into subtasks, and each objective maps to a subtask. Obviously, if every subtask is accomplished, the project is surely finished. So TMS integrates design activities and resources through planning these tasks.

Net-based collaborative design mostly aims at product development. Project managers (PMs) assign subtasks to designers or design teams who may be located in other cities. The designers and teams execute their own tasks under the constraints which are defined by the PM and negotiated with each other via the collaborative design environment. So the designers and teams are independent collaborative partners with loosely coupled relationships. They are driven together only by their design tasks. After the PM has finished decomposing the project, each designer or team leader who has been assigned a subtask becomes a lower-level PM of his own task. And he can do the same thing as his PM did to him, re-breaking down and re-assigning tasks.

So we put forward two rules for task breakdown in a net-based environment: loose coupling and objective-driven decomposition. Loose coupling means as little relationship between two tasks as possible. When two subtasks are coupled too tightly, the requirement for communication between their designers increases a lot. Too much communication will not only waste time and reduce efficiency, but also introduce errors. In this situation it becomes much more difficult than usual to manage the project process. On the other hand, every task has its own objective. From the viewpoint of the PM of a superior task, each subtask can be a black box, and how these subtasks are executed is unknown. The PM is concerned only with the results and constraints of these subtasks, and may never be concerned with what happens inside them.

3.2 Task Breakdown Method

According to the above basis, a project can be separated into several subtasks. And when this separation continues, it will finally be decomposed into a task tree. Except for the root of the tree, which is the project itself, all leaves and branches are subtasks.
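As a rough illustration of this breakdown model, the following Java sketch models a task node with an objective, attached resources, subtasks, and one finish-start (FS) time-sequence constraint. All class names are hypothetical: the paper describes its TMS as built with Java, JSP and Microsoft SQL 2000 but prints no source code, so this is only a sketch of the ideas in Section 3, using the satellite example developed below.

import java.util.ArrayList;
import java.util.List;

// Four TSC types from Section 3.2: finish-finish, finish-start, start-finish, start-start.
enum TscType { FF, FS, SF, SS }

class Task {
    final String objective;                           // every task has at least one objective
    final List<String> resources = new ArrayList<>(); // e.g. teams, documents, equipment
    final List<Task> subtasks = new ArrayList<>();
    int startDay, finishDay;                          // schedule, in project days

    Task(String objective) { this.objective = objective; }

    // Breaking down a task: each lower-level objective maps to a new subtask.
    Task breakDown(String subObjective) {
        Task sub = new Task(subObjective);
        subtasks.add(sub);
        return sub;
    }
}

class Tsc {
    final Task from, to;
    final TscType type;
    final int lagDays;

    Tsc(Task from, Task to, TscType type, int lagDays) {
        this.from = from; this.to = to; this.type = type; this.lagDays = lagDays;
    }

    // FS rule as stated in the text: "to" starts no later than lagDays after "from" finishes.
    boolean satisfied() {
        if (type == TscType.FS) return to.startDay <= from.finishDay + lagDays;
        return true; // the other three types are omitted in this sketch
    }
}

public class TaskTreeDemo {
    public static void main(String[] args) {
        Task project = new Task("Small-Sized-Satellite Design (3SD)");
        Task shd = project.breakDown("Satellite Hardware Design (SHD)");
        Task sse = project.breakDown("Satellite Software Exploit (SSE)");
        shd.resources.add("Design team B"); // resource assignment as in Fig. 1
        sse.resources.add("Design team A");
        shd.startDay = 0; shd.finishDay = 30;
        sse.startDay = 32;
        System.out.println("FS constraint satisfied: "
                + new Tsc(shd, sse, TscType.FS, 4).satisfied()); // true: 32 <= 30 + 4
    }
}

A logic constraint (LC) like the Tb/Tc exclusion example below would need its own evaluation hook rather than a simple day comparison, which is one reason the paper treats LCs as the more complicated case.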
Since a design project can be separated into a task tree, all its resources can be added to it depending on their relationships. For example, a Small-Sized-Satellite Design (3SD) project can be broken down into two design objectives, Satellite Hardware Design (SHD) and Satellite Software Exploit (SSE). It also has two teams, design team A and design team B, which we regard as design resources. When A is assigned to SSE and B to SHD, we break down the project as shown in Fig. 1. Other resources in a project can be managed in the same way. So when we define a collaborative design project's task model, we should first declare the project's targets. These targets include functional goals, performance goals, quality goals and so on. Then we can confirm how to execute the project, and next we can go on to break it down. The project can be separated into two or more subtasks, since there are at least two partners in a collaborative project. Alternatively, for more complex projects, we can separate the project into stepwise tasks that have time-sequence relationships, and then break down the stepwise tasks according to their phase-to-phase goals.

There is also another difficulty in executing a task breakdown. When a task is broken into several subtasks, it is not merely a simple summation of those subtasks. In most cases the subtasks have more complex relations. To solve this problem we use constraints: the time sequence constraint (TSC) and the logic constraint (LC). The time sequence constraint defines the time relationships among subtasks. The TSC has four different types: FF, FS, SF and SS, where F means finish and S means start. If we say Ta to Tb is FS with a lag of four days, it means Tb should start no later than four days after Ta is finished. The logic constraint is much more complicated: it defines logic relationships among multiple tasks. Here is an example: "Task TA is separated into three subtasks, Ta, Tb and Tc, with two more rules. Tb and Tc cannot be executed until Ta is finished. Tb and Tc cannot both be executed; that means if Tb is executed, Tc should not be, and vice versa. This depends on the result of Ta." So we say Tb and Tc have a logic constraint. After finishing breaking down the tasks, we get a task tree as Fig. 2 illustrates.

4 TMS Realization

4.1 TMS Structure

According to our discussion of the task tree model and the task breakdown basis, we developed a Task Management System (TMS) based on Co Design Space using the Java language, JSP technology and Microsoft SQL 2000. The task management system's structure is shown in Fig. 3. TMS has four main modules, namely Task Breakdown, Role Management, Statistics and Query, and Data Integration. The Task Breakdown module helps users work out the task tree. The Role Management module performs authentication and authorization for access control. The Statistics and Query module is an extra tool for users to find more information about their tasks. The last module, Data Integration, provides the in/out interface between TMS and its peripheral environment.

4.2 Key Points in System Realization

4.2.1 Integration with Co Design Space

Co Design Space is an integrated information management system which stores, shares and processes design data and provides a series of tools to support users. These tools can share all information in the database because they have a universal data model,
which is defined in an XML (eXtensible Markup Language) file and has a hierarchical structure. Based on this XML structure, the TMS data model definition is organized as follows.

<?xml version="1.0" encoding="UTF-8"?>
<!-- comment: Common Resource Definitions Above. The Following are Task Design -->
<!ELEMENT ProductProcessResource (Prcses?, History?, AsBuiltProduct*, ItemsObj?, Changes?, ManufacturerParts?, SupplierParts?, AttachmentsObj?, Contacts?, PartLibrary?, AdditionalAttributes*)>
<!ELEMENT Prcses (Prcs+)>
<!ELEMENT Prcs (Prcses?, PrcsNotes?, PrcsArc*, Contacts?, AdditionalAttributes*, Attachments?)>
<!ELEMENT PrcsArc EMPTY>
<!ELEMENT PrcsNotes (PrcsNote*)>
<!ELEMENT PrcsNote EMPTY>

Notes: The element "Prcs" is a task node object, and "Prcses" is a task-set object which contains subtask objects and belongs to a higher-level task object. One task object can have no more than one "Prcses" object. According to this definition, "Prcs" objects are organized in a tree formation. The other objects are resources, such as the task link object ("PrcsArc"), task notes ("PrcsNotes"), and task documents ("Attachments"). These resources are shared in the Co Design database.

Source: Computational Intelligence Research [J], Vol. 47, 2007: 647-703.

[Translation] Net-Based Task Management System

Abstract: In a net-based collaborative design environment, design resources become more and more varied and complex.
Automation Major: Foreign Literature and Translation (PLC)
1. Original Foreign Text (Photocopy)

A: Fundamentals of Single-chip Microcomputer

The single-chip microcomputer is the culmination of both the development of the digital computer and the integrated circuit, arguably the two most significant inventions of the 20th century [1]. These two types of architecture are found in single-chip microcomputers. Some employ the split program/data memory of the Harvard architecture, shown in Fig. 3-5A-1; others follow the philosophy, widely adopted for general-purpose computers and microprocessors, of making no logical distinction between program and data memory, as in the Princeton architecture, shown in Fig. 3-5A-2. In general terms, a single-chip microcomputer is characterized by the incorporation of all the units of a computer into a single device, as shown in Fig. 3-5A-3.
Full Translation of English Literature
This full translation includes four examples for the reader's reference.

Example 1:
Le Guin, Ursula K. (December 18, 2002). "Dancing at the Edge of the World: Thoughts on Words, Women, Places".

Introduction: In "Dancing at the Edge of the World," Ursula K. Le Guin explores the intersection of language, women, and places. She writes about the power of words, the role of women in society, and the importance of our connection to the places we inhabit. Through a series of essays, Le Guin invites readers to think critically about these topics and consider how they shape our understanding of the world.

Chapter 1: Language
Conclusion:

Example 2:
Introduction
English literature translation is an important field in the study of language and culture. The translation of English literature involves not only the linguistic translation of words or sentences but also the transfer of cultural meaning and emotional resonance. This article will discuss the challenges and techniques of translating English literature, as well as the importance of preserving the original author's voice and style in the translated text.

Challenges in translating English literature

Example 3:
Title: The Importance of Translation of Full English Texts
Translation plays a crucial role in bringing different languages and cultures together. More specifically, translating full English texts into different languages allows for access to valuable information and insights that may otherwise be inaccessible to those who do not speak English. In this article, we will explore the importance of translating full English texts and the benefits it brings.

Example 4:
Abstract: This article discusses the importance of translating English literature and the challenges translators face when putting together a full-text translation. It highlights the skills and knowledge needed to accurately convey the meaning and tone of the original text while preserving its cultural and literary nuances. Through a detailed analysis of the translation process, this article emphasizes the crucial role translators play in bridging the gap between languages and making English literature accessible to a global audience.

Introduction
English literature is a rich and diverse field encompassing a wide range of genres, styles, and themes. From classic works by Shakespeare and Dickens to contemporary novels by authors like J.K. Rowling and Philip Pullman, English literature offers something for everyone. However, for non-English speakers, accessing and understanding these works can be a challenge. This is where translation comes in.

Translation is the process of rendering a text from one language into another, while striving to preserve the original meaning, tone, and style of the original work. Translating a full-length English text requires a deep understanding of both languages, as well as a keen awareness of the cultural and historical context in which the work was written. Additionally, translators must possess strong writing skills in order to convey the beauty and complexity of the original text in a new language.

Challenges of Full-text Translation
Translating a full-length English text poses several challenges for translators. One of the most significant challenges is capturing the nuances and subtleties of the original work. English literature is known for its rich and layered language, with intricate wordplay, metaphors, and symbolism that can be difficult to convey in another language.
Translators must carefully consider each word and phrase in order to accurately convey the author's intended meaning.

Another challenge of full-text translation is maintaining the author's unique voice and style. Each writer has a distinct way of expressing themselves, and a good translator must be able to replicate this voice in the translated text. This requires a deep understanding of the author's writing style, as well as the ability to adapt it to the conventions of the target language.

Additionally, translators must be mindful of the cultural and historical context of the original work. English literature is deeply rooted in the history and traditions of the English-speaking world, and translators must be aware of these influences in order to accurately convey the author's intended message. This requires thorough research and a nuanced understanding of the social, political, and economic factors that shaped the work.

Skills and Knowledge Required
To successfully translate a full-length English text, translators must possess a wide range of skills and knowledge. First and foremost, translators must be fluent in both the source language (English) and the target language. This includes a strong grasp of grammar, syntax, and vocabulary in both languages, as well as an understanding of the cultural and historical context of the works being translated.

Translators must also have a keen eye for detail and a meticulous approach to their work. Every word, sentence, and paragraph must be carefully considered and translated with precision in order to accurately convey the meaning of the original text. This requires strong analytical skills and a deep understanding of the nuances and complexities of language.

Furthermore, translators must possess strong writing skills in order to craft a compelling and engaging translation. Translating a full-length English text is not simply a matter of substituting one word for another; it requires creativity, imagination, and a deep appreciation for the beauty of language. Translators must be able to capture the rhythm, cadence, and tone of the original work in their translation, while also adapting it to the conventions of the target language.

Conclusion
In conclusion, translating a full-length English text is a complex and challenging task that requires a high level of skill, knowledge, and creativity. Translators must possess a deep understanding of both the source and target languages, as well as the cultural and historical context of the work being translated. Through their careful and meticulous work, translators play a crucial role in making English literature accessible to a global audience, bridging the gap between languages and cultures. By preserving the beauty and complexity of the original text in their translations, translators enrich our understanding of literature and bring the works of English authors to readers around the world.
English Literature + Translation
Characterization of Production of Paclitaxel and Related Taxanes in Taxus cuspidata 'Densiformis' Suspension Cultures by LC, LC/MS, and LC/MS/MS

CHAPTER THREE
PLANT TISSUE CULTURE

I. Potential of Plant Cell Culture for Taxane Production

Several alternative sources of paclitaxel have been identified and are currently the subjects of considerable investigation worldwide. These include the total synthesis and biosynthesis of paclitaxel, the agricultural supply of taxoids from needles of Taxus species, hemisynthesis (the attachment of a side chain to biogenetic precursors of paclitaxel such as baccatin III or 10-deacetylbaccatin III), fungal production, and the production of taxoids by cell and tissue culture. This review will concentrate only on the latter possibility.

Plant tissue culture is one approach under investigation to provide large amounts and a stable supply of this compound exhibiting antineoplastic activity. A process to produce paclitaxel or paclitaxel-like compounds in cell culture has already been patented. The development of fast-growing cell lines capable of producing paclitaxel would not only overcome the limitations in paclitaxel supplies presently needed for clinical use, but would also help conserve the large number of trees that need to be harvested in order to isolate it. Currently, scientists and researchers have been successful in achieving fast plant growth but with limited paclitaxel production, or vice versa. Therefore, it is the objective of researchers to find a method that will promote fast growth and also produce a large amount of paclitaxel at the same time.

II. Factors Influencing Growth and Paclitaxel Content

A. Choice of Media for Growth

Gamborg's (B5) and Murashige & Skoog's (MS) media seem to be superior for callus growth compared to White's (WP) medium. The major difference between these two media is that the MS medium contains 40 mM nitrate and 20 mM ammonium, compared to 25 mM nitrate and 2 mM ammonium in the B5 medium. Many researchers have selected the B5 medium over the MS medium for all subsequent studies, although they achieve similar results.

Gamborg's B5 medium was used throughout our experiments for initiation of callus cultures and suspension cultures due to successfully published results. It was supplemented with 2% sucrose, 2 g/L casein hydrolysate, 2.4 mg/L picloram, and 1.8 mg/L α-naphthalene acetic acid. Agar (8 g/L) was used for solid cultures.

B. Initiation of Callus Cultures

Previous work indicated that bark explants seem to be the most useful for establishing callus. The age of the tree did not appear to affect the ability to initiate callus when comparing both young and old tree materials grown on Gamborg's B5 medium supplemented with 1-2 mg/L of 2,4-dichlorophenoxyacetic acid. Callus cultures initiated and maintained in total darkness were generally pale yellow to light brown in color. This resulted in sufficient masses of friable callus necessary for subculture within 3-4 weeks. However, the growth rate can decline substantially following the initial subculture and result in very slow-growing, brown-colored clumps of callus. It has been presumed that these brown-colored exudates are phenolic in nature and can eventually lead to cell death. This common phenomenon is totally random and unpredictable. Once this phenomenon has been triggered, the cells cannot be saved by placing them in fresh media. However, adding polyvinylpyrrolidone to the culture media can help keep the cells alive and growing.
Our experience with callus initiation was similar to those studies. Our studies found that callus which initiated early (usually within 2 weeks) frequently did not proliferate when subcultured, and turned brown and necrotic. In contrast, calli which developed from 4 weeks to 4 months after explants were first placed on initiation media were able to be continuously subcultured when transferred at 1-2 month intervals. The presence of surviving callus after subsequent subculturing was also noted. The relationship between paclitaxel concentration and callus initiation, however, has not been clarified.

C. Effect of Sugar

Sucrose is the preferred carbon source for growth in plant cell cultures, although the presence of a more rapidly metabolized sugar such as glucose favors fast growth. Other sugars such as lactose, galactose, glucose, and fructose also support cell growth to some extent. On the other hand, sugar alcohols such as mannitol and sorbitol are generally used to raise the osmotic potential rather than to support growth. The sugars added play a major role in the production of paclitaxel. In general, raising the initial sugar levels leads to an increase in secondary metabolite production. High initial levels of sugar increase the osmotic potential, although the role of osmotic pressure in the synthesis of secondary metabolites is not clear. Kim and colleagues have shown that the highest level of paclitaxel was obtained with fructose. The optimum concentration of each sugar for paclitaxel production was found to be the same, at 6%, in all cases. Wickremesinhe and Arteca have provided additional support that fructose is the most effective for paclitaxel production. However, other combinations of sugars, such as sucrose combined with glucose, also increased paclitaxel production.

The presence of extracellular invertase activity and rapid extracellular sucrose hydrolysis has been observed in many cell cultures. These reports suggest that cells secrete or possess on their surface excess amounts of invertase, which results in the hydrolysis of sucrose at a much faster rate. The hydrolysis of sucrose is coupled with the rapid utilization of fructose in the medium during the latter period of cell growth. This period of increased fructose availability coincided with the faster growth phase of the cells.

D. Effect of Picloram and Methyl Jasmonate

Picloram (4-amino-3,5,6-trichloropicolinic acid) increases growth rate, while methyl jasmonate has been reported to be an effective elicitor for the production of paclitaxel and other taxanes. However, little is known about the mechanisms or pathways by which these secondary metabolites are stimulated.

Picloram had been used by Furmanowa and co-workers and by Ketchum and Gibson, but no details on the effect of picloram on growth rates were given. Furmanowa and his colleagues observed growth of callus both in the presence and absence of light. The callus grew best in the dark, showing a 9.3-fold increase, whereas there was only a 2-4 fold increase in the presence of light. Without picloram, callus growth was 0.9-fold. Unfortunately, this auxin had no effect on taxane production, and the high callus growth rate was very unstable.

Jasmonates exhibit various morphological and physiological activities when applied exogenously to plants. They induce transcriptional activation of genes involved in the formation of secondary metabolites. Methyl jasmonate was shown to stimulate paclitaxel and cephalomannine (a taxane derivative) production in callus and suspension cultures.
However, taxane production was best with White's medium compared to Gamborg's B5 medium. This may be due to the reduced concentration of potassium nitrate and the lack of ammonium sulfate in White's medium.

E. Effect of Copper Sulfate and Mercuric Chloride

Metal ions have been shown to play significant roles in altering the expression of secondary metabolic pathways in plant cell culture. Secondary metabolites, such as furano-terpenes, have been produced by treatment of sweet potato root tissue with mercuric chloride. The results for copper sulfate, however, have not been reported.

F. Growth Kinetics and Paclitaxel Production

Low yields of paclitaxel may be attributed to the kinetics of taxane production, which is not fully understood. Many reports stated inconclusive results on the kinetics of taxane production, and more studies are needed in order to quantitate it. According to Nett-Fetto, the maximum instantaneous rate of paclitaxel production occurred at the third week of incubation. The paclitaxel level either declined or was not expected to increase upon further incubation. Paclitaxel production was very sensitive to slight variations in culture conditions. Due to this sensitivity, cell maintenance conditions, especially initial cell density, length of subculture interval, and temperature, must be kept as constant as possible.

Recently, Byun and co-workers made a very detailed study on the kinetics of cell growth and taxane production. In their investigation, it was observed that the highest cell weight occurred at day 7 after inoculation. Similarly, the maximum concentrations of 10-deacetylbaccatin III and baccatin III were detected at days 5 and 7, respectively. This result indicated that they are metabolic intermediates of paclitaxel. However, paclitaxel's maximum concentration was detected at day 22 and then gradually declined. Byun and his colleagues suggested that paclitaxel could be a metabolic intermediate like 10-deacetylbaccatin III and baccatin III, or that paclitaxel could be decomposed due to cellular morphological changes or the DNA degradation characteristic of cell death.

Pedtchanker's group also studied the kinetics of paclitaxel production by comparing suspension cultures in shake flasks and Wilson-type reactors, where bubbled air provided agitation and mixing. It was concluded that these cultures of Taxus cuspidata produced high levels of paclitaxel within three weeks (1.1 mg/L per day). It was also determined that the cultures in the shake flask and the Wilson-type reactor produced similar paclitaxel content. However, the Wilson-type reactor had a more rapid uptake of the nutrients (i.e. sugars, phosphate, calcium, and nitrate), probably due to the presence of the growth ring in the Wilson reactor. As a result, the cultures from the Wilson reactor grew to only 135 mg/L, while the shake flasks grew to 310 mg/L in three weeks.

In retrospect, strictly controlled culture conditions are essential to consistent production and yield. Slight alterations in media formulations can have significant effects on the physiology of cells, thereby affecting growth and product formation. All of the manipulations that affect growth and production of plant cells must be carefully integrated and controlled in order to maintain cell viability and stability.

[Translation] Characterization of Production of Paclitaxel and Related Taxanes in Taxus cuspidata 'Densiformis' Suspension Cultures by LC, LC/MS, and LC/MS/MS

Chapter Three: Plant Tissue Culture

I. Potential of Plant Cell Culture for Taxane Production

Several alternative sources of paclitaxel have been identified and are currently the subjects of considerable investigation worldwide.
Electrical Engineering Foreign Literature (with Translation)
Paper 1: Electric power consumption prediction model based on grey theory optimized by genetic algorithms

This paper introduces an electric power consumption prediction model based on a hybrid of grey theory and genetic algorithm optimization.
The model is built from time-series data and uses grey theory to handle the uncertainty in the data.
Through genetic algorithm optimization, the model predicts electric power consumption more accurately and achieves excellent forecasting results.
The model can be applied in large-scale power networks and offers high feasibility and reliability.
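The abstract gives no equations, so the following Java sketch shows only the standard GM(1,1) core of such a grey prediction model; the genetic-algorithm step the paper uses for optimization is omitted, and the consumption figures are made up for illustration.

public class Gm11Demo {
    public static void main(String[] args) {
        // Hypothetical yearly electricity consumption values (arbitrary units).
        double[] x0 = {2.8, 3.1, 3.3, 3.7, 4.0};
        int n = x0.length;

        // 1-AGO: accumulated generating operation smooths the raw series.
        double[] x1 = new double[n];
        x1[0] = x0[0];
        for (int i = 1; i < n; i++) x1[i] = x1[i - 1] + x0[i];

        // Least-squares fit of the grey equation x0(k) = -a*z1(k) + b,
        // where z1 is the adjacent mean of x1.
        double sZZ = 0, sZ = 0, sZX = 0, sX = 0;
        for (int k = 1; k < n; k++) {
            double z = 0.5 * (x1[k] + x1[k - 1]);
            sZZ += z * z; sZ += z; sZX += z * x0[k]; sX += x0[k];
        }
        int m = n - 1;
        double det = m * sZZ - sZ * sZ;
        double a = (sZ * sX - m * sZX) / det;   // development coefficient
        double b = (sZZ * sX - sZ * sZX) / det; // grey input

        // Time response: x1^(k+1) = (x0(1) - b/a) * exp(-a*k) + b/a,
        // so the next raw value is the difference of consecutive x1^ terms.
        double c = x0[0] - b / a;
        double next = (c * Math.exp(-a * n) + b / a) - (c * Math.exp(-a * (n - 1)) + b / a);
        System.out.printf("a = %.4f, b = %.4f, predicted next value = %.3f%n", a, b, next);
    }
}

In the paper's hybrid scheme, the genetic algorithm would presumably search over model parameters to minimize prediction error; that wrapper is not reproduced here.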
Paper 2: Intelligent control for energy-efficient operation of electric motors

This paper studies an intelligent control method for the energy-efficient operation of electric motors.
The method provides a more efficient control strategy, enabling the motor to run at lower power under different load conditions.
The intelligent controller uses fuzzy logic to determine the best control parameters and a genetic algorithm to optimize them.
Experimental results show that this intelligent control method can significantly reduce the motor's energy consumption and save electric energy.
Paper 3: Fault diagnosis system for power transformers based on dissolved gas analysis

This paper introduces a fault diagnosis system for power transformers based on dissolved gas analysis (DGA).
By analyzing gas samples taken from the transformer oil, the types of faults present inside the transformer can be detected and diagnosed.
The system uses an artificial neural network model to process and classify the gas analysis data.
Experimental results show that the system can accurately detect and diagnose transformer faults and helps enable effective maintenance and management.
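The abstract does not specify the network, so the Java sketch below is only a toy illustration of the data flow it describes: dissolved-gas concentrations in, fault class out. The three training patterns are synthetic placeholders, not real diagnostic thresholds, and the small backpropagation network stands in for whatever topology the paper actually used.

import java.util.Random;

public class DgaAnnDemo {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        // Inputs: normalized concentrations of H2, CH4, C2H6, C2H4, C2H2 (synthetic).
        double[][] x = {
            {0.9, 0.1, 0.1, 0.1, 0.0},   // invented "partial discharge" pattern
            {0.2, 0.3, 0.2, 0.8, 0.1},   // invented "thermal fault" pattern
            {0.5, 0.2, 0.1, 0.3, 0.9}};  // invented "arcing" pattern
        double[][] t = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}; // one-hot fault labels

        int nIn = 5, nHid = 6, nOut = 3;
        Random rnd = new Random(42);
        double[][] w1 = new double[nHid][nIn], w2 = new double[nOut][nHid];
        for (double[] r : w1) for (int j = 0; j < nIn; j++) r[j] = 0.5 * rnd.nextGaussian();
        for (double[] r : w2) for (int j = 0; j < nHid; j++) r[j] = 0.5 * rnd.nextGaussian();

        // Plain backpropagation with squared error and a fixed learning rate.
        double lr = 0.5;
        double[] h = new double[nHid], y = new double[nOut];
        for (int epoch = 0; epoch < 5000; epoch++) {
            for (int s = 0; s < x.length; s++) {
                forward(x[s], w1, w2, h, y);
                double[] dy = new double[nOut], dh = new double[nHid];
                for (int i = 0; i < nOut; i++) dy[i] = (y[i] - t[s][i]) * y[i] * (1 - y[i]);
                for (int j = 0; j < nHid; j++) {
                    double g = 0;
                    for (int i = 0; i < nOut; i++) g += dy[i] * w2[i][j];
                    dh[j] = g * h[j] * (1 - h[j]);
                }
                for (int i = 0; i < nOut; i++) for (int j = 0; j < nHid; j++) w2[i][j] -= lr * dy[i] * h[j];
                for (int i = 0; i < nHid; i++) for (int j = 0; j < nIn; j++) w1[i][j] -= lr * dh[i] * x[s][j];
            }
        }
        forward(x[2], w1, w2, h, y); // classify the "arcing" sample
        for (int i = 0; i < nOut; i++) System.out.printf("class %d score: %.3f%n", i, y[i]);
    }

    // One forward pass through the two sigmoid layers.
    static void forward(double[] in, double[][] w1, double[][] w2, double[] h, double[] y) {
        for (int i = 0; i < h.length; i++) {
            double a = 0;
            for (int j = 0; j < in.length; j++) a += w1[i][j] * in[j];
            h[i] = sigmoid(a);
        }
        for (int i = 0; i < y.length; i++) {
            double a = 0;
            for (int j = 0; j < h.length; j++) a += w2[i][j] * h[j];
            y[i] = sigmoid(a);
        }
    }
}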
Paper 4: Power quality improvement using series active filter based on iterative learning control technique

This paper studies a series active filter based on the iterative learning control technique for power quality improvement.
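The abstract names the technique but not its update law; in its standard textbook form (an assumption here, not taken from this paper), iterative learning control refines the filter's compensation reference from one mains cycle to the next:

\[ u_{k+1}(t) = u_k(t) + \Gamma\, e_k(t), \qquad e_k(t) = i_{\mathrm{ref}}(t) - i_k(t) \]

where k indexes the repetition (one period of the periodic disturbance), u_k is the control signal injected by the series active filter, e_k is the tracking error against the desired reference i_ref, and Gamma is the learning gain. Because the harmonics to be cancelled repeat every cycle, the error left over in one cycle can be fed forward to correct the next, which is what makes the method attractive for power-quality compensation.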
Original English Literature and Corresponding Translation
Adsorption characteristics of copper, lead, zinc and cadmium ions by tourmaline (Journal of Environmental Sciences, English edition)

JIANG Kan1,*, SUN Tie-heng1,2, SUN Li-na2, LI Hai-bo2 (1. School of Municipal and Environmental Engineering, Harbin Institute of Technology, Harbin 150090, China. jiangkan522@; 2. Key Laboratory of Environmental Engineering of Shenyang University, Shenyang 110041, China)

[Translation] Adsorption Characteristics of Tourmaline for Copper, Lead, Zinc and Cadmium Ions

Abstract: This paper studies the adsorption characteristics of tourmaline for Cu2+, Pb2+, Zn2+ and Cd2+ and establishes the adsorption equilibrium equations.
The adsorption isotherms of the four metal ions were studied and fitted with the Langmuir equation.
The results show that tourmaline can effectively remove heavy metals from aqueous solution, with the selectivity order Pb2+ > Cu2+ > Cd2+ > Zn2+.
The amount of metal ions adsorbed on tourmaline increases with the initial metal ion concentration in the medium.
Tourmaline can also raise the pH of the metal solution. The maximum adsorption capacities of tourmaline for Cu2+, Pb2+, Zn2+ and Cd2+ were found to be 78.86, 154.08, 67.25 and 66.67 mg/g, respectively; temperature in the range 25-55°C had little effect on the adsorption capacity.
In addition, the competitive adsorption of Cu2+, Pb2+, Zn2+ and Cd2+ was studied.
The adsorption capacity of tourmaline for single metal ions was observed to be in the order Pb > Cu > Zn > Cd, and in two-metal systems the dominance order was Pb > Cu, Pb > Zn, Pb > Cd, Cu > Zn, Cu > Cd, and Cd > Zn.
Keywords: adsorption; heavy metals; Langmuir isotherm; tourmaline

Introduction

Heavy metals come from the wastewater discharged by various industries, such as electroplating, metal surface treatment, textiles, storage batteries, mining, ceramics and glass.
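For reference, the Langmuir isotherm named in the abstract above has the standard single-solute form (a textbook formula, not quoted from this paper):

\[ q_e = \frac{q_m K_L C_e}{1 + K_L C_e} \]

where q_e (mg/g) is the amount adsorbed at equilibrium, C_e (mg/L) is the equilibrium solute concentration, q_m (mg/g) is the maximum monolayer capacity, and K_L (L/mg) is the Langmuir constant. The maximum capacities quoted above (78.86, 154.08, 67.25 and 66.67 mg/g) are fitted q_m values of this form.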
Data Acquisition: Foreign Literature Translation (Chinese and English)
Data Acquisition Foreign Literature Translation (including the original English text and the Chinese translation)

Source: Txomin Nieva. DATA ACQUISITION SYSTEMS [J]. Computers in Industry, 2013, 4(2): 215-237.

Original English Text

DATA ACQUISITION SYSTEMS
Txomin Nieva

Data acquisition systems, as the name implies, are products and/or processes used to collect information to document or analyze some phenomenon. In the simplest form, a technician logging the temperature of an oven on a piece of paper is performing data acquisition. As technology has progressed, this type of process has been simplified and made more accurate, versatile, and reliable through electronic equipment. Equipment ranges from simple recorders to sophisticated computer systems. Data acquisition products serve as a focal point in a system, tying together a wide variety of products, such as sensors that indicate temperature, flow, level, or pressure. Some common data acquisition terms are shown below.

Data collection technology has made great progress in the past 30 to 40 years. For example, 40 years ago, in a well-known college laboratory, the apparatus used to track temperature rises consisted of thermocouples, relays, interrogators, a bundle of paper, and a pencil. Today's university students are likely to process and analyze data automatically on PCs. There are many methods you can choose from to collect data. The choice of method depends on many factors, including the complexity of the task, the speed and accuracy you need, the documentation you want, and more. Whether simple or complex, a data acquisition system can operate and play its role.

The old way of using pencil and paper is still viable for some situations, and it is cheap, readily available, quick and easy to start. All you need is to capture multiple channels of digital information from a digital multimeter (DMM) and start recording data by hand. Unfortunately, this method is prone to errors, collects data slowly, and requires too much manual analysis. In addition, it can only collect data from a single channel at a time; when you use a multi-channel DMM, the system soon becomes very bulky and clumsy. Accuracy depends on the skill of the person recording, and you may need to do the scaling yourself. For example, if the DMM is not set up to handle temperature sensors, you have to work out the scaling factor by hand. Given these limitations, it is an acceptable method only if you need to run a quick experiment.

Modern versions of the strip chart recorder allow you to retrieve data from multiple inputs. They provide long-term paper records of data, since the data is in graphic format, and they are easy to use for collecting data in the field. Once a strip chart recorder has been set up, most recorders have enough internal intelligence to operate without an operator or computer. The disadvantages are the lack of flexibility and the relatively low precision, often limited to a percentage point, and you can only roughly make out small changes from the pen trace. For long-term, multi-channel monitoring the recorders can play a very good role; beyond that, their value is limited. For example, they cannot interact with other devices. Other concerns are the maintenance of pens and paper, the supply of paper and the storage of data, and most important, the abuse and waste of paper. However, recorders are fairly easy to set up and operate, providing a permanent record of data for quick and easy analysis.

Some benchtop DMMs offer selectable scanning capabilities.
The back of the instrument has a slot to receive a scanner card that can multiplex more inputs, typically 8 to 10 channels. This approach is inherently limited by the instrument's front panel, and its flexibility is also limited because it cannot exceed the number of available channels. An external PC usually handles the data acquisition and analysis.

PC plug-in cards are single-board measurement systems that use the ISA or PCI bus expansion slots in a PC. They often offer reading rates of up to 1000 per second. 8 to 16 channels are common, and the collected data is stored directly in the computer and then analyzed. Because the card is essentially a part of the computer, it is easy to set up a test. PC cards are also relatively inexpensive, partly because they rely on the host PC to provide power, mechanical housing, and the user interface.

Data collection options. On the downside, PC plug-in cards often have only 12-bit resolution, so you cannot detect small changes in the input signal. In addition, the electronic environment within a PC is often susceptible to noise, high clock rates, and bus noise, and this electrical environment limits the accuracy of a PC card. These plug-in cards also measure only a limited range of voltages. To measure other input signals, such as other voltage ranges, temperature, and resistance, you may need external signal conditioning devices. Other considerations include complex calibration and overall system cost, especially if you need to purchase additional signal conditioning devices or a terminal block for the card. Take this into account: if your needs fall within the capabilities and limitations of the card, the PC plug-in card provides an attractive method for data collection.

Electronic data loggers are typical stand-alone instruments that, once set up, can measure, record, and display data without the involvement of an operator or computer. They can handle multiple signal inputs, sometimes up to 120 channels. Their accuracy rivals that of the best desktop DMMs, operating at around 22-bit, 0.004 percent accuracy. Some electronic data loggers can measure with scaling, check results against user-defined limits, and output control signals.

One of the advantages of electronic data loggers is their built-in signal conditioning. Most can directly measure several different input signals without the need for additional signal conditioning devices; one channel can monitor thermocouples, RTDs, and voltages. Built-in compensation for thermocouples supports accurate temperature measurements, and they are typically equipped with multi-channel cards. The built-in intelligence of the electronic data logger helps you set the measurement period and specify the parameters for each channel. Once you have set everything up, the data logger behaves like an unattended device. The data it stores is held in memory, which can hold 500,000 or more readings.

Connecting to a PC makes it easy to transfer data to a computer for further analysis. Most electronic data loggers are designed to be flexible and simple to configure and operate, and most provide options for operation at remote locations via battery packs or other methods. Because of the A/D conversion technology used, certain electronic data loggers have a lower reading rate, especially when compared with PC plug-in cards. However, a reading rate of 250 per second is rarely needed.
Keep in mind that many of the phenomena being measured are physical in nature, such as temperature, pressure, and flow, and generally change slowly. In addition, because of the measurement accuracy of electronic data loggers, heavy averaging of readings is not necessary, as it often is with PC plug-in cards.

Front-end data acquisition is often packaged as modules that are typically connected to a PC or controller. They are used in automated tests to collect data and to control and cycle the stimulus signals sent to the equipment under test. The performance of front ends can be very high, matching the speed and accuracy of the best stand-alone instruments. Front-end data acquisition comes in many forms, including VXI versions such as the Agilent E1419A multi-function measurement and control VXI module, as well as proprietary card cages. Although the cost of front-end units has come down, these systems can be very expensive, and unless you need the high level of performance they provide, their price can be prohibitive. On the other hand, they do provide considerable flexibility and measurement capability.

Good, low-cost electronic data loggers have the right number of channels (20-60 channels) and scan rates that are relatively low but sufficient for most engineering applications. Some of the key applications include:
• Product characterization
• Thermal profiling of electronic products
• Environmental testing
• Environmental monitoring
• Component characterization
• Battery testing
• Building and computer-facility monitoring

A New System Design

The conceptual model of a universal system can be applied in the analysis phase of a specific system to better understand the problem and to specify the best solution more easily, based on the specific requirements of that system. The conceptual model of a universal system can also be used as a starting point for designing a specific system. Therefore, using a general-purpose conceptual model saves time and reduces the cost of specific system development. To test this hypothesis, we developed a DAS for railway equipment based on our generic DAS conceptual model. In this section, we summarize the main results and conclusions of this DAS development.

We analyzed the equipment model package. The result of this analysis is a partial conceptual model of a system consisting of a three-tier equipment model. We analyzed the equipment item package in the equipment context. Based on this analysis, we introduced a three-level item hierarchy in the conceptual model of the system, in which equipment items are specialized into individual equipment items.

We analyzed the equipment model monitoring standard package in the equipment context. One of the requirements of this system is the ability to record specific condition monitoring reports using a predefined set of data. We analyzed the equipment item monitoring standard package in the equipment context. The requirements of the system are: (i) the ability to record condition monitoring reports and event monitoring reports corresponding to the items, which can be triggered by time trigger conditions or event trigger conditions; (ii) the definition of private and public monitoring standards; and (iii) the ability to define custom and predefined train data sets. Therefore, we introduced equipment item monitoring standards, public standards, private standards, equipment monitoring standards, equipment condition monitoring standards, equipment item condition monitoring standards and equipment item event monitoring standards. Train item trigger conditions, train item time trigger conditions and train item event trigger conditions are specializations of equipment item trigger conditions, equipment item time trigger conditions and equipment item event trigger conditions; and train item data sets, train custom data sets and train predefined data sets are specializations of equipment item data sets, custom data sets and predefined data sets.
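The specializations listed above map naturally onto a class hierarchy. The following Java sketch uses hypothetical names derived from the concepts in this section; the paper's actual design classes are not reproduced in this excerpt.

import java.util.List;

// Trigger conditions: time-based or event-based.
abstract class TriggerCondition {}
class TimeTriggerCondition extends TriggerCondition { long periodMillis; }
class EventTriggerCondition extends TriggerCondition { String eventName; }

// Train-specific specializations, as in the railway DAS example.
class TrainTimeTriggerCondition extends TimeTriggerCondition {}
class TrainEventTriggerCondition extends EventTriggerCondition {}

// Data sets attached to an equipment item: custom or predefined,
// with train data sets as their specializations.
abstract class DataSet { List<String> channels; }
class CustomDataSet extends DataSet {}
class PredefinedDataSet extends DataSet { String standardName; }
class TrainCustomDataSet extends CustomDataSet {}
class TrainPredefinedDataSet extends PredefinedDataSet {}

// Monitoring standards: condition vs. event monitoring, public or private.
abstract class MonitoringStandard {
    boolean isPublic;              // public vs. private standards
    TriggerCondition trigger;      // time or event trigger
    DataSet dataSet;               // what to record when triggered
}
class ConditionMonitoringStandard extends MonitoringStandard {}
class EventMonitoringStandard extends MonitoringStandard {}

// Observations recorded in monitoring reports: measurements or category observations.
abstract class Observation { String channel; long timestamp; }
class Measurement extends Observation { double value; String unit; }
class CategoryObservation extends Observation { String category; }

class MonitoringReport {
    MonitoringStandard standard;   // the standard that produced this report
    List<Observation> observations;
}

Reading the hierarchy this way makes the paper's claim concrete: the generic concepts (trigger condition, data set, monitoring standard, observation) stay unchanged, and the railway-specific system only adds subclasses.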
Therefore, we have introduced the "monitoring standards for equipment projects", "public standards", "special standards", "equipment monitoring standards", "equipment condition monitoring standards", "equipment project status monitoring standards and equipment project event monitoring standards, respectively Training item triggering conditions, training item time triggering conditions and training item event triggering conditions are device equipment trigger conditions, equipment item time trigger conditions and device project event trigger condition specialization; and training item data sets, training custom data Sets and trains predefined data sets, which are device project data sets, custom data sets, and specialized sets of predefined data sets.Finally, we analyzed the observations and monitoring reports in the equipment environment. The system's requirement is to recordmeasurements and category observations. In addition, status and incident monitoring reports can be recorded. Therefore, we introduce the concept of observation, measurement, classification observation and monitoring report into the conceptual model of the system.Our generic DAS concept model plays an important role in the design of DAS equipment. We use this model to better organize the data that will be used by system components. Conceptual models also make it easier to design certain components in the system. Therefore, we have an implementation in which a large number of design classes represent the concepts specified in our generic DAS conceptual model. Through an industrial example, the development of this particular DAS demonstrates the usefulness of a generic system conceptual model for developing a particular system.中文译文数据采集系统Txomin Nieva数据采集系统, 正如名字所暗示的, 是一种用来采集信息成文件或分析一些现象的产品或过程。
Law Graduation Thesis: Foreign English Literature and Translation
Appendix 1: English Literature

INTRODUCTION

Offences of strict liability are those crimes which do not require mens rea with regard to at least one or more elements of the actus reus. The defendant need not have intended or known about that circumstance or consequence. Liability is said to be strict with regard to that element. For a good example see:

R v Prince [1875]: The defendant ran off with an under-age girl. He was charged with an offence of taking a girl under the age of 16 out of the possession of her parents, contrary to s55 of the Offences Against the Person Act 1861. The defendant knew that the girl was in the custody of her father, but he believed on reasonable grounds that the girl was aged 18. It was held that knowledge that the girl was under the age of 16 was not required in order to establish the offence. It was sufficient to show that the defendant intended to take the girl out of the possession of her father.

It is only in extreme and rare cases that no mens rea is required for liability, thereby making the particular offence "absolute".

GENERAL PRINCIPLES

The vast majority of strict liability crimes are statutory offences. However, statutes do not state explicitly that a particular offence is one of strict liability. Where a statute uses terms such as "knowingly" or "recklessly", the offence being created is one that requires mens rea. Alternatively, it may make it clear that an offence of strict liability is being created. In many cases it will be a matter for the courts to interpret the statute and decide whether mens rea is required or not. What factors are taken into account by the courts when assessing whether or not an offence falls into the category of strict liability offences?

THE MODERN CRITERIA

In Gammon (Hong Kong) Ltd v Attorney-General for Hong Kong [1984], the Privy Council considered the scope and role of strict liability offences in the modern criminal law and their effect upon the "presumption of mens rea".
Lord Scarman laid down the criteria upon which a court should decide whether or not it is appropriate to impose strict liability: "In their Lordships' opinion, the law … may be stated in the following propositions … : (1) there is a presumption of law that mens rea is required before a person can be held guilty of a criminal offence; (2) the presumption is particularly strong where the offence is 'truly criminal' in character; (3) the presumption applies to statutory offences, and can be displaced only if this is clearly or by necessary implication the effect of the statute; (4) the only situation in which the presumption can be displaced is where the statute is concerned with an issue of social concern, and public safety is such an issue; (5) even where a statute is concerned with such an issue, the presumption of mens rea stands unless it can be shown that the creation of strict liability will be effective to promote the objects of the statute by encouraging greater vigilance to prevent the commission of the prohibited act."

(1) PRESUMPTION OF MENS REA

Courts usually begin with the presumption in favor of mens rea; see the well-known statement by Wright J in Sherras v De Rutzen: "There is a presumption that mens rea, or evil intention, or knowledge of the wrongfulness of the act, is an essential ingredient in every offence; but that presumption is liable to be displaced either by the words of the statute creating the offence or by the subject-matter with which it deals, and both must be considered."

(2) GRAVITY OF PUNISHMENT

As a general rule, the more serious the criminal offence created by statute, the less likely the courts are to view it as an offence of strict liability. See:

Sweet v Parsley [1970]: The defendant was a landlady of a house let to tenants. She retained one room in the house for herself and visited occasionally to collect the rent and letters. While she was absent the police searched the house and found cannabis. The defendant was convicted under s5 of the Dangerous Drugs Act 1965 of "being concerned in the management of premises used for the smoking of cannabis". She appealed, alleging that she had no knowledge of the circumstances and indeed could not reasonably have been expected to have had such knowledge.

The House of Lords, quashing her conviction, held that it had to be proved that the defendant had intended the house to be used for drug-taking, since the statute in question created a serious, or "truly criminal", offence, conviction for which would have grave consequences for the defendant. Lord Reid stated that "a stigma still attaches to any person convicted of a truly criminal offence, and the more serious or more disgraceful the offence the greater the stigma". And equally important, "the press in this country are vigilant to expose injustice, and every manifestly unjust conviction made known to the public tends to injure the body politic [the people of a nation] by undermining public confidence in the justice of the law and of its administration." Lord Reid went on to point out that in any event it was impractical to impose absolute liability for an offence of this nature, as those who were responsible for letting properties could not possibly be expected to know everything that their tenants were doing.

(3) WORDING OF THE STATUTE

In determining whether the presumption in favor of mens rea is to be displaced, the courts are required to have reference to the whole statute in which the offence appears.
See: Cundy v Le Cocq (1884): The defendant was convicted of unlawfully selling alcohol to an intoxicated person, contrary to s13 of the Licensing Act 1872. On appeal, the defendant contended that he had been unaware of the customer's drunkenness and thus should be acquitted. The Divisional Court interpreted s13 as creating an offence of strict liability, since it was itself silent as to mens rea, whereas other offences under the same Act expressly required proof of knowledge on the part of the defendant. It was held that it was not necessary to consider whether the defendant knew, or had means of knowing, or could with ordinary care have detected, that the person served was drunk. If he served a drink to a person who was in fact drunk, he was guilty. Stephen J stated: "Here, as I have already pointed out, the object of this part of the Act is to prevent the sale of intoxicating liquor to drunken persons, and it is perfectly natural to carry that out by throwing on the publican the responsibility of determining whether the person supplied comes within that category."

(4) ISSUES OF SOCIAL CONCERN

See: R v Blake (1996): Investigation officers heard an unlicensed radio station broadcast and traced it to a flat where the defendant was discovered alone, standing in front of the record decks, still playing music and wearing a set of headphones. Though the defendant admitted that he knew he was using the equipment, he claimed that he believed he was making demonstration tapes and did not know he was transmitting. The defendant was convicted of using wireless telegraphy equipment without a license, contrary to s1(1) of the Wireless Telegraphy Act 1949, and appealed on the basis that the offence required mens rea.

The Court of Appeal held that the offence was an absolute (actually a strict) liability offence. The Court applied Lord Scarman's principles in Gammon and found that, though the presumption in favor of mens rea was strong because the offence carried a sentence of imprisonment and was, therefore, "truly criminal", the offence dealt with issues of serious social concern in the interests of public safety (namely, frequent unlicensed broadcasts on frequencies used by emergency services), and the imposition of strict liability encouraged greater vigilance in setting up careful checks to avoid committing the offence.

(5) IS THERE ANY PURPOSE IN IMPOSING STRICT LIABILITY?

The courts will be reluctant to construe a statute as imposing strict liability upon a defendant where there is evidence to suggest that, despite his having taken all reasonable steps, he could not avoid the commission of an offence. See:

Sherras v De Rutzen [1895]: The defendant was convicted of selling alcohol to a police officer whilst on duty, contrary to s16(2) of the Licensing Act 1872. He had reasonably believed the constable to be off duty, as he had removed his arm-band, which was the acknowledged method of signifying off duty. The Divisional Court held that the conviction should be quashed, despite the absence from s16(2) of any words requiring proof of mens rea as an element of the offence. Wright J expressed the view that the presumption in favor of mens rea would only be displaced by the wording of the statute itself, or by its subject matter. In this case the latter factor was significant, in that no amount of reasonable care by the defendant would have prevented the offence from being committed.
Wright J stated: "It is plain that if guilty knowledge is not necessary, no care on the part of the publican could save him from a conviction under section 16, subsection (2), since it would be as easy for the constable to deny that he was on duty when asked, or to produce a forged permission from his superior officer, as to remove his armlet before entering the public house. I am, therefore, of opinion that this conviction ought to be quashed."

MODERN EXAMPLES
The following case is a modern example of the imposition of strict liability: Alphacell v Woodward [1972]: The defendants were charged with causing polluted matter to enter a river, contrary to s2 of the Rivers (Prevention of Pollution) Act 1951. The river had in fact been polluted because a pipe connected to the defendants' factory had been blocked, and the defendants had not been negligent. The House of Lords nevertheless held that the defendants were liable. Lord Salmon stated: "If this appeal succeeded and it were held to be the law that no conviction could be obtained under the 1951 Act unless the prosecution could discharge the often impossible onus of proving that the pollution was caused intentionally or negligently, a great deal of pollution would go unpunished and undeterred to the relief of many riparian factory owners. As a result, many rivers which are now filthy would become filthier still and many rivers which are now clean would lose their cleanliness. The legislature no doubt recognized that as a matter of public policy this would be most unfortunate. Hence s2(1)(a), which encourages riparian factory owners not only to take reasonable steps to prevent pollution but to do everything possible to ensure that they do not cause it."

ARGUMENTS FOR STRICT LIABILITY
1. The primary function of the courts is the prevention of forbidden acts. What acts should be regarded as forbidden? Surely only such acts as we can assert ought not to have been done. Some of the judges who upheld the conviction of Prince did so on the ground that men should be deterred from taking girls out of the possession of their parents, whatever the girl's age. This reasoning can hardly be applied to many modern offences of strict liability. We do not wish to deter people from driving cars, being concerned in the management of premises, financing hire-purchase transactions or canning peas. These acts, if done with all proper care, are not such acts as the law should seek to prevent.
2. Another argument that is frequently advanced in favor of strict liability is that, without it, many guilty people would escape - that there is neither time nor personnel available to litigate the culpability of each particular infraction. This argument assumes that it is possible to deal with these cases without deciding whether D had mens rea or not, whether he was negligent or not. Certainly D may be convicted without deciding these questions, but how can he be sentenced? Suppose that a butcher sells some meat which is unfit for human consumption. Clearly the court will deal differently with (i) the butcher who knew that the meat was tainted; (ii) the butcher who did not know, but ought to have known; and (iii) the butcher who did not know and had no means of finding out. Sentence can hardly be imposed without deciding into which category the convicted person falls.
3. The argument which is probably most frequently advanced by the courts for imposing strict liability is that it is necessary to do so in the interests of the public.
Now it may be conceded that in many of the instances where strict liability has been imposed, the public does need protection against negligence and, assuming that the threat of punishment can make the potential harm-doer more careful, there may be a valid ground for imposing liability for negligence as well as where there is mens rea. This would be a plausible argument in favor of strict liability if there were no middle way between mens rea and strict liability - that is, liability for negligence - and the judges have generally proceeded on the basis that there is no such middle way. Liability for negligence has rarely been spelled out of a statute except where, as in driving without due care, it is explicitly required. Lord Devlin has said: "It is not easy to find a way of construing a statute apparently expressed in terms of absolute liability so as to produce the requirement of negligence."

ARGUMENTS AGAINST STRICT LIABILITY
1. The case against strict liability, then, is, first, that it is unnecessary. It results in the conviction of persons who have behaved impeccably and who should not be required to alter their conduct in any way.
2. Secondly, that it is unjust. Even if an absolute discharge can be given, D may feel rightly aggrieved at having been formally convicted of an offence for which he bore no responsibility. Moreover, a conviction may have far-reaching consequences outside the courts, so that it is no answer to say that only a nominal penalty is imposed.
3. The imposition of liability for negligence would in fact meet the arguments of most of those who favor strict liability. "Such statutes are not meant to punish the vicious will but to put pressure upon the thoughtless and inefficient to do their whole duty in the interest of public health or safety or morals." The "thoughtless and inefficient" are, of course, the negligent. The objection to offences of strict liability is not that these persons are penalized, but that others who are completely innocent are also liable to conviction. Though Lord Devlin was skeptical about the possibility of introducing the criterion of negligence (above), in Reynolds v Austin (1951) he stated that strict liability should only apply when there is something that the defendant can do to promote the observance of the law - which comes close to requiring negligence. If there were something which D could do to prevent the commission of the crime and which he failed to do, he might generally be said to have failed to comply with a duty - perhaps a high duty - of care; and so to have been negligent.
4. In Alphacell v Woodward (1972) Lord Salmon thought the relevant statutory section "encourages riparian factory owners not only to take reasonable steps to prevent pollution but to do everything possible to ensure that they do not cause it." This suggests that, however vast the expenditure involved, and however unreasonable it may be in relation to the risk, D is under a duty to take all possible steps. Yet it may be doubted whether factory owners will in fact do more than is reasonable; and it is questionable whether they ought to be required to do so, at the risk - even though it be unlikely - of imprisonment. The contrary argument is that the existence of strict liability does induce organizations to aim at higher and higher standards.

POSSIBLE DEVELOPMENTS
There are several possible compromises between mens rea and strict liability in regulatory offences. A "halfway house" has developed in Australia.
The effect of the Australian cases is this: D might be convicted without proof of any mens rea by the Crown, but he is acquitted if he proves on a balance of probabilities that he lacked mens rea and was not negligent; i.e., that he had an honest and reasonable belief in a state of facts which, if true, would have made his act innocent. The onus of proving reasonable mistake is on D.

STATUTORY DEFENCES
It is common for the drastic effect of a statute imposing strict liability to be mitigated by the provision of a statutory defence. It is instructive to consider one example. Various offences relating to the treatment and sale of food are enacted by the first twenty sections of the Food Safety Act 1990. Many, if not all, of these are strict liability offences. Section 21(1), however, provides that it shall be a defence for the person charged with any of the offences to prove that he took all reasonable precautions and exercised all due diligence to avoid the commission of the offence by himself or by a person under his control. Statutory defences usually impose on the defendant a burden of proving that he had no mens rea and that he took all reasonable precautions and exercised all due diligence to avoid the commission of an offence. The effect of such provisions is that the prosecution need do no more than prove that the accused did the prohibited act, and it is then for him to establish, if he can, that he did it innocently. Such provisions are a distinct advance on unmitigated strict liability.

Appendix 2: Translation of the English Literature
Introduction: Strict liability offences are those offences for which mens rea is not required in respect of one or more elements of the actus reus.
English Literature and Translation (Computer Science)
The increasing complexity of design resources in a net-based collaborative design environment exceeds what common systems can manage. Design resources can be organized in relation to design activities. A task is formed by a set of activities and resources linked by logical relations. Management of all design resources and activities is unified via a Task Management System (TMS), which is designed to break down tasks and assign resources to task nodes.
2 Task Management System (TMS)
TMS is a system designed to manage the tasks and resources involved in a design project. It decomposes tasks into smaller subtasks, supports the management of all design resources and activities, and assigns resources to task nodes.
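The decomposition described above amounts to a tree of task nodes with design resources attached to the nodes. As a minimal illustrative sketch of such a structure (all class, task and resource names here are hypothetical, not from the original paper):

```python
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    """A node in the task tree; resources are attached directly to nodes."""
    name: str
    resources: list = field(default_factory=list)   # design resources assigned here
    subtasks: list = field(default_factory=list)    # child TaskNodes

    def decompose(self, *names):
        """Break this task down into smaller subtasks."""
        self.subtasks.extend(TaskNode(n) for n in names)

    def assign(self, resource):
        """Assign a design resource to this task node."""
        self.resources.append(resource)

    def all_resources(self):
        """Collect every resource in this subtree (whole-task view)."""
        out = list(self.resources)
        for t in self.subtasks:
            out.extend(t.all_resources())
        return out

# Usage: a root task broken down into subtasks, with resources on the nodes.
root = TaskNode("gearbox design")
root.decompose("shaft layout", "housing design")
root.subtasks[0].assign("CAD model library")
root.subtasks[1].assign("FEA solver license")
print(root.all_resources())   # ['CAD model library', 'FEA solver license']
```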
3 Collaborative Design
Collaborative design is a process in which multiple participants work toward a common goal. In a net-based collaborative design environment, a unified organization is needed for all design resources and activities.
English Literature Translation
Preparation and characterization of Ag-TiO2 hybrid cluster powders [1]
Abstract: The liquid-phase arc-discharge method was used to prepare nano-sized Ag-TiO2 composite ultrafine powders. XRD patterns and TEM images show that the particles have a gourd-like morphology and a narrow size distribution. The influence of the experimental conditions on the product is discussed, and the powders prepared by this method are compared with those prepared by γ-ray irradiation.
Introduction: Advances in materials synthesis techniques have improved the ability to study specific electronic and optical properties. They have also led to the rapid development of devices and of various effects, such as integrated-optical polarizers [1] and the quantum Hall effect. The length scales required for control of these structures are at the nanometre level [2]. A new challenge facing scientists is the growth of semiconductor quantum dots, which exhibit novel optical responses and have attracted research interest both in their fundamental physics and in applications such as third-order nonlinearity and photoluminescence. One example is the Ag-TiO2 composite, synthesized by the colloidal method [3] or by γ-ray irradiation [4]. Compared with other methods of preparing ultrafine metal particles, γ-ray irradiation can produce powders at room temperature and ambient pressure. In this letter, we develop a new method, the liquid-phase arc-discharge method, to prepare nanocomposites; after hydrothermal treatment, nanometre-scale ultrafine powders can be obtained.
Preparation and photocatalytic activity of an immobilized composite photocatalyst (titania nanoparticles/activated carbon) [2]
Abstract: An immobilized composite photocatalyst, TiO2 nanoparticles/activated carbon (AC), was prepared, and its photocatalytic activity in the degradation of textile dyes was studied. The AC was prepared from rapeseed hulls. Basic Red 18 (BR18) and Basic Red 46 (BR46) were used as model dyes. Fourier-transform infrared spectroscopy (FTIR), wavelength-dispersive X-ray spectroscopy (WDX), scanning electron microscopy (SEM), UV-visible spectrophotometry, chemical oxygen demand (COD) measurements, and ion chromatography (IC) were used for analysis.
Short English Literature Passage (original with Chinese translation)
A fern that hyperaccumulates arsenic (this is the title; the original can be found by searching Baidu; the original also contains a table, which is not translated here)

A hardy, versatile, fast-growing plant helps to remove arsenic from contaminated soils.

Contamination of soils with arsenic, which is both toxic and carcinogenic, is widespread [1]. We have discovered that the fern Pteris vittata (brake fern) is extremely efficient in extracting arsenic from soils and translocating it into its above-ground biomass. This plant, which, to our knowledge, is the first known arsenic hyperaccumulator as well as the first fern found to function as a hyperaccumulator, has many attributes that recommend it for use in the remediation of arsenic-contaminated soils.

We found brake fern growing on a site in Central Florida contaminated with chromated copper arsenate (Fig. 1a). We analysed the fronds of plants growing at the site for total arsenic by graphite furnace atomic absorption spectroscopy. Of 14 plant species studied, only brake fern contained large amounts of arsenic (As; 3,280–4,980 p.p.m.). We collected additional samples of the plant and soil from the contaminated site (18.8–1,603 p.p.m. As) and from an uncontaminated site (0.47–7.56 p.p.m. As). Brake fern extracted arsenic efficiently from these soils into its fronds: plants growing in the contaminated site contained 1,442–7,526 p.p.m. arsenic and those from the uncontaminated site contained 11.8–64.0 p.p.m. These values are much higher than those typical for plants growing in normal soil, which contain less than 3.6 p.p.m. of arsenic [3].

As well as being tolerant of soils containing as much as 1,500 p.p.m. arsenic, brake fern can take up large amounts of arsenic into its fronds in a short time (Table 1). Arsenic concentration in fern fronds growing in soil spiked with 1,500 p.p.m. arsenic increased from 29.4 to 15,861 p.p.m. in two weeks. Furthermore, in the same period, ferns growing in soil containing just 6 p.p.m. arsenic accumulated 755 p.p.m. of arsenic in their fronds, a 126-fold enrichment. Arsenic concentrations in brake fern roots were less than 303 p.p.m., whereas those in the fronds reached 7,234 p.p.m. Addition of 100 p.p.m. arsenic significantly stimulated fern growth, resulting in a 40% increase in biomass compared with the control (data not shown).

After 20 weeks of growth, the plant was extracted using a solution of 1:1 methanol:water to speciate arsenic with high-performance liquid chromatography–inductively coupled plasma mass spectrometry. Almost all arsenic was present as relatively toxic inorganic forms, with little detectable organoarsenic species [4]. The concentration of As(III) was greater in the fronds (47–80%) than in the roots (8.3%), indicating that As(V) was converted to As(III) during translocation from roots to fronds.

As well as removing arsenic from soils containing different concentrations of arsenic (Table 1), brake fern also removed arsenic from soils containing different arsenic species (Fig. 1c). Again, up to 93% of the arsenic was concentrated in the fronds. Although both FeAsO4 and AlAsO4 are relatively insoluble in soils [1], brake fern hyperaccumulated arsenic derived from these compounds into its fronds (136–315 p.p.m.) at levels 3–6 times greater than soil arsenic.

Brake fern is mesophytic and is widely cultivated and naturalized in many areas with a mild climate. In the United States, it grows in the southeast and in southern California [5]. The fern is versatile and hardy, and prefers sunny (unusual for a fern) and alkaline environments (where arsenic is more available).
It has considerable biomass, and is fast growing, easy to propagate, and perennial. We believe this is the first report of significant arsenic hyperaccumulation by an unmanipulated plant. Brake fern has great potential to remediate arsenic-contaminated soils cheaply and could also aid studies of arsenic uptake, translocation, speciation, distribution and detoxification in plants.

*Soil and Water Science Department, University of Florida, Gainesville, Florida 32611-0290, USA (e-mail: lqma@)
†Cooperative Extension Service, University of Georgia, Terrell County, PO Box 271, Dawson, Georgia 31742, USA
‡Department of Chemistry & Southeast Environmental Research Center, Florida International University, Miami, Florida 33199, USA

1. Nriagu, J. O. (ed.) Arsenic in the Environment Part 1: Cycling and Characterization (Wiley, New York, 1994).
2. Brooks, R. R. (ed.) Plants that Hyperaccumulate Heavy Metals (Cambridge Univ. Press, 1998).
3. Kabata-Pendias, A. & Pendias, H. in Trace Elements in Soils and Plants 203–209 (CRC, Boca Raton, 1991).
4. Koch, I., Wang, L., Ollson, C. A., Cullen, W. R. & Reimer, K. J. Envir. Sci. Technol. 34, 22–26 (2000).
5. Jones, D. L. Encyclopaedia of Ferns (Lothian, Melbourne, 1987).
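To make the enrichment arithmetic quoted above concrete (for example, the 126-fold figure), here is a minimal sketch; the function name and framing are illustrative, not from the paper:

```python
def enrichment_factor(frond_ppm: float, soil_ppm: float) -> float:
    """Ratio of frond arsenic concentration to soil arsenic concentration."""
    return frond_ppm / soil_ppm

# Values quoted in the passage:
print(round(enrichment_factor(755, 6)))          # -> 126 (ferns in 6 p.p.m. soil)
print(round(enrichment_factor(15861, 1500), 1))  # -> 10.6 (spiked soil, two weeks)
```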
English Literature Translation
Yanching Institute of Technology (燕京理工学院)
Foreign Literature: Original and Translation
School: School of Mechanical and Electrical Engineering; Major: Mechanical Engineering; Student ID: 130310159; Name: Li Jianpeng (李健鹏); Advisors: Wang Haipeng (王海鹏), Huang Zhiqiang (黄志强); March 2017

Original
Style of Materials
Materials may be grouped in several ways. Scientists often classify materials by their state: solid, liquid, or gas. They also separate them into organic (once living) and inorganic (never living) materials. For industrial purposes, materials are divided into engineering materials and nonengineering materials. Engineering materials are those used in manufacture which become parts of products. Nonengineering materials are the chemicals, fuels, lubricants, and other materials used in the manufacturing process which do not become part of the product. Engineering materials may be further subdivided into: ① metals, ② ceramics, ③ composites, ④ polymers, etc.

Metals and Metal Alloys
Metals are elements that generally have good electrical and thermal conductivity. Many metals have high strength, high stiffness, and good ductility. Some metals, such as iron, cobalt and nickel, are magnetic. At low temperatures, some metals and intermetallic compounds become superconductors.

What is the difference between an alloy and a pure metal? Pure metals are elements which come from a particular area of the periodic table. Examples of pure metals include copper in electrical wires and aluminum in cooking foil and beverage cans. Alloys contain more than one metallic element. Their properties can be changed by changing the elements present in the alloy. Examples of metal alloys include stainless steel, which is an alloy of iron, nickel, and chromium, and gold jewelry, which usually contains an alloy of gold and nickel.

Why are metals and alloys used? Many metals and alloys have high densities and are used in applications which require a high mass-to-volume ratio. Some metal alloys, such as those based on aluminum, have low densities and are used in aerospace applications for fuel economy. Many alloys also have high fracture toughness, which means they can withstand impact and are durable.

What are some important properties of metals?
Density is defined as a material's mass divided by its volume. Most metals have relatively high densities, especially compared to polymers. Materials with high densities often contain atoms with high atomic numbers, such as gold or lead. However, some metals such as aluminum or magnesium have low densities, and are used in applications that require other metallic properties but also require low weight.

Fracture toughness can be described as a material's ability to avoid fracture, especially when a flaw is introduced. Metals can generally contain nicks and dents without weakening very much, and are impact resistant. A football player counts on this when he trusts that his facemask won't shatter.

Plastic deformation is the ability to bend or deform before breaking. As engineers, we usually design materials so that they don't deform under normal conditions. You don't want your car to lean to the east after a strong west wind. However, sometimes we can take advantage of plastic deformation. The crumple zones in a car absorb energy by undergoing plastic deformation before they break.

The atomic bonding of metals also affects their properties. In metals, the outer valence electrons are shared among all atoms, and are free to travel everywhere. Since electrons conduct heat and electricity, metals make good cooking pans and electrical wires. It is impossible to see through metals, since these valence electrons absorb any photons of light which reach the metal.
No photons pass through. Alloys are compounds consisting of more than one metal. Adding other metals can affect the density, strength, fracture toughness, plastic deformation, electrical conductivity and environmental degradation. For example, adding a small amount of iron to aluminum will make it stronger. Also, adding some chromium to steel will slow the rusting process, but will make it more brittle.

Ceramics and Glasses
A ceramic is often broadly defined as any inorganic nonmetallic material. By this definition, ceramic materials would also include glasses; however, many materials scientists add the stipulation that "ceramic" must also be crystalline. A glass is an inorganic nonmetallic material that does not have a crystalline structure. Such materials are said to be amorphous.

Properties of Ceramics and Glasses
Some of the useful properties of ceramics and glasses include high melting temperature, low density, high strength, stiffness, hardness, wear resistance, and corrosion resistance. Many ceramics are good electrical and thermal insulators. Some ceramics have special properties: some ceramics are magnetic materials; some are piezoelectric materials; and a few special ceramics are superconductors at very low temperatures. Ceramics and glasses have one major drawback: they are brittle.

Ceramics are not typically formed from the melt. This is because most ceramics will crack extensively (i.e. form a powder) upon cooling from the liquid state. Hence, all the simple and efficient manufacturing techniques used for glass production, such as casting and blowing, which involve the molten state, cannot be used for the production of crystalline ceramics. Instead, "sintering" or "firing" is the process typically used. In sintering, ceramic powders are processed into compacted shapes and then heated to temperatures just below the melting point. At such temperatures, the powders react internally to remove porosity and fully dense articles can be obtained.

An optical fiber contains three layers: a core made of highly pure glass with a high refractive index for the light to travel, a middle layer of glass with a lower refractive index, known as the cladding, which protects the core glass from scratches and other surface imperfections, and an outer polymer jacket to protect the fiber from damage. In order for the core glass to have a higher refractive index than the cladding, the core glass is doped with a small, controlled amount of an impurity, or dopant, which causes light to travel slower, but does not absorb the light. Because the refractive index of the core glass is greater than that of the cladding, light traveling in the core glass will remain in the core glass due to total internal reflection as long as the light strikes the core/cladding interface at an angle greater than the critical angle. The total internal reflection phenomenon, as well as the high purity of the core glass, enables light to travel long distances with little loss of intensity.
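The total-internal-reflection condition described above reduces to simple arithmetic: light stays in the core when it meets the core/cladding interface at more than the critical angle, whose sine is the ratio of the two refractive indices. A small sketch (the index values are illustrative assumptions, not from the text):

```python
import math

def critical_angle_deg(n_core: float, n_cladding: float) -> float:
    """Critical angle in degrees, measured from the interface normal:
    sin(theta_c) = n_cladding / n_core, valid when n_core > n_cladding."""
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative indices for a doped-silica core and its cladding:
print(round(critical_angle_deg(1.475, 1.460), 1))  # -> 81.8
```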
Composites
Composites are formed from two or more types of materials. Examples include polymer/ceramic and metal/ceramic composites. Composites are used because the overall properties of the composites are superior to those of the individual components. For example, polymer/ceramic composites have a greater modulus than the polymer component, but aren't as brittle as ceramics. Two types of composites are fiber-reinforced composites and particle-reinforced composites.

Fiber-reinforced Composites
Reinforcing fibers can be made of metals, ceramics, glasses, or polymers that have been turned into graphite and are known as carbon fibers. Fibers increase the modulus of the matrix material. The strong covalent bonds along the fiber's length give them a very high modulus in this direction, because to break or extend the fiber the bonds must also be broken or moved. Fibers are difficult to process into composites, making fiber-reinforced composites relatively expensive. Fiber-reinforced composites are used in some of the most advanced, and therefore most expensive, sports equipment, such as a time-trial racing bicycle frame which consists of carbon fibers in a thermoset polymer matrix. Body parts of race cars and some automobiles are composites made of glass fibers (or fiberglass) in a thermoset matrix. Fibers have a very high modulus along their axis, but a low modulus perpendicular to their axis. Fiber composite manufacturers often rotate layers of fibers to avoid directional variations in the modulus.

Particle-reinforced Composites
Particles used for reinforcing include ceramics and glasses such as small mineral particles, metal particles such as aluminum, and amorphous materials, including polymers and carbon black. Particles are used to increase the modulus of the matrix, to decrease the permeability of the matrix, and to decrease the ductility of the matrix. An example of a particle-reinforced composite is an automobile tire, which has carbon black particles in a matrix of polyisobutylene elastomeric polymer.

Polymers
A polymer has a repeating structure, usually based on a carbon backbone. The repeating structure results in large chainlike molecules. Polymers are useful because they are lightweight, corrosion resistant, easy to process at low temperatures, and generally inexpensive. Some important characteristics of polymers include their size (or molecular weight), softening and melting points, crystallinity, and structure. The mechanical properties of polymers generally include low strength and high toughness. Their strength is often improved using reinforced composite structures.

Important Characteristics of Polymers
Size. Single polymer molecules typically have molecular weights between 10,000 and 1,000,000 g/mol; that can be more than 2,000 repeating units, depending on the polymer structure. The mechanical properties of a polymer are significantly affected by the molecular weight, with better engineering properties at higher molecular weights.

Thermal transitions. The softening point (glass transition temperature) and the melting point of a polymer determine the applications for which it will be suitable. These temperatures usually set the upper limit at which a polymer can be used. For example, many industrially important polymers have glass transition temperatures near the boiling point of water (100℃, 212℉), and they are most useful for room-temperature applications. Some specially engineered polymers can withstand temperatures as high as 300℃ (572℉).

Crystallinity. Polymers can be crystalline or amorphous, but they usually have a combination of crystalline and amorphous structures (semi-crystalline).
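The repeat-unit arithmetic in the "Size" paragraph above can be checked directly; the choice of polystyrene and its repeat-unit mass is an illustrative assumption, not from the text:

```python
# Number of repeat units = molecular weight / repeat-unit mass.
# Polystyrene repeat unit (C8H8) is roughly 104 g/mol.
repeat_unit_mass = 104.0
for mw in (10_000, 1_000_000):
    print(mw, "g/mol ->", round(mw / repeat_unit_mass), "repeat units")
# 10,000 g/mol -> ~96 units; 1,000,000 g/mol -> ~9,615 units,
# comfortably "more than 2,000" at the upper end, as the text notes.
```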
Interchain interactions. The polymer chains can be free to slide past one another (thermoplastic) or they can be connected to each other with crosslinks (thermoset or elastomer). Thermoplastics can be reformed and recycled, while thermosets and elastomers are not reworkable.

Intrachain structure. The chemical structure of the chains also has a tremendous effect on the properties. Depending on the structure, the polymer may be hydrophilic or hydrophobic (likes or hates water), stiff or flexible, crystalline or amorphous, reactive or unreactive.

The understanding of heat treatment is embraced by the broader study of metallurgy. Metallurgy is the physics, chemistry, and engineering related to metals, from ore extraction to the final product. Heat treatment is the operation of heating and cooling a metal in its solid state to change its physical properties. According to the procedure used, steel can be hardened to resist cutting action and abrasion, or it can be softened to permit machining. With the proper heat treatment, internal stresses may be removed, grain size reduced, toughness increased, or a hard surface produced on a ductile interior. The analysis of the steel must be known, because small percentages of certain elements, notably carbon, greatly affect the physical properties. Alloy steels owe their properties to the presence of one or more elements other than carbon, namely nickel, chromium, manganese, molybdenum, tungsten, silicon, vanadium, and copper. Because of their improved physical properties they are used commercially in many ways not possible with carbon steels. The following discussion applies principally to the heat treatment of ordinary commercial steels known as plain carbon steels. With this process the rate of cooling is the controlling factor; rapid cooling from above the critical range results in a hard structure, whereas very slow cooling produces the opposite effect.

A Simplified Iron-Carbon Diagram
If we focus only on the materials normally known as steels, a simplified diagram is often used. Those portions of the iron-carbon diagram near the delta region and those above 2% carbon content are of little importance to the engineer and are deleted. A simplified diagram, such as the one in Fig. 2.1, focuses on the eutectoid region and is quite useful in understanding the properties and processing of steel. The key transition described in this diagram is the decomposition of single-phase austenite (γ) to the two-phase ferrite-plus-carbide structure as the temperature drops. Control of this reaction, which arises due to the drastically different carbon solubility of austenite and ferrite, enables a wide range of properties to be achieved through heat treatment. To begin to understand these processes, consider a steel of the eutectoid composition, 0.77% carbon, being slowly cooled along line x-x' in Fig. 2.1. At the upper temperatures, only austenite is present, the 0.77% carbon being dissolved in solid solution with the iron. When the steel cools to 727℃ (1341℉), several changes occur simultaneously. The iron wants to change from the FCC austenite structure to the BCC ferrite structure, but the ferrite can only contain 0.02% carbon in solid solution. The rejected carbon forms the carbon-rich intermetallic cementite, with composition Fe3C. In essence, the net reaction at the eutectoid is: austenite (0.77% C) → ferrite (0.02% C) + cementite (6.67% C). Since this chemical separation of the carbon component occurs entirely in the solid state, the resulting structure is a fine mechanical mixture of ferrite and cementite.
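The phase fractions implied by this eutectoid reaction follow from the lever rule. A minimal sketch using the compositions just quoted (the helper name is illustrative):

```python
def lever_rule(c_overall: float, c_low: float, c_high: float):
    """Weight fractions (low-carbon phase, high-carbon phase) for an overall
    composition c_overall lying between phase compositions c_low and c_high
    (all in wt% C)."""
    f_high = (c_overall - c_low) / (c_high - c_low)
    return 1.0 - f_high, f_high

# Pearlite formed from eutectoid austenite (0.77% C):
ferrite, cementite = lever_rule(0.77, 0.02, 6.67)
print(f"ferrite: {ferrite:.3f}, cementite: {cementite:.3f}")  # ~0.887 / ~0.113
```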
Specimens prepared by polishing and etching in a weak solution of nitric acid and alcohol reveal the lamellar structure of alternating plates that forms on slow cooling. This structure is composed of two distinct phases, but has its own set of characteristic properties and goes by the name pearlite, because of its resemblance to mother-of-pearl at low magnification.

Steels having less than the eutectoid amount of carbon (less than 0.77%) are known as hypo-eutectoid steels. Consider now the transformation of such a material, represented by cooling along line y-y' in Fig. 2.1. At high temperatures, the material is entirely austenite, but upon cooling it enters a region where the stable phases are ferrite and austenite. Tie-line and lever-law calculations show that low-carbon ferrite nucleates and grows, leaving the remaining austenite richer in carbon. At 727℃ (1341℉), the austenite is of eutectoid composition (0.77% carbon) and further cooling transforms the remaining austenite to pearlite. The resulting structure is a mixture of primary or pro-eutectoid ferrite (ferrite that formed above the eutectoid reaction) and regions of pearlite.

Hypereutectoid steels are steels that contain greater than the eutectoid amount of carbon. When such a steel cools, as shown along line z-z' of Fig. 2.1, the process is similar to the hypo-eutectoid case, except that the primary or pro-eutectoid phase is now cementite instead of ferrite. As the carbon-rich phase forms, the remaining austenite decreases in carbon content, reaching the eutectoid composition at 727℃ (1341℉). As before, any remaining austenite transforms to pearlite upon slow cooling through this temperature.

It should be remembered that the transitions described by the phase diagrams are for equilibrium conditions, which can be approximated by slow cooling. With slow heating, these transitions occur in the reverse manner. However, when alloys are cooled rapidly, entirely different results may be obtained, because sufficient time is not provided for the normal phase reactions to occur. In such cases, the phase diagram is no longer a useful tool for engineering analysis.

Hardening
Hardening is the process of heating a piece of steel to a temperature within or above its critical range and then cooling it rapidly. If the carbon content of the steel is known, the proper temperature to which the steel should be heated may be obtained by reference to the iron-iron carbide phase diagram. However, if the composition of the steel is unknown, a little preliminary experimentation may be necessary to determine the range.
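The hypo-eutectoid case just described can be quantified with the same lever rule (restated here so the snippet runs on its own); the 0.40% C composition is an illustrative choice, not from the text:

```python
def lever_rule(c_overall, c_low, c_high):
    f_high = (c_overall - c_low) / (c_high - c_low)
    return 1.0 - f_high, f_high

# Just above 727 C, the tie line for a hypo-eutectoid steel runs from
# ferrite (0.02% C) to eutectoid austenite (0.77% C):
primary_ferrite, austenite = lever_rule(0.40, 0.02, 0.77)
print(f"primary ferrite: {primary_ferrite:.2f}, austenite: {austenite:.2f}")
# ~0.49 primary ferrite; the remaining ~0.51 austenite becomes pearlite on cooling.
```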
Foreign Literature and Translation
(English references and translation)
Undergraduate Thesis, June 2016
Title: STATISTICAL SAMPLING METHOD, USED IN THE AUDIT
Student: Wang Xueqin (王雪琴); School: School of Management; Department: Accounting; Major: Financial Management; Class: Financial Management 12-2; School code: 10128; Student ID: 201210707016

Statistics and Audit
Romanian Statistical Review nr. 5 / 2010
STATISTICAL SAMPLING METHOD, USED IN THE AUDIT - views, recommendations, findings
PhD Candidate Gabriela-Felicia UNGUREANU

Abstract
The rapid increase in the size of U.S. companies from the early twentieth century created the need for audit procedures based on the selection of a part of the total population audited, in order to obtain reliable audit evidence characterizing the entire population consisting of account balances or classes of transactions. Sampling is not used only in audit - it is used in sampling surveys, market analysis and medical research, in which someone wants to reach a conclusion about a large number of data by examining only a part of these data. The difference is the "population" from which the sample is selected, i.e. that set of data from which a conclusion is intended to be drawn. Audit sampling applies only to certain types of audit procedures.
Key words: sampling, sample risk, population, sampling unit, tests of controls, substantive procedures.

Statistical sampling
The statistical sampling committee of the American Institute of Certified Public Accountants (AICPA) issued in 1962 a special report, titled "Statistical sampling and independent auditors", which allowed the use of the statistical sampling method in accordance with Generally Accepted Auditing Standards (GAAS). During 1962-1974, the AICPA published a series of papers on statistical sampling, "Auditor's Approach to Statistical Sampling", for use in the continuing professional education of accountants. In 1981, the AICPA issued the professional standard "Audit Sampling", which provides general guidelines for both sampling methods, statistical and non-statistical.

Earlier audits included checks of all transactions in the period covered by the audited financial statements. At that time, the literature did not give particular attention to this subject. Only in 1917 did an audit procedures program printed in the "Federal Reserve Bulletin" include several references to sampling, such as selecting a "few items" of inventory. The program was developed by a special committee of the body that later became the AICPA, the American Institute of Certified Public Accountants.

In the first decades of the last century, auditors often applied sampling, but sample size was not related to the effectiveness of the entity's internal control. In 1955, the American Institute of Accountants published a case study on extending audit sampling, summarizing an audit program developed by certified public accountants, to show why sampling is necessary to extend the audit. The study was important because it is one of the leading works on sampling to recognize a relationship of dependency between detail testing and the reliability of internal control. In 1964, the AICPA's Auditing Standards Board issued a report entitled "The relationship between statistical sampling and Generally Accepted Auditing Standards (GAAS)", which illustrated the relationship between accuracy and reliability in sampling and the provisions of GAAS. In 1978, the AICPA published the work of Donald M.
Roberts, "Statistical Auditing", which explains the underlying theory of statistical sampling in auditing. In 1981, the AICPA issued the professional standard named "Audit Sampling", which provides guidelines for both sampling methods, statistical and non-statistical.

An auditor does not rely solely on the results of a single procedure to reach a conclusion on an account balance, class of transactions or the operational effectiveness of controls. Rather, the audit findings are based on combined evidence from several sources, as a consequence of a number of different audit procedures. When an auditor selects a sample from a population, his objective is to obtain a representative sample, i.e. a sample whose characteristics are identical with the population's characteristics. This means that the selected items are like those remaining outside the sample. In practice, auditors do not know for sure whether a sample is representative, even after completing the test, but they "may increase the probability that a sample is representative by accuracy of activities made related to design, sample selection and evaluation" [1]. Lack of representativeness of the sample results may be caused by observation errors and sampling errors. The risks of producing these errors can be controlled.

Observation error (risk of observation) appears when the audit test fails to identify deviations existing in the sample, whether through the use of an inadequate audit technique or through the negligence of the auditor.

Sampling error (sampling risk) is an inherent characteristic of the survey, which results from the fact that only a fraction of the total population is tested. Sampling error occurs because it is possible for the auditor to reach a conclusion, based on a sample, that is different from the conclusion which would be reached if the entire population were subjected to identical audit procedures. Sampling risk can be reduced by adjusting the sample size, depending on the size and characteristics of the population, and by using an appropriate method of selection. Increasing the sample size will reduce the sampling risk; a sample comprising the entire population presents no sampling risk at all.

Audit sampling is a method of testing used to gather sufficient and appropriate audit evidence for the purposes of the audit. The auditor may decide to apply audit sampling to an account balance or class of transactions. Audit sampling involves applying audit procedures to less than 100% of the items within an account balance or class of transactions, such that all sampling units have a chance of being selected. The auditor is required to determine appropriate ways of selecting items for testing. Audit sampling can take a statistical or a non-statistical approach.

Statistical sampling is a method by which the sample is constructed so that each unit of the total population has an equal probability of being included in the sample; the method of sample selection is random, which allows the results to be assessed on the basis of probability theory and the sampling risk to be quantified. Choosing the appropriate population ensures that the auditor's findings can be extended to the entire population.

Non-statistical sampling is a method of sampling in which the auditor uses professional judgment to select the elements of a sample. Since the purpose of sampling is to draw conclusions about the entire population, the auditor should select a representative sample by choosing sample units which have characteristics typical of that population.
The results cannot be extrapolated to the entire population unless the sample selected is representative. Audit tests can be applied to all the elements of the population where the population is small, or to an unrepresentative sample where the auditor knows the particularities of the population to be tested and is able to identify a small number of items of interest to the audit. If the sample does not have characteristics similar to those of the entire population, the errors found in the tested sample cannot be extrapolated.

The decision between a statistical and a non-statistical approach depends on the auditor's professional judgment as to which will yield sufficient appropriate audit evidence on which to base the findings for the audit opinion.

Statistical sampling methods rely on random selection, in which any possible combination of elements of the population is equally likely to enter the sample. Simple random sampling is used when the population has not been stratified for the audit. Random selection involves using random numbers generated by a computer. After selecting a random starting point, the auditor finds the first random number that falls within the range of the test document numbers. Only when the approach has the characteristics of statistical sampling are statistical assessments of sampling risk valid.

In another variant of probability sampling, namely systematic selection (also called mechanical random selection), the elements follow one another naturally in space or time; the auditor has a preliminary listing of the population and has made the decision on sample size. "The auditor calculates a counting step, and selects the sample elements based on the step size. The counting step is determined by dividing the size of the population by the number of sample units desired. The advantage of systematic selection is its usability. In most cases, a systematic sample can be extracted quickly, and the method automatically arranges the numbers in successive series." [2]

Selection by probability proportional to size is a method which emphasizes those population units with higher recorded values. The sample is constituted so that the probability of selecting any given element of the population is proportional to the recorded value of the item.

Stratified selection is a method which emphasizes units with higher values by stratifying the population into subpopulations. Stratification provides the auditor with a complete picture when the population (the data table to be analyzed) is not homogeneous. In this case, the auditor stratifies the population by dividing it into distinct subpopulations which have common, pre-defined characteristics. "The objective of stratification is to reduce the variability of elements in each layer and therefore allow a reduction in sample size without a proportionate increase in the risk of sampling." [3] If population stratification is done properly, the combined sample size across the layers will be less than the sample size that would be obtained, at the same level of sampling risk, with a sample extracted from the entire population. Audit results applied to a layer can be projected only onto items that are part of that layer.

Some views on non-statistical sampling methods are also useful. Guided selection of the sample involves selecting each element according to certain criteria determined by the auditor.
The method is subjective, because the auditor intentionally selects items containing the features of interest to him. Selection in series is done by selecting multiple successive elements. Sampling in series is recommended only if a reasonable number of series is used; with just a few series, there is a risk that the sample is not representative. This type of sampling can be used in addition to other samples where there is a high probability of occurrence of errors. In arbitrary selection, no items are selected preferentially by the auditor, regardless of size, source or characteristics. It is not a recommended method, because it is not objective. Such sampling is based on the auditor's professional judgment, which decides which items may or may not be part of the sample. Because it is not a statistical method, the standard error cannot be calculated. Although the sample structure can be constructed to reproduce the population, there is no guarantee that the sample is representative. If a feature that would be relevant in a particular situation is omitted, the sample is not representative.

Sampling applies when the auditor plans to draw conclusions about a population based on a selection. The auditor considers the audit program and determines the audit procedures to which sampling may be applied. Sampling is used by auditors in testing internal control systems and in substantive testing of operations. The general objectives of tests of the control system and of substantive tests of operations are to verify the application of pre-defined control procedures and to determine whether operations contain material errors.

Control tests are intended to provide evidence of the operational efficiency and design of controls, or of the operation of a control system, to prevent or detect material misstatements in the financial statements. Control tests are necessary if the auditor plans to assess control risk for assertions of management. Controls are generally expected to be applied similarly to all transactions covered by the records, regardless of transaction value. Therefore, if the auditor uses sampling, it is not advisable to select only high-value transactions. Samples must be chosen so as to be representative of the population.

An auditor must be aware that an entity may change a particular control during the course of the audit. If the control is replaced by another which is designed to achieve the same specific objective, the auditor must decide whether to design a sample of all transactions made during the period or just a sample of transactions subject to the new control. The appropriate decision depends on the overall objective of the audit test. Verification of the internal control system of an entity is intended to provide guidance on the identification of relevant controls and the design of evaluation tests of controls.

Other tests: In testing the internal control system and testing operations, the audit sample is used to estimate the proportion of elements of a population containing a characteristic or attribute under analysis. This proportion is called the frequency of occurrence or percentage of deviation and is equal to the ratio of the number of elements containing the specific attribute to the total number of population elements. Deviation rates in a sample are determined in order to calculate an estimate of the proportion of deviations in the total population.

Risk associated with sampling refers to the possibility that a selected sample is not representative of the population tested.
In other words, the sample itself may contain material errors or deviations. However, a conclusion issued on the basis of a sample may be different from the conclusion which would be reached if the entire population were subjected to audit.

Types of risk associated with sampling: concluding that controls are more effective than they actually are, or that there are no significant errors when in fact they exist - which leads to an inappropriate audit opinion; or concluding that controls are less effective than they actually are, or that there are significant errors when in fact there are not - which calls for additional activities to establish that the initial conclusions were incorrect.

Attribute testing: the auditor should define the characteristics to be tested and the conditions that constitute a deviation. Attribute testing is performed when objective statistical projections on various characteristics of the population are required. The auditor may decide to select items from a population based on his knowledge of the entity and its control environment, based on risk analysis and the specific characteristics of the population to be tested.

The population is the mass of data about which the auditor wishes to generalize the findings obtained on a sample. The population will be defined in accordance with the audit objectives and must be complete and consistent, because the results of the sample can be projected only onto the population from which the sample was selected.

Sampling unit: a sampling unit may be, for example, an invoice, an entry or a line item. Each sampling unit is an element of the population. The auditor will define the sampling unit based on its suitability for the objectives of the audit tests.

Sample size: to determine the sample size, it should be considered whether sampling risk is reduced to an acceptably low level. Sample size is affected by the sampling risk that the auditor is willing to accept: the lower the risk the auditor is willing to accept, the larger the sample must be.

Error: for detailed testing, the auditor should project the monetary errors found in the sample onto the population, and should take into account the projected error in relation to the specific objective of the audit and other audit areas. The auditor projects the total error onto the population to get a broad perspective on the size of the error, comparing it with the tolerable error. For detailed testing, the tolerable error (tolerable misstatement) will be a value less than or equal to the materiality used by the auditor for the individual classes of transactions or balances audited. If a class of transactions or account balances has been divided into strata, the error is projected separately for each stratum. Projected errors and anomalous errors for each stratum are then combined when considering the possible effect on the total classes of transactions and account balances.

Evaluation of sample results: the auditor should evaluate the sample results to determine whether the assessment of the relevant characteristics of the population is confirmed or needs to be revised. When testing controls, an unexpectedly high rate of sample error may lead to an increase in the assessed risk of significant misstatement, unless additional audit evidence is obtained to support the initial assessment. For control tests, an error is a deviation from the prescribed performance of control procedures.
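As a small numerical illustration of why increasing the sample size reduces sampling risk (a general binomial fact, not a formula from the paper): the probability that a sample of n independently drawn items contains at least one deviation, when the population deviation rate is p, is 1 - (1 - p)^n:

```python
def p_detect_at_least_one(p_deviation: float, n: int) -> float:
    """Probability that a random sample of n items contains at least one
    deviation, assuming independent draws with deviation rate p_deviation."""
    return 1 - (1 - p_deviation) ** n

# With a 2% population deviation rate:
for n in (25, 60, 150):
    print(n, round(p_detect_at_least_one(0.02, n), 3))
# n=25 -> 0.397, n=60 -> 0.702, n=150 -> 0.952
```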
The auditor should obtain evidence about the nature and extent of any significant changes in the internal control system, including changes in staffing. If significant changes occur, the auditor should review his understanding of the internal control environment and consider testing the changed controls. Alternatively, the auditor may consider performing substantive analytical procedures or tests of details covering the audit period.

In some cases, the auditor might not need to wait until the end of the audit to form a conclusion about the operational effectiveness of a control in order to support the control risk assessment. In this case, the auditor might decide to modify the planned substantive tests accordingly.

In tests of details, an unexpectedly large amount of error in a sample may cause the auditor to believe that a class of transactions or account balances is materially misstated, in the absence of additional audit evidence showing that no material misstatements exist. When the best estimate of error is very close to the tolerable error, the auditor recognizes the risk that another sample would have a different best estimate, which could exceed the tolerable error.

Conclusions
Following this analysis of sampling methods, we conclude that all methods have advantages and disadvantages. What is important is that the auditor chooses the sampling method on the basis of professional judgment, taking the cost/benefit ratio into account. Thus, if a sampling method proves to be costly, the auditor should seek the most efficient method in view of the main and specific objectives of the audit. The auditor should evaluate the sample results to determine whether the preliminary assessment of the relevant characteristics of the population must be confirmed or revised. If the evaluation of the sample results indicates that the assessment of the relevant characteristics of the population needs to be revised, the auditor may require management to investigate the identified errors and the likelihood of future errors and to make the necessary adjustments; or change the nature, timing and extent of further procedures to take the effect into account in the audit report.

Selective bibliography:
[1] Law no. 672/2002, updated, on public internal audit
[2] Arens, A. and Loebbecke, J., "Audit - An Integrated Approach", 8th edition, Arc Publishing House
[3] ISA 530 - Financial Audit 2008 - International Standards on Auditing, IRECSON Publishing House, 2009
- Dictionary of Macroeconomics, Ed. C.H. Beck, Bucharest, 2008
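The systematic-selection procedure quoted in the paper (counting step = population size divided by the number of sample units desired, with a random starting point) can be sketched as follows; the invoice data and function name are illustrative, not from the paper:

```python
import random

def systematic_sample(population, n, seed=None):
    """Systematic (mechanical) selection: compute the counting step as
    population size / sample size, pick a random start within the first
    step, then take every step-th element."""
    step = len(population) // n          # the "counting step"
    rng = random.Random(seed)
    start = rng.randrange(step)          # random starting point
    return [population[start + i * step] for i in range(n)]

# Usage: select 5 invoices from a numbered population of 100 (step = 20).
invoices = list(range(1, 101))
print(systematic_sample(invoices, 5, seed=42))
```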
Mechanical Engineering Foreign Literature Translation (Chinese-English)
English Original
Mechanical Design and Manufacturing Processes
Mechanical design is the application of science and technology to devise new or improved products for the purpose of satisfying human needs. It is a vast field of engineering technology which not only concerns itself with the original conception of the product in terms of its size, shape and construction details, but also considers the various factors involved in the manufacture, marketing and use of the product.

People who perform the various functions of mechanical design are typically called designers, or design engineers. Mechanical design is basically a creative activity. However, in addition to being innovative, a design engineer must also have a solid background in the areas of mechanical drawing, kinematics, dynamics, materials engineering, strength of materials and manufacturing processes.

As stated previously, the purpose of mechanical design is to produce a product which will serve a need for man. Inventions, discoveries and scientific knowledge by themselves do not necessarily benefit people; only if they are incorporated into a designed product will a benefit be derived. It should be recognized, therefore, that a human need must be identified before a particular product is designed.

Mechanical design should be considered to be an opportunity to use innovative talents to envision a design of a product, to analyze the system and then make sound judgments on how the product is to be manufactured. It is important to understand the fundamentals of engineering rather than memorize mere facts and equations. There are no facts or equations which alone can be used to provide all the correct decisions required to produce a good design. On the other hand, any calculations made must be done with the utmost care and precision. For example, if a decimal point is misplaced, an otherwise acceptable design may not function.

Good designs require trying new ideas and being willing to take a certain amount of risk, knowing that if the new idea does not work the existing method can be reinstated. Thus a designer must have patience, since there is no assurance of success for the time and effort expended. Creating a completely new design generally requires that many old and well-established methods be thrust aside. This is not easy, since many people cling to familiar ideas, techniques and attitudes. A design engineer should constantly search for ways to improve an existing product and must decide what old, proven concepts should be used and what new, untried ideas should be incorporated.

New designs generally have "bugs" or unforeseen problems which must be worked out before the superior characteristics of the new designs can be enjoyed. Thus there is a chance for a superior product, but only at higher risk. It should be emphasized that, if a design does not warrant radical new methods, such methods should not be applied merely for the sake of change.

During the beginning stages of design, creativity should be allowed to flourish without a great number of constraints. Even though many impractical ideas may arise, it is usually easy to eliminate them in the early stages of design, before firm details are required by manufacturing. In this way, innovative ideas are not inhibited. Quite often, more than one design is developed, up to the point where they can be compared against each other.
It is entirely possible that the design which is ultimately accepted will use ideas from one of the rejected designs that did not show as much overall promise.

Psychologists frequently talk about trying to fit people to the machines they operate. It is essentially the responsibility of the design engineer to strive to fit machines to people. This is not an easy task, since there is really no average person for whom certain operating dimensions and procedures are optimum.

Another important point which should be recognized is that a design engineer must be able to communicate ideas to other people if those ideas are to be incorporated. Communicating the design to others is the final, vital step in the design process. Undoubtedly many great designs, inventions, and creative works have been lost to mankind simply because the originators were unable or unwilling to explain their accomplishments to others. Presentation is a selling job. The engineer, when presenting a new solution to administrative, management, or supervisory persons, is attempting to sell or to prove to them that this solution is a better one. Unless this can be done successfully, the time and effort spent on obtaining the solution have been largely wasted.

Basically, there are only three means of communication available to us: the written, the oral, and the graphical forms. Therefore the successful engineer will be technically competent and versatile in all three forms of communication. A technically competent person who lacks ability in any one of these forms is severely handicapped. If ability in all three forms is lacking, no one will ever know how competent that person is!

The competent engineer should not be afraid of the possibility of not succeeding in a presentation. In fact, occasional failure should be expected, because failure or criticism seems to accompany every really creative idea. There is a great deal to be learned from a failure, and the greatest gains are obtained by those willing to risk defeat. In the final analysis, the real failure would lie in deciding not to make the presentation at all. To communicate effectively, the following questions must be answered:

(1) Does the design really serve a human need?
(2) Will it be competitive with existing products of rival companies?
(3) Is it economical to produce?
(4) Can it be readily maintained?
(5) Will it sell and make a profit?

Only time will provide the true answers to the preceding questions, but the product should be designed, manufactured and marketed only with initial affirmative answers. The design engineer also must communicate the finalized design to manufacturing through the use of detail and assembly drawings.

Quite often, a problem will occur during the manufacturing cycle [3]. It may be that a change is required in the dimensioning or tolerancing of a part so that it can be more readily produced. This falls in the category of engineering changes, which must be approved by the design engineer so that the product function will not be adversely affected. In other cases, a deficiency in the design may appear during assembly or testing just prior to shipping. These realities simply bear out the fact that design is a living process. There is always a better way to do it, and the designer should constantly strive toward finding that better way.

Designing starts with a need, real or imagined. Existing apparatus may need improvements in durability, efficiency, weight, speed, or cost.
New apparatus may be needed to perform a function previously done by men, such as computation, assembly, or servicing. With the objective wholly or partly defined, the next step in design is the conception of mechanisms and their arrangements that will perform the needed functions. For this, freehand sketching is of great value, not only as a record of one's thoughts and as an aid in discussion with others, but particularly for communication with one's own mind, as a stimulant for creative ideas.

When the general shape and a few dimensions of the several components become apparent, analysis can begin in earnest. The analysis will have as its objective satisfactory or superior performance, plus safety and durability with minimum weight, and a competitive cost. Optimum proportions and dimensions will be sought for each critically loaded section, together with a balance between the strengths of the several components. Materials and their treatment will be chosen. These important objectives can be attained only by analysis based upon the principles of mechanics: those of statics for reaction forces and for the optimum utilization of friction; of dynamics for inertia, acceleration, and energy; and of elasticity and strength of materials for stress.
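As a small illustration of the strength-of-materials analysis referred to above, the Python sketch below checks the stress in a critically loaded tension member against the material's yield strength. The load, rod diameter, and yield strength are hypothetical values chosen for the example, not data from the text.

```python
import math

def axial_stress(force_n: float, diameter_m: float) -> float:
    """Normal stress in a circular tension member: sigma = F / A."""
    area = math.pi * diameter_m ** 2 / 4.0
    return force_n / area

# Hypothetical case: a 20 kN tensile load on a 12 mm diameter steel rod.
sigma = axial_stress(20_000, 0.012)   # Pa
yield_strength = 250e6                # Pa, assumed mild-steel yield strength
safety_factor = yield_strength / sigma
print(f"stress = {sigma / 1e6:.1f} MPa, factor of safety = {safety_factor:.2f}")
```

A factor of safety well above 1 indicates the section is adequate for the assumed load; the same check would be repeated for each critically loaded section of the design.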
Chinese and English Literature and Translation (Chemical Engineering)
Foreign material: Chemical Industry

1. Origins of the Chemical Industry

Although the use of chemicals dates back to the ancient civilizations, the evolution of what we know as the modern chemical industry started much more recently. It may be considered to have begun during the Industrial Revolution, about 1800, and developed to provide chemicals for use by other industries. Examples are alkali for soapmaking, bleaching powder for cotton, and silica and sodium carbonate for glassmaking. It will be noted that these are all inorganic chemicals. The organic chemicals industry started in the 1860s with the exploitation of William Henry Perkin's discovery of the first synthetic dyestuff, mauve.

At the start of the twentieth century the emphasis on research into the applied aspects of chemistry in Germany had paid off handsomely, and by 1914 it had resulted in the German chemical industry holding 75% of the world market in chemicals. This was based on the discovery of new dyestuffs plus the development of both the contact process for sulphuric acid and the Haber process for ammonia. The latter required a major technological breakthrough: being able, for the first time, to carry out chemical reactions under conditions of very high pressure. The experience gained with this was to stand Germany in good stead, particularly with the rapidly increased demand for nitrogen-based compounds (ammonium salts for fertilizers and nitric acid for explosives manufacture) with the outbreak of World War I in 1914. This initiated profound changes which continued during the inter-war years (1918-1939).

Since 1940 the chemical industry has grown at a remarkable rate, although this has slowed significantly in recent years. The lion's share of this growth has been in the organic chemicals sector, due to the development and growth of the petrochemicals area since the 1950s. The explosive growth in petrochemicals in the 1960s and 1970s was largely due to the enormous increase in demand for synthetic polymers such as polyethylene, polypropylene, nylon, polyesters and epoxy resins.

The chemical industry today is a very diverse sector of manufacturing industry, within which it plays a central role. It makes thousands of different chemicals which the general public usually encounter only as end or consumer products. These products are purchased because they have the required properties which make them suitable for some particular application, e.g. a non-stick coating for pans or a weedkiller. Thus chemicals are ultimately sold for the effects that they produce.

2. Definition of the Chemical Industry

At the turn of the century there would have been little difficulty in defining what constituted the chemical industry, since only a very limited range of products was manufactured and these were clearly chemicals, e.g. alkali and sulphuric acid. At present, however, a tremendous range of products is made, from raw materials like crude oil through (in some cases) many intermediates to products which may be used directly as consumer goods, or readily converted into them. The difficulty comes in deciding at which point in this sequence the particular operation ceases to be part of the chemical industry's sphere of activities. To consider a specific example to illustrate this dilemma, emulsion paints may contain poly(vinyl chloride) / poly(vinyl acetate). Clearly, synthesis of vinyl chloride (or acetate) and its polymerization are chemical activities.
However, if formulation and mixing of the paint, including the polymer, is carried out by a branch of the multinational chemical company which manufactured the ingredients, is this still part of the chemical industry or does it now belong in the decorating industry?

It is therefore apparent that, because of its diversity of operations and its close links in many areas with other industries, there is no simple definition of the chemical industry. Instead, each official body which collects and publishes statistics on manufacturing industry will have its own definition as to which operations are classified as the chemical industry. It is important to bear this in mind when comparing statistical information derived from several sources.

3. The Need for the Chemical Industry

The chemical industry is concerned with converting raw materials, such as crude oil, firstly into chemical intermediates and then into a tremendous variety of other chemicals. These are then used to produce consumer products which make our lives more comfortable or, in some cases such as pharmaceutical products, help to maintain our well-being or even life itself. At each stage of these operations, value is added to the product, and provided this added value exceeds the raw material plus processing costs, a profit will be made on the operation. It is the aim of the chemical industry to achieve this.

It may seem strange in a textbook such as this one to pose the question "do we need a chemical industry?" However, trying to answer this question will provide (i) an indication of the range of the chemical industry's activities, (ii) its influence on our lives in everyday terms, and (iii) a sense of how great society's need for a chemical industry is. Our approach in answering the question will be to consider the industry's contribution to meeting and satisfying our major needs. What are these? Clearly food (and drink) and health are paramount. Others which we shall consider in their turn are clothing and (briefly) shelter, leisure and transport.

(1) Food. The chemical industry makes a major contribution to food production in at least three ways. Firstly, by making available large quantities of artificial fertilizers which are used to replace the elements (mainly nitrogen, phosphorus and potassium) which are removed as nutrients by the growing crops during modern intensive farming. Secondly, by manufacturing crop protection chemicals, i.e. pesticides, which markedly reduce the proportion of the crops consumed by pests. Thirdly, by producing veterinary products which protect livestock from disease or cure their infections.

(2) Health. We are all aware of the major contribution which the pharmaceutical sector of the industry has made to help keep us all healthy, e.g. by curing bacterial infections with antibiotics, and even extending life itself, e.g. with β-blockers to lower blood pressure.

(3) Clothing. The improvement in properties of modern synthetic fibers over the traditional clothing materials (e.g. cotton and wool) has been quite remarkable. Thus shirts, dresses and suits made from polyesters like Terylene and polyamides like Nylon are crease-resistant, machine-washable, and drip-dry or non-iron. They are also cheaper than natural materials.

Parallel developments in the discovery of modern synthetic dyes and the technology to "bond" them to the fiber have resulted in a tremendous increase in the variety of colors available to the fashion designer. Indeed, they now span almost every color and hue of the visible spectrum.
Indeed, if a suitable shade is not available, structural modification of an existing dye to achieve it can readily be carried out, provided there is a satisfactory market for the product. Other major advances in this sphere have been in color-fastness, i.e. resistance of the dye to being washed out when the garment is cleaned.

(4) Shelter, leisure and transport. In terms of shelter, the contribution of modern synthetic polymers has been substantial. Plastics are tending to replace traditional building materials like wood because they are lighter and maintenance-free (i.e. they are resistant to weathering and do not need painting). Other polymers, e.g. urea-formaldehyde and polyurethanes, are important insulating materials for reducing heat losses and hence reducing energy usage.

Plastics and polymers have made a considerable impact on leisure activities, with applications ranging from all-weather artificial surfaces for athletic tracks, football pitches and tennis courts to nylon strings for racquets and items like golf balls and footballs made entirely from synthetic materials.

Likewise, the chemical industry's contribution to transport over the years has led to major improvements. The development of improved additives like anti-oxidants and viscosity index improvers for engine oil has enabled routine servicing intervals to increase from 3,000 to 6,000 to 12,000 miles. Research and development work has also resulted in improved lubricating oils and greases, and better brake fluids. Yet again the contribution of polymers and plastics has been very striking, with the proportion of the total automobile derived from these materials (dashboard, steering wheel, seat padding and covering, etc.) now exceeding 40%.

So it is quite apparent, even from a brief look at the chemical industry's contribution to meeting our major needs, that life in the world would be very different without the products of the industry. Indeed, the level of a country's development may be judged by the production level and sophistication of its chemical industry.

4. Research and Development (R&D) in Chemical Industries

One of the main reasons for the rapid growth of the chemical industry in the developed world has been its great commitment to, and investment in, research and development (R&D). A typical figure is 5% of sales income, with this figure almost doubled for the most research-intensive sector, pharmaceuticals. It is important to emphasize that we are quoting percentages here not of profits but of sales income, i.e. the total money received, which has to pay for raw materials, overheads, staff salaries, etc. as well. In the past this tremendous investment has paid off well, leading to many useful and valuable products being introduced to the market. Examples include synthetic polymers like nylons and polyesters, and drugs and pesticides. Although the number of new products introduced to the market has declined significantly in recent years, and in times of recession the research department is usually one of the first to suffer cutbacks, the commitment to R&D remains at a very high level.

The chemical industry is a very high technology industry which takes full advantage of the latest advances in electronics and engineering.
Computers are very widely used for all sorts of applications, from automatic control of chemical plants, to molecular modeling of the structures of new compounds, to the control of analytical instruments in the laboratory.

Individual manufacturing plants have capacities ranging from just a few tonnes per year in the fine chemicals area to the real giants in the fertilizer and petrochemical sectors, which range up to 500,000 tonnes. The latter require enormous capital investment, since a single plant of this size can now cost $520 million! This, coupled with the widespread use of automatic control equipment, helps to explain why the chemical industry is capital- rather than labor-intensive.

The major chemical companies are truly multinational and operate their sales and marketing activities in most of the countries of the world, and they also have manufacturing units in a number of countries. This international outlook for operations, or globalization, is a growing trend within the chemical industry, with companies expanding their activities either by erecting manufacturing units in other countries or by taking over companies which are already operating there.

Translation (beginning of the Chinese version): The Chemical Industry. 1. Origins of the Chemical Industry. Although the use of chemicals dates back to the age of the ancient civilizations, the development of what we call the modern chemical industry began only very recently.
Foreign-Literature Translation on Children's Education
(The document contains the English original and the Chinese translation.)

Original text:

The Role of Parents and Community in the Education of the Japanese Child

Heidi Knipprath

Abstract: In Japan, there has been an increased concern about family and community participation in the child's education. Traditionally, the role of parents and community in Japan has been one of support, and less one of active involvement in school learning. Since the government commenced education reforms in the last quarter of the 20th century, a more active role for parents and the community in education has been encouraged. These reforms have been inspired by the need to tackle various problems that had arisen, such as the perceived harmful elements of society's preoccupation with academic achievement and the problematic behavior of young people. In this paper, the following issues are examined: (1) education policy and reform measures with regard to parent and community involvement in the child's education; (2) the state of parent and community involvement at the eve of the 21st century.

Key words: active involvement, community, education reform, Japan, parents, partnership, schooling, support

Introduction: The Discourse on the Achievement Gap

When western observers attempt to explain why Japanese students attain high achievement scores in international comparative assessment studies, they are likely to address the role of parents, and in particular of the mother, in the education of the child. "Education mom" is a phrase often brought forth in the discourse on Japanese education to depict the Japanese mother as a pushy and demanding home-bound tutor, intensely involved in the child's education due to severe academic competition. Although this image of the Japanese mother is a stereotype spread by the popular mass media in Japan and abroad, and the extent to which Japanese mothers are absorbed in their children is exaggerated (Benjamin, 1997, p. 16; Cummings, 1989, p. 297; Stevenson & Stigler, 1992, p. 82), Stevenson and Stigler (1992) argue that Japanese parents do play an indispensable role in the academic performance of their children. During their longitudinal and cross-national research project, they and their collaborators observed that Japanese first and fifth graders persistently achieved higher scores on math tests than American children. Besides citing teachers' teaching style, cultural beliefs, and the organization of schooling, Stevenson and Stigler (1992) mention parents' role in supporting the learning conditions of the child to explain differences in achievement between elementary school students of the United States and students of Japan. In Japan, children receive more help at home with schoolwork (Chen & Stevenson, 1989; Stevenson & Stigler, 1992) and tend to perform fewer household chores than children in the USA (Stevenson et al., 1990; Stevenson & Stigler, 1992). More Japanese parents than American parents provide space and a personal desk and purchase workbooks for their children to supplement the regular textbooks at school (Stevenson et al., 1990; Stevenson & Stigler, 1992). Additionally, Stevenson and Stigler (1992) observed that American mothers are much more readily satisfied with their child's performance than Asian parents are, have less realistic assessments of their child's academic performance, intelligence, and other personality characteristics, and subsequently have lower standards.
Based on their observation of Japanese, Chinese and American parents, children and teachers, Stevenson and Stigler (1992) conclude that American families can increase the academic achievement of their children by strengthening the link between school and home, by creating a physical and psychological environment that is conducive to study, and by making realistic assessments and raising standards. Also Benjamin (1997), who performed "day-to-day ethnography" to find out how differences in practice between American and Japanese schools affect differences in outcomes, discusses the relationship between home and school and how the Japanese mother is involved in the academic performance standards reached by Japanese children. She argues that Japanese parents are willing to pay noticeable amounts of money for tutoring in commercial establishments to improve the child's performance on entrance examinations, to assist in homework assignments, to facilitate and support their children's participation in school requirements and activities, and to check the notebooks kept by teachers on the child's progress and other school-related messages from the teacher. These booklets are read and written in daily by teachers and parents. Teachers regularly provide advice and reminders to parents, and write about the homework assignments of the child, special activities and the child's behavior (Benjamin, 1997, p. 119, p. 1993–1995). Newsletters, parents' visits to school, school reports, home visits by the teacher and observation days sustain communication in later years at school. According to Benjamin (1997), schools also inform parents about how to coach their children on proper behavior at home. Shimahara (1986), Hess and Azuma (1991), Lynn (1988) and White (1987) also try to explain national differences in educational achievement. They argue that Japanese mothers succeed in internalizing in their children academic expectations and adaptive dispositions that facilitate an effective teaching strategy, and in socializing the child into a successful person devoted to hard work.

Support, Support and Support

Epstein (1995) constructed a framework of six types of involvement of parents and the community in the school: (1) parenting: schools help all families establish home environments that support children as students; (2) communicating: effective forms of school-to-home and home-to-school communication about school programs and children's progress; (3) volunteering: schools recruit and organize parents' help and support; (4) learning at home: schools provide information and ideas to families about how to help students at home with homework and other curriculum-related activities, decisions and planning; (5) decision making: schools include parents in school decisions and develop parent leaders and representatives; and (6) collaborating with the community: schools integrate resources and services from the community to strengthen school programs, family practices, and student learning and development.
All types of involvement mentioned in studies of Japanese education and in the discourse on the roots of the achievement gap belong to one of Epstein's first four types of involvement: the creation of a conducive learning environment (type 4), the expression of high expectations (type 4), assistance with homework (type 4), teachers' notebooks (type 2), mothers' willingness to facilitate school activities (type 3), teachers' advice about the child's behavior (type 1), observation days on which parents observe their child in the classroom (type 2), and home visits by the teachers (type 1). Thus, when one carefully reads Stevenson and Stigler's, Benjamin's and others' writings about Japanese education and Japanese students' high achievement level, one notices that parents' role in the child's school learning is above all one of support, expected and solicited by the school. The fifth type (decision making) as well as the sixth type (community involvement) is hardly ever mentioned in the discourse on the achievement gap.

In 1997, the OECD's Center for Educational Research and Innovation conducted a cross-national study to report the actual state of parents as partners in schooling in nine countries, including Japan. In its report, the OECD concludes that the involvement of Japanese parents in their schools is strictly limited, and that the basis on which it takes place tends to be controlled by the teacher (OECD, 1997, p. 167). According to the OECD (1997), many countries are currently adopting policies to involve families closely in the education of their children because (1) governments are decentralizing their administrations; (2) parents want to be increasingly involved; and (3) parental involvement is said to be associated with higher achievement in school (p. 9). However, parents in Japan, where students already score highly on international achievement tests, are hardly involved in governance at the national and local level, and communication between school and family tends to be one-way (Benjamin, 1997; Fujita, 1989; OECD, 1997). Also, parent-teacher associations (PTA, fubo to kyoshi no kai) are primarily presumed to be supportive of school learning, not to participate in school governance (cf. OECD, 2001, p. 121). On the directions of the occupying forces after the Second World War, PTAs were established in Japanese schools and, together with the elective education boards, were considered to provide parents and the community an opportunity to participate actively in school learning (Hiroki, 1996, p. 88; Nakata, 1996, p. 139). The establishment of PTAs and elective education boards are only two examples of the numerous reform measures the occupying forces took to decentralize the formal education system and to expand educational opportunities. But after they left the country, the Japanese government was quick to undo liberal education reform measures and reduced the community and parental role in education. The stipulation that PTAs should not interfere with personnel and other administrative tasks of schools, and the replacement of elective education boards by appointed ones, led local education boards to believe that parents should not get involved with school education at all (Hiroki, 1996, p. 88). Teachers were regarded as the experts, and the parents as the laymen, in education (Hiroki, 1996, p. 89).

In sum, studies of Japanese education point in one direction: parental involvement means being supportive, and community involvement is hardly an issue at all.
But what is the actual state of parent and community involvement in Japanese schools? Are these descriptions supported by quantitative data?

Statistics on Parental and Community Involvement

To date, statistics on parental and community involvement are rare. However, the school questionnaire of the TIMSS-R study did include some interesting questions that give us a clue about the degree of involvement relative to that in other industrialized countries. The TIMSS-R study measured science and math achievement of eighth graders in 38 countries. Additionally, a survey was held among principals, teachers and students. Principals answered questions relating to school management, school characteristics, and involvement. For convenience, the results of Japan are compared only with the results of those countries with a GNP of 20,650 US dollars or higher according to the World Bank's indicators in 1999.

Unfortunately, only a very few items on community involvement were measured. According to the data, Japanese principals spend on average almost eight hours per month representing the school in the community (Table I). Australian and Belgian principals spend slightly more hours, and Dutch and Singaporean principals slightly fewer, on representing the school and sustaining communication with the community. But when it comes to participation from the community, Japanese schools report a near absence of involvement (Table II). Religious groups and the business community have hardly any influence on the curriculum of the school. In contrast, half of the principals report that parents do have an impact in Japan. On one hand, this seems a surprising result when one is reminded of the centralized control of the Ministry of Education. Moreover, this control and the resulting uniform curriculum are often cited as a potential explanation of the high achievement levels in Japan. On the other hand, this extent of parental impact on the curriculum might be an indicator of the pressure parents put on schools to prepare their children appropriately for the entrance exams of senior high schools.

In Table III, data on the extent of other types of parental involvement in Japan and other countries are given. In Japan, parental involvement is most common in the cases of volunteering for school projects and programs, and of schools expecting parents to make sure that the child completes his or her homework. The former, together with patrolling the grounds of the school to monitor student behavior, is most likely materialized through the PTA. The kinds and degree of activities of PTAs vary according to the school, but the activities of the most active and well-organized PTAs of the 395 elementary schools investigated by Sumida (2001) range from facilitating sport and recreation for children, teaching greetings, encouraging safe traffic, patrolling the neighborhood, and publishing the PTA newspaper to cleaning the school grounds (pp. 289–350). Surprisingly, fewer Japanese principals than principals of other countries expect parents to check their child's completion of homework. In the discourse on the achievement gap, western observers report that parents and families in Japan provide more assistance with their children's homework than parents and families outside Japan.
This apparent contradiction might be the result of the fact that these data are measured at the lower secondary level, while investigations of the roots of Japanese students' high achievement levels focus on childhood education and learning at primary schools. In fact, junior high school students are given less homework in Japan than their peers in other countries, and less homework than elementary school students in Japan. Instead, Japanese junior high school students spend more time at cram schools. Finally, Japanese principals also report very low degrees of expectation toward parents with regard to serving as a teacher aid in the classroom, raising funds for the school, assisting teachers on trips, and serving on committees which select school personnel and review school finances. The latter two items measure participation in school governance.

In other words, the data support by and large the descriptions of parental and community involvement in Japanese schooling. Parents are requested to be supportive, but not to intrude on the territory of the teacher nor to be actively involved in governance. Moreover, whilst Japanese principals spend a few hours per month on communication toward the community, involvement from the community with regard to the curriculum is nearly absent, reflecting the near absence of accounts of community involvement in studies on Japanese education. However, the reader needs to be reminded that these data are measured at the lower secondary educational level, when participation by parents in schooling decreases (Epstein, 1995; OECD, 1997; Osakafu Kyoiku Iinkai, unpublished report). Additionally, the question remains what stakeholders think of the current state of involvement in schooling. Some interesting local data provided by the Osaka Prefecture Education Board shed light on their opinion.

References

Benjamin, G. R. (1997). Japanese lessons. New York: New York University Press.
Cave, P. (2003). Educational reform in Japan in the 1990s: 'Individuality' and other uncertainties. Comparative Education Review, 37(2), 173–191.
Chen, C., & Stevenson, H. W. (1989). Homework: A cross-cultural examination. Child Development, 60(3), 551–561.
Chuo Kyoiku Shingikai (1996). 21 seiki o tenbo shita wagakuni no kyoiku no arikata ni tsuite [First report on the model for Japanese education in the perspective of the 21st century].
Cummings, W. K. (1989). The American perception of Japanese education. Comparative Education, 25(3), 293–302.
Epstein, J. L. (1995). School/family/community partnerships. Phi Delta Kappan, 701–712.
Fujita, M. (1989). It's all mother's fault: Childcare and the socialization of working mothers in Japan. The Journal of Japanese Studies, 15(1), 67–91.
Harnish, D. L. (1994). Supplemental education in Japan: Juku schooling and its implication. Journal of Curriculum Studies, 26(3), 323–334.
Hess, R. D., & Azuma, H. (1991). Cultural support for schooling: Contrasts between Japan and the United States. Educational Researcher, 20(9), 2–8, 12.
Hiroki, K. (1996). Kyoiku ni okeru kodomo, oya, kyoshi, kocho no kenri, gimukankei [Rights and duties of principals, teachers, parents and children in education]. In T. Horio & T. Urano (Eds.), Soshiki toshite no gakko [School as an organization] (pp. 79–100). Tokyo: Kashiwa Shobo.
Ikeda, H. (2000). Chiiki no kyoiku kaikaku [Local education reform]. Osaka: Kaiho Shuppansha.
Kudomi, Y., Hosogane, T., & Inui, A. (1999). The participation of students, parents and the community in promoting school autonomy: Case studies in Japan.
International Studies in Sociology of Education, 9(3), 275–291.
Lynn, R. (1988). Educational achievement in Japan. London: MacMillan Press.
Martin, M. O., Mullis, I. V. S., Gonzalez, E. J., Gregory, K. D., Smith, T. A., Chrostowski, S. J., Garden, R. A., & O'Connor, K. M. (2000). TIMSS 1999 international science report: Findings from IEA's repeat of the Third International Mathematics and Science Study at the eighth grade. Chestnut Hill: The International Study Center.
Mullis, I. V. S., Martin, M. O., Gonzalez, E. J., Gregory, K. D., Garden, R. A., O'Connor, K. M., Chrostowski, S. J., & Smith, T. A. (2000). TIMSS 1999 international mathematics report: Findings from IEA's repeat of the Third International Mathematics and Science Study at the eighth grade. Chestnut Hill: The International Study Center.
Ministry of Education, Science, Sports and Culture (2000). Japanese government policies in education, science, sports and culture 1999: Educational reform in progress. Tokyo: Printing Bureau, Ministry of Finance.
Monbusho (Ed.) (1999). Heisei 11 nendo, wagakuni no bunkyoshisaku: Susumu kaikaku [Japanese government policies in education, science, sports and culture 1999: Educational reform in progress]. Tokyo: Monbusho.

Educational Research for Policy and Practice (2004) 3: 95–107. © Springer 2005. DOI 10.1007/s10671-004-5557-6
Heidi Knipprath, Department of Methodology, School of Business, Public Administration and Technology, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands

Translation: The Role of Parents and Community in the Education of the Japanese Child. Abstract: In Japan, there is growing concern about family and community participation in the education of the child.
Research Article

Mechanical Properties of Fiber Reinforced Lightweight Concrete Containing Surfactant

Yoo-Jae Kim, Jiong Hu, Soon-Jae Lee, and Byung-Hee You
Department of Engineering Technology, Texas State University, San Marcos, TX 78666, USA
Correspondence should be addressed to Yoo-Jae Kim, yk10@
Received 21 June 2010; Accepted 24 November 2010
Academic Editor: Tarun Kant

Copyright © 2010 Yoo-Jae Kim et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Fiber reinforced aerated lightweight concrete (FALC) was developed to reduce concrete's density and to improve its fire resistance, thermal conductivity, and energy absorption. Compression tests were performed to determine the basic properties of FALC. The primary independent variables were the type and volume fraction of fibers and the amount of air in the concrete. Polypropylene and carbon fibers were investigated at 0, 1, 2, 3, and 4% volume ratios. The lightweight aggregate used was made of expanded clay. A self-compaction agent was used to reduce the water-cement ratio and keep good workability. A surfactant was also added to introduce air into the concrete. This study provides basic information regarding the mechanical properties of FALC and compares FALC with fiber reinforced lightweight concrete. The properties investigated include the unit weight, uniaxial compressive strength, modulus of elasticity, and toughness index. Based on these properties, a stress-strain prediction model was proposed. It was demonstrated that the proposed model accurately predicts the stress-strain behavior of FALC.

1. Introduction

In the last three decades, prefabrication has been applied to small housing and tall building construction, and precast concrete panels have become one of the most widely used materials in construction systems. Recently, much attention has been directed toward the use of lightweight concrete for precast members, to improve building performance in areas such as dead-load reduction, fire resistance, and thermal conductivity. Additionally, the structure of a precast building should be able to resist impact loading, particularly earthquakes, since the earthquake resistance of such buildings is becoming an important consideration [1, 2].

Many efforts have been directed toward developing high performance concrete for building structures with enhanced performance and safety. Various types of precast concrete products, such as autoclaved aerated lightweight concrete (AALC), fiber reinforced concrete (FRC), and lightweight concrete, have been developed and experimentally verified. A number of them have been applied in full-scale building structures. AALC is well known and widely accepted, but its small unit size and weak strength limit its use in structural elements [3]. Lightweight aggregate concretes offer strength, dead-load reduction, and good thermal conductivity, but their limited ability to absorb earthquake energy raises concerns. In contrast, FRC has a greater energy-absorbing ability than normal concrete, called "ductility" or "inelastic deformation capacity," but its weight poses problems.
Fiber aerated lightweight concrete (FALC) has a promising future for precast concrete panels that can be used in both small and tall building structures, because it combines the comfort of AALC, the adaptability of lightweight aggregate concrete, and the reliability of FRC [4–6]. The purpose of this study is to investigate the material properties of FALC, including the compressive strength, modulus of elasticity, and toughness index, with different densities, fiber types, and fiber volume fractions. Also, a new modulus of elasticity equation is presented, and the effects of fibers on strength and toughness are evaluated. Based on these properties, a stress-strain prediction model is proposed.

2. Experimental Program

To perform this experiment, lightweight concrete mix designs with various densities, air volumes, and chopped fiber volumes and types were used. To improve compressive strength and ductility, as well as the performance required for wall panels, expanded clay coarse and fine aggregate, a surfactant to control the density, two different kinds of chopped fibers, and a self-compaction admixture were used in the laboratory experiments. Preliminary test results included not only a complete stress-strain curve, but also a measure of ductility, such as energy to failure per unit strength, or the ratio of failure strain to yield strain, for finding a constitutive model. In this work, the surfactant contents were 0 and 0.1%, and the fiber volume fractions were 0, 1, 2, 3, and 4%.

2.1. Materials. The materials used consisted of early high strength Type I cement satisfying ASTM C150, coarse lightweight aggregate, and fine lightweight aggregate. A self-compaction agent (Sika ViscoCrete 6000) was used to reduce water and maintain good workability. A surfactant was used to control the density of the concrete. Fibers currently being used in concrete can broadly be classified into two types. Low modulus, high elongation fibers, such as nylon, polypropylene, and polyethylene, are capable of large energy absorption. They do not improve strength; however, they impart toughness and resistance to impact and explosive loading. On the other hand, high strength, high modulus fibers, such as steel, glass, asbestos, and carbon, produce strong composites. They impart strength and stiffness to the composite and, to varying degrees, improve dynamic properties. Polypropylene and carbon fibers were used in this test. Table 1 presents the properties of these fibers. Tables 2 and 3 show the properties of the aggregates and admixtures, respectively.

2.2. Mixture Proportions. All the mixtures had a cement content of 560 kg/m3 and a fiber content of 5.6, 11.2, 16.8, or 22.4 kg/m3. This cement content was chosen from previous tests to provide a compressive strength of about 38 MPa. The water-cement ratio was fixed at 0.45. The self-compaction agent provided maximum water reduction (10-45% of the ordinary water-cement ratio), increased early strength, and provided excellent plasticity while maintaining slump for up to two hours. To prevent tangling or balling of the fibers, with consequent non-uniform fiber distribution, the self-compaction agent and a low-shear mixer were used. Table 4 presents the detailed mixing proportions.

Except for the batches without surfactant, the same mixing procedure was followed for all batches. First, fine aggregate and water were mixed for 2 minutes to allow for absorption, since the fine lightweight aggregates were not presoaked. Then, cement was added with surfactant for 5 minutes to make air bubbles.
Following that, coarse aggregate, fibers, and a self-compacting agent were mixed for 3 minutes. No tangling or balling of the fibers was observed during the mixing. Occasionally, the mixing time was longer than described, depending on the surfactant.

2.3. Test Specimens. All the fiber aerated lightweight concrete cylinders for compression testing were 100 × 200 mm. The specimens were cast in plastic molds and were compacted by hand and vibrator. After casting, the specimens were covered with wet towels for 24 hours. They were then cured in a saturated water bath maintained at 23 ± 2°C for seven days. After four days of drying in the laboratory environment at 21 ± 2°C and 50 ± 15% humidity, they were tested. All the specimens were tested in uniaxial compression using rigid steel plates on an MTS 100-ton test frame. Load and displacements were measured using the load cell and LVDT of the load frame. Axial strain was measured using extensometers located on opposite sides of the cylinder. The average of these extensometer readings was taken as the axial strain value. All the measurements were stored in the computer which runs the MTS test frame.

3. Test Results

3.1. Compressive Strength. According to the test results (Tables 5 and 6) for polypropylene fiber lightweight concrete with no surfactant, axial stresses ranged from 31.5 to 38.3 MPa, with axial strain at peak stress varying from 0.0034 to 0.0044 mm/mm. For carbon fiber lightweight concrete with no surfactant, axial stresses ranged from 29.9 to 39.4 MPa, with axial strain at peak stress varying from 0.0037 to 0.0046 mm/mm. Conversely, when 0.1% surfactant was used with polypropylene fiber lightweight concrete, axial stresses ranged from 12.1 to 17.0 MPa, with axial strain at peak stress varying from 0.0021 to 0.0028 mm/mm. For carbon fiber lightweight concrete with 0.1% surfactant, axial stresses ranged from 12.6 to 17.5 MPa, with axial strain at peak stress varying from 0.0023 to 0.0031 mm/mm. As shown in Table 6, when 0.1% surfactant was added, compressive strength decreased by 50-58%. In polypropylene and carbon fiber lightweight concrete with no surfactant, the addition of fibers increased the strength up to a fiber volume fraction of 3%. In both polypropylene and carbon fiber lightweight concrete with 0.1% surfactant, increasing fiber content resulted in a gradual decrease of compressive strength. Thus, the two main factors observed to decrease the compressive strength are fiber volume fraction and the amount of surfactant (Figure 1).

3.2. Modulus of Elasticity. Modulus of elasticity is a primary concern in concrete strength. In the case of fiber lightweight concrete without surfactant, the modulus of elasticity appears to be affected only slightly by fiber volume fraction. Moreover, the decrease in the modulus of elasticity for fiber concrete with 0.1% surfactant was significant. For polypropylene and carbon fiber lightweight concrete with no surfactant, the modulus of elasticity ranged from 6.6 to 12.0 GPa and from 8.2 to 10.4 GPa, respectively. On the other hand, for polypropylene and carbon fiber lightweight concrete with 0.1% surfactant, the modulus of elasticity ranged from 5.3 to 7.3 GPa and from 6.0 to 8.3 GPa, respectively (see Tables 5 and 6). According to Figure 2, the best fiber volume fraction for modulus of elasticity is between 2% and 3% in all cases.

According to ACI 318-05 [1], the modulus of elasticity of concrete depends on its compressive strength and density. However, there is no specific equation for the modulus of elasticity at unit weights between 1120 and 1440 kg/m3. Figures 3 and 4 compare the modulus of elasticity from the ACI equation with the experimental data for both polypropylene and carbon fiber. The comparison shows that at unit weights between 1425.6 and 1489.7 kg/m3, with both fibers, the ACI 318-05 equation overestimates the experimental data by about 16-104%. Comparatively, at unit weights between 1137.3 and 1297.5 kg/m3, the modulus of elasticity from the ACI Code 8.5 equation deviates from the experimental data by -21% to +19% with both fibers. The influences of fiber volume fraction and unit weight on the modulus of elasticity are presented in Tables 5 and 6. Equation (1) relates these results to values calculated by means of the modulus of elasticity given in ACI 318-05:

Efc = 1.259192 (1 − e^(−0.8134·Ec)),  r² = 0.94   (1)

where Efc = modulus of elasticity of fiber aerated lightweight concrete (GPa), and Ec = modulus of elasticity calculated by the ACI 318-05 equation (GPa).
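The comparison described above can be reproduced in outline with a short Python sketch. The ACI 318-05 Section 8.5.1 expression, Ec = 0.043·wc^1.5·√f'c (MPa, with wc in kg/m3 and f'c in MPa), is the standard metric form; the pairings of unit weight, strength, and measured modulus below are illustrative combinations drawn from the ranges reported in this section, not individual specimens from Tables 5 and 6.

```python
import math

def aci_ec_gpa(wc_kg_m3: float, fc_mpa: float) -> float:
    """ACI 318-05 Sec. 8.5.1: Ec = 0.043 * wc**1.5 * sqrt(f'c) in MPa,
    with wc in kg/m3 (nominally valid for 1440-2560 kg/m3); returned in GPa."""
    return 0.043 * wc_kg_m3 ** 1.5 * math.sqrt(fc_mpa) / 1000.0

# Illustrative pairings taken from the reported ranges (assumed, not measured pairs).
cases = [
    ("no surfactant, PP fiber",   1489.7, 38.3, 12.0),  # measured Efc in GPa
    ("0.1% surfactant, PP fiber", 1201.4, 17.0,  7.3),
]
for label, wc, fc, measured in cases:
    predicted = aci_ec_gpa(wc, fc)
    print(f"{label}: ACI Ec = {predicted:.1f} GPa, "
          f"measured = {measured:.1f} GPa, ratio = {predicted / measured:.2f}")
```

Run as written, the first case shows the ACI expression overpredicting by roughly a quarter, while the low-unit-weight case lands close to the measured value, which is consistent with the trend reported above.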
3.3. Unit Weight. The unit weight of the concrete was measured at 7 days of curing, and again after 4 days of drying in the laboratory environment at 21 ± 2°C and 50 ± 15% humidity. The results are presented in Tables 5 and 6. The unit weight of polypropylene fiber reinforced lightweight concrete ranged from 1467.7 to 1489.7 kg/m3, with compressive strengths from 31.5 to 38.3 MPa. For carbon fiber reinforced lightweight concrete, unit weight varied from 1425.6 to 1505.7 kg/m3, and compressive strengths varied from 29.9 to 39.4 MPa. For polypropylene fiber reinforced lightweight concrete with 0.1% surfactant and unit weights varying from 1201.4 to 1297.5 kg/m3, compressive strengths ranged from 12.1 to 17.0 MPa. For carbon fiber reinforced lightweight concrete with 0.1% surfactant and unit weights varying from 1137.3 to 1297.5 kg/m3, compressive strengths ranged from 12.6 to 17.5 MPa. It was found that there is no trend with respect to either fiber volume fraction or type of fiber.

3.4. Toughness Index. One of the main objectives of adding fibers to a concrete matrix is to increase its toughness and its energy-absorbing capability, and to make it more suitable for use in structures subjected to impact and earthquake loads. The normalized stress-strain curves (Figure 5) show that the slope of the ascending portion of the curves for fiber reinforced lightweight concrete is the same as for normal lightweight concrete. In the post-peak portion of the stress-strain curve, however, the curves drop gradually, showing increased strain capacity. Figure 6 indicates that the addition of fibers improved ductility to a limited extent. The increase of toughness with fiber volume fraction is more significant for carbon fiber than for polypropylene fiber.

The toughness index is defined here as the area under the stress-strain curve of fiber concrete up to a strain of 0.015, divided by the corresponding area for no-fiber lightweight concrete with normalized stress up to a strain of 0.015. The toughness index of polypropylene and carbon fiber reinforced lightweight concrete with no surfactant ranged from 1.05 to 1.33 and from 1.05 to 1.74, respectively. However, with 0.1% surfactant, the toughness index ranged from 2.11 to 2.75 for polypropylene and from 1.97 to 2.64 for carbon fiber.
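The toughness-index definition above translates directly into a calculation. The following Python sketch computes it by trapezoidal integration of normalized stress-strain curves up to a strain of 0.015; the curves used here are synthetic stand-ins, since the measured curves of Figure 5 are not reproduced in this copy.

```python
import numpy as np

def curve_area(strain, stress, cap=0.015):
    """Trapezoidal area under a stress-strain curve up to strain = cap."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    m = strain <= cap
    e, s = strain[m], stress[m]
    return float(np.sum((s[1:] + s[:-1]) * np.diff(e)) / 2.0)

def toughness_index(strain_f, stress_f, strain_0, stress_0):
    """TI per the definition above: area for the fiber concrete divided by
    the area for the no-fiber concrete, both up to a strain of 0.015
    (normalized stress assumed)."""
    return curve_area(strain_f, stress_f) / curve_area(strain_0, stress_0)

# Synthetic, illustrative normalized curves (not measured data): a linear
# rise to the peak at 0.003 strain, then a steeper post-peak drop for the
# plain concrete than for the fiber concrete.
eps = np.linspace(0.0, 0.015, 300)
plain = np.where(eps < 0.003, eps / 0.003, np.maximum(0.0, 1 - 120 * (eps - 0.003)))
falc  = np.where(eps < 0.003, eps / 0.003, np.maximum(0.2, 1 -  60 * (eps - 0.003)))
print(f"TI = {toughness_index(eps, falc, eps, plain):.2f}")   # ~1.6
```

The gentler post-peak slope of the fiber curve is what enlarges the area ratio, which is why TI grows with fiber volume fraction in the results above.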
Regression on the test results gave

TI = 1.338 + 0.221·RI  (r² = 0.92) for polypropylene fiber,
TI = 1.354 + 0.023·RI  (r² = 0.89) for carbon fiber,   (2)

where RI is the reinforcing index (RI = Vf·l/φ).

An increase in the volume fraction and modulus of elasticity of fibers generally led to a decrease in the slope of the descending portion of the stress-strain curve. For both fibers, an increase in fiber volume fraction led to similar results. The aspect ratio (l/φ) and the fiber volume fraction seemed to play an important role in improving the peak strain and the toughness of the composite. Improvements of the toughness index due to adding more fiber were relatively significant in lower unit weight concretes.

As mentioned above, the post-peak portion of the stress-strain curve for FALC is significantly related to the fiber aspect ratio and volume fraction. Therefore, an inflection point (εi) based on the reinforcing index is selected for the descending portion of the curve for FALC. In the equation proposed by Ezeldin and Balaguru [4], the inflection-point modulus of elasticity is derived from the reinforcing index for high strength reinforced concrete; however, as indicated, the post-peak portion of the stress-strain curve differs between high strength and lightweight concrete. For FALC, the inflection-point modulus of elasticity must be derived from the modulus of elasticity of each fiber rather than from the reinforcing index, and an inflection point based on the toughness index is then selected. An equation was derived accordingly, where TI = toughness index, εi = strain at the inflection point, and ε0 = strain at maximum stress.

4. Conclusions

The experimental work reported here sought to characterize the mechanical properties and stress-strain behavior of fiber aerated lightweight concrete. The following conclusions were drawn.

(1) Using conventional lightweight aggregate, FALC air-dry densities as low as 1137 kg/m3 can be achieved by adding 0.1% of surfactant and additives.

(2) Both compressive strength and elastic modulus are strongly dependent on the amount of air in the concrete. An increase in surfactant content results in lower compressive strength and elastic modulus compared to non-surfactant concrete.

(3) Both the compressive strength and the elastic modulus are weakly dependent on the amount of fiber in the concrete.

(4) The toughness index is strongly dependent on the amount of fiber in the aerated concrete. While an increased polypropylene fiber volume fraction improves the toughness index of the concrete, carbon fiber improves this index to a greater degree.

(5) The stress-strain curve was represented by using a fractional equation based on the reinforcing index. A fair correlation was achieved in predicting the stress-strain curve.

References

[1] ACI Committee 318, Building Code Requirements for Reinforced Concrete (ACI 318-05) and Commentary, American Concrete Institute, Detroit, Mich, USA, 2005.
[2] Building Research Establishment, "Autoclaved aerated concrete," Building Research Establishment Digest 342, pp. 1-8, March 1989.
[3] F. C. McCormick, "Rational proportioning of preformed foam cellular concrete," ACI Journal, vol. 64, pp. 104-110, 1967.
[4] A. S. Ezeldin and P. N. Balaguru, "Normal- and high-strength fiber-reinforced concrete under compression," Journal of Materials in Civil Engineering, vol. 4, no. 4, pp. 415-429, 1992.
[5] C. H. Henager, "Steel fibrous concrete: a review of testing procedures," in Proceedings of the Symposium on Fiber Concrete, pp. 16-28, London, UK, 1980.
[6] C. D. Johnston, Fiber Reinforced Cements and Concretes, Gordon and Breach Science, Amsterdam, The Netherlands, 2001.
[7] R. N. Swamy, P. S. Mangat, and C. V. S. K. Rao, The Mechanics of Fiber Reinforcement of Cement Matrices, Fiber Reinforced Concrete, SP-44, American Concrete Institute, Detroit, Mich, USA, 1973.

Translation (beginning of the Chinese version): Research Article. Mechanical Properties of Fiber Reinforced Lightweight Concrete Containing Surfactant. Department of Engineering Technology, Texas State University, San Marcos, TX 78666. Received 21 June 2010; accepted 24 November 2010. Fiber reinforced aerated lightweight concrete (FALC) was developed to reduce the density of concrete and to improve its fire resistance, thermal conductivity, and energy absorption.