English Literature + Translation
A collection of 23 English-literature translation tools compiled by Zhihu users, to help with reading the literature.
01 Sogou Translate. The document-translation feature of Sogou Translate has three advantages. First, you can upload a document directly, which simplifies the workflow into genuine one-click translation (previously it took many clicks). Second, when reading the result online, the system shows the original and the translation side by side in real time, making comparison easy. Third, the translated text can be downloaded free of charge for further study or sharing.
02 Google Chrome. Suppose you want to find, on PubMed, papers by Prof. Yigong Shi with Tsinghua University as the first affiliation. In Chrome, open PubMed and search for "Yigong Shi Tsinghua University" to find his publications.
Then you open a promising paper, and of course it is entirely in English.
At this point, try Chrome's built-in page translation: it translates the page from English to Chinese almost instantly, and you can quickly switch between the Chinese and English views.
03 Adobe Acrobat. Here is another tool that can translate a PDF document in seconds (the author uses Adobe Acrobat Pro DC; readers can find the download and installation instructions themselves).
Note, however, that this is Adobe Acrobat, not Adobe Reader.
Allow me to briefly introduce the company behind Adobe Acrobat. Adobe is an absolute giant of the software industry.
For example, the widely used PS (Photoshop), PR (Premiere), AE (After Effects), In (InDesign), and LR (Lightroom) are all best-in-class tools in their fields, and every one of them is an Adobe product.
Among them is an extremely capable PDF editing and processing application: Adobe Acrobat.
(It is said that PDF itself, now the universal document-storage format, originated with Acrobat.) OK, on to the main topic.
What can it do? Convert PDF to Word, merge images into a PDF, edit PDFs, and more. In short, if it involves PDF, Acrobat can handle it.
So how do we use it to translate a PDF of a paper? Step 1: open the PDF in Acrobat. Step 2: click "File", then "Save As", and choose "HTML" as the output format. Step 3: the export produces two items: an HTML page converted from the PDF, and a folder of supporting files for the page's images (delete it and the images will not display). Step 4: open the HTML file in Google Chrome and use Chrome's page translation to translate it instantly.
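If you want the plain text out of the HTML that Acrobat exports (say, to paste into a translator yourself instead of relying on Chrome), the extraction can be done with Python's standard library alone. This is a minimal sketch, not part of the original workflow; the sample page string is a stand-in for the real exported file.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects the visible text of an HTML page, skipping script/style."""

    def __init__(self):
        super().__init__()
        self._skip = 0          # depth inside <script>/<style> tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())


def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)


# Tiny stand-in for the page Acrobat exports:
page = "<html><body><h1>Title</h1><p>First paragraph.</p></body></html>"
print(html_to_text(page))
```

In practice you would read the exported `.html` file with `open(..., encoding="utf-8")` and feed its contents to `html_to_text`.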
English Literature Translation
Foreign-language original and translation. Original: Sodium Polyacrylate: Also known as super-absorbent or "SAP" (super absorbent polymer); Kimberly-Clark used to call it SAM (super absorbent material). It is typically used in fine granular form (like table salt). It helps improve capacity for better retention in a disposable diaper, allowing the product to be thinner, with improved performance and less usage of pine fluff pulp. The molecular structure of the polyacrylate has sodium carboxylate groups hanging off the main chain. When it comes in contact with water, the sodium detaches, leaving only carboxyl ions. Being negatively charged, these ions repel one another; the polymer also has cross-links, which effectively gives it a three-dimensional structure. It has a high molecular weight of more than a million; thus, instead of dissolving, it solidifies into a gel. The hydrogen in water (H-O-H) is trapped by the acrylate due to the atomic bonds associated with the polarity forces between the atoms. Electrolytes in the liquid, such as salt minerals (urine contains 0.9% minerals), reduce polarity, thereby affecting superabsorbent properties, especially the capacity for liquid retention. This is the main reason why diapers containing SAP should never be tested with plain water. Linear molecular configurations have less total capacity than non-linear molecules; on the other hand, retention of liquid in a linear molecule is higher than in a non-linear one, due to improved polarity. A superabsorbent can be designed to absorb higher amounts of liquid (with less retention) or to have very high retention (but lower capacity). In addition, a surface cross-linker can be added to the superabsorbent particle to help it move liquids while it is saturated.
This helps avoid the formation of "gel blocks", the phenomenon in which liquids can no longer move once a SAP particle is saturated. History of Super Absorbent Polymer Chemistry: Until the 1980s, water-absorbing materials were cellulosic or fiber-based products: tissue paper, cotton, sponge, and fluff pulp. The water-retention capacity of these materials is at most only 20 times their weight. In the early 1960s, the United States Department of Agriculture (USDA) was working on materials to improve water conservation in soils. It developed a resin based on grafting acrylonitrile polymer onto the backbone of starch molecules (i.e. starch-grafting). Hydrolysis of this starch-acrylonitrile co-polymer gave a product with water absorption greater than 400 times its weight. Moreover, the gel did not release liquid water the way fiber-based absorbents do. The polymer came to be known as "Super Slurper". The USDA gave the technical know-how to several US companies for further development of the basic technology. A wide range of grafting combinations were attempted, including work with acrylic acid, acrylamide, and polyvinyl alcohol (PVA). Since Japanese companies were excluded by the USDA, they started independent research using starch, carboxymethyl cellulose (CMC), acrylic acid, polyvinyl alcohol (PVA), and isobutylene maleic anhydride (IMA). Early global participants in the development of superabsorbent chemistry included Dow Chemical, Hercules, General Mills Chemical, DuPont, National Starch & Chemical, Enka (Akzo), Sanyo Chemical, Sumitomo Chemical, Kao, Nihon Starch, and Japan Exlan. In the early 1970s, superabsorbent polymer was used commercially for the first time, not for soil-amendment applications as originally intended, but for disposable hygienic products. The first product markets were feminine sanitary napkins and adult incontinence products. In 1978, Parke-Davis (d.b.a.
Professional Medical Products) used superabsorbent polymers in sanitary napkins. Superabsorbent polymer was first used in Europe in a baby diaper in 1982, when Schickendanz and Beghin-Say added the material to the absorbent core. Shortly thereafter, UniCharm introduced superabsorbent baby diapers in Japan, while Procter & Gamble and Kimberly-Clark in the USA began to use the material. The development of superabsorbent technology and performance has been largely led by demand in the disposable-hygiene segment. Strides in absorption performance have allowed the development of the ultra-thin baby diaper, which uses a fraction of the materials, particularly fluff pulp, that earlier disposable diapers consumed. Over the years, the technology has progressed to the point that little if any starch-grafted superabsorbent polymer is used in disposable hygienic products. These superabsorbents typically are cross-linked acrylic homo-polymers (usually sodium-neutralized). Superabsorbents used in soil-amendment applications tend to be cross-linked acrylic-acrylamide co-polymers (usually potassium-neutralized). Besides granular superabsorbent polymers, ARCO Chemical developed a superabsorbent fiber technology in the early 1990s. This technology was eventually sold to Camelot Absorbents, and superabsorbent fibers are commercially available today. While significantly more expensive than the granular polymers, superabsorbent fibers offer technical advantages in certain niche markets, including cable wrap, medical devices, and food packaging. Sodium polyacrylate, also known as waterlock, is a polymer with the chemical formula [-CH2-CH(COONa)-]n, widely used in consumer products. It can absorb as much as 200 to 300 times its mass in water. Acrylate polymers are generally considered to carry an anionic charge.
While sodium-neutralized polyacrylates are the most common form used in industry, other salts are also available, including potassium, lithium, and ammonium. Applications: Acrylates and acrylic chemistry have a wide variety of industrial uses, including:
∙ Sequestering agents in detergents (by binding hard-water elements such as calcium and magnesium, they let the surfactants in detergents work more efficiently)
∙ Thickening agents
∙ Coatings
∙ Fake snow
∙ Superabsorbent polymers
These cross-linked acrylic polymers are referred to as "Super Absorbents" and "Water Crystals", and are used in baby diapers. Copolymer versions are used in agriculture and other specialty absorbent applications. The origins of superabsorbent polymer chemistry trace back to the early 1960s, when the U.S. Department of Agriculture developed the first superabsorbent polymer materials. This chemical is featured in the Maximum Absorbency Garment used by NASA. Translation: Sodium polyacrylate, also called a super-absorbent or superabsorbent polymer; Kimberly-Clark used to call it SAM, i.e. super absorbent material.
Thesis: English References and Translation
Inventory Management. Inventory control: Many people interpret "inventory control" as "warehouse management", which is in fact a serious distortion. The traditional, narrow view covers warehouse control of materials: inventory counting, data processing, storage, and distribution, with measures such as corrosion protection and temperature and humidity control to keep the stored physical inventory in optimal condition. This is only one form of inventory control, and may be defined as physical inventory control. How, then, should inventory control be understood in the broad sense? Inventory control should be tied to the company's financial and operational objectives, in particular operating cash flow. By optimizing the entire demand and supply chain management (DSCM) process, setting a reasonable ERP control strategy, and supporting it with appropriate information-processing tools, the aim is to reduce inventory levels as far as possible while ensuring timely delivery, and to reduce the risks of obsolescence and devaluation. In this sense, physical inventory control is just one means, one necessary part, of achieving the financial goals of overall inventory control. From the perspective of organizational functions, physical inventory control is mainly the responsibility of warehouse management, while broad inventory control belongs to demand and supply chain management and is the responsibility of the whole company. Why is so many people's understanding of inventory control still limited to physical inventory control? Two reasons cannot be ignored. First, our enterprises do not attach importance to inventory control. Especially in businesses doing relatively well, as long as there is money, few people consider the problem of inventory turnover.
Inventory control is simply interpreted as warehouse management; only when money runs short does anyone look at the inventory problem, and the conclusions are often simplistic: procurement bought too much, or the warehouse department failed. Second, ERP is misleading. Simple invoicing software audaciously calls itself ERP, and companies believe their so-called ERP can reduce inventory, as if inventory control could be achieved by a small piece of software. Even SAP and BAAN, the giants of the ERP world, define the warehouse-management functionality in their modules as "inventory management" or "inventory control". This leaves those of us who did not quite understand inventory control even less sure what it is. In fact, understood broadly, inventory control should include the following. First, the fundamental purpose of inventory control. In so-called world-class manufacturing, two key performance indicators (KPIs) are customer satisfaction and inventory turns, and inventory turns is in fact the fundamental objective of inventory control. Second, the means of inventory control. Increasing inventory turns cannot rely solely on so-called physical inventory control; it should be the output of the whole demand and supply chain management process. Besides warehouse management, the more important links in this process include forecasting and order processing, production planning and control, materials planning and purchasing control, inventory planning and forecasting itself, distribution and delivery strategies for finished products and raw materials, and even customs management processes. Running through the entire demand and supply chain management process is the management of information flow and capital flow.
In other words, inventory itself spans every link of the demand and supply management process; to achieve the fundamental purpose of inventory control, every link must be controlled, rather than just the physical inventory at hand. Third, the organizational structure and assessment of inventory control. Since inventory control is the output of the demand and supply chain management process, achieving its fundamental purpose requires a rational organizational structure compatible with that process. Even now, many companies have only a purchasing department, with the warehouse reporting to it. This falls far short of what inventory control requires. From an analysis of the demand and supply chain management process, we know that purchasing and warehouse management are typical executive arms, whereas inventory control should focus on prevention. The executive branch finds it very difficult to "prevent inventory", for the simple reason that its assessment indicators are largely about ensuring supply (to production and to customers). How to analyze the actual situation, establish a reasonable demand and supply chain management process, and then set up a corresponding rational organizational structure is a question many of our enterprises need to explore. The role of inventory control: Inventory management is an important part of business management. In production and operation activities, inventory management must ensure the plant's demand for raw materials and spare parts, and it directly affects purchasing and sales activities. Keeping inventory liquid and accelerating cash flow while minimizing tied-up funds, under the premise of secure supply, directly affects operational efficiency.
The aims are: to keep inventories at a reasonable level while meeting production and operation needs; to control inventory dynamically, placing orders at the right time and in the right quantity to avoid overstock or stock-outs; to reduce the floor space inventory occupies and lower the total cost of inventory; and to control the funds tied up in stock so as to accelerate cash flow. Problems arising from excessive inventory: increased warehouse space and storage costs, and thereby increased product costs; a large amount of working capital tied up in sluggish stock, which not only increases the burden of interest payments but also affects the time value of money and opportunity income; physical and intangible losses of finished products and raw materials; large amounts of idle enterprise resources, hindering their rational allocation and optimization; and the masking of contradictions and problems throughout production and operations, which is not conducive to raising the level of management. Problems arising from too little inventory: a decline in service levels, hurting sales profit and corporate reputation; inadequate supply of raw materials or other inputs, disrupting the normal production process; shortened lead times and more frequent orders, raising ordering (production) costs; and disruption of the balance of production and the assembly of complete sets. Notes: Inventory management should particularly consider two questions. First, according to the sales plan and the planned circulation of goods in the market, where should stock be held, and how much? Second, starting from service levels and economic benefits, how should inventory levels and replenishment be determined? Both questions concern the functions inventory performs in the logistics process. In general, the functions of inventory are: (1) to prevent interruption.
By shortening the time from order receipt to delivery of goods, inventory ensures quality service while preventing stock-outs. (2) To maintain proper inventory levels, saving inventory costs. (3) To reduce logistics costs: replenishing at appropriate intervals matched to reasonable demand reduces logistics costs and eliminates or mitigates sales fluctuations. (4) To keep production planning smooth, eliminating or absorbing sales fluctuations. (5) A display function. (6) A reserve function: buying in bulk when prices fall reduces losses and provides for disasters and other contingencies. On the question of warehouses (inventory locations), we must consider number and location. A distribution center should, as far as possible, be set up in a place suited to customer needs; a central store, which mainly replenishes the distribution centers, has no particular location requirements. Once the stocking bases are established, it must then be decided which commodities are stored at each location. Translation: Inventory management. Inventory control. When people speak of "inventory control", many understand it as "warehouse management", which is in fact a serious distortion.
Translating an English Document in Full
Title: The Impact of Climate Change on Biodiversity. Climate change is a pressing issue that has significant impacts on biodiversity worldwide. Changes in temperature, precipitation patterns, and extreme weather events are altering ecosystems and threatening the survival of many species. The loss of biodiversity not only affects the natural world but also has implications for human societies. One of the major impacts of climate change on biodiversity is the shifting of habitats. As temperatures rise, many species are forced to move to higher latitudes or elevations in search of suitable conditions. This can disrupt ecosystems and lead to the decline or extinction of species that are unable to adapt to the new conditions. In addition to habitat loss, climate change is also causing changes in the timing of biological events such as flowering, migration, and reproduction. These changes can disrupt the delicate balance of ecosystems and lead to mismatches between species that depend on each other for survival. Furthermore, climate change is exacerbating other threats to biodiversity such as habitat destruction, pollution, and overexploitation. The combination of these factors is putting immense pressure on many species and pushing them closer to extinction. It is essential that we take action to mitigate the impacts of climate change on biodiversity. This includes reducing greenhouse gas emissions, protecting and restoring habitats, and implementing conservation measures to safeguard vulnerable species. By addressing the root causes of climate change and protecting biodiversity, we can ensure a sustainable future for both the natural world and human societies. Translation: The impact of climate change on biodiversity. Climate change is a pressing issue that has major effects on biodiversity worldwide.
Translating an English-Literature PDF into Chinese
There are several common ways to translate an English-literature PDF into Chinese.
First, you can use an online translation tool such as Google Translate or Baidu Translate: upload the PDF directly and select the target language. These tools can translate the whole document into Chinese quickly, but note that, given the limits of machine translation, some passages may come out inaccurate or awkward, especially technical terms and complex sentences.
Second, you can hire a professional translator or a translation company. They can deliver a more accurate, fluent translation, which is more reliable for literature in specialized fields. However, this costs money and time, since the translator works through the text sentence by sentence and may also need to translate and proofread technical terminology.
Third, if you are comfortable in both English and Chinese, you can translate the document yourself. This gives you control over accuracy and fluency, but takes considerable time and effort.
Whichever approach you choose, the translation must be accurate and complete, especially for academic or specialized literature. I hope this information helps you find a suitable way to translate an English-literature PDF into Chinese.
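One practical detail when feeding a long document to an online translator: most tools cap the input length, so the extracted text has to be split into pieces, ideally at sentence boundaries so no sentence is cut in half. This is an illustrative sketch (the 2000-character limit is an assumption, not a documented figure for any particular service):

```python
import re


def chunk_text(text: str, limit: int = 2000) -> list[str]:
    """Split text into pieces no longer than `limit` characters,
    breaking at sentence ends so each piece translates cleanly."""
    # Split after sentence-final punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > limit:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks


# Each chunk can then be pasted (or sent) to the translator separately.
pieces = chunk_text("First sentence. Second sentence. Third one.", limit=20)
print(pieces)
```

A sentence longer than the limit is kept whole here rather than cut mid-sentence; a production version might fall back to splitting on commas in that case.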
Two English-Literature Translations
On some of the design aspects of wind energy conversion systems. Abstract: In the overall process of utilizing wind power, two essential bodies of technical data are needed: one related to the engineering or performance characteristics of commercially available wind turbine generators, and the other related to the availability of wind resources. The performance of wind energy conversion systems (WECS) depends upon subsystems like the wind turbine (aerodynamic), gears (mechanical), and generator (electrical). The availability of wind resources is governed by the climatic conditions of the region, for which a wind survey is extremely important in exploiting wind energy. In this paper, design aspects such as factors affecting wind power, siting requirements for WECS, problems related to grid connections, classification of wind electric generation schemes, criteria for selection of equipment for WECS, choice of generators, three basic design philosophies, main considerations in wind turbine design, the choice between two- and three-blade rotors, weight and size considerations, and environmental aspects of WECS are presented. 1. Introduction: Wind-powered systems have been widely used since the tenth century for water pumping, grinding, and other low-power applications. There were several early attempts to build large-scale wind-powered systems to generate electricity. In 1931, the Russians built a large windmill with a 100 ft (30.5 m) diameter blade, but it had a very low conversion efficiency and was abandoned. In 1945, a Vermont utility built a large wind-powered generator to produce electricity. This system cost $1.25 million and had an electrical power output of 1.25 MW.
This unit lasted 23 days before one of the blades failed due to fatigue, and the project was abandoned. The National Aeronautics and Space Administration (NASA), in conjunction with the Energy Research and Development Administration (ERDA), has built and tested a large number of large wind-powered generators. The first machine was a 100 kW unit built at Sandusky, Ohio, for around a million dollars. A number of other machines with power up to 2.5 MW and rotor diameter up to 350 ft (107 m) have been constructed. During the 1980s, it became popular to invest money in wind systems because of the tax benefits. Consequently, a number of wind farms were built, particularly in the mountain passes of California. In 1985, about half of the world's wind-generated electricity was produced in the Altamont Pass area of California. This area has 6700 turbines with a total rated capacity of 630 MW. Among the renewable sources of energy available today for generation of electrical power, wind energy stands foremost because of the relatively low capital cost involved and the short gestation period required. The world reached an installed wind capacity of 13,400 MW by the end of 1992. The design and successful operation of large-scale wind-powered generators face a number of formidable problems. If the system is designed to produce a.c. power, the rotor must turn at a constant angular velocity, and this causes problems. Unfortunately, the wind velocity is neither constant in magnitude or direction nor constant from the top to the bottom of a large rotor. This imposes severe cyclic loads on the turbine blades, creating fatigue problems. The problem is compounded if a downwind rotor system is used, because the shadow of the support tower unloads the blades.
This effect also produces noticeable noise, which can be objectionable. The available wind resource is governed by the climatology of the region concerned and varies greatly from one location to another, and from season to season at any fixed location. A lot of development has taken place in the design of wind energy conversion systems. Modern wind turbines are highly sophisticated machines built on aerodynamic principles developed in the aerospace industry, incorporating advanced materials and electronics, and designed to deliver energy across a wide range of speeds. In this paper, WECS-related aspects such as factors affecting wind power, siting requirements, criteria for selection of equipment, choice of generators, three basic design philosophies, main considerations in wind turbine design, the choice between two- and three-blade rotors, weight and size considerations, and environmental aspects are presented. 2. Factors affecting wind power: One of the most important tools in working with the wind, whether designing a wind turbine or using one, is a firm understanding of the factors affecting wind power. The following important factors must be considered. 2.1. Wind statistics: Wind is a highly variable power source, and there are several methods of characterizing this variability. The most common is the power duration curve. This is a good concept but is not easily used to select the cut-in speed Vc and rated speed Vr for a given wind site, which is an important design requirement. Another method is a statistical representation, particularly a Weibull function. 2.2. Load factor: There are at least two major objectives in wind turbine design. One is to maximize the average power output. The other is to meet the load factor (the ratio of average electrical power to rated electrical power) required by the load.
The load factor is not a major concern if the wind electric generator (WEG) is acting as a fuel saver on the electric network. But if the generator is pumping irrigation water in asynchronous mode, for example, the load factor is very important. 2.3. Seasonal and diurnal variation of wind power: Seasonal and diurnal variations have significant effects on wind. Load duration data are required to judge these effects. Diurnal variation decreases with increased height. Average power may vary from about 80% of the long-term annual average in the early morning hours to about 120% in the early afternoon hours. 2.4. Variation with time: For most applications of wind power, it is more important to know about the continuity of supply than the total amount of energy available in a year. In practice, when the wind blows strongly, e.g. more than 12 m/s, there is no shortage of power, and often the generated power has to be dumped. Difficulties appear, however, if there are extended periods of light or zero winds. A rule of thumb for electricity generation is that sites with average wind speed less than 5 m/s will have unacceptably long periods without generation, and sites averaging 8 m/s or above will be considered very good. In all cases it will be necessary to match the machine characteristics carefully to the local wind regime to give the type of supply required. 3. Siting requirements for WECS: In addition to adequate availability of wind resources (a minimum wind speed of 18 km/h, i.e. 5 m/s), the following factors have to be considered when locating a WEG: a. availability of land, b. availability of a power grid (for a grid-connected system), c. accessibility of the site, d. terrain and soil, f. frequency of lightning strokes. Once the wind resource at a particular site has been established, the next factor to be considered is the availability of land. The area of land required depends upon the size of the wind farm.
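The Weibull representation of wind statistics and the load-factor definition above can be combined in a short numerical sketch: draw Weibull-distributed wind speeds, push them through an idealized turbine power curve, and take average power over rated power. This is purely illustrative; the cut-in, rated, and cut-out speeds, the 600 kW rating, and the Weibull scale/shape values are assumptions, not figures from the paper.

```python
import random


def turbine_power(v, v_cut_in=4.0, v_rated=13.0, v_cut_out=25.0, p_rated=600.0):
    """Idealized power curve in kW: zero outside the operating range,
    cubic rise between cut-in and rated speed, flat at rated power."""
    if v < v_cut_in or v > v_cut_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)


def load_factor(scale=8.0, shape=2.0, n=100_000, seed=42):
    """Monte Carlo estimate of average power / rated power (600 kW)
    for Weibull-distributed wind speeds (scale in m/s)."""
    rng = random.Random(seed)
    mean_p = sum(
        turbine_power(rng.weibullvariate(scale, shape)) for _ in range(n)
    ) / n
    return mean_p / 600.0


print(f"estimated load factor: {load_factor():.2f}")
```

With these assumed parameters the estimate comes out in the 0.2-0.3 range, which matches the intuition in the text: a fuel-saver turbine on a grid tolerates a modest load factor, while a stand-alone application needs the figure examined carefully.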
The optimum spacing in a row is 8-12 rotor diameters in the wind direction and 1.5-3 rotor diameters in the crosswind direction. As a rule of thumb, 10 ha/MW can be taken as the land requirement of a wind farm, including infrastructure. To optimize the power output from a given site, additional information is needed, such as the wind rose, wind speeds, vegetation, topography, and ground roughness, besides the configuration of the set of wind turbines, which can be altered to reach the best array efficiency and highest generation. Factors such as convenient access to the wind farm site, the load-bearing capacity of the soil, and the frequency of cyclones, earthquakes, etc. also require consideration before siting the wind farm. 4. Choice of generators: There are mainly three classes of generators. 4.1. DC generators: DC generators are relatively unusual in wind and micro-hydro turbine applications because they are expensive and require regular maintenance. Nowadays, for most d.c. applications it is more common to employ an a.c. generator and convert its output to d.c. with simple solid-state rectifiers. 4.2. Synchronous generators: The major advantage of the synchronous generator is that its reactive power characteristic can be controlled, so such machines can supply reactive power to other parts of the power system that require it. A stand-alone wind-Diesel system normally has a synchronous generator, usually connected to the Diesel engine. Synchronous generators, when fitted to a wind turbine, must be controlled carefully to prevent the rotor speed accelerating through synchronous speed, especially during turbulent winds. Moreover, this requires a flexible coupling in the drive train, or mounting the gearbox assembly on springs or dampers to absorb turbulence. Synchronous generators are costlier than induction generators, particularly in the smaller size ranges.
Synchronous generators are also more prone to failures. 4.3. Induction generators: An induction generator offers many advantages over a conventional synchronous generator as a source of isolated power supply. Reduced unit cost, ruggedness, brushless construction (in the squirrel-cage form), reduced size, absence of a separate DC source, ease of maintenance, and self-protection against severe overloads and short circuits are the main advantages. Further, induction generators are loosely coupled devices, i.e. they are heavily damped and therefore able to absorb slight changes in rotor speed, so drive-train transients can be absorbed to some extent. Synchronous generators, by contrast, are closely coupled devices; when used in wind turbines they are subjected to turbulence and require additional damping devices, such as flexible couplings in the drive train or mounting the gearbox assembly on springs and dampers. Reactive power consumption and poor voltage regulation under varying speed are the major drawbacks of induction generators, but the development of static power converters has made it possible to control the output voltage of the induction generator, within limits. 5. Environmental aspects: 5.1. Audible noise: The wind turbine is generally quiet and poses no objectionable noise disturbance in the surrounding area. Wind turbine manufacturers generally supply noise-level data in dB versus distance from the tower. A typical 600 kW wind turbine may produce 55 dB of noise at 50 m from the turbine and 40 dB at 250 m. This is, however, a steady-state noise; the turbine makes a loud noise while yawing under changing wind direction. Local noise ordinances must be satisfied before installing wind turbines. 5.2. Electromagnetic interference: Any stationary or moving structure in the proximity of a radio or TV station interferes with its signals.
Wind turbine towers can cause objectionable electromagnetic interference (EMI) with the performance of nearby transmitters or receivers. In other respects, the visual impact of a wind farm can be a concern to some. The breeding and feeding patterns of birds may be disturbed; birds may even be injured or killed if they collide with the blades. 6. Conclusions: The design of wind energy conversion systems is a very complex task and requires interdisciplinary skills, e.g. civil, mechanical, electrical and electronic, geographical, aerospace, and environmental. An attempt has been made to discuss the important design aspects of WECS. In this paper, design aspects such as factors affecting wind power, siting requirements for WECS, problems related to grid connections, classification of wind electric generation schemes, criteria for selection of equipment, choice of generators, three basic design philosophies, main considerations, and environmental aspects of WECS have been critically discussed. Translation: Design aspects of wind energy conversion systems. Abstract: To utilize wind energy, two main bodies of technical data are needed: one concerning the design and operating characteristics of commercially available wind turbine generators, and the other concerning the availability of adequate wind resources.
English-Literature Keyword Translations
A sample set of MeSH index terms (the original glossed each term in Chinese; the glosses duplicate the English terms and are merged here): Adult; Aged; Aged, 80 and over; Catheterization, Central Venous/*instrumentation/methods; Cost-Benefit Analysis; Equipment Design; Equipment Failure; Equipment Safety; Female; Humans; Infusion Pumps, Implantable/adverse effects/*economics; Male; Middle Aged; Neoplasms/*drug therapy/pathology; Probability; Prospective Studies; Risk Assessment; Sensitivity and Specificity; Treatment Outcome; Vascular Patency; Venous Thrombosis/prevention & control; Adolescent; Adult; Aged; Aged, 80 and over; Antineoplastic Agents/*administration & dosage; *Catheters, Indwelling/adverse effects/economics; Female; Humans; *Infusion Pumps, Implantable/adverse
Full-Text Translation of English Literature
Four sample passages follow, for the reader's reference. Sample 1: Le Guin, Ursula K. (December 18, 2002). "Dancing at the Edge of the World: Thoughts on Words, Women, Places" (Chinese title: 《世界边缘的舞蹈:关于语言、女性和地方的思考》). Introduction: In "Dancing at the Edge of the World," Ursula K. Le Guin explores the intersection of language, women, and places. She writes about the power of words, the role of women in society, and the importance of our connection to the places we inhabit. Through a series of essays, Le Guin invites readers to think critically about these topics and consider how they shape our understanding of the world. Chapter 1: Language. Conclusion. Sample 2: Introduction: English literature translation is an important field in the study of language and culture. The translation of English literature involves not only the linguistic translation of words or sentences but also the transfer of cultural meaning and emotional resonance. This article will discuss the challenges and techniques of translating English literature, as well as the importance of preserving the original author's voice and style in the translated text. Challenges in translating English literature. Sample 3: Title: The Importance of Translation of Full English Texts. Translation plays a crucial role in bringing different languages and cultures together. More specifically, translating full English texts into different languages allows access to valuable information and insights that may otherwise be inaccessible to those who do not speak English. In this article, we will explore the importance of translating full English texts and the benefits it brings. Sample 4: Abstract: This article discusses the importance of translating English literature and the challenges translators face when putting together a full-text translation. It highlights the skills and knowledge needed to accurately convey the meaning and tone of the original text while preserving its cultural and literary nuances.
Through a detailed analysis of the translation process, this article emphasizes the crucial role translators play in bridging the gap between languages and making English literature accessible to a global audience.

Introduction

English literature is a rich and diverse field encompassing a wide range of genres, styles, and themes. From classic works by Shakespeare and Dickens to contemporary novels by authors like J.K. Rowling and Philip Pullman, English literature offers something for everyone. However, for non-English speakers, accessing and understanding these works can be a challenge. This is where translation comes in.

Translation is the process of rendering a text from one language into another, while striving to preserve the original meaning, tone, and style of the original work. Translating a full-length English text requires a deep understanding of both languages, as well as a keen awareness of the cultural and historical context in which the work was written. Additionally, translators must possess strong writing skills in order to convey the beauty and complexity of the original text in a new language.

Challenges of Full-text Translation

Translating a full-length English text poses several challenges for translators. One of the most significant challenges is capturing the nuances and subtleties of the original work. English literature is known for its rich and layered language, with intricate wordplay, metaphors, and symbolism that can be difficult to convey in another language. Translators must carefully consider each word and phrase in order to accurately convey the author's intended meaning.

Another challenge of full-text translation is maintaining the author's unique voice and style. Each writer has a distinct way of expressing themselves, and a good translator must be able to replicate this voice in the translated text.
This requires a deep understanding of the author's writing style, as well as the ability to adapt it to the conventions of the target language.

Additionally, translators must be mindful of the cultural and historical context of the original work. English literature is deeply rooted in the history and traditions of the English-speaking world, and translators must be aware of these influences in order to accurately convey the author's intended message. This requires thorough research and a nuanced understanding of the social, political, and economic factors that shaped the work.

Skills and Knowledge Required

To successfully translate a full-length English text, translators must possess a wide range of skills and knowledge. First and foremost, translators must be fluent in both the source language (English) and the target language. This includes a strong grasp of grammar, syntax, and vocabulary in both languages, as well as an understanding of the cultural and historical context of the works being translated.

Translators must also have a keen eye for detail and a meticulous approach to their work. Every word, sentence, and paragraph must be carefully considered and translated with precision in order to accurately convey the meaning of the original text. This requires strong analytical skills and a deep understanding of the nuances and complexities of language.

Furthermore, translators must possess strong writing skills in order to craft a compelling and engaging translation. Translating a full-length English text is not simply a matter of substituting one word for another; it requires creativity, imagination, and a deep appreciation for the beauty of language.
Translators must be able to capture the rhythm, cadence, and tone of the original work in their translation, while also adapting it to the conventions of the target language.

Conclusion

In conclusion, translating a full-length English text is a complex and challenging task that requires a high level of skill, knowledge, and creativity. Translators must possess a deep understanding of both the source and target languages, as well as the cultural and historical context of the work being translated. Through their careful and meticulous work, translators play a crucial role in making English literature accessible to a global audience, bridging the gap between languages and cultures. By preserving the beauty and complexity of the original text in their translations, translators enrich our understanding of literature and bring the works of English authors to readers around the world.
How to Translate Foreign Literature
Reading and translating foreign literature is a very important part of research; much of the best work in many fields is published in foreign languages, so it is well worth borrowing from others' translation experience.
For particular reasons I have had many chances to translate foreign literature, and over time I discovered three great helpers for the job: Google Translate, Kingsoft PowerWord (金山词霸, full edition), and the CNKI Translation Assistant (CNKI翻译助手).
The procedure is as follows: 1. First turn on PowerWord's automatic word-capture feature, then read the paper. 2. When you meet a long sentence you cannot understand, hand it to Google Translate; at first glance the output is barely readable, but after your brain reprocesses it, the meaning of the sentence is basically clear. 3. If you still cannot understand it via Google and something still feels wrong, you have almost certainly misread one of the "common words": some words look simple but carry a special meaning in the literature. At that point, look the word up in CNKI's Translation Assistant; because its word senses are mined from a large body of literature, its hit rate is very high.
Also, it is best to use the paragraph, or at least the long sentence, as the basic unit of translation; otherwise you risk the misleading effect of "seeing the trees but not the forest."
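The paragraph-as-unit advice above can be sketched in code. Here is a minimal helper (hypothetical, not taken from any of the tools mentioned) that splits a text into paragraph-sized translation units, falling back to sentence boundaries only for paragraphs that exceed a length limit:

```python
import re

def split_into_units(text, max_chars=2000):
    """Split text into paragraph-level translation units.

    Paragraphs (separated by blank lines) are kept whole so the
    translator sees full context; only paragraphs longer than
    max_chars are greedily packed into sentence-bounded chunks.
    """
    units = []
    for para in text.split("\n\n"):
        para = " ".join(para.split())  # normalize internal whitespace
        if not para:
            continue
        if len(para) <= max_chars:
            units.append(para)
            continue
        buf = ""
        for sent in re.split(r"(?<=[.!?])\s+", para):
            if buf and len(buf) + 1 + len(sent) > max_chars:
                units.append(buf)
                buf = sent
            else:
                buf = f"{buf} {sent}".strip()
        if buf:
            units.append(buf)
    return units
```

Each returned unit can then be fed to whatever translator you prefer, keeping the "forest" visible while you work on the "trees."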
Notes:

1. Google Translate. As everyone knows, the English papers and materials indexed by Google are fairly comprehensive. I use it in two ways. One is to search for English papers; there are plenty of posts about that already, so I will not repeat them here. The other is translation itself. Here is an example of how to use it. Suppose you do not know how to translate the term "电磁感应透明效应" (electromagnetically induced transparency). First, you can search for it in CNKI and work from the Chinese-English keyword pairs of matching papers, which is usually fairly accurate. Here, though, the point is how to find the translation through Google. Everyone owns a dictionary; the usual approach is to look the words up one by one and type them into Google. Such a word-for-word rendering is generally not quite right, so you need to verify it: search Google for your rough, stitched-together translation and you will see many related papers and materials. Their authors are no fools; skim a few and you will find the most accurate, fully idiomatic translation. That is exactly how I use it.
2. CNKI Translation Assistant. This website needs little introduction; some readers may know it already.
An English Article and Its Translation
Recycling Economics: Higher Costs Are An Illusion

M. A. Coke

Hidden Costs

Many people are surprised when they discover their community may pay more for a curbside recycling program than for regular trash pickup. They ask why, in some cases, they must pay more to give their recyclables to someone who will sell them. This leads many people to believe that recycling is not economical. One reason recycling appears to be uneconomical is that some people already pay a higher cost for trash disposal than they realize. Some local governments pay fees to hauling companies, transfer stations, or landfills out of local tax revenue. That lowers the direct cost to residents and businesses, making the regular trash pickup appear to be less expensive than it really is. But when recycling programs begin, residents usually directly pay the full cost of recycling. This can distort the cost comparisons between the recycling program and disposing of trash at landfills.

Depletion Costs

Recycling also is economical because costs associated with future disposal are avoided. One of these avoided costs is for landfill depletion. Landfills have limited space, and so can receive a limited amount of trash. When a landfill is full, it must be replaced by another landfill that is generally more expensive to operate and maintain. This is due to higher costs of complying with environmental regulations, higher expenses in siting a new location, buying or allocating land, constructing the landfill, operational expenses, and long-term maintenance costs after the landfill is closed. Additionally, the new landfill may be further away than the old landfill, increasing transportation costs. Generally, a new landfill costs more than an older one. Paying the higher cost at a new landfill is avoided by keeping the older landfill open longer. Recycling and other waste-reducing methods keep the older landfill open longer.
Because these avoided costs are not seen when people pay the bills, they do not usually think of the savings recycling produces.

Environmental Costs

Recycling is economical in several ways related to manufacturing processes. Recycling cuts down on waste produced by processing raw materials into usable forms. For example, recycling aluminum reduces mining wastes, processing wastes, and emissions produced by extracting the aluminum from the ore. Recycling usually requires less refining than raw materials. For example, it takes much less energy to melt down an aluminum can to make another aluminum can than to process the raw materials to make a can. This cuts down on chances for environmental damage and conserves our natural resources. With any product, the costs of cleaning up wastes and limiting emissions usually are passed on to consumers who purchase the product. But sometimes damage to the environment is not realized for years, is difficult to attribute to certain industries, or is caused by a combination of many industries. Acid rain is one example of this type of environmental damage. The costs of dealing with this pollution are hard to assess, but are paid for by everyone in efforts to improve the environment.

Energy Savings

Manufacturing products from recycled material also can save energy. The energy required to produce one aluminum can is equal to the energy embodied in the amount of gasoline it takes to fill the can half full. While recycling saves energy, that does not always mean that industries save money by using recycled materials. Labor costs for recycled products are often higher than those used in processing virgin material.
Materials recovered from curbside collection, drop-off centers, and material recovery facilities must be separated, cleaned, and processed. Making a product from recycled material may require new or retrofitted equipment and other capital expenditures, while virgin material supplies and equipment needed to produce most goods already exist. But since recycling saves energy, it also cuts down on pollution emitted by utilities and the companies themselves. When energy is used, the price of the resulting pollution is passed on to all energy consumers in their utility bills. Due to the new clean air law, utility companies must comply with tougher standards in reducing pollutants they release while producing energy. The cost of compliance is usually passed on to each energy consumer. If energy use is reduced by methods such as recycling, less pollution is produced. That reduces everyone's cost in terms of paying to reduce pollution and in limiting damage to natural resources. Once the long-term costs and advantages are weighed, recycling does make economic sense. Using resources wisely is always economical.

循环经济学:较高的成本是一个错觉

M. A. Coke

隐藏的费用

许多人发现,他们的社区为路边回收项目支付的费用可能比普通垃圾清运还要高,这常令他们感到惊讶。
An English Article and Its Translation
Soft Soil Foundation Reinforced with a Geotextile Cushion

1. Introduction

Geotextile, also called geosynthetic fabric, has high tensile strength, durability, corrosion resistance and a flexible texture; it combines well with sand to form a reinforced composite foundation, effectively increasing the shear strength and tensile capacity of the soil and enhancing the integrity and continuity of the soil mass. The reinforcing mechanism dates to the early 1960s, when Henri Vidal found in triaxial tests that a small amount of fiber in sand could raise the soil's shear strength more than fourfold. In recent years, laboratory work in China has likewise shown that reinforcing sand can effectively improve bearing capacity, reduce vertical ground settlement, and overcome the poor integrity and continuity of weak soils. Given these properties of reinforced soil and its low price, it has broad application prospects in engineering.

2.1 Project Overview

The proposed retaining wall is a gravity rubble retaining wall 6 meters high, requiring a foundation soil bearing capacity of 250 kPa. The site geology, from the top down, is as follows: ① clay, 0.7 to 2 meters thick, saturated, soft plastic; ② muddy soil, about 22 to 24 meters thick, saturated, mainly plastic flow, locally soft plastic; ③ a sand layer 5 to 10 meters thick, containing silty soil and organic matter, saturated, slightly wet; ④ a gravel layer of unevenly distributed thickness, about 0 to 2.2 meters, slightly dense; ⑤ weathered sandstone.
The bearing capacity of the clay and silty soil layers is only 70 kPa, so foundation reinforcement is clearly needed.

2.2 Reinforced cushion treatment of the foundation

Replacement with a sand-and-gravel cushion can be used to treat the soil, but because such bedding is loose, past experience shows that a gravel mat alone always leaves a large foundation settlement, and its poor deformation characteristics often produce cracks and differential settlement in the superstructure. For this project's 6-meter-high rubble retaining wall, with a further 3-meter wall above it, any differential settlement or cracking of the retaining wall would have serious consequences, so the cushion should be reinforced. After technical and economic analysis, it was decided to reinforce and harden the sand-and-gravel stratum. The reinforcement procedure: first, excavate the basement to the design elevation, place a 200 mm thick layer of gravel bedding, cap it with a layer of geotextile, then place another 200 mm of sand and gravel, level with yellow sand and compact with a roller; second, lay geotextile loaded with bags of sand and gravel, fill the gaps with slag, cap the geotextile bags with 100 mm of gravel, and compact with a roller.
Repeat the sequence of laying geotextile and then compacted gravel until the design thickness of the cushion is reached; here the cushion is 1 m thick in total, with 4 layers of geotextile and two layers of sand bags. This method is fast, needs only simple machinery and little investment; after years of use the reinforcement effect has proved good, and both the building owner and the construction units are satisfied.

3 Experience

To achieve the intended reinforcement effect, reinforced-earth construction technique must be followed and quality strictly controlled during construction. First, the geotextile should be given an initial pre-stress, and its ends should be reliably anchored so the tensile strength of the geotextile can develop; the firmer the anchorage, the higher the capacity and the more uniform the stress distribution in the foundation. The fixed length at the geotextile edges is ensured by folding the laid ends; the folded ends are wrapped around sand to increase bond strength and guarantee that the fabric will not be pulled out during service. Second, the construction process has a significant effect on the reinforcement result: construction should put the geotextile in tension as soon as possible, since its tensile strength develops only with deformation, so no creases may occur in the geotextile during construction, and the fabric should be tensioned and leveled as much as possible. For the geotextile to develop enough strain early under load, work proceeds in this order: ① lay the geotextile; ② tension and level it at both ends, fold both ends over gravel and fill sand at both ends; ③ fill sand in the center; ④ raise the sand at both ends; ⑤ finally, fill sand in the center.
This construction method lets the geotextile, laid in a corrugated form, be stretched as soon as possible and thus act early under load. Third, in constructing the geotextile-reinforced cushion, flat-laid geotextile layers should alternate with geotextile gravel-bag layers, combining the advantages of the bag cushion, namely good integrity, flexural rigidity and load dispersion, with the overall continuity of the flat bedding layers.

4 Conclusion

Reinforcing soft soil with a geotextile cushion is an effective, economical, safe, reliable and simple method, but the literature describes it only qualitatively and largely from experience; rigorous theoretical formulas and adequate, reliable test data are still lacking, and these remain to be explored by theoreticians and practicing engineers.

土工织物加筋垫层加固软土地基

1. 引言

土工织物又称土工聚合物,它具有高抗拉强度,耐久性、耐腐蚀性,质地柔韧,能与砂土很好地结合,组合成加筋土复合地基,有效地提高土的抗剪强度、抗拉性能,增强土体的整体性和连续性。
Eight Great Tools for Translating English Literature
Whether you are doing research or writing an SCI paper, you begin by reading a large volume of literature: for one project you may need to consult at least 600 papers, skim 300, read 100 closely and study 50 in depth. Faced with stacks of papers, the language barrier often leaves you not knowing where to start, so here are several commonly used translation tools.

1. Google Chrome's built-in translation. Pros: a clean interface, easy to use, available the moment you open the browser, instant switching among many languages; all you need is a network connection. Cons: the functionality is basic, the resulting layout can be messy, and the interface is not especially attractive.

2. SCI Translate 9.0. Currently available in a standard 9.0 edition and a VIP edition; the VIP edition has Google's AI cloud translation engine built in, with very high accuracy and no ads.

3. Linggle. A tool for English grammar and sentence writing that gives learners more accurate suggestions for English composition.

4. NetSpeak. A free online tool for translating words, phrases and sentences. Its strength is searching and comparing English vocabulary, short sentences, grammar and word senses online; it can also list a term's inflected forms and analyze usage frequency and context, making it comparable to Google Translate.

5. CNKI Translation Assistant. A professional academic translation tool developed by CNKI (China National Knowledge Infrastructure). It gathers a large number of common words, technical terms, idioms and bilingual example sentences mined from the CNKI databases, forming a massive online Chinese-English dictionary and bilingual parallel corpus.

6. Lingoes. A simple, easy-to-use dictionary and text-translation program supporting dictionary lookup and full-text translation in more than 80 languages, plus screen word capture, selection translation, example-sentence search, web definitions and natural-voice reading.

7. Youdao Dictionary. A gem, especially strong at word lookup, selection translation and screen capture. Its lexicon has supplementary packs for every technical field, so it can instantly translate all kinds of specialized English words, from complex organic compounds to obscure animal names: just point at whatever you don't know.

8. Copy Translator. Best suited to on-the-fly translation. It has several engines built in: Google, Baidu, Youdao, Sogou, Caiyun and Tencent. You can switch among them at will, so one of them will suit you.
English Literature Translation Format
An English-literature translation format usually covers the following points:

1. Document information. Before translating, give the basic information of the source, including author, title, journal/book title and publication date, usually arranged as:

[Author surname], [Author given name]. [Article title]. [Journal/Book title]. [Publication date].

2. Translator credit. If you translated the document yourself, say so, for example by appending "Translated by: [translator's name]" or "Translation by: [translator's name]" after the document information.

3. Paragraph translation. Translate the body paragraph by paragraph, separating paragraphs with indentation or blank lines; use line breaks or paragraph marks to indicate new paragraphs.

4. Quoted material. If you need to quote the original text during translation, mark the quotation with quotation marks and give the source at the end of the quotation; paragraph marks or citation markers may be used for quoted content.

5. Notes. If you need to add notes or supplementary explanations during translation, use parentheses or footnotes. Place each note directly after the content it annotates, separated by suitable punctuation.

6. Headings. If the document contains titles or subtitles, translate them into the corresponding Chinese, and mark the translated headings in bold, italics, underline or a similar style so readers can distinguish them.

7. Figures. If the document contains figures or pictures, they usually need translating too; add the corresponding Chinese caption below or beside each figure for the reader's convenience.

Overall, an English-literature translation format should be clear and convey the information of the original accurately. Pay attention to fidelity to the source content, and to the accuracy and fluency of the translation's grammar, vocabulary and structure.
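As a small sketch of items 1 and 2 above, here is a helper (the function name and example source are illustrative, not from any citation library) that assembles the document-information header in the format just described:

```python
def format_reference(surname, given_name, title, source, date, translator=None):
    """Build the header described in item 1:
    [Author surname], [Author given name]. [Title]. [Journal/Book]. [Date].
    Optionally append the translator credit from item 2."""
    header = f"{surname}, {given_name}. {title}. {source}. {date}."
    if translator:
        header += f"\nTranslated by: {translator}"
    return header

# Example using the Le Guin citation quoted earlier (the source name is a placeholder):
print(format_reference("Le Guin", "Ursula K.",
                       "Dancing at the Edge of the World: Thoughts on Words, Women, Places",
                       "Example Press", "December 18, 2002",
                       translator="译者姓名"))
```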
English Literature Translation
Foreign Literature Translation (Original Text)

Catalytic wet peroxide oxidation of azo dye (Congo red) using modified Y zeolite as catalyst

Abstract

The present study explores the degradation of azo dye (Congo red) by catalytic wet peroxide oxidation using Fe-exchanged commercial Y zeolite as a catalyst. The effects of various operating parameters like temperature, initial pH, hydrogen peroxide concentration and catalyst loading on the removal of dye, color and COD from an aqueous solution were studied at atmospheric pressure. The percent removals of dye, color and COD at optimum pH0 7 and 90 °C, using 0.6 ml H2O2/350 ml solution and 1 g/l catalyst, were 97% (in 4 h), 100% (in 45 min) and 58% (in 4 h), respectively. The % dye removal has been found to be less than the % color removal at all conditions; e.g. dye removal in 45 min at the above conditions was 82%, whereas the color removal was 100%. The results indicate that the Fe-exchanged Y zeolite is a promising catalyst for dye removal. The Fe-exchanged catalyst is characterized using XRD, SEM/EDAX, a surface area analyzer and FTIR. Though the dye, color and COD removals were maximum at pH0 2, the leaching of Fe from the catalyst was greater in the acidic pH range, so pH0 7 was taken as the operating pH, giving removals almost comparable to pH0 2 with no leaching of Fe ions.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

Reactive azo dyes from textile and dyeing industries pose a grave environmental problem. An estimate shows that textiles account for 14% of India's industrial production and around 27% of its export earnings [1]. Production during 2006 registered a growth of about 3.5% at 29,500 tonnes, and the textile industry accounts for the largest consumption of dyestuffs, at nearly 80% [2]. The waste containing these azo dyes is non-degradable. The process of dyeing is a combination of bleaching and coloring, which generates huge quantities of wastewater causing environmental problems.
The effluents from these industries consist of large quantities of sodium, chloride, sulphate, hardness, carcinogenic dye ingredients and total dissolved solids, with very high BOD and COD values over 1500 mg/l and over 5000 mg/l, respectively [3]. Various methods have been used for dye removal, like adsorption, coagulation, electrocoagulation, Fenton's reagent and combinations of these processes. Though these treatment processes are efficient in dye removal, they generate adsorbed waste/sludge, etc., which further causes secondary pollution. In wet oxidation the sludge is disposed of to a great extent by oxidizing the organic pollutant. Catalytic wet oxidation methods (CWAO and CWPO) are gaining popularity. The CWPO process using H2O2, in particular, has advantages like better oxidation ability than using oxygen, as the former is carried out at lower pressure (atmospheric pressure). WAO usually acts under high temperatures (200–325 °C) and pressures (50–150 bar). A comparable oxidation efficiency is obtained at a lower temperature of 100–120 °C when using hydrogen peroxide as the oxidizing agent instead of oxygen [4]. WAO is capital intensive, whereas WPO needs limited capital but generates slightly higher running costs [4]. Rivas et al. [5] showed that the addition of H2O2 (as a source of free radicals) enhanced wet air oxidation of phenol, a highly non-degradable substance, and found that the combined addition of H2O2 and a bivalent metal (i.e. Cu, Co or Mn) enhanced the rate of phenol removal. Various oxidation catalysts have been studied for the removal of different compounds like phenol, benzoic acid, dyes, etc. by the CWPO process. Catalysts like Fe2O3/CeO2 and WO3/CeO2 in the removal of phenolic solution, (Al–Fe) pillared clay named FAZA in the removal of 4-hydroxybenzoic acid, and mixed (Al–Fe) pillared clays in the removal of organic compounds have been used [6–8]. Removal of dyes by the CWPO process is gaining importance in recent times with a large number of catalysts.
Kim and Lee [9] used Cu/Al2O3 and copper plate in the treatment of dye house effluents. Liu and Sun [10] removed acid orange 52, acid orange 7 and reactive black 5 from dye wastewater using CeO2-doped Fe2O3/γ-Al2O3. Kim and Lee [11] reported the treatment of reactive dye solutions using Al–Cu pillared clays as catalyst. Among these catalysts, modified zeolites are preferred for improved efficiency, lower by-product formation and less severe experimental conditions (temperatures and pressures). The improved efficiency of the catalyst is ascribed to its structure and large surface area, with the ability to form complex compounds. Zeolites can be ion exchanged using transition metal ions like Fe, Cu and Mn, and others like Ca, Ba, etc. Zeolites are negatively charged because the substitution of Si(IV) by Al(III) in the tetrahedra accounts for a negative charge of the structure, and hence the Si/Al ratio determines properties of zeolites like ion exchange capacity [12]. These metal ions neutralize the negative charge on zeolites, and their position, size and number determine the properties of the zeolite. These metal ions are fixed to the rigid zeolite framework, which prevents leaching and precipitation in various reactions [13–21]. In this work, catalytic wet peroxide oxidation of Congo red azo dye using Fe-exchanged Y zeolite is presented. The effects of variables like temperature, initial pH, peroxide concentration and catalyst loading on catalytic wet peroxide oxidation were examined and the optimum conditions evaluated.

2. Materials and methods

2.1. Chemicals

Hydrogen peroxide (30%, analytical grade), manganese dioxide, sodium hydroxide pellets (AR) and hydrochloric acid were obtained from RFCL Limited (Mumbai), India. Congo red was obtained from Loba Chemie Pvt. Ltd. (Mumbai), India. Commercial Na–Y zeolite was obtained from Sud Chemie Pvt. Ltd. (Baroda), India.
The commercial catalyst was iron exchanged with excess 1 M Fe(NO3)3 at 80 °C for 6 h. The process was repeated three times, and the sample was thoroughly washed with distilled water and dried in an oven in air at 60 °C for 10–12 h. The amount of iron exchanged was 1.53 wt%, estimated by A.A.S.

2.2. Apparatus and procedure

The experimental studies were carried out in a 0.5 l three-necked glass reactor equipped with a magnetic stirrer with heater and a total reflux (Fig. 13). Water containing Congo red dye was transferred to the three-necked glass reactor. Thereafter, the catalyst was added to the solution. The temperature of the reaction mixture was raised using the heater to the desired value and maintained by a P.I.D. temperature controller fitted in one of the necks through a thermocouple. Raising the temperature of the reaction mixture from ambient to 90 °C took about 30 min. The total reflux prevents any loss of vapor, and the magnetic stirrer agitates the mixture. Hydrogen peroxide was added, the runs were conducted at 90 °C, and samples were taken at periodic intervals. The samples after collection were raised to pH 11 by adding 0.1 N NaOH (so that no further reaction takes place), and the residual hydrogen peroxide was removed by adding MnO2, which catalyzed the decomposition of peroxide to water and oxygen. The samples were allowed to settle overnight or for one day (or centrifuged) and filtered. The supernatant was tested for color and COD. After the completion of the run, the mixture was allowed to cool and settle overnight.

2.3. Characterization

The structure of the heterogeneous catalyst was determined by X-ray diffractometer (Bruker AXS, Diffraktometer D8, Germany). The catalyst structure was confirmed using Cu Kα as a source and Ni as a filter. The goniometer speed was kept at 1 cm/min and the chart speed was 1 cm/min. The range of scanning angle (2θ) was kept at 3–60°. The intensity peaks indicate the values of 2θ where Bragg's law is applicable.
The formation of compounds was tested by comparing the XRD pattern using JCPDS files (1971). Imaging and composition analysis of the catalyst were done by SEM/EDAX (QUANTA 200 FEG). Scanning of the zeolite samples was taken at various magnifications and voltages to account for the crystal structure and size. From EDAX, the composition of the elements in weight percentage and atomic percentage was obtained, along with the spectra for overall compositions and particular local-area compositions. The BET surface area of the samples was analyzed by a Micromeritics CHEMISORB 2720. The FTIR spectra of the catalyst were recorded on an FTIR spectrometer (Thermo Nicolet, USA; software: NEXUS) in the 4000–480 cm−1 wave number range using KBr pellets. The internal tetrahedra and external linkages of the zeolites formed are identified and confirmed by FTIR. The IR spectral data in Table 2 are taken from the literature [22].

2.4. Analysis

The amount of dye present in the solution was analyzed by a direct-reading TVS 25 (A) visible spectrophotometer. The visible-range absorbance at the characteristic wavelength of the sample, 497 nm, was recorded to follow the progress of decolorization during wet peroxide oxidation. The COD of the dye solution was estimated by the Standard Dichromate Closed Reflux Method (APHA-1989) using a COD analyzer (Aqualytic, Germany). The color in Pt–Co units was estimated using a color meter (Hanna HI93727, Hanna Instruments, Singapore) at 470 nm, and the pH was measured using a Thermo Orion (USA) pH meter. The treated dye solutions were centrifuged (Model R24, Remi Instruments Pvt. Ltd., Mumbai, India) to obtain supernatant free of solid MnO2. A.A.S. (Avanta GBC, Australia) was used to find the amount of iron exchanged and leached.

3. Results and discussion

Due to the iron present after the exchange process, the Y peaks diminished along with the rise in Fe peaks.
A similar phenomenon has also been observed by Yee and Yaacob [23], who obtained zeolite iron oxide by adding NaOH and H2O2 (dropwise) at 60 °C to Na–Y zeolite. The XRD pattern (Fig. 2) showed diminishing zeolite peaks along with the evolution of peaks corresponding to γ-Fe2O3 with increasing NaOH concentration. The IR assignments from FTIR (Fig. 3) remain satisfied even after iron exchange. The EDAX data (Table 1) clearly show an increase in Fe concentration after ion exchange of the Y zeolite. The BET surface area (Table 1) has been found to decrease from 433 to 423 m2/g after Fe exchange. The SEM image is shown in Fig. 1. Table 2 presents FTIR specifications of zeolites (common to all zeolites). The effects of temperature, initial pH, hydrogen peroxide concentration and catalyst loading on the catalytic wet peroxide oxidation of the azo dye Congo red were investigated in detail.

Fig. 1. SEM image of Fe-exchanged Y zeolite.
Fig. 2. XRD of commercial and Fe-exchanged commercial Y zeolite.
BET surface area (commercial Na–Y): 433.4 m2/g.
BET surface area (Fe-exchanged commercial Na–Y): 423 m2/g.
Table 2. Zeolite IR assignments (common for all zeolites) from FTIR.

3.1. Effect of temperature on dye, color and COD removal

The temperatures during the experiments were varied from 50 °C to 100 °C. A maximum dye conversion of 99.1% was observed at 100 °C in 4 h (and 97% at 90 °C). The dye removals at 80 °C, 70 °C, 60 °C and 50 °C at 4 h are 56%, 52%, 42% and 30%, respectively. Fig. 4 shows that at a particular temperature, the dye concentration gradually decreases with time. The initial red color of the dye solution turned brown in due course, and finally the brown color disappeared into a colorless solution. Dye concentration decreases at faster rates with temperature for the initial 30 min, and thereafter it decreases from 1 h to 2 h.
The initial concentrations of dye did not change after a brief contact period of the dye solution with the Fe-exchanged zeolite catalyst (before CWPO), confirming that there is negligible adsorption of the dye by the catalyst.

Fig. 3. FTIR of Fe-exchanged Y zeolite.
Fig. 4. % dye removal as function of temperature.
Fig. 5. % color removal as function of temperature.
Fig. 6. %COD removal as function of temperature.

Fig. 5 shows the results obtained for color removal as a function of time and temperature. The maximum color removal (100%) is obtained at 100 °C in 30 min and also at 90 °C in 45 min. At a particular temperature, the color continuously decreases with time at a faster rate in the first few minutes until a certain point (t = 45 min) and then remains almost unchanged. At 50 °C, the color removal is very low, whereas at 60 °C, there is a sudden shift towards its greater removal. The color removal is much higher at higher temperatures (70–100 °C). Fig. 6 depicts the results obtained for %COD removal as a function of time and temperature. A maximum COD removal of 66% was obtained at 100 °C (at 4 h), followed by 58% at 90 °C (at 4 h). Until 60 °C, the rate of COD removal is less, and during 70–100 °C, the rate is much faster.

3.2. Effect of initial pH on dye, color and COD removal

The influence of initial pH on dye (Congo red) removal was studied at different pH (pH0 2, 4, 7, 8, 9 and 11) without any adjustment of pH during the experiments. A maximum conversion of 99% was obtained at pH0 2, followed by 97% at pH0 7. The dye removals at pH0 4, 8, 9 and 11 were 94%, 29%, 5% and 0.6%, respectively. All the runs were conducted for 4 h duration. The color of the solution is violet blue at pH0 2 (a colloidal solution) and greenish blue at pH0 4 (colloidal solution). In the neutral and basic pH0 (7, 8, 9 and 11) range, the color of the solution did not change during treatment and was the same as the original solution, i.e. red. Fe cations can leach out from the zeolite structure into the solution, causing secondary pollution. Leaching of Fe cations out of zeolites depends strongly on the pH of the solution. The leaching of iron ions was enhanced at low pH values [24,25]. In order to determine dissolved Fe concentration, final pH values of the solutions were analyzed by A.A.S. At initial pH0 2 and 4, Fe detected in the solution was 7.8 ppm and 3.9 ppm, respectively. At pH0 7 and in the alkaline range, there was almost no leaching. pH0 7, therefore, was chosen as the optimum pH for further experiments. The final pH values pHf after the reaction corresponding to pH0 2, 4, 6, 8, 9 and 11 were 2.1, 4.2, 7.2, 7.7 and 8.7, respectively. This shows that pHf tends toward neutral for all starting pH values.

Fig. 7. % color removal as function of pH0.
Fig. 8. %COD removal as function of pH0.
Fig. 9. % color removal as function of peroxide concentration.
Fig. 10. %COD removal as function of peroxide concentration.
Fig. 11. % color removal as function of catalyst loading.
Fig. 12. %COD removal as function of catalyst loading.

Fig. 7 presents the results obtained for color removal as a function of time and pH0. A maximum color removal of 100% was obtained at pH0 2 (in 10 min) and also at pH0 7 (in 45 min). The color removal at a particular pH0 decreases at a faster rate initially (0–1 h) and thereafter at a slower rate. The lowest removal was observed at pH0 11, with almost no removal. The results obtained for COD removal as a function of time and pH0 are shown in Fig. 8. A maximum COD removal of 69% was obtained at pH0 2 in 4 h, followed by 63% at pH0 4 and 58% at pH0 7 in 4 h. Fig. 8 shows the maximum decrease in COD value in the initial 30 min at all pH0; the decrease in COD is not appreciable thereafter. The COD removal is more in the acidic range with a maximum removal of 69%, moderate in the neutral region and least in the basic region.

3.3.
Effect of peroxide concentration on dye, color and COD removalThe influence of H2O2 concentration on dye removal was investigated at different concentrations of hydrogen peroxide (in the range 0–6 ml). A maximum removal of 99.02% was obtained at H2O2 concentration of 3 ml per 350 ml of solution, followed by 98.3% at 1ml and 97% at 0.6 ml. The dye removal at H2O2concentrations of 6 ml,0.3 ml and 0 ml (and at 4 h) were 94%, 82% and 8%, respectively. The dye removal rate at 90◦C temperature is gradual at all conc entrations of peroxide. At peroxide concentration of 0 ml, there is very little removal of dye, hardly 8%. Hence, it can be inferred that catalytic thermolysis (a process of effluent treatment by heating the effluent with/without catalyst) is not active and cannot be applied for dye removal.At the beginning of the reaction, the OH•radicals which are produced additionally when peroxide concentration is increased,speeds up the azo dye degradation. After a particular peroxide concentration, on further increase of the peroxide, the dye removal isFig. 13. Schematic diagram of the reactor.not increased. This may be because of the presence of excess peroxide concentration, hydroperoxyl radicals (HO2•) are produced from hydroxyl radicals that are already formed. The hydroperoxyl radicals do not contribute to the oxidative degradation of the organic substrate and are much less reactive. The degradation of the organic substrate occurs only by reaction with HO•[26] .The % color removal at a particular peroxide concentration increases at a faster rate in the initial 45 min and then at slower rates afterwards (Fig. 9). As H2O2 concentration increases, the rate of removal is much faster, reaching 100% in 45 minusing 6 ml H2O2 per 350 ml solution, whereas it is 100% in 1 h for both 0.3 ml and3ml.Fig. 10 shows the results obtained for COD removal as a function of time and H2O2 concentration. The maximum COD removal, 63% is obtained for H2O2 conc. 
3 ml at 90◦C, pH0 7 and 2 h duration.3.4. Effect of catalyst loading on dye, color and COD removalThe influence of catalyst concentration on dye removal was investigated at different concentrations (in the range 0.5–1.5 g/l). A maximum dye removal of 98.6% was observed at 1.5 g/l followed by 98.3% at 1 g/l and 87.3% at 0.5 g/l in 4 h duration. The % dye removal without catalyst was very low with only 36% dye removal in 4 h. By comparing the results for the dye removal without catalyst and1.5 g/l catalyst, the removal for 1.5 g/l is approximately three times to that of without catalyst. The rate of removal is also more for higher concentrations of catalyst and increases with it.Fig. 11 shows the results obtained for color removal as a function of time and catalyst concentration. The maximum color removal of 100% was obtained using 1.5 g/l catalyst conc. in 1.5 h and also using 1 g/l catalyst in 3 h.Fig. 12 presents the results obtained for %COD removal as a function of time and catalyst concentration. A maximum COD removal of 58% was obtained at catalyst conc. 1 g/l, 51.8% at 1.5 g/l and 50.5% at 0.5 g/l in 4 h. Without catalyst, the COD removal was only 35%.4. ConclusionsThe % removals of dye, color and COD by catalytic wet peroxide oxidation obtained at 100◦C, 4 h duration using 0.6 ml H2O2/350 ml solution, 1 g/l Fe–Y catalyst and pH0 7 were 99.1%, 100% (30 min)and 66%, respectively. As at 100◦C the solution has tendency to vaporize during the operation, 90◦C was taken as operating temperature. The corresponding % removals at 90◦C were 97% dy e, 100%color (in 45 min) and 58% COD. Acidic range gave higher % removals in comparison to neutral and alkaline range. At pH0 2, the dye, color and COD removals of 99%,100% (in 10 min) and 69% were observed after 4 h duration. As at pH0 2, the leaching of Fe ions from Y zeolite catalyst is predominant,pH0 7 was taken as operating pH. Fe concentration of 7.8 ppm was observed in the solution at pH0 2. 
The values of the removals, however, are comparable to those at pH0 2, with dye removal of 97%, color removal of 100% (in 45 min) and COD removal of 58% in 4 h. The H2O2 concentration was found to be optimum at 3 ml/350 ml solution, giving dye, color and COD removals of 99%, 100% (in 1 h) and 63%, respectively. The study on the effect of catalyst loading revealed 1.5 g/l as the best among the catalyst concentrations studied; the results with 1 g/l and 1.5 g/l catalyst were almost comparable.

Translated text
Catalytic Wet Peroxide Oxidation of an Azo Dye (Congo Red) Using Modified Y Zeolite as Catalyst
Abstract: This study mainly investigates the degradation of an azo dye (Congo red) by catalytic wet peroxide oxidation using iron supported on modified Y zeolite as the catalyst.
English References and Translation
(June 2015)
Translation of English Reference Material
Title: Development of Multi-Layer Fabric on a Flat Knitting Machine
****: ***
School: College of Light Industry and Textiles
Department: Department of Fashion Design and Engineering
Major: Fashion Design and Engineering
Class: Fashion Design and Engineering 11-1
Supervisor: Qiu Li, Lecturer
School code: 10128
Student ID: ************

Development of Multi-Layer Fabric on a Flat Knitting Machine

Abstract
The loop transfer technique was used to develop a splittable multi-layer knit fabric on a computerized multi-gauge flat knitting machine. The fabric consists of three layers: inner, single jersey; middle, 1X1 purl; and outer, single jersey. Multi-layer knit fabric samples were produced by varying the loop length, namely CCC-1, CCC-2 and CCC-3. These multi-layer fabrics were knitted using 24s Ne cotton with combined yarn feeds of 3, 4 and 4 in the feeders, respectively. The influence of loop length on wpc, cpc and tightness factor was studied using linear regression, and the water vapor and air permeability properties of the produced fabrics were studied using ANOVA. Changing the raw material in the three individual layers could be useful for producing fabric for functional, technical and industrial applications.

Keywords: multi-layer fabric, loop length, loop transfer, permeability properties

1. INTRODUCTION
Computerized flat knitting machines are capable of manufacturing engineered fabric as two-dimensional, three-dimensional, bi-layer and multi-layer knit fabrics. Their features include individual needle selection, the presence of holding-down sinkers, racking, transfer, and adapted feeding devices combined with CAD. Layered fabrics are more suitable for functional and technical applications than single-layer fabrics. These fabrics may be non-splittable (branching knit structure, plated fabric, spacer fabric) or splittable (bilayer, multilayer). A functional knitted structure of two different fabric layers based on different textile components (hydrophobic and hydrophilic materials) is used to produce leisure wear, sportswear and protective clothing with improved comfort.
The separation layer is polypropylene in contact with the skin, and the absorption layer is the outside layer when cotton is used for the knit fabric. Garments made of plant-branch-structured knit fabrics transport sweat from the skin to the outer layer of the fabric very quickly and make the wearer more comfortable. Qing Chen et al. reported that branching knitted fabrics made from different combinations of polyester/cotton yarns with varying linear density and various knitted structures, produced on a rib machine, improved water transport properties. The moisture comfort characteristics of plated knitted fabric were reported to be good; in those findings, a three-layer plated fabric used cotton (40s), lycra (20 D) and superfine polypropylene (55 dtex/72 f) as the raw materials in the face, middle and ground layers, respectively. The applications of multilayer fabric in wearable electronics are wide. Novel multi-functional fiber structures include a three-layered knit fabric embedded with electronics for health monitoring: the 1st layer consists of 5% spandex and 95% polypropylene; the 2nd layer is composed of metal fibers to conduct electric current to the embedded electronics + PCM; and the 3rd layer is composed of highly hydrophilic cotton. In flat knitting, two-surface (U-, V-, M-, X- and Y-shaped) and three-surface (U-face to face, U-zigzag and X-shaped) spacer fabrics were developed from hybrid glass filament and polypropylene for lightweight composite applications. H. Cebulla et al. produced three-dimensional preforms such as open cuboids and spherical shells using the needle parking method; the focus was on individual needle selection in the machine for the production of near-net-shape preforms.
Multi-layered sandwich knit fabrics with a rectangular core structure (connecting layer: rib), triangular core structure (connecting layer: interlock), honeycomb core structure (connecting layer: jersey combined with rib), triple face structure 1 (connecting layers not alternated) and triple face structure 2 (connecting layers alternated) were developed on a flat knitting machine for technical applications. In this direction, the flat knitting machine was selected to produce splittable multi-layer knit fabric with varying loop length and loop transfer techniques. The influence of loop length on wpc, cpc and tightness factor was studied for the three individual layers in the fabric. The important breathability properties of the fabric, water vapor permeability and air permeability, were studied. The production technique used for this fabric has wide applications in functional wear, technical textiles and wearable textiles.

2. MATERIALS AND METHODS
For the production of the multi-layer knit fabrics CCC-1, CCC-2 and CCC-3, cotton yarn with a linear density of 24s Ne was fed in the knit feeder. For layered fabric development, a computerized multi-gauge flat knitting machine was used with combined yarn feeds of 3, 4 and 4, respectively, as shown in Table I. Q. M. Wang and H. Hu [9] selected yarn feeds in the range of 4–10 for the production of glass fiber yarn composite reinforcement on a flat knitting machine. An intermediate between integral and fully fashioned garments was produced using the "half gauging or needle parking" technique: only alternate needles on each bed of the flat knitting machine were used for stitch formation. The remaining needles did not participate in stitch formation in the same course, but the loops formed were kept in the needle head until employed for stitch formation again, thus freeing needles to be used as temporary parking places for loop transfer.
For the production of a layered fabric and a fully fashioned garment, the loop transfer stitch is an essential part of the panel. Running-on bars were used for transferring loops either by hand or automatically from one needle to another, depending on the machine. The principle of loop transfer is shown in Figure 1.

FIGURE 1. Principle of loop transfer. (a) The delivering needle is raised by a cam in the carriage. The loop is stretched over the transfer spring. (b) The receiving needle is raised slightly from its needle bed. The receiving needle enters the transfer spring of the delivering needle and penetrates the loop that will be transferred. (c) The delivering needle retreats, leaving the loop on the receiving needle. The transfer spring opens to permit the receiving needle to move back from its closure. Finally, loop transfer is complete.

TABLE I. Machine & fabric parameters.

2.1 Fabric Development
Using STOLL M1.PLUS 5.1.034 software, the needle selection pattern was simulated, as shown in Figure 2. In Figure 3, feeders 1, 2 and 3 are used for the formation of the three-layer fabric (inner, single jersey; middle, 1X1 purl; outer, single jersey), respectively. The outer and inner layers are formed with knit stitches by selecting the alternate working needles in each bed, while the middle layer is formed by the free needles in each bed with the help of loop transfer and knit stitches.

FIGURE 2. Selection of machine & pattern parameters.
FIGURE 3. Needle diagram for the multi-layer knit fabric.

2.2 TESTING
The produced multi-layer knit fabric was given a relaxation process and the following tests were carried out. The knitted fabric properties are given in Table II, and the cross-sectional view of the fabrics is shown in Figure 4.

FIGURE 4. Cross-sectional view of the multi-layer knit fabric.

2.3 Stitch Density
The course and wale densities of the samples in the outer, middle and inner layers were measured individually along the length and width of the fabric. The average density per square centimeter was taken for the discussion.

2.4 Loop Length
In the outer, middle and inner layers of the various multi-layer fabric combinations, 20 loops in a course were unraveled and the length of yarn in cm (LT) was measured. From the LT value, the stitch length/loop length was calculated as

Stitch length/loop length in cm: L = LT/20 (1)

The average loop length (cm) was taken and reported in Table II.

2.5 Tightness Factor (K)
The tightness of the knits was characterized by the tightness factor (K). K is the ratio of the area covered by the yarns in one loop to the area occupied by the loop, and it is an indication of the relative looseness or tightness of the knitted structure. The following formula was used:

Tightness factor: K = √T / l (2)

where T = yarn linear density in tex and l = loop length of the fabric in cm. The TF of the three layers (outer, middle and inner) was calculated separately and is given in Table II.

TABLE II. Multi-layer knitted fabric parameters.

3. RESULTS AND DISCUSSION
The water vapor permeability of the multi-layer knit fabrics was analyzed and is shown in Figure 8. A linear trend is observed between water vapor permeability and loop length. As the loop length increases, there is less resistance per unit area, so the permeability of the fabric also increases. The ANOVA data show that increases in loop length yield a significant difference in the water vapor permeability of the multi-layer fabrics [F(2, 15) > Fcrit]. Regression analysis was performed between CCC-1 and CCC-2, and between CCC-2 and CCC-3, to study the influence of the number of yarn feeds. The R² value is 0.755 for both comparisons.
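The stitch-length and tightness-factor calculations in Eqs. (1) and (2) can be sketched in a few lines of Python. This is only an illustration: the unraveled yarn length below is a hypothetical reading, not a measurement from Table II, and the Ne-to-tex conversion used (tex = 590.5/Ne) is the standard one for cotton counts.

```python
import math

def loop_length_cm(unraveled_yarn_cm, loops=20):
    """Eq. (1): average loop length L = LT / 20 for 20 unraveled loops."""
    return unraveled_yarn_cm / loops

def tightness_factor(yarn_tex, loop_len_cm):
    """Eq. (2): K = sqrt(T) / l, with T in tex and l in cm."""
    return math.sqrt(yarn_tex) / loop_len_cm

# Hypothetical reading: 20 loops unravel to 12.4 cm of yarn.
L = loop_length_cm(12.4)              # 0.62 cm per loop
K = tightness_factor(590.5 / 24, L)   # 24s Ne cotton is about 24.6 tex
```

A higher K indicates a tighter structure; comparing K across the outer, middle and inner layers is exactly the per-layer comparison reported in Table II.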
The water vapor permeability of the fabric is thus highly influenced by the loop length and less so by the number of yarn feeds.

The air permeability of the multi-layer knit fabrics was also analyzed and is shown in Figure 8. The air permeability of the CCC-1, CCC-2 and CCC-3 fabrics varies linearly with loop length.

FIGURE 8. Water vapor permeability & air permeability of the fabrics.

As the loop length in the fabric increased, air permeability also increased. The single-factor ANOVA also shows a significant difference at the 5% significance level between the air permeability characteristics of multi-layer fabrics produced with various loop lengths [F(2, 15) > Fcrit], shown in Table IV. To study the influence of the combination yarn feed, regression analysis was performed between CCC-1 and CCC-2, and between CCC-2 and CCC-3; it gives R² = 0.757. The air permeability of the fabric may therefore not depend on the number of yarns fed, but is more influenced by the loop length.

4. CONCLUSIONS
On a flat knitting machine using a loop transfer technique, multi-layer fabrics were developed with varying loop lengths. The loop density and tightness factor were analyzed with respect to loop length. Based on the analysis, the following conclusions were made:

TABLE III. Permeability characteristics of multi-layer knit fabrics.
TABLE IV. ANOVA single-factor data analysis.

For multi-layer fabrics produced with the various basic structures (single jersey and 1x1 purl), the change of loop length between the layers shows no significant difference.
The wpc and cpc had an inverse relationship with the loop length in the CCC-combination multi-layer fabrics.
The combination yarn feed is an important factor affecting the tightness factor and loop lengths of the individual layers in knitted fabrics.
The water vapor and air permeability properties of the multi-layer knit fabrics were highly influenced by the change in loop length, followed by the combination yarn feed.

Translated title and abstract: Development of Multi-Layer Fabric on a Flat Knitting Machine. Abstract: The loop transfer technique was used to develop a multi-layer knitted fabric on a computerized multi-gauge flat knitting machine.
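The single-factor ANOVA used in the results above (three loop-length groups, eighteen specimens, hence F(2, 15) compared against Fcrit at the 5% level) can be sketched in plain Python. The readings below are hypothetical placeholders standing in for CCC-1/2/3, not the paper's data:

```python
def one_way_anova(*groups):
    """Single-factor ANOVA: returns (F statistic, df_between, df_within)."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical air-permeability readings, six specimens per fabric:
ccc1 = [152, 149, 155, 151, 150, 153]
ccc2 = [168, 171, 166, 170, 169, 172]
ccc3 = [185, 188, 183, 187, 186, 189]

F, df_b, df_w = one_way_anova(ccc1, ccc2, ccc3)
# With 3 groups of 6 specimens, df = (2, 15); Fcrit(2, 15) at the 5% level
# is about 3.68, so F > Fcrit indicates a significant group difference.
```

This is the same F(2, 15) > Fcrit test the paper reports in Table IV, just on made-up numbers.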
English Literature Translation
Yanching Institute of Technology (燕京理工学院)
Foreign Literature: Original Text and Translation
School: School of Mechanical and Electrical Engineering
Major: Mechanical Engineering
Student ID: 130310159
Name: Li Jianpeng
Supervisors: Wang Haipeng, Huang Zhiqiang
March 2017

Original text: Style of Materials

Materials may be grouped in several ways. Scientists often classify materials by their state: solid, liquid, or gas. They also separate them into organic (once living) and inorganic (never living) materials.

For industrial purposes, materials are divided into engineering materials and nonengineering materials. Engineering materials are those used in manufacture that become parts of products. Nonengineering materials are the chemicals, fuels, lubricants, and other materials used in the manufacturing process which do not become part of the product. Engineering materials may be further subdivided into: ① metals, ② ceramics, ③ composites, ④ polymers, etc.

Metals and Metal Alloys

Metals are elements that generally have good electrical and thermal conductivity. Many metals have high strength, high stiffness, and good ductility. Some metals, such as iron, cobalt and nickel, are magnetic. At low temperatures, some metals and intermetallic compounds become superconductors.

What is the difference between an alloy and a pure metal? Pure metals are elements which come from a particular area of the periodic table. Examples of pure metals include copper in electrical wires and aluminum in cooking foil and beverage cans. Alloys contain more than one metallic element, and their properties can be changed by changing the elements present in the alloy. Examples of metal alloys include stainless steel, which is an alloy of iron, nickel, and chromium, and gold jewelry, which usually contains an alloy of gold and nickel.

Why are metals and alloys used? Many metals and alloys have high densities and are used in applications which require a high mass-to-volume ratio. Some metal alloys, such as those based on aluminum, have low densities and are used in aerospace applications for fuel economy.
Many alloys also have high fracture toughness, which means they can withstand impact and are durable.

What are some important properties of metals?

Density is defined as a material's mass divided by its volume. Most metals have relatively high densities, especially compared to polymers. Materials with high densities often contain atoms with high atomic numbers, such as gold or lead. However, some metals such as aluminum or magnesium have low densities and are used in applications that require other metallic properties but also require low weight.

Fracture toughness can be described as a material's ability to avoid fracture, especially when a flaw is introduced. Metals can generally contain nicks and dents without weakening very much, and are impact resistant. A football player counts on this when he trusts that his facemask won't shatter.

Plastic deformation is the ability to bend or deform before breaking. As engineers, we usually design materials so that they don't deform under normal conditions; you don't want your car to lean to the east after a strong west wind. Sometimes, however, we can take advantage of plastic deformation: the crumple zones in a car absorb energy by undergoing plastic deformation before they break.

The atomic bonding of metals also affects their properties. In metals, the outer valence electrons are shared among all atoms and are free to travel everywhere. Since electrons conduct heat and electricity, metals make good cooking pans and electrical wires. It is impossible to see through metals, since these valence electrons absorb any photons of light which reach the metal; no photons pass through.

Alloys are compounds consisting of more than one metal. Adding other metals can affect the density, strength, fracture toughness, plastic deformation, electrical conductivity and environmental degradation. For example, adding a small amount of iron to aluminum will make it stronger.
Also, adding some chromium to steel will slow the rusting process, but will make it more brittle.

Ceramics and Glasses

A ceramic is often broadly defined as any inorganic nonmetallic material. By this definition, ceramic materials would also include glasses; however, many materials scientists add the stipulation that "ceramic" must also be crystalline. A glass is an inorganic nonmetallic material that does not have a crystalline structure; such materials are said to be amorphous.

Properties of Ceramics and Glasses

Some of the useful properties of ceramics and glasses include high melting temperature, low density, high strength, stiffness, hardness, wear resistance, and corrosion resistance. Many ceramics are good electrical and thermal insulators. Some ceramics have special properties: some are magnetic materials, some are piezoelectric materials, and a few special ceramics are superconductors at very low temperatures. Ceramics and glasses have one major drawback: they are brittle.

Ceramics are not typically formed from the melt, because most ceramics will crack extensively (i.e. form a powder) upon cooling from the liquid state. Hence, the simple and efficient manufacturing techniques used for glass production, such as casting and blowing, which involve the molten state, cannot be used for the production of crystalline ceramics. Instead, "sintering" or "firing" is the process typically used. In sintering, ceramic powders are processed into compacted shapes and then heated to temperatures just below the melting point.
At such temperatures, the powders react internally to remove porosity and fully dense articles can be obtained.

An optical fiber contains three layers: a core made of highly pure glass with a high refractive index for the light to travel in; a middle layer of glass with a lower refractive index, known as the cladding, which protects the core glass from scratches and other surface imperfections; and an outer polymer jacket to protect the fiber from damage. In order for the core glass to have a higher refractive index than the cladding, the core glass is doped with a small, controlled amount of an impurity, or dopant, which causes light to travel more slowly but does not absorb the light. Because the refractive index of the core glass is greater than that of the cladding, light traveling in the core glass will remain in the core due to total internal reflection, as long as the light strikes the core/cladding interface at an angle greater than the critical angle. The total internal reflection phenomenon, as well as the high purity of the core glass, enables light to travel long distances with little loss of intensity.

Composites

Composites are formed from two or more types of materials. Examples include polymer/ceramic and metal/ceramic composites. Composites are used because the overall properties of the composite are superior to those of the individual components. For example, polymer/ceramic composites have a greater modulus than the polymer component, but aren't as brittle as ceramics. Two types of composites are fiber-reinforced composites and particle-reinforced composites.

Fiber-reinforced Composites

Reinforcing fibers can be made of metals, ceramics, glasses, or polymers that have been turned into graphite and are known as carbon fibers.
Fibers increase the modulus of the matrix material. The strong covalent bonds along the fiber's length give fibers a very high modulus in this direction, because to break or extend the fiber these bonds must also be broken or moved. Fibers are difficult to process into composites, making fiber-reinforced composites relatively expensive.

Fiber-reinforced composites are used in some of the most advanced, and therefore most expensive, sports equipment, such as a time-trial racing bicycle frame which consists of carbon fibers in a thermoset polymer matrix. Body parts of race cars and some automobiles are composites made of glass fibers (fiberglass) in a thermoset matrix. Fibers have a very high modulus along their axis, but a low modulus perpendicular to their axis. Fiber composite manufacturers often rotate layers of fibers to avoid directional variations in the modulus.

Particle-reinforced Composites

Particles used for reinforcing include ceramics and glasses such as small mineral particles, metal particles such as aluminum, and amorphous materials, including polymers and carbon black. Particles are used to increase the modulus of the matrix, to decrease the permeability of the matrix, and to decrease the ductility of the matrix. An example of a particle-reinforced composite is an automobile tire, which has carbon black particles in a matrix of polyisobutylene elastomeric polymer.

Polymers

A polymer has a repeating structure, usually based on a carbon backbone. The repeating structure results in large chainlike molecules. Polymers are useful because they are lightweight, corrosion resistant, easy to process at low temperatures, and generally inexpensive. Some important characteristics of polymers include their size (or molecular weight), softening and melting points, crystallinity, and structure. The mechanical properties of polymers generally include low strength and high toughness.
Their strength is often improved using reinforced composite structures.

Important Characteristics of Polymers

Size. Single polymer molecules typically have molecular weights between 10,000 and 1,000,000 g/mol; that can be more than 2,000 repeating units, depending on the polymer structure. The mechanical properties of a polymer are significantly affected by the molecular weight, with better engineering properties at higher molecular weights.

Thermal transitions. The softening point (glass transition temperature) and the melting point of a polymer determine which applications it will be suitable for. These temperatures usually set the upper limit at which a polymer can be used. For example, many industrially important polymers have glass transition temperatures near the boiling point of water (100℃, 212℉), and they are most useful for room-temperature applications. Some specially engineered polymers can withstand temperatures as high as 300℃ (572℉).

Crystallinity. Polymers can be crystalline or amorphous, but they usually have a combination of crystalline and amorphous structures (semi-crystalline).

Interchain interactions. The polymer chains can be free to slide past one another (thermoplastic), or they can be connected to each other with crosslinks (thermoset or elastomer). Thermoplastics can be reformed and recycled, while thermosets and elastomers are not reworkable.

Intrachain structure. The chemical structure of the chains also has a tremendous effect on the properties. Depending on the structure, the polymer may be hydrophilic or hydrophobic (likes or hates water), stiff or flexible, crystalline or amorphous, reactive or unreactive.

The understanding of heat treatment is embraced by the broader study of metallurgy. Metallurgy is the physics, chemistry, and engineering related to metals, from ore extraction to the final product. Heat treatment is the operation of heating and cooling a metal in its solid state to change its physical properties.
According to the procedure used, steel can be hardened to resist cutting action and abrasion, or it can be softened to permit machining. With the proper heat treatment, internal stresses may be removed, grain size reduced, toughness increased, or a hard surface produced on a ductile interior. The analysis of the steel must be known, because small percentages of certain elements, notably carbon, greatly affect the physical properties. Alloy steels owe their properties to the presence of one or more elements other than carbon, namely nickel, chromium, manganese, molybdenum, tungsten, silicon, vanadium, and copper. Because of their improved physical properties, they are used commercially in many ways not possible with carbon steels.

The following discussion applies principally to the heat treatment of ordinary commercial steels, known as plain carbon steels. In this process, the rate of cooling is the controlling factor: rapid cooling from above the critical range results in a hard structure, whereas very slow cooling produces the opposite effect.

A Simplified Iron-Carbon Diagram

If we focus only on the materials normally known as steels, a simplified diagram is often used. Those portions of the iron-carbon diagram near the delta region and those above 2% carbon content are of little importance to the engineer and are deleted. A simplified diagram, such as the one in Fig. 2.1, focuses on the eutectoid region and is quite useful in understanding the properties and processing of steel.

The key transition described in this diagram is the decomposition of single-phase austenite (γ) to the two-phase ferrite-plus-carbide structure as the temperature drops. Control of this reaction, which arises due to the drastically different carbon solubilities of austenite and ferrite, enables a wide range of properties to be achieved through heat treatment. To begin to understand these processes, consider a steel of the eutectoid composition, 0.77% carbon, being slowly cooled along line x-x' in Fig. 2.1.
At the upper temperatures, only austenite is present, the 0.77% carbon being dissolved in solid solution with the iron. When the steel cools to 727℃ (1341℉), several changes occur simultaneously. The iron wants to change from the FCC austenite structure to the BCC ferrite structure, but the ferrite can only contain 0.02% carbon in solid solution. The rejected carbon forms the carbon-rich intermetallic cementite, with composition Fe3C. In essence, the net reaction at the eutectoid is

austenite (0.77% C) → ferrite (0.02% C) + cementite (6.67% C).

Since this chemical separation of the carbon component occurs entirely in the solid state, the resulting structure is a fine mechanical mixture of ferrite and cementite. Specimens prepared by polishing and etching in a weak solution of nitric acid and alcohol reveal the lamellar structure of alternating plates that forms on slow cooling. This structure is composed of two distinct phases, but has its own set of characteristic properties and goes by the name pearlite, because of its resemblance to mother-of-pearl at low magnification.

Steels having less than the eutectoid amount of carbon (less than 0.77%) are known as hypo-eutectoid steels. Consider now the transformation of such a material, represented by cooling along line y-y' in Fig. 2.1. At high temperatures, the material is entirely austenite, but upon cooling it enters a region where the stable phases are ferrite and austenite. Tie-line and lever-law calculations show that low-carbon ferrite nucleates and grows, leaving the remaining austenite richer in carbon. At 727℃ (1341℉), the austenite is of eutectoid composition (0.77% carbon), and further cooling transforms the remaining austenite to pearlite. The resulting structure is a mixture of primary or pro-eutectoid ferrite (ferrite that formed above the eutectoid reaction) and regions of pearlite.

Hypereutectoid steels are steels that contain greater than the eutectoid amount of carbon.
When such a steel cools, as shown along z-z' in Fig. 2.1, the process is similar to the hypo-eutectoid case, except that the primary or pro-eutectoid phase is now cementite instead of ferrite. As the carbon-rich phase forms, the remaining austenite decreases in carbon content, reaching the eutectoid composition at 727℃ (1341℉). As before, any remaining austenite transforms to pearlite upon slow cooling through this temperature.

It should be remembered that the transitions described by the phase diagrams are for equilibrium conditions, which can be approximated by slow cooling; with slow heating, these transitions occur in the reverse manner. However, when alloys are cooled rapidly, entirely different results may be obtained, because sufficient time is not provided for the normal phase reactions to occur. In such cases, the phase diagram is no longer a useful tool for engineering analysis.

Hardening

Hardening is the process of heating a piece of steel to a temperature within or above its critical range and then cooling it rapidly. If the carbon content of the steel is known, the proper temperature to which the steel should be heated may be obtained by reference to the iron-iron carbide phase diagram. However, if the composition of the steel is unknown, a little preliminary experimentation may be necessary to determine the range.

Translated text: Types of Materials
Materials may be classified in several ways.
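The eutectoid decomposition described in this excerpt, austenite (0.77% C) → ferrite (0.02% C) + cementite (6.67% C), lends itself to a lever-rule calculation. A minimal sketch, using only the compositions quoted in the text (the helper names are mine, not the author's):

```python
# Weight-percent carbon values quoted in the excerpt:
C_FERRITE, C_EUTECTOID, C_CEMENTITE = 0.02, 0.77, 6.67

def pearlite_fractions(c0=C_EUTECTOID):
    """Lever rule just below 727 C: weight fractions (ferrite, cementite)."""
    f_cementite = (c0 - C_FERRITE) / (C_CEMENTITE - C_FERRITE)
    return 1.0 - f_cementite, f_cementite

def proeutectoid_ferrite(c0):
    """Hypo-eutectoid steel (0.02 < c0 < 0.77 %C): fraction of primary
    ferrite formed above the eutectoid temperature."""
    return (C_EUTECTOID - c0) / (C_EUTECTOID - C_FERRITE)

f_ferrite, f_cementite = pearlite_fractions()  # roughly 0.89 ferrite, 0.11 cementite
```

For a hypo-eutectoid steel such as 0.40% C, `proeutectoid_ferrite(0.40)` gives the share of primary ferrite, and the remainder transforms to pearlite on slow cooling through 727℃, matching the y-y' description above.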
Foreign Literature and Translation
(English references and translation) June 2016. Undergraduate thesis. Title: STATISTICAL SAMPLING METHOD, USED IN THE AUDIT. Student: Wang Xueqin. College: School of Management. Department: Accounting. Major: Financial Management. Class: Financial Management 12-2. School code: 10128. Student ID: 201210707016.

Statistics and Audit
Romanian Statistical Review nr. 5 / 2010

STATISTICAL SAMPLING METHOD, USED IN THE AUDIT - views, recommendations, findings
PhD Candidate Gabriela-Felicia UNGUREANU

Abstract
The rapid increase in the size of U.S. companies from the early twentieth century created the need for audit procedures based on the selection of a part of the total population audited, in order to obtain reliable audit evidence characterizing an entire population consisting of account balances or classes of transactions. Sampling is not used only in audit: it is used in sampling surveys, market analysis and medical research, wherever someone wants to reach a conclusion about a large body of data by examining only a part of it. The difference lies in the "population" from which the sample is selected, i.e. the set of data about which a conclusion is to be drawn. Audit sampling applies only to certain types of audit procedures.

Key words: sampling, sample risk, population, sampling unit, tests of controls, substantive procedures.

Statistical sampling
The statistical sampling committee of the American Institute of Certified Public Accountants (AICPA) issued in 1962 a special report, titled "Statistical Sampling and Independent Auditors", which allowed the use of the statistical sampling method in accordance with Generally Accepted Auditing Standards (GAAS). During 1962-1974, the AICPA published a series of papers on statistical sampling, "Auditor's Approach to Statistical Sampling", for use in the continuing professional education of accountants.
In 1981, the AICPA issued the professional standard "Audit Sampling", which provides general guidelines for both sampling methods, statistical and non-statistical.

Earlier audits included checks of all transactions in the period covered by the audited financial statements. At that time, the literature did not give particular attention to this subject. Only in 1971 did an audit procedures program printed in the "Federal Reserve Bulletin" include several references to sampling, such as selecting the "few items" of inventory. The program was developed by a special committee, which later became the AICPA, the American Institute of Certified Public Accountants.

In the first decades of the last century, auditors often applied sampling, but sample size was not related to the efficiency of the entity's internal control. In 1955, the American Institute of Accountants published a case study on extending audit sampling, summarizing an audit program developed by certified public accountants, to show why sampling is necessary to extend the audit. The study was important because it was one of the leading publications on sampling to recognize a relationship of dependency between detail testing and the reliability of internal control.

In 1964, the AICPA's Auditing Standards Board issued a report entitled "The relationship between statistical sampling and Generally Accepted Auditing Standards (GAAS)", which illustrated the relationship between accuracy and reliability in sampling and the provisions of GAAS. In 1978, the AICPA published the work of Donald M.
Roberts, "Statistical Auditing", which explains the theory underlying statistical sampling in auditing.

An auditor does not rely solely on the results of a single procedure to reach a conclusion on an account balance, class of transactions or the operational effectiveness of controls. Rather, the audit findings are based on combined evidence from several sources, as a consequence of a number of different audit procedures. When an auditor selects a sample from a population, his objective is to obtain a representative sample, i.e. a sample whose characteristics are identical with the population's characteristics. This means that the selected items are identical with those remaining outside the sample. In practice, auditors do not know for sure whether a sample is representative, even after completing the test, but they "may increase the probability that a sample is representative by accuracy of activities made related to design, sample selection and evaluation" [1]. The lack of specificity of the sample results may be caused by observation errors and sampling errors. The risks of producing these errors can be controlled.

Observation error (risk of observation) appears when the audit test does not identify existing deviations in the sample, when an inadequate audit technique is used, or through negligence of the auditor. Sampling error (sampling risk) is an inherent characteristic of the survey, resulting from the fact that only a fraction of the total population is tested. Sampling error occurs because it is possible for the auditor to reach a conclusion, based on a sample, that is different from the conclusion that would be reached if the entire population were subject to identical audit procedures.
Sampling risk can be reduced by adjusting the sample size, depending on the size and characteristics of the population, and by using an appropriate method of selection. Increasing the sample size will reduce the sampling risk; a sample comprising the whole population presents a null sampling risk.

Audit sampling is a method of testing used to gather sufficient and appropriate audit evidence for the purposes of the audit. The auditor may decide to apply audit sampling to an account balance or class of transactions. Audit sampling applies audit procedures to less than 100% of the items within an account balance or class of transactions, such that every sampling unit has a chance of being selected. The auditor is required to determine appropriate ways of selecting items for testing. Audit sampling can follow either a statistical or a non-statistical approach.

Statistical sampling is a method by which the sample is constructed so that each unit of the total population has an equal probability of being included in the sample; sample selection is random, which allows the results to be assessed on the basis of probability theory and the sampling risk to be quantified. Choosing the population appropriately means that the auditor's findings can be extended to the entire population.

Non-statistical sampling is a method of sampling in which the auditor uses professional judgment to select the elements of a sample. Since the purpose of sampling is to draw conclusions about the entire population, the auditor should select a representative sample by choosing sample units which have characteristics typical of that population. Results cannot be extrapolated to the entire population unless the selected sample is representative.

Audit tests can be applied to all elements of the population where the population is small, or to an unrepresentative sample where the auditor knows the particularities of the population to be tested and is able to identify the small number of items of interest to the audit.
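Statistical sampling as defined above gives every population unit an equal probability of inclusion. A minimal sketch (the function and variable names are illustrative, not from the text):

```python
import random

def simple_random_sample(population, sample_size, seed=None):
    """Draw a simple random sample: every unit has the same selection probability."""
    rng = random.Random(seed)  # seeded generator so the selection is reproducible
    return rng.sample(population, sample_size)

# e.g. 200 invoice identifiers, of which 25 are selected for testing
balances = [f"INV-{i:04d}" for i in range(1, 201)]
chosen = simple_random_sample(balances, 25, seed=42)
```

Because the selection mechanism is random, the sampling risk of the result can be quantified with probability theory, which is exactly what distinguishes this from judgmental selection.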
If the sample does not have characteristics similar to those of the entire population, the errors found in the tested sample cannot be extrapolated. The decision between a statistical and a non-statistical approach depends on the auditor's professional judgment in seeking sufficient appropriate audit evidence on which to base findings about the audit opinion.

Statistical sampling methods rely on random selection, in which any possible combination of elements of the population is equally likely to enter the sample. Simple random sampling is used when the population has not been stratified for the audit. Random selection involves using random numbers generated by a computer. After selecting a random starting point, the auditor finds the first random number that falls within the range of test document numbers. Only when the approach has the characteristics of statistical sampling are statistical assessments of sampling risk valid.

In another variant of probability sampling, namely systematic selection (also called mechanical random selection), elements naturally succeed one another in space or time; the auditor has a preliminary listing of the population and makes the decision on sample size. "The auditor calculates a counting step, and selects the sample elements based on the step size. The counting step is determined by dividing the volume of the population by the number of sample units desired. The advantage of systematic selection is its usability. In most cases, a systematic sample can be extracted quickly, and the method automatically arranges the numbers in successive series." [2]

Selection by probability proportional to size is a method which emphasizes those population units with higher recorded values.
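The counting-step procedure for systematic selection quoted above (step = population size divided by the desired number of sample units, then every step-th item from a random start) can be sketched as follows; the names are illustrative.

```python
import random

def systematic_sample(population, sample_size, seed=None):
    """Select every step-th element after a random start, where
    step = population size // sample size (the 'counting step')."""
    rng = random.Random(seed)
    step = len(population) // sample_size      # counting step
    start = rng.randrange(step)                # random starting point within the first step
    return [population[start + i * step] for i in range(sample_size)]

invoices = list(range(1, 1001))                    # e.g. invoice numbers 1..1000
sample = systematic_sample(invoices, 50, seed=1)   # step = 20, yielding 50 items
```

Note the caveat implicit in the text: systematic selection is only safe when the listing has no periodic pattern that coincides with the step.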
The sample is constituted so that the probability of selecting any given element of the population is proportional to the recorded value of the item.

Stratified selection is a method that emphasizes units with higher values by stratifying the population into subpopulations. Stratification gives the auditor a complete picture when the population (the data to be analyzed) is not homogeneous. In this case, the auditor stratifies the population by dividing it into distinct subpopulations which have common, pre-defined characteristics. "The objective of stratification is to reduce the variability of elements in each layer and therefore allow a reduction in sample size without a proportionate increase in the risk of sampling." [3] If population stratification is done properly, the sum of the sample sizes across the layers will be less than the sample size that would be obtained, at the same given level of sampling risk, with a sample extracted from the entire population. Audit results applied to a layer can be projected only onto the items that are part of that layer.

Some views on non-statistical sampling methods are also useful. Guided selection of the sample means selecting each element according to certain criteria determined by the auditor. The method is subjective, because the auditor intentionally selects items containing the features of interest to him.

Selection of series is done by selecting several series of successive elements. Series sampling is recommended only if a reasonable number of series is used; with just a few series there is a risk that the sample is not representative. This type of sampling can be used in addition to other samples where there is a high probability of occurrence of errors. In arbitrary selection, no items are selected preferentially by the auditor, regardless of size, source or characteristics.
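Stratification, as described above, divides a heterogeneous population into homogeneous subpopulations that are sampled separately. One common allocation rule, sketched below, is proportional allocation (an assumption for illustration; the text does not prescribe a particular rule).

```python
def proportional_allocation(strata_sizes, total_sample):
    """Allocate a total sample across strata in proportion to stratum size."""
    population = sum(strata_sizes)
    alloc = [round(total_sample * n / population) for n in strata_sizes]
    # rounding may leave a unit or two unassigned; give them to the largest stratum
    alloc[alloc.index(max(alloc))] += total_sample - sum(alloc)
    return alloc

# e.g. receivables split into three value bands of 600, 300 and 100 accounts
allocation = proportional_allocation([600, 300, 100], 50)   # -> [30, 15, 5]
```

In practice the auditor would often allocate disproportionately, sampling high-value strata more heavily, which is precisely the emphasis on higher-value units that the text describes.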
It is not a recommended method, because it is not objective. Such sampling is based on the auditor's professional judgment, which decides which items may or may not be part of the sample. Because it is not a statistical method, the standard error cannot be calculated. Although the sample structure can be constructed to reproduce the population, there is no guarantee that the sample is representative. If a feature that would be relevant in a particular situation is omitted, the sample is not representative.

Sampling applies when the auditor plans to draw conclusions about a population based on a selection. The auditor considers the audit program and determines the audit procedures to which random selection may apply. Sampling is used by auditors in testing internal control systems and in substantive testing of operations. The general objectives of tests of the control system and of substantive tests of operations are to verify the application of pre-defined control procedures and to determine whether operations contain material errors.

Tests of controls are intended to provide evidence of the operational efficiency and design of controls, or of the operation of a control system in preventing or detecting material misstatements in the financial statements. Tests of controls are necessary if the auditor plans to assess control risk for management's assertions.

Controls are generally expected to be applied similarly to all transactions covered by the records, regardless of transaction value. Therefore, if the auditor uses sampling, it is not advisable to select only high-value transactions. Samples must be chosen so as to be representative of the population.

An auditor must be aware that an entity may change a particular control during the course of the audit. If the control is replaced by another that is designed to achieve the same specific objective, the auditor must decide whether to design a sample from all transactions made during the period or just a sample of the transactions under the new control.
The appropriate decision depends on the overall objective of the audit test. Verification of the internal control system of an entity is intended to provide guidance on the identification of relevant controls and on the design of tests of controls.

Other tests: In testing the internal control system and testing operations, the audit sample is used to estimate the proportion of elements of a population containing the characteristic or attribute under analysis. This proportion is called the frequency of occurrence or deviation rate, and is equal to the ratio of the number of elements containing the specific attribute to the total number of population elements. Deviation weights in a sample are determined in order to calculate an estimate of the proportion of deviations in the total population.

Risk associated with sampling refers to a sample selection which may not be representative of the population tested. In other words, the sample itself may contain material errors or deviations. A conclusion issued on the basis of a sample may therefore differ from the conclusion that would be reached if the entire population were subject to audit.

Types of risk associated with sampling: concluding that controls are more effective than they actually are, or that there are no significant errors when they exist - which leads to an inappropriate audit opinion; or concluding that controls are less effective than they actually are, or that there are significant errors when in fact there are not - which calls for additional activities to establish that the initial conclusions were incorrect.

Attributes testing: the auditor should define the characteristics to test and the conditions that constitute a deviation. Attributes testing is performed when objective statistical projections on various characteristics of the population are required.
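The deviation rate defined above is simply the number of items showing the attribute divided by the number of items tested. A minimal sketch; the `allowance` parameter is a simplified placeholder for the sampling-risk allowance, not a statistical bound from the text.

```python
def deviation_rate(deviations, sample_size):
    """Sample deviation rate: items showing the attribute / items tested."""
    if sample_size <= 0:
        raise ValueError("sample size must be positive")
    return deviations / sample_size

def exceeds_tolerable(deviations, sample_size, tolerable_rate, allowance=0.0):
    """Crude comparison of the sample deviation rate (plus an allowance for
    sampling risk) with the tolerable deviation rate."""
    return deviation_rate(deviations, sample_size) + allowance > tolerable_rate

# 3 missing approval signatures found in a sample of 60 purchase orders
rate = deviation_rate(3, 60)   # 5% deviation rate
```

If the rate plus the allowance exceeds the tolerable rate, the assessed control risk would be revised upward, as the evaluation discussion below describes.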
The auditor may decide to select items from a population based on knowledge of the entity and its control environment, on risk analysis, and on the specific characteristics of the population to be tested.

The population is the mass of data from which the auditor wishes to generalize the findings obtained on a sample. The population will be defined in compliance with the audit objectives and must be complete and consistent, because the results of the sample can be projected only onto the population from which the sample was selected.

Sampling unit: a sampling unit may be, for example, an invoice, an entry or a line item. Each sampling unit is an element of the population. The auditor will define the sampling unit based on its compliance with the objectives of the audit tests.

Sample size: to determine the sample size, it should be considered whether sampling risk is reduced to an acceptably low level. Sample size is affected by the sampling risk that the auditor is willing to accept: the lower the risk the auditor is willing to accept, the larger the sample will be.

Error: for tests of details, the auditor should project the monetary errors found in the sample onto the population, and should take into account the effect of the projected error on the specific objective of the audit and on other audit areas. The auditor projects the total error onto the population to get a broad perspective on the size of the error, comparing it with the tolerable error. For tests of details, the tolerable error is the tolerable misstatement, and will be a value less than or equal to the materiality used by the auditor for the individual classes of transactions or balances audited. If a class of transactions or account balances has been divided into layers, the error is projected separately for each layer.
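The projection of sample errors onto the population mentioned above can be done by simple ratio extrapolation, one common method (the text does not prescribe a formula); names and figures below are illustrative.

```python
def projected_error(sample_error, sample_value, population_value):
    """Extrapolate the monetary error found in the sample to the whole
    population, in proportion to recorded values (ratio method)."""
    return sample_error * population_value / sample_value

def conclusion(sample_error, sample_value, population_value, tolerable_error):
    """Compare the projected error with the tolerable error, as the text describes."""
    projected = projected_error(sample_error, sample_value, population_value)
    return "acceptable" if projected <= tolerable_error else "possible material misstatement"

# errors of 1,200 found in a sample worth 60,000, drawn from a 900,000 population
verdict = conclusion(1_200, 60_000, 900_000, tolerable_error=20_000)
```

For a stratified population, the same projection would be carried out per layer and the layer results combined, as the next passage notes.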
Projected errors and anomalous errors for each stratum are then combined when considering the possible effect on the total classes of transactions and account balances.

Evaluation of sample results: the auditor should evaluate the sample results to determine whether the assessment of the relevant characteristics of the population is confirmed or needs to be revised.

When testing controls, an unexpectedly high sample error rate may lead to an increase in the assessed risk of material misstatement, unless additional audit evidence supporting the initial assessment is obtained. For tests of controls, an error is a deviation from the prescribed performance of control procedures. The auditor should obtain evidence about the nature and extent of any significant changes in the internal control system, including changes in staffing. If significant changes occur, the auditor should review the understanding of the internal control environment and consider testing the changed controls. Alternatively, the auditor may consider performing substantive analytical procedures or tests of details covering the audit period.

In some cases, the auditor might not need to wait until the end of the audit to form a conclusion about the operational effectiveness of a control in order to support the control risk assessment. In this case, the auditor might decide to modify the planned substantive tests accordingly.

In tests of details, an unexpectedly large amount of error in a sample may cause the auditor to believe that a class of transactions or account balances is materially misstated, in the absence of additional audit evidence showing that there are no material misstatements. When the best estimate of error is very close to the tolerable error, the auditor recognizes the risk that another sample would yield a different best estimate that could exceed the tolerable error.

Conclusions
Following the analysis of sampling methods, we conclude that all methods have advantages and disadvantages.
What is important is that the auditor choose the sampling method on the basis of professional judgment, taking into account the cost/benefit ratio. Thus, if a sampling method proves to be costly, the auditor should seek the most efficient method in view of the main and specific objectives of the audit.

The auditor should evaluate the sample results to determine whether the preliminary assessment of the relevant characteristics of the population should be confirmed or revised. If the evaluation of the sample results indicates that the assessment of the relevant characteristics of the population needs review, the auditor may: require management to investigate the identified errors and the likelihood of future errors and make the necessary adjustments; or change the nature, timing and extent of further procedures to take into account the effect on the audit report.

Selective bibliography:
[1] Law no. 672/2002, updated, on public internal audit
[2] Arens, A. and Loebbecke, J., "Audit - An Integrated Approach", 8th edition, Arc Publishing House
[3] ISA 530 - Financial Audit 2008 - International Standards on Auditing, IRECSON Publishing House, 2009
- Dictionary of Macroeconomics, Ed. C.H. Beck, Bucharest, 2008
English Literature Translation
International Trade and Income Distribution: Reconsidering the Evidence
Sébastien Jean

SUMMARY
Whether trade liberalization is associated with narrowing or widening income disparities within countries is still a matter of controversy. According to the standard factor proportions theory, openness should exert an equalizing effect in poor countries and raise income inequality in rich countries (if the skilled-to-unskilled relative wage is to be considered a good proxy for income inequality). But this prediction is not systematically borne out by the data. While increased trade openness in several East Asian economies paralleled lowered inequalities, it is well documented that Latin American countries experienced a deterioration of their income distribution following liberalization.

The publication in 1996 of a comprehensive data set on income inequality paved the way for more systematic empirical investigations than previously. However, the studies failed to deliver a convincing answer as to the link between openness and inequality. Empirical results are mixed: depending upon the sample, the econometric method or the estimation period, it is shown that openness has either no impact on inequality, or has an equalizing effect, or worsens the income distribution. In addition, the conclusions do not fit the underlying theoretical models.

This study reconsiders the evidence concerning the influence of international trade on income distribution, motivated by serious concerns about data consistency, empirical specification, as well as the theoretical framework. The theoretical model used as a background for the analysis is fairly general and mainly based on the assumption of general equilibrium under perfect competition in product and factor markets. The number of goods and factors is not specified and no assumption is made about the rest of the world.
In particular, we do not make the restrictive assumption that the impact of trade liberalization on income distribution is conditional only on factor endowments. The model shows that factor price changes are correlated with an indicator of the factor content of net export changes, relative to the country's factor endowments.

In order to derive from this model a testable relationship between foreign trade and income inequality, we then restrict the model to the case where three production factors are considered, namely two types of labor (non-educated workers and other workers), in addition to physical capital. Non-educated workers are assumed to be employed only in the non-tradable sector, because producing goods well suited for the export sector requires matching relatively high standards of quality, which call for a certain level of skill. We then show that the change in income distribution is related to the change in the factor content of net exports, relative to the country's factor endowments. This relationship, which is the base for subsequent econometric estimates, turns out to be conditional on the share of households drawing their income from non-educated labor.

The empirical implementation acknowledges country specificities in production technologies, and puts special emphasis on data consistency requirements for the inequality index. Our estimations concern the impact of changes in international trade on changes in income distribution, instead of the relationship in levels generally estimated in the literature so far. Indeed, the interesting issue is not whether countries with different degrees of openness exhibit different levels of inequality, but rather whether an increase in a country's trade openness is associated with an increase or a decrease in inequality.

Our main empirical finding is that the factor content of net export changes, expressed relative to the country's factor endowment, does have a significant impact on income distribution, and the sign of this impact
is conditional on the country's income level, or on the share of non-educated people in the population over 15. An increase in the labor content (relative to capital) of exports thus decreases inequalities in rich enough countries (or countries with a high enough education level), but increases inequalities in the poorest countries. Indeed, such increased exports are likely to be reflected in higher wages, but this only concerns workers endowed with the basic education required to be employed in the export-oriented manufacturing sector. While such workers, with at least basic education, represent the bulk of low-income households in many countries, this is not the case in countries where education is scarce.

Moreover, the resulting impact of international integration on inequality depends on the sign and magnitude of the factor content of net export changes. On average, the factor content of trade increased in poor countries (i.e. those with a PPP GDP per capita approximately below $5,000) and decreased in middle-income and rich countries, thus resulting in both cases in a widening of income inequality. When middle- and high-income countries are considered separately (by arbitrarily setting $15,000 PPP GDP per capita as the cut-off point between these two categories), the impact of trade is still found to increase inequalities in rich countries, but the reverse is true for middle-income countries.

It is worth emphasizing that our results are to be interpreted with caution, as our analysis does not look for a systematic impact of trade liberalization on income distribution. The change in the factor content of trade is related not only to trade policies but also to technology or consumer taste changes. The interpretation is better suited the other way round: trade liberalization is likely to affect the factor content of net exports, and this is the indicator to look at in order to gain valuable insights about the induced impact on income distribution.
This shows that the way trade liberalization is handled may have significant repercussions for income distribution. While inequalities are better tackled with direct policy instruments, in particular fiscal redistribution, in poor countries the implementation of such policies is far from an easy task. Our results also recall the vital role of basic education, which is often a necessary condition for workers to benefit, directly or indirectly, from the gains associated with new trade opportunities.

1. BACKGROUND AND MOTIVATION
Whether trade liberalization is associated with narrowing or widening income disparities within countries is still a matter of controversy. According to the Heckscher-Ohlin-Samuelson (HOS) theoretical framework (with two types of labor), poor countries tend to specialize in unskill-intensive goods, because they are relatively well endowed with unskilled labor. As a result, openness should exert an equalizing effect in poor countries, and raise income inequality in rich countries. But this prediction is not systematically borne out by the data. While increased trade openness in several East Asian economies paralleled lowered inequalities, it is well documented that Latin American countries experienced a widening of their income distribution following liberalization.

Evidence on the impact of trade liberalization on inequality has until recently been seriously hindered by data limitations. However, the publication in 1996 by K. Deininger and L. Squire of a comprehensive data set on income inequality paved the way for more systematic empirical investigations. Roughly speaking, studies can be divided into two categories. The first approach consists of simply evaluating whether openness reduces or strengthens inequality. The corresponding works generally do not rely explicitly on a given theoretical framework. Rather, the HOS theory is referred to in order to justify testing for different effects in developed and developing countries.
Results are mixed. Depending upon the sample, the econometric method or the estimation period, it is shown that openness has either no impact on inequality, or has an equalizing effect, or worsens the income distribution.

The second set of studies is more in line with international trade theory, in the sense that a country's relative factor endowment is set to be a determinant of the impact of trade openness on inequality. Fisher's motivation for renouncing HOS is that this theoretical approach is inconsistent with the fact that trade liberalization affects LDCs differentially.

Furthermore, two drawbacks are worth mentioning. The first has to do with the consistency of data on inequality. Due to data limitations, Gini coefficients based on different income definitions (income/expenditure, gross/net…) and different recipient units (individual/household…) are used, as in most cross-country studies on inequality. Even when some adjustment is made to improve data comparability, these differences result in serious data inconsistency, as shown by Knowles (2001) regarding the link between growth and inequality. The second drawback concerns the econometric specification adopted in Spilimbergo et al.'s work, which is expressed in levels instead of changes in inequality. Trying to explain cross-country differences in levels of inequality is a challenging task, since a number of idiosyncratic factors cannot be properly taken into account. Fiscal redistribution, labor market institutions or the distribution of factor ownership, for instance, are not well documented for most countries. As a consequence, econometric estimates are likely to be flawed by omitted variable bias. In addition, the interesting issue from a policy perspective is not whether countries with different degrees of openness exhibit different levels of inequality, but rather whether an increase in a country's trade openness is associated with an increase or a decrease in inequality.
Even from a theoretical perspective, the predictions of the HOS framework do not refer to cross-country comparisons of levels of inequality, but rather to their changes as countries open up to trade. In order to test the sensitivity of results with regard to these issues of data consistency and econometric specification, we ran the same estimation as Spilimbergo et al., introducing two changes: we specified the econometric model in changes instead of levels, and we imposed additional data consistency requirements by using only changes computed as the difference between two Gini indices based on the same income concept and the same recipient unit.

Hence, while these studies appeared promising, they failed to deliver a convincing answer as to the link between openness and inequality: in addition to the gap between results and the underlying theoretical models, robustness is challenged in both cases. This calls for an alternative approach. Our motivation for reconsidering this evidence is consequently to bring improvement in three respects: theoretical approach, data consistency and econometric specification.

As to the theoretical framework, we argue that the standard HOS model is too restrictive, in several ways. The assumption that the impact of liberalization on income distribution is conditional only on factor endowments implicitly or explicitly stems from the direct link between the factor content of trade and factor endowment, as described by the Heckscher-Ohlin-Vanek relationship. Since Trefler (1995) emphasized the "case of the missing trade", a long way has been traveled toward making clear the conditions under which Vanek's prediction is borne out by the data (see e.g. Davis and Weinstein, 2003, for a survey, and Trefler and Zhu, 2005, for a recent important contribution).
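The comparisons above rest on the Gini coefficient. As background, a standard discrete formula for it is sketched here (an illustration, not taken from the paper): with incomes sorted ascending, G = (2 Σ i·x_i) / (n Σ x_i) − (n + 1)/n.

```python
def gini(incomes):
    """Gini coefficient from individual incomes, via the sorted-income formula:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with x ascending, i = 1..n."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# The quantity of interest in the estimation is the *change* between two surveys
# that use the same income concept and recipient unit, e.g.:
delta = gini([10, 20, 30, 60]) - gini([20, 25, 30, 45])   # positive: first survey more unequal
```

The data-consistency requirement in the text is precisely that both Gini values entering such a difference be computed on the same income definition and recipient unit; otherwise the change mixes measurement conventions with real movements in inequality.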
Among these conditions are, in particular, the assumption of consumption similarity across countries, and the absence of any transaction cost (whether linked to transportation or to border protection). Since we want to use a more general framework, and in particular acknowledge the potential influence of trade policy, we do not want to make such assumptions. This is why we do not assume the HOV relationship to hold. As a consequence, we cannot rely on factor endowments alone to study the impact of foreign trade on income distribution.

Another concern with the theoretical framework is dimensionality. As already convincingly emphasized, for instance by Wood (1994), we argue that at least three production factors are required to gain valuable insights about the distributional impact of trade in developing countries. Indeed, a large part of the labor force in poor countries does not have any education, even basic, and is employed in the traditional or craft sector. It is strongly questionable whether their output corresponds to tradable goods, as far as manufacturing industries are concerned. Moreover, their mobility toward the "modern" sector is hindered by the lack of basic education. Even in an economy where the export-oriented manufacturing sector is intensive in low-skilled labor, such non-educated workers are thus unlikely to receive any direct benefit from the development of the export sector or from an increase in the price of exports. The positive impact on the relative price of unskilled labor, admittedly considered the abundant factor in developing countries, might thus be restricted, in practice, to a fraction of unskilled workers only, namely those enjoying at least basic education and likely to work in the "modern" sector. As soon as the share of non-educated labor in the labor force is large enough, the alleged positive impact of trade openness on unskilled (but somewhat educated) labor does not reduce inequalities.
On the contrary, the deterioration of the relative position of non-educated workers would increase income inequalities. Of course, such an effect is not expected to hold in more developed countries, where the share of non-educated workers is relatively small, nor in poor countries specialized only in agriculture. In order to address these different issues, we adopt a general theoretical framework in which the number of goods and factors is not specified, and in which no assumption is made about the rest of the world. In particular, no assumption is made about factor price equalization. Based mainly on the assumption of general equilibrium under perfect competition in product and factor markets, the model shows that factor price changes are correlated with an indicator of net export changes. Although this indicator can be termed a specific definition of the factor content of trade, it should be clear that it arises solely from the analysis of the link between foreign trade and relative wages. Our purpose is not to elaborate upon the validity of Vanek's prediction on the link between factor endowments and the factor content of trade. In order to derive from this model a testable relationship between foreign trade and income inequality, we then restrict the model to the case where three production factors are considered, namely two types of labor (non-educated workers and other workers), in addition to physical capital. Assuming that non-educated workers are employed only in non-tradable goods production, we show that the change in income distribution is related to the change in an indicator of the factor content of net exports, relative to the country's factor endowments. This relationship, which is the basis for subsequent econometric estimates, turns out to be conditional on the share of non-educated workers. Our model compares two equilibria of a given economy, across which technology and consumer preferences are held constant.
The nature of the shock considered is not specified explicitly, but the analysis applies to trade policy changes. As the factor content of net export changes embodies, among other things, the impact of possible trade policy changes, these trade policy changes need not be explicitly added as determinants of factor prices. The difficulty of properly measuring each country's trade policy is thus sidestepped in the empirical analysis. This means that the results should be interpreted with care: the impact of our indicator of the factor content of net export changes does not reflect the impact of trade policies alone. But our approach suggests that the impact of trade policy on income distribution can be studied through its impact on the factor content of net export changes. Our theoretical and empirical approach does not make any restrictive assumption on cross-country differences in preferences, technology or choice of technique, which have been shown to be of special relevance by recent works (Davis and Weinstein, 2003; Trefler and Zhu, 2005). The counterpart of such an approach is that it is very data demanding. In particular, we make use of a country-specific technology coefficient matrix. For countries where data on capital stock at the industry level are missing, we assume capital intensity at the sector level to be the same as in countries found to be similar in terms of capital abundance and technology in a clustering analysis. Our empirical implementation brings improvement in two other respects.
We put special emphasis on data consistency requirements for the inequality index, and we analyze the impact of international trade changes on the change in income distribution (instead of differences in levels of income inequality across countries due to differences in degrees of openness). Our main empirical finding is that the factor content of net export changes, expressed relative to the country's factor endowments, does have a significant impact on income distribution, but this impact is conditional on the country's income level or on the share of the non-educated in the population over 15. Taking into account the sign and magnitude of the factor content of net export changes, we find that on average international trade led to a widening of income inequality in both poor and rich countries, and to a reduction in middle-income countries. While for poor countries this result runs counter to the prediction of standard trade theory, it is in accordance with the theoretical model developed here. Furthermore, it is consistent with recent empirical findings obtained in slightly different contexts (Milanovic, 2002; Barro, 2000; Lundberg and Squire, 1999; see Table 1), but, contrary to these studies, it relies on a theoretical foundation explaining how trade can lead to an increase in inequality in low-income countries.

2. A MODEL OF OPENNESS AND INEQUALITY

We begin with a fairly general setup, in which the changes between two equilibria of an economy are described. The point is to relate net export changes to factor price changes. A more specific case is then considered, with three production factors. Finally, the link with income distribution is established.

3. CONCLUDING REMARKS

In this paper, we reconsider the evidence concerning the influence of international trade on income distribution, motivated by serious concerns about data consistency, empirical specification, as well as the theoretical framework.
Our approach differs substantially from those used so far in the literature. Our main empirical finding is that the factor content of net export changes, expressed relative to the country's factor endowments, does have a significant impact on income distribution, but the sign of this impact is conditional on the country's income level or on the share of the non-educated in the population over 15. The resulting impact of international trade on inequality depends on the sign and magnitude of the factor content of net export changes. On average, trade led to a widening of income inequality in both poor and rich countries but to a reduction in middle-income countries. Such results are to be interpreted with caution. Firstly, they only reflect average results, and the contribution of the factor content of net export changes can be of opposite sign in countries belonging to the same group. Secondly, the factor content of net export changes is not an indicator of liberalization, nor even of trade openness. The interpretation is better suited the other way round: trade liberalization is likely to affect the factor content of net exports, and this is the indicator to look at in order to gain valuable insights about the induced impact on income distribution. Still, this shows that the way liberalization is handled may have significant repercussions for income distribution. While inequalities are better tackled with direct policy instruments, in particular fiscal redistribution, in poor countries the implementation of such policies is far from an easy task. Our results also recall the vital role of basic education, which is often a necessary condition for workers to benefit, directly or indirectly, from the gains associated with new trade opportunities.

International Trade and Income Distribution: Reconsidering the Evidence (Sébastien Jean), translated overview: whether trade liberalization narrows or widens income gaps within countries remains a controversial question.
English Literature Translation (1)
English Literature Translation (2014), excerpt translation of a scientific article

Theft-Preventing Smart Electricity Meters

With the development of microcontroller technology, microcontrollers have been increasingly used in smart instrumentation, greatly improving instrument performance. This article describes a theft-preventing smart meter designed around an ATMEL AT89C51 microcontroller as its core. It achieves 32-channel energy measurement and cyclic display, among other functions, and also offers theft prevention, anti-creep operation, high precision, long life and low power consumption. It is a preferred meter for new residential areas and for urban grid reform.

Hardware design

(1) Signal acquisition and conversion. The measurement of electric energy is relatively complicated. The traditional approach is to sample the current and voltage separately and multiply them after A/D conversion. This approach not only places high demands on analog circuit design and on software programming, but also makes it difficult to meter multiple users. Therefore, we chose the BL0932B, an ASIC for electronic power meters, as the core of the signal acquisition and conversion circuit. A signal acquisition and conversion board designed around the BL0932B is simple, with high precision and stability, and is especially suitable for energy metering of single-phase two-wire power users. The BL0932B contains a buffer amplifier, an analog multiplier, a V/F converter, a counting circuit and a drive circuit, and can accurately measure active power in both the forward and reverse directions and accumulate power in the same direction.
The output takes two forms: a fast pulse output and a slow pulse output; the former is used for data processing by the computer, the latter to drive a pulse motor. As the signal acquisition and conversion board carries both the 220 V mains voltage and small signals several orders of magnitude lower, the printed circuit board must be designed and manufactured very scientifically and rationally. In addition, in order to protect the motherboard, the fast pulse output of the BL0932B is sent to the MCU through optoelectronic isolation.

(2) MCU control circuit. The MCU control circuit includes the analog switch array, the display and keypad circuits, data storage, the serial communication interface and the watchdog circuit.

1) Analog switch array. The theft-preventing smart meter is a centralized meter, and the MCU must detect the multi-channel pulse signals in real time. Therefore, four 8-channel analog switches (CD4051) and one 3-to-8 decoder (74LS138) are combined into an analog switch array, achieving cyclic detection of 32 pulse channels.

2) Display and keypad circuit. As a centralized smart meter, it needs to display a great deal of content: mainly the household number, electricity consumption, various status indications and error information. To this end, we designed an LED display driven by ten serial-in/parallel-out 74LS164 shift registers for static display, so that it occupies as few MCU resources as possible. In addition, 25 LED indicators are designed at the signal input side to show the metering status of 25 channels.

The meter has two function keys, "check" and "clear", connected directly to port P3 of the 89C51. Through combinations of these two keys, operations such as whole-meter clearing, single-household clearing, online checking, and locking and unlocking can easily be achieved.

3) Data storage. Because the meter needs to record a large amount of important data, in order to ensure data security we designed two data memories: a parallel data memory and a serial data memory.
The parallel data memory is a 6264, which has 8K bytes of storage space, fully meeting the requirements of the meter. In order to prevent data loss at power-down, the 6264 is allocated a 3.6 V backup battery. Backup battery switching and the chip-select signal of the 6264 are provided by the dedicated supervisor chip MAX691. The serial data memory is a 24LC65, which also has 8K bytes of storage space and is connected to the MCU through an I2C bus. Although the 89C51 microcontroller has no I2C bus interface, through software programming two lines of port P1 can simulate the I2C timing and complete read and write operations on the 24LC65. The 24LC65 is a serial E2PROM; without battery backup, its data can be safely stored for 200 years.

4) Serial communication interface. The 89C51 has a full-duplex serial interface, used in this meter as the meter-reading and communication interface. In order to achieve centralized remote meter reading, an RS485 driver chip (75LBC184) is added on top of the serial interface. In this way, the meters can be linked over the RS485 bus to a data acquisition system for communication, enabling centralized and remote meter reading.

5) Watchdog circuit. The watchdog circuit also uses the supervisor chip MAX691, which provides power-on reset, brownout detection, backup battery switching, a watchdog timer and other functions. The program that decides whether electricity should be accumulated and carries out the corresponding operations occupies few bytes of RAM, and its code is simple and fast.

(3) Data validation and multi-site storage. The data are directly related to the vital interests of electricity users and property management departments; they are the most important data, and their security and correctness must be absolutely guaranteed. Therefore, in the actual data storage, all electricity data are checked to ensure their accuracy.
The data are stored at multiple sites in both the 6264 and the 24LC65, each backing the other up, to ensure that the data are foolproof. Practice has proved that with these measures the data are no longer corrupted or lost; the effect is very obvious.

(4) Cyclic and stepping display of electricity consumption. In normal operation, the program measures the pulses and cyclically displays the consumption of each household; after the last household's consumption is shown, it calculates and displays the unit's total electricity consumption, and then starts the cycle again from the beginning. In order to facilitate on-site meter reading, a stepping display is specially designed: each press of the detection key displays one household's electricity consumption, including the integral part.

Theft-Preventing Smart Electricity Meters (translated): with the development of microcontroller technology, microcontrollers have been increasingly used in instrumentation to form smart instruments, greatly improving instrument performance.
Characterization of Production of Paclitaxel and Related Taxanes in Taxus cuspidata 'Densiformis' Suspension Cultures by LC, LC/MS, and LC/MS/MS

CHAPTER THREE: PLANT TISSUE CULTURE

Ⅰ. Potential of Plant Cell Culture for Taxane Production

Several alternative sources of paclitaxel have been identified and are currently the subjects of considerable investigation worldwide. These include the total synthesis and biosynthesis of paclitaxel, the agricultural supply of taxoids from needles of Taxus species, hemisynthesis (the attachment of a side chain to biogenetic precursors of paclitaxel such as baccatin Ⅲ or 10-deacetylbaccatin Ⅲ), fungal production, and the production of taxoids by cell and tissue culture. This review will concentrate only on the latter possibility. Plant tissue culture is one approach under investigation to provide large amounts and a stable supply of this compound exhibiting antineoplastic activity. A process to produce paclitaxel or paclitaxel-like compounds in cell culture has already been patented. The development of fast-growing cell lines capable of producing paclitaxel would not only overcome the limitations in the paclitaxel supplies presently needed for clinical use, but would also help conserve the large number of trees that would otherwise need to be harvested in order to isolate it. Currently, researchers have succeeded in achieving fast plant growth but with limited paclitaxel production, or vice versa. It is therefore the objective of researchers to find a method that promotes fast growth and at the same time produces a large amount of paclitaxel.

Ⅱ. Factors Influencing Growth and Paclitaxel Content

A. Choice of Media for Growth

Gamborg's (B5) and Murashige & Skoog's (MS) media seem to be superior for callus growth compared to White's (WP) medium. The major difference between the first two is that the MS medium contains 40 mM nitrate and 20 mM ammonium, compared to 25 mM nitrate and 2 mM ammonium in the B5 medium.
Many researchers have selected the B5 medium over the MS medium for all subsequent studies, although the two give similar results. Gamborg's B5 medium was used throughout our experiments for the initiation of callus cultures and suspension cultures, owing to successfully published results. It was supplemented with 2% sucrose, 2 g/L casein hydrolysate, 2.4 mg/L picloram, and 1.8 mg/L α-naphthalene acetic acid. Agar (8 g/L) was used for solid cultures.

B. Initiation of Callus Cultures

Previous work indicated that bark explants seem to be the most useful for establishing callus. The age of the tree did not appear to affect the ability to initiate callus when comparing material from both young and old trees grown on Gamborg's B5 medium supplemented with 1-2 mg/L of 2,4-dichlorophenoxyacetic acid. Callus cultures initiated and maintained in total darkness were generally pale yellow to light brown in color. This yielded sufficient masses of friable callus for subculture within 3-4 weeks. However, the growth rate can decline substantially following the initial subculture, resulting in very slow-growing, brown-colored clumps of callus. It has been presumed that these brown-colored exudates are phenolic in nature and can eventually lead to cell death. This common phenomenon is totally random and unpredictable. Once it has been triggered, the cells cannot be saved by placing them in fresh media. However, adding polyvinylpyrrolidone to the culture media can help keep the cells alive and growing. Our experience with callus initiation was similar to those studies. Our studies found that callus which initiated early (usually within 2 weeks) frequently did not proliferate when subcultured, turning brown and necrotic. In contrast, calli which developed from 4 weeks to 4 months after explants were first placed on initiation media could be continuously subcultured when transferred at 1-2 month intervals.
Callus survival after subsequent subculturing thus appeared to depend on when the callus had initiated. The relationship between paclitaxel concentration and callus initiation, however, has not been clarified.

C. Effect of Sugar

Sucrose is the preferred carbon source for growth in plant cell cultures, although the presence of a more rapidly metabolized sugar such as glucose favors fast growth. Other sugars such as lactose, galactose, glucose, and fructose also support cell growth to some extent. On the other hand, sugar alcohols such as mannitol and sorbitol, which are generally used to raise the osmotic potential, do not; the sugars added play a major role in the production of paclitaxel. In general, raising the initial sugar level leads to an increase in secondary metabolite production. High initial levels of sugar increase the osmotic potential, although the role of osmotic pressure in the synthesis of secondary metabolites is not clear. Kim and colleagues have shown that the highest level of paclitaxel was obtained with fructose. The optimum concentration of each sugar for paclitaxel production was found to be the same, at 6%, in all cases. Wickremesinhe and Arteca have provided additional support that fructose is the most effective for paclitaxel production. However, other combinations of sugars, such as sucrose combined with glucose, also increased paclitaxel production. The presence of extracellular invertase activity and rapid extracellular sucrose hydrolysis has been observed in many cell cultures. These reports suggest that cells secrete, or possess on their surface, excess amounts of invertase, which results in the hydrolysis of sucrose at a much faster rate. The hydrolysis of sucrose was coupled with the rapid utilization of fructose in the medium during the latter period of cell growth. This period of increased fructose availability coincided with the faster growth phase of the cells.

D. Effect of Picloram and Methyl Jasmonate

Picloram (4-amino-3,5,6-trichloropicolinic acid) increases the growth rate, while methyl jasmonate has been reported to be an effective elicitor of the production of paclitaxel and other taxanes. However, little is known about the mechanisms or pathways that stimulate these secondary metabolites. Picloram had been used by Furmanowa and co-workers and by Ketchum and Gibson, but no details on its effect on growth rates were given. Furmanowa and his colleagues observed growth of callus both in the presence and absence of light. The callus grew best in the dark, showing a 9.3-fold increase, whereas there was only a 2-4 fold increase in the presence of light. Without picloram, callus growth was 0.9-fold. Unfortunately, this auxin had no effect on taxane production, and the high callus growth rate was very unstable. Jasmonates exhibit various morphological and physiological activities when applied exogenously to plants. They induce transcriptional activation of genes involved in the formation of secondary metabolites. Methyl jasmonate was shown to stimulate paclitaxel and cephalomannine (a taxane derivative) production in callus and suspension cultures. However, taxane production was best with White's medium compared to Gamborg's B5 medium. This may be due to the reduced concentration of potassium nitrate and the lack of ammonium sulfate in White's medium.

E. Effect of Copper Sulfate and Mercuric Chloride

Metal ions have been shown to play significant roles in altering the expression of secondary metabolic pathways in plant cell culture. Secondary metabolites, such as furanoterpenes, have been produced by treatment of sweet potato root tissue with mercuric chloride. The results for copper sulfate, however, have not been reported.

F. Growth Kinetics and Paclitaxel Production

Low yields of paclitaxel may be attributed to the kinetics of taxane production, which are not fully understood.
Many reports have stated inconclusive results on the kinetics of taxane production, and more studies are needed in order to quantitate it. According to Nett-Fetto, the maximum instantaneous rate of paclitaxel production occurred at the third week; the paclitaxel level either declined or was not expected to increase upon further incubation. Paclitaxel production was very sensitive to slight variations in culture conditions. Because of this sensitivity, cell maintenance conditions, especially initial cell density, length of subculture interval, and temperature, must be kept as constant as possible. Recently, Byun and co-workers made a very detailed study of the kinetics of cell growth and taxane production. In their investigation, the highest cell weight occurred at day 7 after inoculation. Similarly, the maximum concentrations of 10-deacetylbaccatin Ⅲ and baccatin Ⅲ were detected at days 5 and 7, respectively. This result indicated that they are metabolic intermediates of paclitaxel. However, paclitaxel's maximum concentration was detected at day 22 and then gradually declined. Byun and his colleagues suggested that paclitaxel could itself be a metabolic intermediate, like 10-deacetylbaccatin Ⅲ and baccatin Ⅲ, or that paclitaxel could be decomposed owing to the cellular morphological changes or DNA degradation characteristic of cell death. Pedtchanker's group also studied the kinetics of paclitaxel production by comparing suspension cultures in shake flasks and in Wilson-type reactors, in which bubbled air provides agitation and mixing. It was concluded that these cultures of Taxus cuspidata produced high levels of paclitaxel within three weeks (1.1 mg/L per day). Both the shake-flask and Wilson-type reactor cultures produced similar paclitaxel content. However, the Wilson-type reactor showed a more rapid uptake of nutrients (i.e. sugars, phosphate, calcium, and nitrate).
This was probably due to the presence of the growth ring in the Wilson reactor. Therefore, the cultures in the Wilson reactor grew to only 135 mg/L, while the shake-flask cultures grew to 310 mg/L in three weeks. In retrospect, strictly controlled culture conditions are essential to consistent production and yield. Slight alterations in media formulations can have significant effects upon the physiology of cells, thereby affecting growth and product formation. All of the manipulations that affect the growth and production of plant cells must be carefully integrated and controlled in order to maintain cell viability and stability.

Characterization of Production of Paclitaxel and Related Taxanes in Taxus cuspidata 'Densiformis' Suspension Cultures by LC, LC/MS, and LC/MS/MS (translated). Chapter Three: Plant Tissue Culture. Ⅰ. Potential of plant cell culture for taxane production: several alternative sources of paclitaxel have been identified and are currently the subject of considerable investigation worldwide.