外文参考文献翻译-中文
外文参考文献翻译-中文

外⽂参考⽂献翻译-中⽂基于4G LTE技术的⾼速铁路移动通信系统KS Solanki教授,Kratika ChouhanUjjain⼯程学院,印度Madhya Pradesh的Ujjain摘要:随着时间发展,⾼速铁路(HSR)要求可靠的,安全的列车运⾏和乘客通信。
为了实现这个⽬标,HSR的系统需要更⾼的带宽和更短的响应时间,⽽且HSR的旧技术需要进⾏发展,开发新技术,改进现有的架构和控制成本。
为了满⾜这⼀要求,HSR采⽤了GSM的演进GSM-R技术,但它并不能满⾜客户的需求。
因此采⽤了新技术LTE-R,它提供了更⾼的带宽,并且在⾼速下提供了更⾼的客户满意度。
本⽂介绍了LTE-R,给出GSM-R与LTE-R之间的⽐较结果,并描述了在⾼速下哪种铁路移动通信系统更好。
关键词:⾼速铁路,LTE,GSM,通信和信令系统⼀介绍⾼速铁路需要提⾼对移动通信系统的要求。
随着这种改进,其⽹络架构和硬件设备必须适应⾼达500公⾥/⼩时的列车速度。
HSR还需要快速切换功能。
因此,为了解决这些问题,HSR 需要⼀种名为LTE-R的新技术,基于LTE-R的HSR提供⾼数据传输速率,更⾼带宽和低延迟。
LTE-R能够处理⽇益增长的业务量,确保乘客安全并提供实时多媒体信息。
随着列车速度的不断提⾼,可靠的宽带通信系统对于⾼铁移动通信⾄关重要。
HSR的应⽤服务质量(QOS)测量,包括如数据速率,误码率(BER)和传输延迟。
为了实现HSR的运营需求,需要⼀个能够与 LTE保持⼀致的能⼒的新系统,提供新的业务,但仍能够与GSM-R长时间共存。
HSR系统选择合适的⽆线通信系统时,需要考虑性能,服务,属性,频段和⼯业⽀持等问题。
4G LTE系统与第三代(3G)系统相⽐,它具有简单的扁平架构,⾼数据速率和低延迟。
在LTE的性能和成熟度⽔平上,LTE- railway(LTE-R)将可能成为下⼀代HSR通信系统。
⼆ LTE-R系统描述考虑LTE-R的频率和频谱使⽤,对为⾼速铁路(HSR)通信提供更⾼效的数据传输⾮常重要。
毕业论文(设计)外文文献翻译及原文

金融体制、融资约束与投资——来自OECD的实证分析R.SemenovDepartment of Economics,University of Nijmegen,Nijmegen(荷兰内梅亨大学,经济学院)这篇论文考查了OECD的11个国家中现金流量对企业投资的影响.我们发现不同国家之间投资对企业内部可获取资金的敏感性具有显著差异,并且银企之间具有明显的紧密关系的国家的敏感性比银企之间具有公平关系的国家的低.同时,我们发现融资约束与整体金融发展指标不存在关系.我们的结论与资本市场信息和激励问题对企业投资具有重要作用这种观点一致,并且紧密的银企关系会减少这些问题从而增加企业获取外部融资的渠道。
一、引言各个国家的企业在显著不同的金融体制下运行。
金融发展水平的差别(例如,相对GDP的信用额度和相对GDP的相应股票市场的资本化程度),在所有者和管理者关系、企业和债权人的模式中,企业控制的市场活动水平可以很好地被记录.在完美资本市场,对于具有正的净现值投资机会的企业将一直获得资金。
然而,经济理论表明市场摩擦,诸如信息不对称和激励问题会使获得外部资本更加昂贵,并且具有盈利投资机会的企业不一定能够获取所需资本.这表明融资要素,例如内部产生资金数量、新债务和权益的可得性,共同决定了企业的投资决策.现今已经有大量考查外部资金可得性对投资决策的影响的实证资料(可参考,例如Fazzari(1998)、 Hoshi(1991)、 Chapman(1996)、Samuel(1998)).大多数研究结果表明金融变量例如现金流量有助于解释企业的投资水平。
这项研究结果解释表明企业投资受限于外部资金的可得性。
很多模型强调运行正常的金融中介和金融市场有助于改善信息不对称和交易成本,减缓不对称问题,从而促使储蓄资金投着长期和高回报的项目,并且提高资源的有效配置(参看Levine(1997)的评论文章)。
因而我们预期用于更加发达的金融体制的国家的企业将更容易获得外部融资.几位学者已经指出建立企业和金融中介机构可进一步缓解金融市场摩擦。
毕业论文英文参考文献与译文

Inventory managementInventory ControlOn the so-called "inventory control", many people will interpret it as a "storage management", which is actually a big distortion.The traditional narrow view, mainly for warehouse inventory control of materials for inventory, data processing, storage, distribution, etc., through the implementation of anti-corrosion, temperature and humidity control means, to make the custody of the physical inventory to maintain optimum purposes. This is just a form of inventory control, or can be defined as the physical inventory control. How, then, from a broad perspective to understand inventory control? Inventory control should be related to the company's financial and operational objectives, in particular operating cash flow by optimizing the entire demand and supply chain management processes (DSCM), a reasonable set of ERP control strategy, and supported by appropriate information processing tools, tools to achieved in ensuring the timely delivery of the premise, as far as possible to reduce inventory levels, reducing inventory and obsolescence, the risk of devaluation. In this sense, the physical inventory control to achieve financial goals is just a means to control the entire inventory or just a necessary part; from the perspective of organizational functions, physical inventory control, warehouse management is mainly the responsibility of The broad inventory control is the demand and supply chain management, and the whole company's responsibility.Why until now many people's understanding of inventory control, limited physical inventory control? The following two reasons can not be ignored:First, our enterprises do not attach importance to inventory control. Especially those who benefit relatively good business, as long as there is money on the few people to consider the problem of inventory turnover. Inventory control is simply interpreted as warehouse management, unless the time to spend money, it may have been to see the inventory problem, and see the results are often very simple procurement to buy more, or did not do warehouse departments .Second, ERP misleading. Invoicing software is simple audacity to call it ERP, companies on their so-called ERP can reduce the number of inventory, inventory control, seems to rely on their small software can get. Even as SAP, BAAN ERP world, the field ofthese big boys, but also their simple modules inside the warehouse management functionality is defined as "inventory management" or "inventory control." This makes the already not quite understand what our inventory control, but not sure what is inventory control.In fact, from the perspective of broadly understood, inventory control, shouldinclude the following:First, the fundamental purpose of inventory control. We know that the so-called world-class manufacturing, two key assessment indicators (KPI) is, customer satisfaction and inventory turns, inventory turns and this is actually the fundamental objective of inventory control.Second, inventory control means. Increase inventory turns, relying solely on the so-called physical inventory control is not enough, it should be the demand and supply chain management process flow of this large output, and this big warehouse management processes in addition to including this link, the more important The section also includes: forecasting and order processing, production planning and control, materials planning and purchasing control, inventory planning and forecasting in itself, as well as finished products, raw materials, distribution and delivery of the strategy, and even customs management processes.And with the demand and supply chain management processes throughout the process, it is the information flow and capital flow management. In other words, inventory itself is across the entire demand and supply management processes in all aspects of inventory control in order to achieve the fundamental purpose, it must control all aspects of inventory, rather than just manage the physical inventory at hand.Third, inventory control, organizational structure and assessment.Since inventory control is the demand and supply chain management processes, output, inventory control to achieve the fundamental purpose of this process must be compatible with a rational organizational structure. Until now, we can see that many companies have only one purchasing department, purchasing department following pipe warehouse. This is far short of inventory control requirements. From the demand and supply chain management process analysis, we know that purchasing and warehouse management is the executive arm of the typical, and inventory control should focus on prevention, the executive branch is very difficult to "prevent inventory" for the simple reason that they assessment indicatorsin large part to ensure supply (production, customer). How the actual situation, a reasonable demand and supply chain management processes, and thus set the corresponding rational organizational structure and is a question many of our enterprisesto exploreThe role of inventory controlInventory management is an important part of business management. In the production and operation activities, inventory management must ensure that both the production plant for raw materials, spare parts demand, but also directly affect the purchasing, sales of share, sales activities. To make an inventory of corporate liquidity, accelerate cash flow, the security of supply under the premise of minimizing Yaku funds, directly affects the operational efficiency. Ensure the production and operation needs of the premise, so keep inventories at a reasonable level; dynamic inventory control, timely, appropriate proposed order to avoid over storage or out of stock; reduce inventory footprint, lower total cost of inventory; control stock funds used to accelerate cash flow.Problems arising from excessive inventory: increased warehouse space andinventory storage costs, thereby increasing product costs; take a lot of liquidity, resultingin sluggish capital, not only increased the burden of payment of interest, etc., would affect the time value of money and opportunity income; finished products and raw materials caused by physical loss and intangible losses; a large number of enterprise resource idle, affecting their rational allocation and optimization; cover the production, operation of the whole process of the various contradictions and problems, is not conducive to improve the management level.Inventory is too small the resulting problems: service levels caused a decline in the profit impact of marketing and corporate reputation; production system caused by inadequate supply of raw materials or other materials, affecting the normal production process; to shorten lead times, increase the number of orders, so order (production) costs; affect the balance of production and assembly of complete sets.NotesInventory management should particularly consider the following two questions:First, according to sales plans, according to the planned production of the goods circulated in the market, we should consider where, how much storage.Second, starting from the level of service and economic benefits to determine howto ensure inventories and supplementary questions.The two problems with the inventory in the logistics process functions.In general, the inventory function:(1)to prevent interrupted. Received orders to shorten the delivery of goods fromthe time in order to ensure quality service, at the same time to prevent out of stock.(2)to ensure proper inventory levels, saving inventory costs.(3)to reduce logistics costs. Supplement with the appropriate time interval compatible with the reasonable demand of the cargo in order to reduce logistics costs, eliminate or avoid sales fluctuations.(4)ensure the production planning, smooth to eliminate or avoid sales fluctuations.(5)display function.(6)reserve. Mass storage when the price falls, reduce losses, to respond to disasters and other contingencies.About the warehouse (inventory) on what the question, we must consider the number and location. If the distribution center, it should be possible according to customer needs, set at an appropriate place; if it is stored in central places to minimize the complementary principle to the distribution centers, there is no place certain requirements. When the stock base is established, will have to take into account are stored in various locations in what commodities.库存管理库存控制在谈到所谓“库存控制”的时候,很多人将其理解为“仓储管理”,这实际上是个很大的曲解。
外文文献翻译(图片版)

本科毕业论文外文参考文献译文及原文学院经济与贸易学院专业经济学(贸易方向)年级班别2007级 1 班学号3207004154学生姓名欧阳倩指导教师童雪晖2010 年 6 月 3 日目录1 外文文献译文(一)中国银行业的改革和盈利能力(第1、2、4部分) (1)2 外文文献原文(一)CHINA’S BANKING REFORM AND PROFITABILITY(Part 1、2、4) (9)1概述世界银行(1997年)曾声称,中国的金融业是其经济的软肋。
当一国的经济增长的可持续性岌岌可危的时候,金融业的改革一直被认为是提高资金使用效率和消费型经济增长重新走向平衡的必要(Lardy,1998年,Prasad,2007年)。
事实上,不久前,中国的国有银行被视为“技术上破产”,它们的生存需要依靠充裕的国家流动资金。
但是,在银行改革开展以来,最近,强劲的盈利能力已恢复到国有商业银行的水平。
但自从中国的国有银行在不久之前已经走上了改革的道路,它可能过早宣布银行业的改革尚未取得完全的胜利。
此外,其坚实的财务表现虽然强劲,但不可持续增长。
随着经济增长在2008年全球经济衰退得带动下已经开始软化,银行预计将在一个比以前更加困难的经济形势下探索。
本文的目的不是要评价银行业改革对银行业绩的影响,这在一个完整的信贷周期后更好解决。
相反,我们的目标是通过审查改革的进展和银行改革战略,并分析其近期改革后的强劲的财务表现,但是这不能完全从迄今所进行的改革努力分离。
本文有三个部分。
在第二节中,我们回顾了中国的大型国有银行改革的战略,以及其执行情况,这是中国银行业改革的主要目标。
第三节中分析了2007年的财务表现集中在那些在市场上拥有浮动股份的四大国有商业银行:中国工商银行(工商银行),中国建设银行(建行),对中国银行(中银)和交通银行(交通银行)。
引人注目的是中国农业银行,它仍然处于重组上市过程中得适当时候的后期。
第四节总结一个对银行绩效评估。
最新3000字英文参考文献及其翻译范例

3000字英文参考文献及其翻译【注意:选用的英文一定要与自己的论文题目相关。
如果文章太长,可以节选(用省略号省略一些段略)。
如果字数不够,可以选2至3篇,但要逐一注明详细出处。
英文集中在一起放前面,对应的中文翻译放后面。
中文翻译也要将出处翻译,除非是网页。
对文献的翻译一定要认真!对英文文献及其翻译的排版也要和论文正文一样!特别注意:英文文献应该放在你的参考文献中。
】TOY RECALLS——IS CHINA THE PROBLEM?Hari. Bapuji Paul W. BeamishChina exports about 20 billion toys per year and they are the second most commonly imported item by U.S. and Canada. It is estimated that about 10,000 factories in China manufacture toys for export. Considering this mutual dependence, it is important that the problems resulting in recalls are addressed carefully.Although the largest portion of recalls by Mattel involved design flaws, the CEO of Mattel blamed the Chinese manufacturers by saying that the problem resulted ‘in this case (because)one of our manufacturers did not follow the rules’. Several analysts too blamed the Chinese manufacturers. By placing blame where it did not belong, thereis a danger of losing the opportunity to learn from the errors that have occurred. The first step to learn from errors is to know why and where the error occurred. Further, the most critical step in preventing the recurrence of errors is to find out what and who can prevent it.……From:/loadpage.aspx?Page=ShowDoc&Category Alias=zonghe/ggmflm_zh&BlockAlias=sjhwsd&filename=/doc/sjhwsd/2 00709281954.xml, Sep. 2007玩具召回——是中国的问题吗?哈里·巴普基保罗·比密什中国每年大约出口20亿美元的玩具,最常见是从美国和加拿大进口项目。
冲压模具成型外文翻译参考文献

冲压模具成型外文翻译参考文献(文档含中英文对照即英文原文和中文翻译)4 Sheet metal forming and blanking4.1 Principles of die manufacture4.1.1 Classification of diesIn metalforming,the geometry of the workpiece is established entirely or partially by the geometry of the die.In contrast to machining processes,ignificantly greater forces are necessary in forming.Due to the complexity of the parts,forming is often not carried out in a single operation.Depending on the geometry of the part,production is carried out in several operational steps via one or several production processes such as forming or blanking.One operation can also include several processes simultaneously(cf.Sect.2.1.4).During the design phase,the necessary manufacturing methods as well as the sequence and number of production steps are established in a processing plan(Fig.4.1.1).In this plan,theavailability of machines,the planned production volumes of the part and other boundary conditions are taken into account.The aim is to minimize the number of dies to be used while keeping up a high level of operational reliability.The parts are greatly simplified right from their design stage by close collaboration between the Part Design and Production Departments in order to enable several forming and related blanking processes to be carried out in one forming station.Obviously,the more operations which are integrated into a single die,the more complex the structure of the die becomes.The consequences are higher costs,a decrease in output and a lower reliability.Fig.4.1.1 Production steps for the manufacture of an oil sumpTypes of diesThe type of die and the closely related transportation of the part between dies is determined in accordance with the forming procedure,the size of the part in question and the production volume of parts to be produced.The production of large sheet metal parts is carried out almost exclusively using single sets of dies.Typical parts can be found in automotive manufacture,the domestic appliance industry and radiator production.Suitable transfer systems,for example vacuum suction systems,allow the installation of double-action dies in a sufficiently large mounting area.In this way,for example,the right and left doors of a car can be formed jointly in one working stroke(cf.Fig.4.4.34).Large size single dies are installed in large presses.The transportation of the parts from oneforming station to another is carried out mechanically.In a press line with single presses installed one behind the other,feeders or robots can be used(cf.Fig.4.4.20 to 4.4.22),whilst in large-panel transfer presses,systems equipped with gripper rails(cf.Fig.4.4.29)or crossbar suction systems(cf.Fig.4.4.34)are used to transfer the parts.Transfer dies are used for the production of high volumes of smaller and medium size parts(Fig.4.1.2).They consist of several single dies,which are mounted on a common base plate.The sheet metal is fed through mostly in blank form and also transported individually from die to die.If this part transportation is automated,the press is called a transfer press.The largest transfer dies are used together with single dies in large-panel transfer presses(cf.Fig.4.4.32).In progressive dies,also known as progressive blanking dies,sheet metal parts are blanked in several stages;generally speaking no actual forming operation takes place.The sheet metal is fed from a coil or in the form of metal ing an appropriate arrangement of the blanks within the available width of the sheet metal,an optimal material usage is ensured(cf.Fig.4.5.2 to 4.5.5). The workpiece remains fixed to the strip skeleton up until the laFig.4.1.2 Transfer die set for the production of an automatic transmission for an automotive application-st operation.The parts are transferred when the entire strip is shifted further in the work flow direction after the blanking operation.The length of the shift is equal to the center line spacing of the dies and it is also called the step width.Side shears,very precise feeding devices or pilot pins ensure feed-related part accuracy.In the final production operation,the finished part,i.e.the last part in the sequence,is disconnected from the skeleton.A field of application for progressive blanking tools is,for example,in the production of metal rotors or stator blanks for electric motors(cf.Fig.4.6.11 and 4.6.20).In progressive compound dies smaller formed parts are produced in several sequential operations.In contrast to progressive dies,not only blanking but also forming operations areperformed.However, the workpiece also remains in the skeleton up to the last operation(Fig.4.1.3 and cf.Fig.4.7.2).Due to the height of the parts,the metal strip must be raised up,generally using lifting edges or similar lifting devices in order to allow the strip metal to be transported mechanically.Pressed metal parts which cannot be produced within a metal strip because of their geometrical dimensions are alternatively produced on transfer sets.Fig.4.1.3 Reinforcing part of a car produced in a strip by a compound die setNext to the dies already mentioned,a series of special dies are available for special individual applications.These dies are,as a rule,used separately.Special operations make it possible,however,for special dies to be integrated into an operational Sequence.Thus,for example,in flanging dies several metal parts can be joined together positively through the bending of certain metal sections(Fig.4.1.4and cf.Fig.2.1.34).During this operation reinforcing parts,glue or other components can be introduced.Other special dies locate special connecting elements directly into the press.Sorting and positioning elements,for example,bring stamping nuts synchronised with the press cycles into the correct position so that the punch heads can join them with the sheet metal part(Fig.4.1.5).If there is sufficient space available,forming and blanking operations can be carried out on the same die.Further examples include bending,collar-forming,stamping,fine blanking,wobble blanking and welding operations(cf.Fig.4.7.14 and4.7.15).Fig.4.1.4 A hemming dieFig.4.1.5 A pressed part with an integrated punched nut4.1.2 Die developmentTraditionally the business of die engineering has been influenced by the automotive industry.The following observations about the die development are mostly related to body panel die construction.Essential statements are,however,made in a fundamental context,so that they are applicable to all areas involved with the production of sheet-metal forming and blanking dies.Timing cycle for a mass produced car body panelUntil the end of the 1980s some car models were still being produced for six to eight years more or less unchanged or in slightly modified form.Today,however,production time cycles are set for only five years or less(Fig.4.1.6).Following the new different model policy,the demands ondie makers have also changed prehensive contracts of much greater scope such as Simultaneous Engineering(SE)contracts are becoming increasingly common.As a result,the die maker is often involved at the initial development phase of the metal part as well as in the planning phase for the production process.Therefore,a muchbroader involvement is established well before the actual die development is initiated.Fig.4.1.6 Time schedule for a mass produced car body panelThe timetable of an SE projectWithin the context of the production process for car body panels,only a minimal amount of time is allocated to allow for the manufacture of the dies.With large scale dies there is a run-up period of about 10 months in which design and die try-out are included.In complex SE projects,which have to be completed in 1.5 to 2 years,parallel tasks must be carried out.Furthermore,additional resources must be provided before and after delivery of the dies.These short periods call for pre-cise planning,specific know-how,available capacity and the use of the latest technological and communications systems.The timetable shows the individual activities during the manufacturing of the dies for the production of the sheet metal parts(Fig.4.1.7).The time phases for large scale dies are more or less similar so that this timetable can be considered to be valid in general.Data record and part drawingThe data record and the part drawing serve as the basis for all subsequent processing steps.They describe all the details of the parts to be produced. The information given in theFig.4.1.7 Timetable for an SE projectpart drawing includes: part identification,part numbering,sheet metal thickness,sheet metal quality,tolerances of the finished part etc.(cf.Fig.4.7.17).To avoid the production of physical models(master patterns),the CAD data should describe the geometry of the part completely by means of line,surface or volume models.As a general rule,high quality surface data with a completely filleted and closed surface geometry must be made available to all the participants in a project as early as possible.Process plan and draw developmentThe process plan,which means the operational sequence to be followed in the production of the sheet metal component,is developed from the data record of the finished part(cf.Fig.4.1.1).Already at this point in time,various boundary conditions must be taken into account:the sheet metal material,the press to be used,transfer of the parts into the press,the transportation of scrap materials,the undercuts as well as thesliding pin installations and their adjustment.The draw development,i.e.the computer aided design and layout of the blank holder area of the part in the first forming stage–if need bealso the second stage–,requires a process planner with considerable experience(Fig.4.1.8).In order to recognize and avoid problems in areas which are difficult to draw,it is necessary to manufacture a physical analysis model of the draw development.With this model,theforming conditions of the drawn part can be reviewed and final modifications introduced,which are eventually incorporated into the data record(Fig.4.1.9).This process is being replaced to some extent by intelligent simulation methods,through which the potential defects of the formed component can be predicted and analysed interactively on the computer display.Die designAfter release of the process plan and draw development and the press,the design of the die can be started.As a rule,at this stage,the standards and manufacturing specifications required by the client must be considered.Thus,it is possible to obtain a unified die design and to consider the particular requests of the customer related to warehousing of standard,replacement and wear parts.Many dies need to be designed so that they can be installed in different types of presses.Dies are frequently installed both in a production press as well as in two different separate back-up presses.In this context,the layout of the die clamping elements,pressure pins and scrap disposal channels on different presses must be taken into account.Furthermore,it must be noted that drawing dies working in a single-action press may be installed in a double-action press(cf.Sect.3.1.3 and Fig.4.1.16).Fig.4.1.8 CAD data record for a draw developmentIn the design and sizing of the die,it is particularly important to consider the freedom of movement of the gripper rail and the crossbar transfer elements(cf.Sect.4.1.6).These describe the relative movements between the components of the press transfer system and the die components during a complete press working stroke.The lifting movement of the press slide,the opening and closing movements of the gripper rails and the lengthwise movement of the whole transfer are all superimposed.The dies are designed so that collisions are avoided and a minimum clearance of about 20 mm is set between all the moving parts.4 金属板料的成形及冲裁4. 模具制造原理4.1.1模具的分类在金属成形的过程中,工件的几何形状完全或部分建立在模具几何形状的基础上的。
外文文献翻译(英文+中文对照)

外文文献翻译 例如例如::下面是一个样板下面是一个样板,,如需要更多的机械相关专业的外文文献可以联系QQ: 763077177 (非诚勿扰) Coating thickness effects on diamond coated cutting tools F. Qin, Y.K. Chou,D. Nolen and R.G. ThompsonAvailable online 12 June 2009. Abstract :Chemical vapor deposition (CVD)-grown diamond films have found applications as a hard coating for cutting tools. Even though the use of conventional diamond coatings seems to be accepted in the cutting tool industry, selections of proper coating thickness for different machining operations have not been often studied. Coating thickness affects the characteristics of diamond coated cutting tools in different perspectives that may mutually impact the tool performance in machining in a complex way.In this study, coating thickness effects on the deposition residual stresses, particularly around a cutting edge, and on coating failure modes were numerically investigated. On the other hand, coating thickness effects on tool surface smoothness and cutting edge radii were experimentally investigated. In addition, machining Al matrix composites using diamond coated tools with varied coating thicknesses was conducted to evaluate the effects on cutting forces, part surface finish and tool wear.The results are summarized as follows. Increasing coating thickness will increase the residual stresses at the coating–substrate interface. On the other hand, increasing coating thickness will generally increase the resistance of coating cracking and delamination. Thicker coatings will result in larger edge radii; however, the extent of the effect on cutting forces also depends upon the machining condition. For the thickness range tested, the life of diamond coated tools increases with the coating thickness because of delay of delaminations. Keywords: Coating thickness; Diamond coating; Finite element; Machining; Tool wear1. IntroductionDiamond coatings produced by chemical vapor deposition (CVD) technologies have been increasingly explored for cutting tool applications. Diamond coated tools have great potential in various machining applications and an advantage in fabrications of cutting tools with complex geometry such as drills. Increased usages of lightweight high-strength components have also resulted in significant interests in diamond coating tools. Hot-filament CVD is one of common processes of diamond coatings and diamond films as thick as 50 µm have been deposited on various materials including cobalt-cemented tungsten carbide (WC-Co) . There have also been different CVD technologies, e.g., microwave plasma assisted CVD , developed to enhance the deposition process as well as the film quality too. However, despite the superior tribological and mechanical properties, the practical applications of diamond coated tools are still limited.Coating thickness is one of the most important attributes to the coating system performance. Coating thickness effects on tribological performance have been widely studied. In general, thicker coatings exhibited better scratch/wear resistance performance than thinner ones due to their better load-carrying capacity. However, there are also reports that claim otherwise and . For example, Dorner et al. discovered, that the thickness of diamond-like-coating (DLC), in a range of 0.7–3.5 µm, does not influence the wear resistance of the DLC–Ti6Al4V . For cutting tool applications, however, coating thickness may have a more complicated role since its effects may be augmented around the cutting edge. Coating thickness effects on diamond coated tools are not frequently reported. Kanda et al. conducted cutting tests using diamond-coated tooling . The author claimed that the increased film thickness is generally favorable to tool life. However, thicker films will result in the decrease in the transverse rupture strength that greatly impacts the performance in high speed or interrupted machining. In addition, higher cutting forces were observed for the tools with increased diamond coating thickness due to the increased cutting edge radius. Quadrini et al. studied diamond coated small mills for dental applications . The authors tested different coating thickness and noted that thick coatings induce high cutting forces due to increased coating surface roughness and enlarged edge rounding. Such effects may contribute to the tool failure in milling ceramic materials. The authors further indicated tools with thin coatings results in optimal cutting of polymer matrix composite . Further, Torres et al. studied diamondcoated micro-endmills with two levels of coating thickness . The authors also indicated that the thinner coating can further reduce cutting forces which are attributed to the decrease in the frictional force and adhesion.Coating thickness effects of different coating-material tools have also been studied. For single layer systems, an optimal coating thickness may exist for machining performance. For example, Tuffy et al. reported that an optimal coating thickness of TiN by PVD technology exists for specific machining conditions . Based on testing results, for a range from 1.75 to 7.5 µm TiN coating, thickness of 3.5 µm exhibit the best turning performance. In a separate study, Malik et al. also suggested that there is an optimal thickness of TiN coating on HSS cutting tools when machining free cutting steels . However, for multilayer coating systems, no such an optimum coating thickness exists for machining performance .The objective of this study was to experimentally investigate coating thickness effects of diamond coated tools on machining performance — tool wear and cutting forces. Diamond coated tools were fabricated, by microwave plasma assisted CVD, with different coating thicknesses. The diamond coated tools were examined in morphology and edge radii by white-light interferometry. The diamond coated tools were then evaluated by machining aluminum matrix composite in dry. In addition, deposition thermal residual stresses and critical load for coating failures that affect the performance of diamond coated tools were analytically examined.2. Experimental investigationThe substrates used for diamond coating experiments, square-shaped inserts (SPG422), were fine-grain WC with 6 wt.% cobalt. The edge radius and surface textures of cutting inserts prior to coating was measured by a white-light interferometer, NT1100 from Veeco Metrology.Prior to the deposition, chemical etching treatment was conducted on inserts to remove the surface cobalt and roughen substrate surface. Moreover, all tool inserts were ultrasonically vibrated in diamond/water slurry to increase the nucleation density. For the coating process, diamond films were deposited using a high-power microwave plasma-assisted CVD process.A gas mixture of methane in hydrogen, 750–1000 sccm with 4.4–7.3% of methane/hydrogen ratio, was used as the feedstock gas. Nitrogen gas, 2.75–5.5 sccm, was inserted to obtain nanostructures by preventing columnar growth. The pressure was about 30–55 Torr and the substrate temperature was about 685–830 °C. A forward power of 4.5–5.0 kW with a low deposition rate obtained a thin coating; a greater forward power of 8.0–8.5 kW with a highdeposition rate obtained thick coatings, two thicknesses by varying deposition time. The coated inserts were further inspected by the interferometer.A computer numerical control lathe, Hardinge Cobra 42, was used to perform machining experiments, outer diameter turning, to evaluate the tool wear of diamond coated tools. With the tool holder used, the diamond coated cutting inserts formed a 0° rake angle, 11° relief angle, and 75° lead angle. The workpieces were round bars made of A359/SiC-20p composite. The machining conditions used were 4 m/s cutting speed, 0.15 mm/rev feed, 1 mm depth of cut and no coolant was applied. The selection of machining parameters was based upon previous experiences. For each coating thickness, two tests were repeated. During machining testing, the cutting inserts were periodically inspected by optical microscopy to measure the flank wear-land size. Worn tools after testing were also examined by scanning electron microscopy (SEM). In addition, cutting forces were monitored during machining using a Kistler dynamometer.5. ConclusionsIn this study, the coating thickness effects on diamond coated cutting tools were studied from different perspectives. Deposition residual stresses in the tool due to thermal mismatch were investigated by FE simulations and coating thickness effects on the interface stresses were quantified. In addition, indentation simulations of a diamond coated WC substrate with the interface modeled by the cohesive zone were applied to analyze the coating system failures. Moreover, diamond coated tools with different thicknesses were fabricated and experimentally investigated on surface morphology, edge rounding, as well as tool wear and cutting forces in machining. The major results are summarized as follows.(1) Increase of coating thickness significantly increases the interface residual stresses, though little change in bulk surface stresses.(2) For thick coatings, the critical load for coating failure decreases with increasing coating thickness. However, such a trend is opposite for thin coatings, for which radial cracking is the coating failure mode. Moreover, thicker coatings have greater delamination resistance.(3) In addition, increasing the coating thickness will increase the edge radius. However, for the coating thickness range studied, 4–29 µm, and with the large feed used, cutting forces were affected only marginally.(4) Despite of greater interface residual stresses, increasing the diamond coating thickness, for the range studied, seem to increase tool life by delay of coating delaminations.AcknowledgementsThis research is supported by National Science Foundation, Grant No.: CMMI 0728228. P. Lu provided assistance in some analyses.金刚石涂层刀具的涂层厚度的影响作者:F. Qin, Y.K. Chou,D. Nolen and R.G. Thompson发表日期:2009摘要:化学气相沉积法(CVD),金刚石薄膜的发现,作为涂层刀具的应用。
参考文献中文的英文对照

参考文献中文的英文对照在学术论文中,参考文献是非常重要的一部分,它可以为论文的可信度和学术性增添分数,其中包括中文和英文文献。
以下是一些常见的参考文献中文和英文对照:1. 书籍 Book中文:王小明. 计算机网络技术. 北京:清华大学出版社,2018.英文:Wang, X. Computer Network Technology. Beijing: Tsinghua University Press, 2018.2. 学术期刊 Article in Academic Journal中文:张婷婷,李伟. 基于深度学习的影像分割方法. 计算机科学与探索,2019,13(1):61-67.英文:Zhang, T. T., Li, W. Image Segmentation Method Based on Deep Learning. Computer Science and Exploration, 2019, 13(1): 61-67.3. 会议论文 Conference Paper中文:王维,李丽. 基于云计算的智慧物流管理系统设计. 2019年国际物流与采购会议论文集,2019:112-117.英文:Wang, W., Li, L. Design of Smart Logistics Management System Based on Cloud Computing. Proceedings of the 2019 International Conference on Logistics and Procurement, 2019: 112-117.4. 学位论文 Thesis/Dissertation中文:李晓华. 基于模糊神经网络的水质评价模型研究. 博士学位论文,长春:吉林大学,2018.英文:Li, X. H. Research on Water Quality Evaluation Model Based on Fuzzy Neural Network. Doctoral Dissertation, Changchun: Jilin University, 2018.5. 报告 Report中文:国家统计局. 2019年国民经济和社会发展统计公报. 北京:中国统计出版社,2019.英文:National Bureau of Statistics. Statistical Communique of the People's Republic of China on the 2019 National Economic and Social Development. Beijing: China Statistics Press, 2019.以上是一些常见的参考文献中文和英文对照,希望对大家写作有所帮助。
云计算外文翻译参考文献

云计算外文翻译参考文献(文档含中英文对照即英文原文和中文翻译)原文:Technical Issues of Forensic Investigations in Cloud Computing EnvironmentsDominik BirkRuhr-University BochumHorst Goertz Institute for IT SecurityBochum, GermanyRuhr-University BochumHorst Goertz Institute for IT SecurityBochum, GermanyAbstract—Cloud Computing is arguably one of the most discussedinformation technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers(CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldomly discussed, but in this regard highly relevant open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore we discuss possible solutions and possible new methodologies helping customers to perform such investigations.I. INTRODUCTIONAlthough the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the companie’s assets. Normally,if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: The CSP obtains all the power over the environmentand thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics. proper manner which is hardly be possible due to the lackof circumstantial information. For auditors, this situation does not change: Questions who accessed specific data and information cannot be answered by the customers, if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensible information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations. In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environment have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed albeit mentioned by the authors of [1] in 2009: ”[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts,and on acquisition logistics and legal issues related to cloud computing env ironments.” This statement is also confirmed by other authors [34], [36], [40] stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcendents peoples’ private and professional life, thus mirroring more and more of peoples’actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: We discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in section II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments including thecross-disciplinary aspects. We conclude in section V.II. RELATED WORKVarious works have been published in the field of cloud security and privacy [9], [35], [30] focussing on aspects for protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensis. Within this context, cryptographic proofs for verifying data integrity mainly in cloud storage offers have been proposed,yet lacking of practical implementations [24], [37], [23]. Traditional computer forensics has already well researched methods for various fields of application [4], [5], [6], [11], [13]. Also the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20] including the notionof virtual introspection [25]. In addition, the NIST already addressed Web Service Forensics [22] which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al [36] published a paper discussing the issues of incident response in cloud environments - unfortunately no specific issues and solutions of cloud forensics have been proposed which will be done within this work.III. TECHNICAL BACKGROUNDA. Traditional Digital ForensicsThe notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be proper isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps each having its own specificpurpose:1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing where you never know exactly where your data is and additionallydo not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results, it is complete and clear to understand. Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.B. Cloud ComputingAccording to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure asa Service (IaaS) model, the customer is using the virtual machine provided by the CSP for installing his own system on it. The system can be used like any other physical computer with a few limitations. However, the additive customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. For the efficiency of software development process this service model can be propellent. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model leading to further issues concerning security, privacy and the gathering of suitable evidences. Furthermore, two main deployment models, private and public cloud have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer about the application itself, the data pushed into the applications and also about the underlying technical infrastructure.C. Fault ModelBe it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:1) Maliciously Intended FaultsInternal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and state a constant threat to customers and CSP. In this model, also a malicious CSP is included albeit he isassumed to be rare in real world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threads and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk. Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.2) Unintentional FaultsInconsistencies in technical systems or processes in the cloud do not have implicitly to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the costumer(i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.IV. TECHNICAL ISSUESDigital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators which explore these de-allocated disk space on harddisks. In case the data is in motion, data is transferred from one entity to another e.g. a typical file transfer over a network can be seen as a data in motion scenario. Several encapsulated protocols contain the data each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest or in motion but in execution. On the executing system, process information, machine instruction and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources for evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environmentsas well as suggest several solutions to these problems.A. Sources and Nature of EvidenceConcerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between thedifferent cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.1) Virtual Cloud Instance: The VM within the cloud, where i.e. data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both, the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM. Therefore, virtual instances can be still running during analysis which leads to the case of live investigations [41] or can be turned off leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.2) Network Layer: Traditional network forensics is knownas the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However in practice, ordinary CSP currently do not provide any log data from the network components used by the customer’s instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log datain general which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could beextracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: In ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers strongly make use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].B. Investigations in XaaS EnvironmentsTraditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They have no longer the option of seizing physical data storage. Data and processes of the customer are dispensed over an undisclosed amount of virtual instances, applications and network elements. Hence, it is in question whether preliminary findings of the computer forensic community in the field of digital forensics apparently have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly, will be taken into consideration. We also suggest potential solutions to the mentioned problems.1) SaaS Environments: Especially in the SaaS model, the customer does notobtain any control of the underlying operating infrastructure such as network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited userspecific application configuration settings can be controlled contributing to the evidences which can be extracted fromthe client (see section IV-A3). In a lot of cases this urges the investigator to rely on high-level logs which are eventually provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidences.a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it states a huge problem specifically for SaaS-based applications: Current global acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have tremendous impact: If sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP e.g. due to storage reasons. The customer has no ability to proof otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10]. Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information suchas access, error and event logs that could improve their situation in case of aninvestigation. Furthermore, due to the limited ability of receiving forensic information from the server and proofing integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR) in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer obtains theoretically the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim this transfer is encrypted but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However potential adversaries, which can compromise the application during runtime, should not be able to alter these log files afterwards. Suggested Solution:Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of an PaaS application by adversaries could be monitored by push-only mechanisms for log data presupposing that the needed information to detect such an attack are logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems havebeen compromised is crucial not only for recovering from an incident. Also forensic investigations gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances do provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is caused throughthe ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before itis transferred to third-party hosts mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although, IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP. He controls the hypervisor which is e.g. responsible for enforcing hardware boundaries and routing hardware requests among different VM. Hence, besides the security responsibilities of the hypervisor, he exerts tremendous control over how customer’s VM communicate with the hardware and theoretically can intervene executed processes on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore leading to the leakage of the secret key. Although this risk can be disregarded in most of the cases, the impact on the security of high security environments is tremendous.a) Snapshot Analysis: Traditional forensics expect target machines to be powered down to collect an image (dead virtual instance). This situation completely changed with the advent of the snapshot technology which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V.A snapshot, also referred to as the forensic image of a VM, providesa powerful tool with which a virtual instance can be clonedby one click including also the running system’s mem ory. Due to the invention of the snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis(live virtual instance). This behavior is especially important for scenarios in which a downtime of a system is not feasible or practical due to existing SLA. However the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances become more common providing evidence data that。
云计算外文翻译参考文献

云计算外文翻译参考文献(文档含中英文对照即英文原文和中文翻译)原文:Technical Issues of Forensic Investigations in Cloud Computing EnvironmentsDominik BirkRuhr-University BochumHorst Goertz Institute for IT SecurityBochum, GermanyRuhr-University BochumHorst Goertz Institute for IT SecurityBochum, GermanyAbstract—Cloud Computing is arguably one of the most discussedinformation technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers(CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldomly discussed, but in this regard highly relevant open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore we discuss possible solutions and possible new methodologies helping customers to perform such investigations.I. INTRODUCTIONAlthough the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the companie’s assets. Normally,if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: The CSP obtains all the power over the environmentand thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics. proper manner which is hardly be possible due to the lackof circumstantial information. For auditors, this situation does not change: Questions who accessed specific data and information cannot be answered by the customers, if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensible information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations. In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environment have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed albeit mentioned by the authors of [1] in 2009: ”[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts,and on acquisition logistics and legal issues related to cloud computing env ironments.” This statement is also confirmed by other authors [34], [36], [40] stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcendents peoples’ private and professional life, thus mirroring more and more of peoples’actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: We discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in section II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments including thecross-disciplinary aspects. We conclude in section V.II. RELATED WORKVarious works have been published in the field of cloud security and privacy [9], [35], [30] focussing on aspects for protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensis. Within this context, cryptographic proofs for verifying data integrity mainly in cloud storage offers have been proposed,yet lacking of practical implementations [24], [37], [23]. Traditional computer forensics has already well researched methods for various fields of application [4], [5], [6], [11], [13]. Also the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20] including the notionof virtual introspection [25]. In addition, the NIST already addressed Web Service Forensics [22] which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al [36] published a paper discussing the issues of incident response in cloud environments - unfortunately no specific issues and solutions of cloud forensics have been proposed which will be done within this work.III. TECHNICAL BACKGROUNDA. Traditional Digital ForensicsThe notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be proper isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps each having its own specificpurpose:1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing where you never know exactly where your data is and additionallydo not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results, it is complete and clear to understand. Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.B. Cloud ComputingAccording to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure asa Service (IaaS) model, the customer is using the virtual machine provided by the CSP for installing his own system on it. The system can be used like any other physical computer with a few limitations. However, the additive customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. For the efficiency of software development process this service model can be propellent. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model leading to further issues concerning security, privacy and the gathering of suitable evidences. Furthermore, two main deployment models, private and public cloud have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer about the application itself, the data pushed into the applications and also about the underlying technical infrastructure.C. Fault ModelBe it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:1) Maliciously Intended FaultsInternal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and state a constant threat to customers and CSP. In this model, also a malicious CSP is included albeit he isassumed to be rare in real world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threads and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk. Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.2) Unintentional FaultsInconsistencies in technical systems or processes in the cloud do not have implicitly to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the costumer(i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.IV. TECHNICAL ISSUESDigital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators which explore these de-allocated disk space on harddisks. In case the data is in motion, data is transferred from one entity to another e.g. a typical file transfer over a network can be seen as a data in motion scenario. Several encapsulated protocols contain the data each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest or in motion but in execution. On the executing system, process information, machine instruction and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources for evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environmentsas well as suggest several solutions to these problems.A. Sources and Nature of EvidenceConcerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between thedifferent cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.1) Virtual Cloud Instance: The VM within the cloud, where i.e. data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both, the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM. Therefore, virtual instances can be still running during analysis which leads to the case of live investigations [41] or can be turned off leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.2) Network Layer: Traditional network forensics is knownas the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However in practice, ordinary CSP currently do not provide any log data from the network components used by the customer’s instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log datain general which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could beextracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: In ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers strongly make use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].B. Investigations in XaaS EnvironmentsTraditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They have no longer the option of seizing physical data storage. Data and processes of the customer are dispensed over an undisclosed amount of virtual instances, applications and network elements. Hence, it is in question whether preliminary findings of the computer forensic community in the field of digital forensics apparently have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly, will be taken into consideration. We also suggest potential solutions to the mentioned problems.1) SaaS Environments: Especially in the SaaS model, the customer does notobtain any control of the underlying operating infrastructure such as network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited userspecific application configuration settings can be controlled contributing to the evidences which can be extracted fromthe client (see section IV-A3). In a lot of cases this urges the investigator to rely on high-level logs which are eventually provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidences.a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it states a huge problem specifically for SaaS-based applications: Current global acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have tremendous impact: If sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP e.g. due to storage reasons. The customer has no ability to proof otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10]. Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information suchas access, error and event logs that could improve their situation in case of aninvestigation. Furthermore, due to the limited ability of receiving forensic information from the server and proofing integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR) in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer obtains theoretically the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim this transfer is encrypted but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However potential adversaries, which can compromise the application during runtime, should not be able to alter these log files afterwards. Suggested Solution:Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of an PaaS application by adversaries could be monitored by push-only mechanisms for log data presupposing that the needed information to detect such an attack are logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems havebeen compromised is crucial not only for recovering from an incident. Also forensic investigations gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances do provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is caused throughthe ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before itis transferred to third-party hosts mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although, IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP. He controls the hypervisor which is e.g. responsible for enforcing hardware boundaries and routing hardware requests among different VM. Hence, besides the security responsibilities of the hypervisor, he exerts tremendous control over how customer’s VM communicate with the hardware and theoretically can intervene executed processes on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore leading to the leakage of the secret key. Although this risk can be disregarded in most of the cases, the impact on the security of high security environments is tremendous.a) Snapshot Analysis: Traditional forensics expect target machines to be powered down to collect an image (dead virtual instance). This situation completely changed with the advent of the snapshot technology which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V.A snapshot, also referred to as the forensic image of a VM, providesa powerful tool with which a virtual instance can be clonedby one click including also the running system’s mem ory. Due to the invention of the snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis(live virtual instance). This behavior is especially important for scenarios in which a downtime of a system is not feasible or practical due to existing SLA. However the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances become more common providing evidence data that。
外文参考文献(带中文翻译)

外文资料原文涂敏之会计学 8051208076Title:Future of SME finance(c)Background – the environment for SME finance has changedFuture economic recovery will depend on the possibility of Crafts, Trades and SMEs to exploit their potential for growth and employment creation.SMEs make a major contribution to growth and employment in the EU and are at the heart of the Lisbon Strategy, whose main objective is to turn Europe into the most competitive and dynamic knowledge-based economy in the world. However, the ability of SMEs to grow depends highly on their potential to invest in restructuring, innovation and qualification. All of these investments need capital and therefore access to finance.Against this background the consistently repeated complaint of SMEs about their problems regarding access to finance is a highly relevant constraint that endangers the economic recovery of Europe.Changes in the finance sector influence the behavior of credit institutes towards Crafts, Trades and SMEs. Recent and ongoing developments in the banking sector add to the concerns of SMEs and will further endanger their access to finance. The main changes in the banking sector which influence SME finance are:•Globalization and internationalization have increased the competition and the profit orientation in the sector;•worsening of the economic situations in some institutes (burst of the ITC bubble, insolvencies) strengthen the focus on profitability further;•Mergers and restructuring created larger structures and many local branches, which had direct and personalized contacts with small enterprises, were closed;•up-coming implementation of new capital adequacy rules (Basel II) will also change SME business of the credit sector and will increase its administrative costs;•Stricter interpretation of State-Aide Rules by the European Commission eliminates the support of banks by public guarantees; many of the effected banks are very active in SME finance.All these changes result in a higher sensitivity for risks and profits in the financesector.The changes in the finance sector affect the accessibility of SMEs to finance.Higher risk awareness in the credit sector, a stronger focus on profitability and the ongoing restructuring in the finance sector change the framework for SME finance and influence the accessibility of SMEs to finance. The most important changes are: •In order to make the higher risk awareness operational, the credit sector introduces new rating systems and instruments for credit scoring;•Risk assessment of SMEs by banks will force the enterprises to present more and better quality information on their businesses;•Banks will try to pass through their additional costs for implementing and running the new capital regulations (Basel II) to their business clients;•due to the increase of competition on interest rates, the bank sector demands more and higher fees for its services (administration of accounts, payments systems, etc.), which are not only additional costs for SMEs but also limit their liquidity;•Small enterprises will lose their personal relationship with decision-makers in local branches –the credit application process will become more formal and anonymous and will probably lose longer;•the credit sector will lose more and more i ts “public function” to provide access to finance for a wide range of economic actors, which it has in a number of countries, in order to support and facilitate economic growth; the profitability of lending becomes the main focus of private credit institutions.All of these developments will make access to finance for SMEs even more difficult and / or will increase the cost of external finance. Business start-ups and SMEs, which want to enter new markets, may especially suffer from shortages regarding finance. A European Code of Conduct between Banks and SMEs would have allowed at least more transparency in the relations between Banks and SMEs and UEAPME regrets that the bank sector was not able to agree on such a commitment.Towards an encompassing policy approach to improve the access of Crafts, Trades and SMEs to financeAll analyses show that credits and loans will stay the main source of finance for the SME sector in Europe. Access to finance was always a main concern for SMEs, but the recent developments in the finance sector worsen the situation even more.Shortage of finance is already a relevant factor, which hinders economic recovery in Europe. Many SMEs are not able to finance their needs for investment.Therefore, UEAPME expects the new European Commission and the new European Parliament to strengthen their efforts to improve the framework conditions for SME finance. Europe’s Crafts, Trades and SMEs ask for an encompassing policy approach, which includes not only the conditions for SMEs’ access to l ending, but will also strengthen their capacity for internal finance and their access to external risk capital.From UEAPME’s point of view such an encompassing approach should be based on three guiding principles:•Risk-sharing between private investors, financial institutes, SMEs and public sector;•Increase of transparency of SMEs towards their external investors and lenders;•improving the regulatory environment for SME finance.Based on these principles and against the background of the changing environment for SME finance, UEAPME proposes policy measures in the following areas:1. New Capital Requirement Directive: SME friendly implementation of Basel IIDue to intensive lobbying activities, UEAPME, together with other Business Associations in Europe, has achieved some improvements in favour of SMEs regarding the new Basel Agreement on regulatory capital (Basel II). The final agreement from the Basel Committee contains a much more realistic approach toward the real risk situation of SME lending for the finance market and will allow the necessary room for adaptations, which respect the different regional traditions and institutional structures.However, the new regulatory system will influence the relations between Banks and SMEs and it will depend very much on the way it will be implemented into European law, whether Basel II becomes burdensome for SMEs and if it will reduce access to finance for them.The new Capital Accord form the Basel Committee gives the financial market authorities and herewith the European Institutions, a lot of flexibility. In about 70 areas they have room to adapt the Accord to their specific needs when implementing itinto EU law. Some of them will have important effects on the costs and the accessibility of finance for SMEs.UEAPME expects therefore from the new European Commission and the new European Parliament:•The implementation of the new Capital Requirement Directive will be costly for the Finance Sector (up to 30 Billion Euro till 2006) and its clients will have to pay for it. Therefore, the implementation – especially for smaller banks, which are often very active in SME finance –has to be carried out with as little administrative burdensome as possible (reporting obligations, statistics, etc.).•The European Regulators must recognize traditional instruments for collaterals (guarantees, etc.) as far as possible.•The European Commission and later the Member States should take over the recommendations from the European Parliament with regard to granularity, access to retail portfolio, maturity, partial use, adaptation of thresholds, etc., which will ease the burden on SME finance.2. SMEs need transparent rating proceduresDue to higher risk awareness of the finance sector and the needs of Basel II, many SMEs will be confronted for the first time with internal rating procedures or credit scoring systems by their banks. The bank will require more and better quality information from their clients and will assess them in a new way. Both up-coming developments are already causing increasing uncertainty amongst SMEs.In order to reduce this uncertainty and to allow SMEs to understand the principles of the new risk assessment, UEAPME demands transparent rating procedures –rating procedures may not become a “Black Box” for SMEs: •The bank should communicate the relevant criteria affecting the rating of SMEs.•The bank should inform SMEs about its assessment in order to allow SMEs to improve.The negotiations on a European Code of Conduct between Banks and SMEs , which would have included a self-commitment for transparent rating procedures by Banks, failed. Therefore, UEAPME expects from the new European Commission and the new European Parliament support for:•binding rules in the framework of the new Capital Adequacy Directive,which ensure the transparency of rating procedures and credit scoring systems for SMEs;•Elaboration of national Codes of Conduct in order to improve the relations between Banks and SMEs and to support the adaptation of SMEs to the new financial environment.3. SMEs need an extension of credit guarantee systems with a special focus on Micro-LendingBusiness start-ups, the transfer of businesses and innovative fast growth SMEs also depended in the past very often on public support to get access to finance. Increasing risk awareness by banks and the stricter interpretation of State Aid Rules will further increase the need for public support.Already now, there are credit guarantee schemes in many countries on the limit of their capacity and too many investment projects cannot be realized by SMEs.Experiences show that Public money, spent for supporting credit guarantees systems, is a very efficient instrument and has a much higher multiplying effect than other instruments. One Euro form the European Investment Funds can stimulate 30 Euro investments in SMEs (for venture capital funds the relation is only 1:2).Therefore, UEAPME expects the new European Commission and the new European Parliament to support:•The extension of funds for national credit guarantees schemes in the framework of the new Multi-Annual Programmed for Enterprises;•The development of new instruments for securitizations of SME portfolios;•The recognition of existing and well functioning credit guarantees schemes as collateral;•More flexibility within the European Instruments, because of national differences in the situation of SME finance;•The development of credit guarantees schemes in the new Member States;•The development of an SBIC-like scheme in the Member States to close the equity gap (0.2 – 2.5 Mio Euro, according to the expert meeting on PACE on April 27 in Luxemburg).•the development of a financial support scheme to encourage the internalizations of SMEs (currently there is no scheme available at EU level: termination of JOP, fading out of JEV).4. SMEs need company and income taxation systems, whichstrengthen their capacity for self-financingMany EU Member States have company and income taxation systems with negative incentives to build-up capital within the company by re-investing their profits. This is especially true for companies, which have to pay income taxes. Already in the past tax-regimes was one of the reasons for the higher dependence of Europe’s SMEs on bank lending. In future, the result of rating w ill also depend on the amount of capital in the company; the high dependence on lending will influence the access to lending. This is a vicious cycle, which has to be broken.Even though company and income taxation falls under the competence of Member States, UEAPME asks the new European Commission and the new European Parliament to publicly support tax-reforms, which will strengthen the capacity of Crafts, Trades and SME for self-financing. Thereby, a special focus on non-corporate companies is needed.5. Risk Capital – equity financingExternal equity financing does not have a real tradition in the SME sector. On the one hand, small enterprises and family business in general have traditionally not been very open towards external equity financing and are not used to informing transparently about their business.On the other hand, many investors of venture capital and similar forms of equity finance are very reluctant regarding investing their funds in smaller companies, which is more costly than investing bigger amounts in larger companies. Furthermore it is much more difficult to set out of such investments in smaller companies.Even though equity financing will never become the main source of financing for SMEs, it is an important instrument for highly innovative start-ups and fast growing companies and it has therefore to be further developed. UEAPME sees three pillars for such an approach where policy support is needed:Availability of venture capital•The Member States should review their taxation systems in order to create incentives to invest private money in all forms of venture capital.•Guarantee instruments for equity financing should be further developed.Improve the conditions for investing venture capital into SMEs•The development of secondary markets for venture capital investments in SMEs should be supported.•Accounting Standards for SMEs should be revised in order to easetransparent exchange of information between investor and owner-manager.Owner-managers must become more aware about the need for transparency towards investors•SME owners will have to realise that in future access to external finance (venture capital or lending) will depend much more on a transparent and open exchange of information about the situation and the perspectives of their companies.•In order to fulfil the new needs for transparency, SMEs will have to use new information instruments (business plans, financial reporting, etc.) and new management instruments (risk-management, financial management, etc.).外文资料翻译涂敏之会计学 8051208076题目:未来的中小企业融资背景:中小企业融资已经改变未来的经济复苏将取决于能否工艺品,贸易和中小企业利用其潜在的增长和创造就业。
数据库外文参考文献及翻译

数据库外文参考文献及翻译数据库外文参考文献及翻译SQL ALL-IN-ONE DESK REFERENCE FOR DUMMIESData Files and DatabasesI. Irreducible complexityAny software system that performs a useful function is going to be complex. The more valuable the function, the more complex its implementation will be. Regardless of how the data is stored, the complexity remains. The only question is where that complexity resides. Any non-trivial computer application has two major components: the program the data. Although an application’s level of complexity depends on the task to be performed, developers have some control over the location of that complexity. The complexity may reside primarily in the program part of the overall system, or it may reside in the data part.Operations on the data can be fast. Because the programinteracts directly with the data, with no DBMS in the middle, well-designed applications can run as fast as the hardware permits. What could be better? A data organization that minimizes storage requirements and at the same time maximizes speed of operation seems like the best of all possible worlds. But wait a minute . Flat file systems came into use in the 1940s. We have known about them for a long time, and yet today they have been almost entirely replaced by database s ystems. What’s up with that? Perhaps it is the not-so-beneficial consequences。
外文参考文献(带中文翻译)

外文资料原文涂敏之会计学 8051208076Title:Future of SME finance(/docs/pos_papers/2004/041027_SME-finance_final.do c)Background – the environment for SME finance has changedFuture economic recovery will depend on the possibility of Crafts, Trades and SMEs to exploit their potential for growth and employment creation.SMEs make a major contribution to growth and employment in the EU and are at the heart of the Lisbon Strategy, whose main objective is to turn Europe into the most competitive and dynamic knowledge-based economy in the world. However, the ability of SMEs to grow depends highly on their potential to invest in restructuring, innovation and qualification. All of these investments need capital and therefore access to finance.Against this background the consistently repeated complaint of SMEs about their problems regarding access to finance is a highly relevant constraint that endangers the economic recovery of Europe.Changes in the finance sector influence the behavior of credit institutes towards Crafts, Trades and SMEs. Recent and ongoing developments in the banking sector add to the concerns of SMEs and will further endanger their access to finance. The main changes in the banking sector which influence SME finance are:•Globalization and internationalization have increased the competition and the profit orientation in the sector;•worsening of the economic situations in some institutes (burst of the ITC bubble, insolvencies) strengthen the focus on profitability further;•Mergers and restructuring created larger structures and many local branches, which had direct and personalized contacts with small enterprises, were closed;•up-coming implementation of new capital adequacy rules (Basel II) will also change SME business of the credit sector and will increase its administrative costs;•Stricter interpretation of State-Aide Rules by the European Commission eliminates the support of banks by public guarantees; many of the effected banks are very active in SME finance.All these changes result in a higher sensitivity for risks and profits in the finance sector.The changes in the finance sector affect the accessibility of SMEs to finance.Higher risk awareness in the credit sector, a stronger focus on profitability and the ongoing restructuring in the finance sector change the framework for SME finance and influence the accessibility of SMEs to finance. The most important changes are: •In order to make the higher risk awareness operational, the credit sector introduces new rating systems and instruments for credit scoring;•Risk assessment of SMEs by banks will force the enterprises to present more and better quality information on their businesses;•Banks will try to pass through their additional costs for implementing and running the new capital regulations (Basel II) to their business clients;•due to the increase of competition on interest rates, the bank sector demands more and higher fees for its services (administration of accounts, payments systems, etc.), which are not only additional costs for SMEs but also limit their liquidity;•Small enterprises will lose their personal relationship with decision-makers in local branches –the credit application process will become more formal and anonymous and will probably lose longer;•the credit sector will lose more and more its “public function” to provide access to finance for a wide range of economic actors, which it has in a number of countries, in order to support and facilitate economic growth; the profitability of lending becomes the main focus of private credit institutions.All of these developments will make access to finance for SMEs even more difficult and / or will increase the cost of external finance. Business start-ups and SMEs, which want to enter new markets, may especially suffer from shortages regarding finance. A European Code of Conduct between Banks and SMEs would have allowed at least more transparency in the relations between Banks and SMEs and UEAPME regrets that the bank sector was not able to agree on such a commitment.Towards an encompassing policy approach to improve the access of Crafts, Trades and SMEs to financeAll analyses show that credits and loans will stay the main source of finance for the SME sector in Europe. Access to finance was always a main concern for SMEs,but the recent developments in the finance sector worsen the situation even more. Shortage of finance is already a relevant factor, which hinders economic recovery in Europe. Many SMEs are not able to finance their needs for investment.Therefore, UEAPME expects the new European Commission and the new European Parliament to strengthen their efforts to improve the framework conditions for SME finance. Europe’s Crafts, Trades and SMEs ask for an encompassing policy approach, which includes not only the conditions for SMEs’ access to lending, but will also strengthen their capacity for internal finance and their access to external risk capital.From UEAPM E’s point of view such an encompassing approach should be based on three guiding principles:•Risk-sharing between private investors, financial institutes, SMEs and public sector;•Increase of transparency of SMEs towards their external investors and lenders;•improving the regulatory environment for SME finance.Based on these principles and against the background of the changing environment for SME finance, UEAPME proposes policy measures in the following areas:1. New Capital Requirement Directive: SME friendly implementation of Basel IIDue to intensive lobbying activities, UEAPME, together with other Business Associations in Europe, has achieved some improvements in favour of SMEs regarding the new Basel Agreement on regulatory capital (Basel II). The final agreement from the Basel Committee contains a much more realistic approach toward the real risk situation of SME lending for the finance market and will allow the necessary room for adaptations, which respect the different regional traditions and institutional structures.However, the new regulatory system will influence the relations between Banks and SMEs and it will depend very much on the way it will be implemented into European law, whether Basel II becomes burdensome for SMEs and if it will reduce access to finance for them.The new Capital Accord form the Basel Committee gives the financial market authorities and herewith the European Institutions, a lot of flexibility. In about 70areas they have room to adapt the Accord to their specific needs when implementing it into EU law. Some of them will have important effects on the costs and the accessibility of finance for SMEs.UEAPME expects therefore from the new European Commission and the new European Parliament:•The implementation of the new Capital Requirement Directive will be costly for the Finance Sector (up to 30 Billion Euro till 2006) and its clients will have to pay for it. Therefore, the implementation – especially for smaller banks, which are often very active in SME finance –has to be carried out with as little administrative burdensome as possible (reporting obligations, statistics, etc.).•The European Regulators must recognize traditional instruments for collaterals (guarantees, etc.) as far as possible.•The European Commission and later the Member States should take over the recommendations from the European Parliament with regard to granularity, access to retail portfolio, maturity, partial use, adaptation of thresholds, etc., which will ease the burden on SME finance.2. SMEs need transparent rating proceduresDue to higher risk awareness of the finance sector and the needs of Basel II, many SMEs will be confronted for the first time with internal rating procedures or credit scoring systems by their banks. The bank will require more and better quality information from their clients and will assess them in a new way. Both up-coming developments are already causing increasing uncertainty amongst SMEs.In order to reduce this uncertainty and to allow SMEs to understand the principles of the new risk assessment, UEAPME demands transparent rating procedures –rating procedures may not become a “Black Box” for SMEs:•The bank should communicate the relevant criteria affecting the rating of SMEs.•The bank should inform SMEs about its assessment in order to allow SMEs to improve.The negotiations on a European Code of Conduct between Banks and SMEs , which would have included a self-commitment for transparent rating procedures by Banks, failed. Therefore, UEAPME expects from the new European Commission and the new European Parliament support for:•binding rules in the framework of the new Capital Adequacy Directive, which ensure the transparency of rating procedures and credit scoring systems for SMEs;•Elaboration of national Codes of Conduct in order to improve the relations between Banks and SMEs and to support the adaptation of SMEs to the new financial environment.3. SMEs need an extension of credit guarantee systems with a special focus on Micro-LendingBusiness start-ups, the transfer of businesses and innovative fast growth SMEs also depended in the past very often on public support to get access to finance. Increasing risk awareness by banks and the stricter interpretation of State Aid Rules will further increase the need for public support.Already now, there are credit guarantee schemes in many countries on the limit of their capacity and too many investment projects cannot be realized by SMEs.Experiences show that Public money, spent for supporting credit guarantees systems, is a very efficient instrument and has a much higher multiplying effect than other instruments. One Euro form the European Investment Funds can stimulate 30 Euro investments in SMEs (for venture capital funds the relation is only 1:2).Therefore, UEAPME expects the new European Commission and the new European Parliament to support:•The extension of funds for national credit guarantees schemes in the framework of the new Multi-Annual Programmed for Enterprises;•The development of new instruments for securitizations of SME portfolios;•The recognition of existing and well functioning credit guarantees schemes as collateral;•More flexibility within the European Instruments, because of national differences in the situation of SME finance;•The development of credit guarantees schemes in the new Member States;•The development of an SBIC-like scheme in the Member States to close the equity gap (0.2 – 2.5 Mio Euro, according to the expert meeting on PACE on April 27 in Luxemburg).•the development of a financial support scheme to encourage the internalizations of SMEs (currently there is no scheme available at EU level: termination of JOP, fading out of JEV).4. SMEs need company and income taxation systems, which strengthen their capacity for self-financingMany EU Member States have company and income taxation systems with negative incentives to build-up capital within the company by re-investing their profits. This is especially true for companies, which have to pay income taxes. Already in the past tax-regimes was one of the reasons for the higher dependence of Europe’s SMEs on bank lending. In future, the result of rating will also depend on the amount of capital in the company; the high dependence on lending will influence the access to lending. This is a vicious cycle, which has to be broken.Even though company and income taxation falls under the competence of Member States, UEAPME asks the new European Commission and the new European Parliament to publicly support tax-reforms, which will strengthen the capacity of Crafts, Trades and SME for self-financing. Thereby, a special focus on non-corporate companies is needed.5. Risk Capital – equity financingExternal equity financing does not have a real tradition in the SME sector. On the one hand, small enterprises and family business in general have traditionally not been very open towards external equity financing and are not used to informing transparently about their business.On the other hand, many investors of venture capital and similar forms of equity finance are very reluctant regarding investing their funds in smaller companies, which is more costly than investing bigger amounts in larger companies. Furthermore it is much more difficult to set out of such investments in smaller companies.Even though equity financing will never become the main source of financing for SMEs, it is an important instrument for highly innovative start-ups and fast growing companies and it has therefore to be further developed. UEAPME sees three pillars for such an approach where policy support is needed:Availability of venture capital•The Member States should review their taxation systems in order to create incentives to invest private money in all forms of venture capital.•Guarantee instruments for equity financing should be further developed.Improve the conditions for investing venture capital into SMEs•The development of secondary markets for venture capital investments in SMEs should be supported.•Accounting Standards for SMEs should be revised in order to ease transparent exchange of information between investor and owner-manager.Owner-managers must become more aware about the need for transparency towards investors•SME owners will have to realise that in future access to external finance (venture capital or lending) will depend much more on a transparent and open exchange of information about the situation and the perspectives of their companies.•In order to fulfil the new needs for transparency, SMEs will have to use new information instruments (business plans, financial reporting, etc.) and new management instruments (risk-management, financial management, etc.).外文资料翻译涂敏之会计学 8051208076题目:未来的中小企业融资背景:中小企业融资已经改变未来的经济复苏将取决于能否工艺品,贸易和中小企业利用其潜在的增长和创造就业。
外文文献翻译范例

StatusComplete
Type:Office
Location:Hong Kong
Construction started:18 April 1985
Completed:1990
Opening:17 May 1990
HeightAntenna spire:367.4 m (1,205.4 ft)
2011年6月8日
外文文献翻译(译成中文1000字左右):
【主要阅读文献不少于5篇,译文后附注文献信息,包括:作者、书名(或论文题目)、出 版 社(或刊物名称)、出版时间(或刊号)、页码。提供所译外文资料附件(印刷类含封面、封底、目录、翻译部分的复印件等,网站类的请附网址及原文)
原文网址:/TALLEST_TOWERS/t_sears.htm
译文
建筑师:Bruce Graham, design partner, Skidmore, Owings and Merrill
地点:Chicago
甲方:Sears Roebuck and Company
工程师:Fazlur Khan of Skidmore, Owings and Merrill.项目年份:2008
香港1985年4月18日开工建设1990年完成1990年5月17日开幕高度天线尖顶三百六十七点四米2418英尺屋顶三百一十五点米10335英尺顶层二百八十八点二米九百四十五点五英尺技术细节地上楼层数724层楼建筑面积一十三点五万平方米1450000平方英尺电梯数45由奥的斯电梯公司生产的设计与施工主要承建商香港建设控股有限公司引文需要熊谷组香港贝聿铭建筑师事务所建筑师事务所谢尔曼西贡有限公司sl的托马斯博阿达莱斯利罗伯逊结构工程师协会rllp参考文献对中国塔简称中银大厦银行是中环香港最知名的摩天大楼之一
5 中英文翻译

外文参考文献全文及译文英文原文4.1 DefinitionA durable lining is one that performs satisfactorily in the working environment during its anticipated service life. The material used should be such as to maintain its integrity and, if applicable, to protect other embedded materials.4.2 Design lifeSpecifying the required life of a lining (see Section 2.3.4) is signifi-cant in the design, not only in terms of the predicted loadings but also with respect to long-term durability. Currently there is no guide on how to design a material to meet a specified design life, although the new European Code for Concrete (British Standards Institution, 2003) addresses this problem. This code goes some way to recommending various mix proportions and reinforcement cover for design lives of 50 and 100 years. It can be argued that linings that receive annular grouting between the excavated bore and the extrados of the lining, or are protected by primary linings, for example sprayed concrete, may have increased resistance to any external aggressive agents. Normally, these elements of a lining system are considered to be redundant in terms of design life. This is because reliably assessing whether annulus grouting is complete or assessing the properties or the quality of fast set sprayed concrete with time is generally difficult.Other issues that need to be considered in relation to design life include the watertightness of a structure and fire-life safety. Both of these will influence the design of any permanent lining.4.3 Considerations of durability related to tunnel useLinings may be exposed to many and varied aggressive environments. Durability issues to be addressed will be very dependent not only on the site location and hence the geological environment but also on the use of the tunnel/shaft (see Fig. 4.1).The standards of material, design and detailing needed to satisfy durability requirements will differ and sometimes conflict. In these cases a compromise must be made to provide the best solution possible based on the available practical technology.4.4 Considerations of durability related to tunnel4.4.1 Steel/cast-iron liningsUnprotected steel will corrode at a rate that depends upon the temperature, presence of water with reactive ions (from salts and acids) and availability of oxygen. Typically corrosion rates can reach about 0.1 mm/year. If the availability of oxygen is limited, for example at the extrados of a segmental lining, pitting corrosion is likely to occur for which corrosion rates are more difficult to ascertain.Grey cast-iron segments have been employed as tunnel linings for over a hundred years, with little evidence as yet of serious corrosion. This is because this type of iron contains flakes of carbon that become bound together with the corrosion product to prevent water and, in ventilated tunnels, oxygen from reaching the mass of the metal. Corrosion is therefore stifled. This material is rarely if ever used in modern construction due to the higher strength capacities allowed with SGI linings.Spheroidal-Graphite cast iron (SGI) contains free carbon in nodules rather than flakes, and although some opinion has it that this will reduce the self-stifling action found in grey irons, one particular observation suggests that this is not necessarily so. A 250 m length of service tunnel was built in 1975 for the Channel Tunnel, and SGI segments were installed at the intersection with the tunnel constructed in 1880. The tunnel was mainly unventilated for the next ten years, by which time saline groundwater had caused corrosion and the intrados appeared dreadfully corroded. The application of some vigorous wire brushing revealed that the depth of corrosion was in reality minimal.4.4.2 Concrete liningsIn situ concrete was first used in the UK at the turn of the century. Precast concrete was introduced at a similar time but it was not used extensively until the 1930s. There is therefore only 70 to 100 years of knowledge of concrete behaviour on which to base the durability design of a concrete lining.The detailed design, concrete production and placing, applied curing and post curing exposure, and operating environment of the lining all impact upon its durability. Furthermore, concrete is an inherently variable material. In order to specify and design to satisfy durability requirements, assumptions have to be made about the severity of exposure in relation to deleterious agents, as well as the likely variability in performance of the lining material itself. The factors that generally influence the durability of the con-crete and those that should be considered in the design and detailing of a tunnel lining include:1.operational environment2.shape and bulk of the concrete3.cover to the embedded steel4.type of cement5.type of aggregate6.type and dosage of admixture7.cement content and free water/cement ratio8.workmanship, for example compaction’ finishing, curing9.permeability, porosity and dijfusivity of the final concrete.The geometric shape and bulk of the lining section is important because concrete linings generally have relatively thin walls and are possibly subject to a significant external hydraulic head. Both of these will increase the ingress of aggressive agents into the concrete.4.5 Design and specification for durabilityIt has to be accepted that all linings will be subject to some level of corrosion and attack by both the internal and external environment around a tunnel. They will also be affected by fire. Designing for durability is dependent not only on material specification but also on detailing and design of the lining.4.5.1 Metal liningsOccasionally segments are fabricated from steel, and these should be protected by the application of a protective system. Liner plates formed from pressing sheet steel usually act as a temporary support while an in situ concrete permanent lining is constructed. They are rarely protected from corrosion, but if they are to form a structural part of the lining system they should also be protected by the application of a protective system. Steel sections are often employed as frames for openings and to create small structures such as sumps. In these situations they should be encased in con-crete with suitable cover and anti-crack reinforcement. In addition, as the quality of the surrounding concrete might not be of a high order consideration should be given to the application of a protec-tive treatment to such steelwork.Spheroidal-Graphite cast iron segmental tunnel linings are usually coated internally and externally with a protective paint system. They require the radial joint mating surfaces, and the circumferential joint surfaces, to be machined to ensure good load transfer across thejoints and for the formation of caulking and sealing grooves. It is usual to apply a thin coat of protective paint to avoid corrosion between fabrication and erection, but long-term protective coatings are unnecessary as corrosion in such joints is likely to be stifled.It is suggested that for SGI segmental linings the minimum design thicknesses of the skin and outer flanges should be increased by one millimetre to allow for some corrosion (see Channel Tunnel case history in Chapter 10). If routine inspections give rise to a concern about corrosion it is possible to take action, by means of a cathodic protection system or otherwise, to restrain further deterioration. The chance of having to do this over the normal design lifetime is small.(1)Protective systemsCast iron segmental linings are easily protected with a coating of bitumen, but this material presents a fire hazard, which is now unacceptable on the interior of the tunnel. A thin layer, up to 200 um in thickness, of specially formulated paint is now employed; to get the paint to adhere it is necessary to specify the surface preparation. Grit blasting is now normally specified, however, care should be taken in the application of these coatings. The problem of coatings for cast iron is that grit blasting leavesbehind a surface layer of small carbon particles, which prevents the adhesion of materials, originally designed for steelwork, and which is difficult to remove. It is recommended that the designer take advice from specialist materials suppliers who have a proven track record.Whether steel or cast iron segments are being used, consideration of the ease with which pre-applied coatings can be damaged during handling, erection and subsequent construction activities in the tunnel is needed.(2) Fire resistanceExperiences of serious fires in modern tunnels suggest that temperatures at the lining normally average 600-700 °C, but can reach 1300 °C (see Section 4.5.3). It is arguable that fire protection is not needed except where there is a risk of a high-temperature (generally hydrocarbon) fire. It can be difficult to find an acceptable economic solution, but intumescent paint can be employed. This is not very effective in undersea applications. As an alternative an internal lining of polypropylene fibre reinforced concrete might be considered effective. 4.5.2 Concrete liningsAll aspects of a lining’s behaviour during its design life, both under load and within theenvironment, should be considered in order to achieve durability. The principle factors that should be considered in the design and detailing are:1.Material(s)2.production method3.application method (e.g. sprayed concrete)4.geological conditions5.design life6.required performance criteria.(1) CorrosionThe three main aspects of attack that affect the durability of concrete linings are:corrosion of metalschloride-induced corrosion of embedded metalscarbonation-induced corrosion of embedded metals.Corrosion of metalsUnprotected steel will corrode at a rate that depends upon temperature, presence of water and availability of oxygen. Exposed metal fittings, either cast in (i.e. a bolt- or grout-socket), or loose (e.g. a bolt), will corrode (see Section 4.5.4). It is impractical to provide a comprehensive protection system to these items and it is now standard practice to eliminate ferrous cast in fittings totally by the use of plastics. Loose fixings such as bolts should always be specified with a coating such as zinc.Chloride-induced corrosionCorrosion of reinforcement continues to represent the single largest cause of deterioration of reinforced concrete structures. Whenever there are chloride ions in concrete containing embedded metal there is a risk of corrosion. All constituents of concrete may contain some chlorides and the concrete may be contaminated by other external sources, for example de-icing salts and seawater.Damage to concrete due to reinforcement corrosion will only normally occur when chloride ions, water and oxygen are all present.Chlorides attack the reinforcement by breaking down the passive layer around the reinforcement. This layer is formed on the surface of the steel as a result of the highly alkaline environment formed by the hydrated cement. The result is the corrosion of the steel, whichcan take the form of pitting or general corrosion. Pitting corrosion reduces the size of the bar, while general corrosion will result in cracking and spalling of the concrete.Although chloride ions have no significant effect on the per-formance of the concrete material itself, certain types of concrete are more vulnerable to attack because the chloride ions then find it easier to penetrate the concrete. The removal of calcium alumi- nate in sulphate-resistant cement (the component that reacts with external sulphates), results in the final concrete being less resistant to the ingress of chlorides. To reduce the penetration of chloride ions, a dense impermeable concrete is required. The use of corrosion inhibitors does not slow down chloride migration but does enable the steel to tolerate high levels of chloride before corrosion starts.Current code and standard recommendations to reduce chloride attack are based on the combination of concrete grade (defined by cement content and type, water/cement ratio and strength, that is indirectly related to permeability) and cover to the reinforcement. The grade and cover selected is dependent on the exposure condition. There are also limits set on the total chlorides content of the concrete mix.Carbonation-induced corrosionIn practice, carbonation-induced corrosion is regarded as a minor problem compared with chloride- induced corrosion. Even if carbonation occurs it is chloride-induced corrosion that will generally determine the life of the lining. Carbonated concrete is of lower strength but as carbonation is lim-ited to the extreme outer layer the reduced strength of the concrete section is rarely significant.Damage to concrete will only normally occur when carbon dioxide, water, oxygen and hydroxides are all present. Carbonation is unlikely to occur on the external faces of tunnels that are constantly under water, whereas some carbonation will occur on the internal faces of tunnels that are generally dry. Carbonation-induced corrosion, how-ever, is unlikely in this situation due to lack of water. Linings that are cyclically wet and dry are the most vulnerable.When carbon dioxide from the atmosphere diffuses into the concrete, it combines with water forming carbonic acid. This then reacts with the alkali hydroxides forming carbonates. In the presence of free water, calcium carbonate is deposited in the pores. The pH of the pore fluid drops from a value of about 12.6 in the uncarbonated region to 8 in the carbonated region. If this reduction in alkalinity occurs close to the steel, it can cause depassivation. Inthe presence of water and oxygen corrosion of the reinforcement will then occur.To reduce the rate of carbonation a dense impermeable concrete is required.As with chloride-induced corrosion, current code and standard recommendations to reduce carbonation attack are based on the combination of concrete grade and reinforcement cover.Other chemical attackChemical attack is by direct attack either on the lining material or on any embedded materials, caused by aggressive agents being part of the contents within the tunnel or in the ground in the vicinity of the tunnel. Damage to the material will depend on a number of factors including the concentration and type of chemical in question, and the movement of the ground-water, that is the ease with which the chemicals can be replenished at the surface of the concrete. In this respect static water is generally defined as occurring in ground having a mass permeability of <10-6m/s and mobile water >10-6 m/s. The following types of exchange reactions may occur between aggressive fluids and components of the lining material:●sulphate attack●acid attack●alkali-silica reaction (ASR).Sulphates (conventional and thaumasite reaction)In soil and natural groundwater, sulphates of sodium, potassium, magnesium and calcium are common. Sulphates can also be formed by the oxi-dation of sulphides, such as pyrite,as a result of natural processes or with the aid of construction process activities. The geological strata most likely to have a substantial sulphate concentration are ancient sedimentary clays. In most other geological deposits only the weathered zone (generally 2m to 10m deep) is likely to have a significant quantity of sulphates present. By the same processes, sulphates can be present in contaminated ground. Internal corro-sion in concrete sewers will be, in large measure, due to the presence of sulphides and sulphates at certain horizons dependent on the level of sewer utilisation. Elevated temperatures will contribute to this corrosion.Ammonium sulphate is known to be one of the salts most aggressive to concrete. However, there is no evidence that harmful concentrations occur in natural soils.Sulphate ions primarily attack the concrete material and not the embedded metals. They are transported into the concrete in water or in unsaturated ground, by diffusion. The attackcan sometimes result in expansion and/or loss of strength. Two forms of sulphate attack are known; the conventional type leading to the formation of gypsum and ettringite, and a more recently identified type produ-cing thaumasite. Both may occur together.Constituents of concrete may contain some sulphates and the concrete may be contaminated by external sources present in the ground in the vicinity of the tunnel or within the tunnel.Damage to concrete from conventional sulphate reaction will only normally occur when water, sulphates or sulphides are all present. For a thaumasite-producing sulphate reaction, in addition to water and sulphate or sulphides, calcium silicate hydrate needs to be present in the cement matrix, together with calcium carbonate. In addition, the temperature has to be relatively low (generally less than 15 °C).Conventional sulphate attack occurs when sulphate ions react with calcium hydroxide to form gypsum (calcium sulphate), which in turn reacts with calcium aluminate to form ettringite. Sulphate resisting cements have a low level of calcium aluminate so reducing the extent of the reaction. The formation of gypsum and ettringite results in expansion and disruption of the concrete.Sulphate attack, which results in the mineral thaumasite, is a reaction between the calcium silicate hydrate, carbonate and sulphate ions. Calcium silicate hydrate forms the main binding agent in Portland cement, so this form of attack weakens the con-crete and, in advanced cases, the cement paste matrix is eventually reduced to a mushy, incohesive white mass. Sulphate resisting cements are still vulnerable to this type of attack.Current code and standard recommendations to reduce sulphate attack are based on the combination of concrete grade. Future code requirements will also consider aggregate type. There are also limits set on the total sulphate content of the concrete mix but, at present, not on aggregates, the recommendations of BRE Digest 363 1996 should be followed for any design.AcidsAcid attack can come from external sources, that are present in the ground in the vicinity of the tunnel, or from within the tunnel. Groundwater may be acidic due to the presence of humic acid (which results from the decay of organic matter), carbonic acid or sulphuric acid. The first two will not produce a pH below 3.5. Residual pockets of sulphuric (natural andpollution), hydrochloric or nitric acid may be found on some sites, particularly those used for industrial waste. All can produce pH values below 3.5. Carbonic acid will also be formed when carbon dioxide dissolves in water.Concrete subject to the action of highly mobile acidic water is vulnerable to rapid deterioration. Acidic ground waters that are not mobile appear to have little effect on buried concrete.Acid attack will affect both the lining material and other embedded metals. The action of acids on concrete is to dissolve the cement hydrates and, also in the case of aggregate with high calcium carbonate content, much of the aggregate. In the case of concrete with siliceous gravel, granite or basalt aggregate the sur-face attack will produce an exposed aggregate finish. Limestone aggregates give a smoother finish. The rate of attack depends more on the rate of movement of the water over the surface and the quality of the concrete, than on the type of cement or aggregate.Only a very high density, relatively impermeable concrete will be resistant for any period of time without surface protection. Damage to concrete will only normally occur when mobile water conditions are present.Current code and standard recommendations to reduce acid attack are based on the concrete grade (defined by cement content and type, water/cement ratio and strength). As cement type is not significant in resisting acid attack, future code requirements will put no restrictions on the type used.(2) Alkali Silica Reaction (ASR)Some aggregates contain particular forms of silica that may be susceptible to attack by alkalis originat-ing from the cement or other sources.There are limits to the reactive alkali content of the concrete mix, and also to using a combination of aggregates likely to be unreactive. Damage to concrete will only normally occur when there is a high moisture level within the concrete, there is a high reactivity alkali concrete content or another source of reactive alkali, and the aggregate contains an alkali-reactive constituent. Current code and standard recommendations to reduce ASR are based on limiting the reactive alkali content of the concrete mix, the recommendations of BRE 330 1999 should be followed for any design.(3) Physical processesVarious mechanical processes including freeze-thaw action, impact, abrasion and cracking can cause concrete damage.Freeze-thawConcretes that receive the most severe exposure to freezing and thawing are those which are saturated during freezing weather, such as tunnel portals and shafts.Deterioration may occur due to ice formation in saturated con-crete. In order for internal stresses to be induced by ice formation, about 90% or more by volume of pores must be filled with water. This is because the increase in volume when water turns to ice is about 8% by volume.Air entrainment in concrete can enable concrete to adequately resist certain types of freezing and thawing deterioration, provided that a high quality paste matrix and a frost-resistant aggregate are used.Current code and standard recommendations to reduce freeze- thaw attack are based on introducing an air entrainment agent when the concrete is below a certain grade. It should be noted that the inclusion of air will reduce the compressive strength of the concrete.ImpactAdequate behaviour under impact load can generally be achieved by specifying concrete cube compressive strengths together with section size, reinforcement and/or fibre content. Tensile capacity may also be important, particularly for concrete without reinforcement.AbrasionThe effects of abrasion depend on the exact cause of the wear. When specifying concrete for hydraulic abrasion in hydraulic applications, the cube compressive strength of the concrete is the principal controlling factor.CrackingThe control of cracks is a function of the strength of concrete, the cover, the spacing, size and position of reinforce-ment, and the type and frequency of the induced stress. When specifying concrete cover there is a trade-off between additional protection from external chloride attack to the reinforcement, and reduction in overall strength of the lining.4.5.3 Protective systemsAdequate behaviour within the environment is achieved by specify-ing concrete to thebest of current practice in workmanship and materials. Protection of concrete surfaces is recommended in codes and standards when the level of aggression from chemicals exceeds a maximum specified limit. Various types of surface protection include coatings, waterproof barriers and a sacrificial layer.(1) CoatingsCoatings have changed over the years, with tar and cut-back bitumens being less popular, and replaced by rubberised bitumen emulsions and epoxy resins. The fire hazard associated with bituminous coatings has limited their use to the extrados of the lining in recent times. The risk of damage to coat-ings during construction operations should be considered.(2) Waterproof barriersThe requirements for waterproof barriers are similar to those of coatings. Sheet materials are commonly used, including plastic and bituminous membranes. Again, the use of bituminous materials should be limited to the extrados.(3) Sacrificial layerThis involves increasing the thickness of the concrete to absorb all the aggressive chemicals in the sacrificial outer layer. However, use of this measure may not be appropriate in circumstances where the surface of the concrete must remain sound, for example joint surfaces in segmental linings.(4) Detailing of precast concrete segmentsThe detailing of the ring plays an important role in the success of the design and performance of the lining throughout its design life. The ring details should be designed with consideration given to casting methods and behaviour in place. Some of the more important considerations are as follows.4.5.5 Codes and standardsBuilding Research Establishment (BRE) Digest 330: 1999 (Building Research Establishment, 1999), Building Research Establishment (BRE) Digest 363: 1996 (Building Research Establishment, 1996),BRE Special Digest 1 (Building Research Establishment, 2003) and British Standard BSEN 206-1: 2000 (British Standards Institution, 2003) are the definitive reference points for designing concrete mixes which are supplemented by BS8110 (British Standards Institution, 1997) and BS 8007 (British Standards Institution, 1987). BSEN 206-1 also references Eurocode 2: Design of Concrete Structures (European Commission,1992).(1) European standardsEN206 Concrete - Performance, Production and Conformity, and DD ENV 1992-1-1 {Eurocode 2: Design of Concrete Structures Part 1) (British Standards Institution, 2003 and European Commission,1992).Within the new European standard EN 206 Concrete - Perfor-mance, Production and Conformity,durability of concrete will rely on prescriptive specification of minimum grade, minimum binder content and maximum water/binder ratio for a series of defined environmental classes. This standard includes indicative values of specification parameters as it is necessary to cover the wide range of environments and cements used in the EU member states.Cover to reinforcement is specified in DD ENV 1992-1 -1 (Eurocode 2: Design of Concrete Structures Part 1 - European Commission, 1992).(2) BRE 330:1999This UK Building Research Establishment code (Building Research Establishment, 1999) gives the back-ground to ASR as well as detailed guidance for minimising the risks of ASR and examples of the methods to be used in new construction.(3) Reinforcement BRE 363: 1996This UK Building Research Establishment code (Building Research Establishment, 1996) discusses the factors responsible for sulphate and acid attack on concrete below ground level and recommends the type of cement and quality of concrete to provide resistance to attack. (4) BRE Special Digest 1This special digest (Building Research Establishment, 2003) was published following the recent research into the effects of thaumasite on concrete. It replaces BRE Digest 363: 2001. Part 4 is of specific reference to precast concrete tunnel linings.(5) BS 8110/BS 8007Guidance is given on minimum grade, minimum cement and maximum w/c ratio for different conditions of exposure. Exposure classes are mild, moderate, severe, very severe, most severe and abrasive related to chloride attack, carbonation and freeze-thaw. The relationship between cover of the reinforcement and concrete quality is also given together with crack width (British Standards Institution, 1987a and 1997a).(6) OthersChemically aggressive environments are classified in specialist standards. For information on industrial acids and made up ground, reference may be made to a specialist producer of acid resistant finishes or BS 8204-2 (British Standards Institu-tion, 1999). For silage attack, reference should be made to the UK Ministry of Agriculture, Fisheries and Food.中文翻译4.1 定义耐用的衬砌指的是在衬砌的预期服务寿命内提供令人满意的工作环境。
文学作品中英文对照外文翻译文献

文学作品中英文对照外文翻译文献
本文旨在汇总文学作品中的英文和中文对照外文翻译文献,共有以下几篇:
1. 《傲慢与偏见》
翻译:英文原版名为“Pride and Prejudice”,中文版由钱钟书翻译。
该小说是英国作家简.奥斯汀的代表作之一,描绘了19世纪英国中上层社会的生活和爱情故事。
2. 《了不起的盖茨比》
翻译:英文原版名为“The Great Gatsby”,中文版由杨绛翻译。
小说主要讲述了一个居住在纽约长岛的年轻白领盖茨比为了追求他的旧爱黛西而付出的努力,是20世纪美国文学的经典之作。
3. 《麦田里的守望者》
翻译:英文原版名为“The Catcher in the Rye”,中文版由施蛰存翻译。
该小说主人公霍尔顿是美国现代文学中最为知名的反英雄形象之一,作品深刻地揭示了青少年内心的孤独和矛盾。
4. 《1984》
翻译:英文原版名为“1984”,中文版由李敬瑞翻译。
该小说是英国作家乔治.奥威尔的代表作之一,描绘了一个虚构的极权主义社会。
以上是部分文学作品的中英文对照外文翻译文献,可以帮助读者更好地理解和学习相关文学作品。
英文参考文献及翻译

(二 零 一 五 年 六 月英文参考资料翻译 题 目:多层横机针织织物面料的开发 ****:*** 学 院:轻工与纺织学院 系 别:服装设计与工程系 专 业:服装设计与工程 班 级:服装设计与工程11-1班 指导教师:邱莉 讲师学校代码: 10128学 号:************Development of Multi-Layer Fabric on a Flat KnittingMachineAbstractThe loop transfer technique was used to develop the a splitable multi layer knit fabric on a computerized multi gauge flat knitting machine. The fabric consists of three layers: inner-single jersey, middle-1X1 purl and, outer-single jersey. By varying the loop length the multi layer knit fabric samples were produced,namely CCC-1, CCC-2 and CCC-3. The above multi layer fabrics were knitted using 24s Ne cotton of combined yarn feed in feeders 3, 4, and 4 respectively. The influence of loop length on wpc,cpc and tightness factor was studied using linear regression. The water vapor and air permeability properties of the produced multi layer knit fabrics were studied using ANOVA. The change of raw material in three individual layers could be useful for the production of fabric for functional, technical, and industrial applications.Keywords: multi layer fabric, loop length, loop transfer, permeability prope1.INTRODUCTIONcapable of manufacturing engineered fabric in two dimensional, three dimensional of bi-layer, and multilayer knit fabrics. The feature includes individual needle selection, the presences of holding down sinkers, racking, transfer, and adapted feeding devices combined with CAD. The layered fabrics are more suitable for functional and technical applications than single layer fabrics. These fabrics may be non-splitable (branching knit structure, plated fabric, spacer fabric) and splitable (bilayer,multilayer). The functional knitted structure of two different fabric layers based on different textile components (hydrophobic and hydrophilic textile material) is used to produce leisure wear, sportswear and protective clothing to improve the comfort. The separation layer is polypropylene in contact with skin, and the absorption layer will be the outside layer when cotton is used for the knit fabric .Garments made of plant branch structured knit fabrics facilitate the transport of sweat from the skin to the outer layer of the fabric very fast and make the wearer more comfortable. Qing Chen et al .reported that the branching knitted fabrics made from different combinations of polyester/cotton yarns with varying linear density and various knitted structure produced in rib machine improved water transport properties. The Moisture comfort characteristics of plated knitted fabric was good, reported by the authors findings from three layer plated fabric of cotton (40s), lycra (20 D) and superfine polypropylene (55 dtex/72 f) was used as a raw material in face, middle and ground layers respectively . The applications in wearable electronics for the multilayer fabric are wider. Novel multi-functional fibers structure include three layered knit fabric embedded with electronics for health monitoring. The knit fabric consists of 5% spandex and 95% polypropylene in –1stlayer, -2nd layer composed of metal fibers to conduct electric current to embedded electronics + PCM and the 3rd layer is composed of highly hydrophilic cotton . In flat knitting, two surface (U-,V-,M-,X- and Y-shaped) and three surface layers (U-face to face, U- zigzag and X-shaped) spacer fabric were developed from hybrid glass filament and polypropylene for light weight composite applications.HCebulla et al produced three dimensional performs like open cuboid and spherical shell using the needle parking method. The focus was in the area of individual needle selection in the machine for the production of near net-shape performs. The multi layered sandwich knit fabric of rectangular core structure (connecting layer: rib), triangular core structure (connecting layer:interlock), honeycomb core structure (connecting layer: jersey combined with rib), triple face structure 1 (connecting layers are not alternated), Triple face structure 2 (connecting layers are alternated) were developed on a flat-knitting machine for technical applications .In this direction the flat knitting machine was elected to produce splittable multi layer knit fabric with varying loop length and loop transfer techniques. The influence of loop length on wpc, cpc and tightness factor was studied for the three individual layers in the fabric. The important breathability properties of the fabric such as water vapor permeability and air permeability were studied. The production technique used for this fabric has wide applications such as in functional wear, technical textiles, and wearable textiles.2.MATERIALS AND METHODSThe production of multi-layer knit fabrics such as CCC-1, CCC-2 and CCC-3 cotton yarn with the linear density of 24s Ne was fed in the knit feeder.For layered fabric development, a computerized multi gauge flat knitting machine and combined yarn feed was selected like 3, 4 and 4 respectively, shown in the Table I. Q. M. Wang and H. Hu [9] was the selected yarn feed in the range of 4 –10 for the production of glass fiber yarn composite reinforcement on a flat knitting machine. The intermediate between integral and fully fashioned garment was produced using the “half gauging orneedle parking” technique. The use of only alternate needles on each bed of the flat knitting machine was used for stitch formation, The remaining needles did not participate in stitch formation in the same course,but the loops formed were kept in the needle head until employed for stitch formation again, thus freeing needles to be used as temporary parking places for loop transfer . For production of layered fabric and fully fashionedgarment, the loop transfer stitch is essential part of the panel. The running-on bars were used for transferring of loops either by hand or automatically from one needle to another needle depending on the machine. The principle of the loop transfer is shown in the Figure1.FIGURE. 1. Principle of loop transfer.(a)The delivering needle is raised by a cam in the carriage. The loop is stretched over the transfer spring. (b)The receiving needle is raised slightly from its needle bed. The receiving needle enters the transfer spring of delivering needle and penetrates the loop that will be transferred. (c)The delivering needle retreats leaving the loop on the receiving needle. The transfer spring opens to permits the receiving needleto move back from its closure. Finally, loop transference is completed.TABLE I. Machine & Fabric parameters.2.1 Fabric DevelopmentUsing STOLL M1.PLUS 5.1.034 software the needle selection pattern was simulated is shown in Figure 2.In Figure3, feeder 1, 2 and 3 are used for the formation of three layer fabric (inner-single jersey,middle-1X1 purl and outer-single jersey) respectively. With knit stitches the outer and inner layer knit fabrics are formed by means of selecting the alternate working needles in each bed. But the middle layer fabric is formed by free needles in each bed with the help of loop transfer and knit stitches.FIGURE 2. Selection of Machine & pattern parameters.FIGURE 3. Needle diagram for the multi-layer knit fabric.2.2TESTINGThe produced multi layer knit fabric was given a relaxation process and the following tests were carried out. The knitted fabric properties are given in Table II. and the cross sectional view of the fabrics is shown in Figure 4.FIGURE 4. Cross Sectional view of Multi-layer knit fabric.2.3 Stitch DensityThe courses and wale density of the samples in outer,middle and inner layer were calculated individually in the direction of the length and width of the fabric.The average density per square centimeter was taken for the discussion.2.4 Loop LengthIn outer, middle and inner layers of various combinations in multi layer fabric, 20 loops in a course were unraveled and the length of yarn in cm (LT) was measured. From the LT value the stitch length/loop length was measured by usingStitch length/loop length in cm (L) = (LT)/20 (1)The average loop length (cm) was taken and reported in Table II.2.5 Tightness Factor (K)The tightness of the knits was characterized by the tightness factor (K). It is known that K is a ratio of the area covered by the yarns in one loop to the area occupied by the loop. It is also an indication of the relative looseness or tightness of the knitted structure. For determination of TF the following formula was usedTightness Factor (K) = √T/l (2)Where T= Yarn linear density in Tex, l = loop length of fabric in cm. The TF of three layers (outer, midd le, and inner) were calculated separately is given in Table II.TABLE II. Multi-layer knitted fabric parameters3. RESULTS AND DISCUSSIONThe water vapor permeability of the multi layer knit fabrics were analyzed and shown in Figure 8. It can be observed that a linear trend is followed between water vapor permeability and loop length. With increases in loop length, there is less resistance per unit area, so, the permeability property of the fabric also increased. Anova data show increasesin loop length yield a significant difference in the water vapor permeability of the multi fabrics [F (2, 15) > Fcrit]. The regression analysis was done between CCC-1and CCC-2 and CCC-2 and CCC-3 for studying the influence of the number of yarn feeds.R2 values shows 0.755 for both comparisons. The water vapor permeability of the fabric is highly influenced by the length of the loop in the fabric and less by the number of yarn feed in the fabric.The air permeability of the multi layer knit fabricswas analyzed and is shown in Figure 8. It can be observed that the air permeability of the CCC-1,CCC-2, and CCC-3 fabrics is linear with loop length.FIGURE 8. Water Vapor Permeability & Air Permeability of fabric.As loop length in the fabric increased, air permeability also increased. The Anova- single factor analysis also proves that there is a significant difference at 5 % significance level between the air permeability characteristics of multi layer fabrics produced from various loop length [F (2, 15) > F crit] shown in Table IV. To study the influence of the combination yarn feed, the regression analysis was done between CCC-1 and CCC-2 andCCC-2 and CCC-3. It shows R2 =0.757. So, the air permeability of the fabric is may not be dependent on the number of yarns fed, but more influenced by the loop length.4. CONCLUSIONSIn flat knitting machine using a loop transfer technique, multi layer fabrics were developed with varying loop length. With respect to loop length, the loop density and tightness factor were analyzed.Based on analysis the following conclusions were made:TABLE III. Permeability Characteristics of Multi-layer knit fabrics.TABLE IV. ANOVA single factor data analysis.For multi-layer fabric produced with various basic structures (single jersey and 1x1 purl), the change of loop length between the layers has no significant difference.The wpc and cpc had an inverse relationship with the loop length produced from CCC combination multilayer fabrics.The combination yarn feed is an important factor affecting the tightness factor and loop lengths of the individual layers in knitted fabrics.The water vapor and air permeability properties of the multi layer knit fabrics were highly influenced by the change in loop length followed by the combination yarn feed.多层横机针织织物面料的开发摘要循环传输技术被用于开发一种计算机化多针距的横机上的一个多层编织织物。
外文文献翻译原文+译文

外文文献翻译原文Analysis of Con tin uous Prestressed Concrete BeamsChris BurgoyneMarch 26, 20051、IntroductionThis conference is devoted to the development of structural analysis rather than the strength of materials, but the effective use of prestressed concrete relies on an appropriate combination of structural analysis techniques with knowledge of the material behaviour. Design of prestressed concrete structures is usually left to specialists; the unwary will either make mistakes or spend inordinate time trying to extract a solution from the various equations.There are a number of fundamental differences between the behaviour of prestressed concrete and that of other materials. Structures are not unstressed when unloaded; the design space of feasible solutions is totally bounded;in hyperstatic structures, various states of self-stress can be induced by altering the cable profile, and all of these factors get influenced by creep and thermal effects. How were these problems recognised and how have they been tackled?Ever since the development of reinforced concrete by Hennebique at the end of the 19th century (Cusack 1984), it was recognised that steel and concrete could be more effectively combined if the steel was pretensioned, putting the concrete into compression. Cracking could be reduced, if not prevented altogether, which would increase stiffness and improve durability. Early attempts all failed because the initial prestress soon vanished, leaving the structure to be- have as though it was reinforced; good descriptions of these attempts are given by Leonhardt (1964) and Abeles (1964).It was Freyssineti’s observations of the sagging of the shallow arches on three bridges that he had just completed in 1927 over the River Allier near Vichy which led directly to prestressed concrete (Freyssinet 1956). Only the bridge at Boutiron survived WWII (Fig 1). Hitherto, it had been assumed that concrete had a Young’s modulus which remained fixed, but he recognised that the de- ferred strains due to creep explained why the prestress had been lost in the early trials. Freyssinet (Fig. 2) also correctly reasoned that high tensile steel had to be used, so that some prestress would remain after the creep had occurred, and alsothat high quality concrete should be used, since this minimised the total amount of creep. The history of Freyssineti’s early prestressed concrete work is written elsewhereFigure1:Boutiron Bridge,Vic h yFigure 2: Eugen FreyssinetAt about the same time work was underway on creep at the BRE laboratory in England ((Glanville 1930) and (1933)). It is debatable which man should be given credit for the discovery of creep but Freyssinet clearly gets the credit for successfully using the knowledge to prestress concrete.There are still problems associated with understanding how prestressed concrete works, partly because there is more than one way of thinking about it. These different philosophies are to some extent contradictory, and certainly confusing to the young engineer. It is also reflected, to a certain extent, in the various codes of practice.Permissible stress design philosophy sees prestressed concrete as a way of avoiding cracking by eliminating tensile stresses; the objective is for sufficient compression to remain after creep losses. Untensionedreinforcement, which attracts prestress due to creep, is anathema. This philosophy derives directly from Freyssinet’s logic and is primarily a working stress concept.Ultimate strength philosophy sees prestressing as a way of utilising high tensile steel as reinforcement. High strength steels have high elastic strain capacity, which could not be utilised when used as reinforcement; if the steel is pretensioned, much of that strain capacity is taken out before bonding the steel to the concrete. Structures designed this way are normally designed to be in compression everywhere under permanent loads, but allowed to crack under high live load. The idea derives directly from the work of Dischinger (1936) and his work on the bridge at Aue in 1939 (Schonberg and Fichter 1939), as well as that of Finsterwalder (1939). It is primarily an ultimate load concept. The idea of partial prestressing derives from these ideas.The Load-Balancing philosophy, introduced by T.Y. Lin, uses prestressing to counter the effect of the permanent loads (Lin 1963). The sag of the cables causes an upward force on the beam, which counteracts the load on the beam. Clearly, only one load can be balanced, but if this is taken as the total dead weight, then under that load the beam will perceive only the net axial prestress and will have no tendency to creep up or down.These three philosophies all have their champions, and heated debates take place between them as to which is the most fundamental.2、Section designFrom the outset it was recognised that prestressed concrete has to be checked at both the working load and the ultimate load. For steel structures, and those made from reinforced concrete, there is a fairly direct relationship between the load capacity under an allowable stress design, and that at the ultimate load under an ultimate strength design. Older codes were based on permissible stresses at the working load; new codes use moment capacities at the ultimate load. Different load factors are used in the two codes, but a structure which passes one code is likely to be acceptable under the other.For prestressed concrete, those ideas do not hold, since the structure is highly stressed, even when unloaded. A small increase of load can cause some stress limits to be breached, while a large increase in load might be needed to cross other limits. The designer has considerable freedom to vary both the working load and ultimate load capacities independently; both need to be checked.A designer normally has to check the tensile and compressive stresses, in both the top and bottom fibre of the section, for every load case. The critical sections are normally, but not always, the mid-span and the sections over piers but other sections may become critical ,when the cable profile has to be determined.The stresses at any position are made up of three components, one of which normally has a different sign from the other two; consistency of sign convention is essential.If P is the prestressing force and e its eccentricity, A and Z are the area of the cross-section and its elastic section modulus, while M is the applied moment, then where ft and fc are the permissible stresses in tension and compression.c e t f ZM Z P A P f ≤-+≤Thus, for any combination of P and M , the designer already has four in- equalities to deal with.The prestressing force differs over time, due to creep losses, and a designer isusually faced with at least three combinations of prestressing force and moment;• the applied moment at the time the prestress is first applied, before creep losses occur,• the maximum applied moment after creep losses, and• the minimum applied moment after creep losses.Figure 4: Gustave MagnelOther combinations may be needed in more complex cases. There are at least twelve inequalities that have to be satisfied at any cross-section, but since an I-section can be defined by six variables, and two are needed to define the prestress, the problem is over-specified and it is not immediately obvious which conditions are superfluous. In the hands of inexperienced engineers, the design process can be very long-winded. However, it is possible to separate out the design of the cross-section from the design of the prestress. By considering pairs of stress limits on the same fibre, but for different load cases, the effects of the prestress can be eliminated, leaving expressions of the form:rangestress e Perm issibl Range Mom entZ These inequalities, which can be evaluated exhaustively with little difficulty, allow the minimum size of the cross-section to be determined.Once a suitable cross-section has been found, the prestress can be designed using a construction due to Magnel (Fig.4). The stress limits can all be rearranged into the form:()M fZ PA Z e ++-≤1 By plotting these on a diagram of eccentricity versus the reciprocal of the prestressing force, a series of bound lines will be formed. Provided the inequalities (2) are satisfied, these bound lines will always leave a zone showing all feasible combinations of P and e. The most economical design, using the minimum prestress, usually lies on the right hand side of the diagram, where the design is limited by the permissible tensile stresses.Plotting the eccentricity on the vertical axis allows direct comparison with the crosssection, as shown in Fig. 5. Inequalities (3) make no reference to the physical dimensions of the structure, but these practical cover limits can be shown as wellA good designer knows how changes to the design and the loadings alter the Magnel diagram. Changing both the maximum andminimum bending moments, but keeping the range the same, raises and lowers the feasible region. If the moments become more sagging the feasible region gets lower in the beam.In general, as spans increase, the dead load moments increase in proportion to the live load. A stage will be reached where the economic point (A on Fig.5) moves outside the physical limits of the beam; Guyon (1951a) denoted the limiting condition as the critical span. Shorter spans will be governed by tensile stresses in the two extreme fibres, while longer spans will be governed by the limiting eccentricity and tensile stresses in the bottom fibre. However, it does not take a large increase in moment ,at which point compressive stresses will govern in the bottom fibre under maximum moment.Only when much longer spans are required, and the feasible region moves as far down as possible, does the structure become governed by compressive stresses in both fibres.3、Continuous beamsThe design of statically determinate beams is relatively straightforward; the engineer can work on the basis of the design of individual cross-sections, as outlined above. A number of complications arise when the structure is indeterminate which means that the designer has to consider, not only a critical section,but also the behaviour of the beam as a whole. These are due to the interaction of a number of factors, such as Creep, Temperature effects and Construction Sequence effects. It is the development of these ideas whichforms the core of this paper. The problems of continuity were addressed at a conference in London (Andrew and Witt 1951). The basic principles, and nomenclature, were already in use, but to modern eyes concentration on hand analysis techniques was unusual, and one of the principle concerns seems to have been the difficulty of estimating losses of prestressing force.3.1 Secondary MomentsA prestressing cable in a beam causes the structure to deflect. Unlike the statically determinate beam, where this motion is unrestrained, the movement causes a redistribution of the support reactions which in turn induces additional moments. These are often termed Secondary Moments, but they are not always small, or Parasitic Moments, but they are not always bad.Freyssinet’s bridge across the Marne at Luzancy, started in 1941 but not completed until 1946, is often thought of as a simply supported beam, but it was actually built as a two-hinged arch (Harris 1986), with support reactions adjusted by means of flat jacks and wedges which were later grouted-in (Fig.6). The same principles were applied in the later and larger beams built over the same river.Magnel built the first indeterminate beam bridge at Sclayn, in Belgium (Fig.7) in 1946. The cables are virtually straight, but he adjusted the deck profile so that the cables were close to the soffit near mid-span. Even with straight cables the sagging secondary momentsare large; about 50% of the hogging moment at the central support caused by dead and live load.The secondary moments cannot be found until the profile is known but the cablecannot be designed until the secondary moments are known. Guyon (1951b) introduced the concept of the concordant profile, which is a profile that causes no secondary moments; es and ep thus coincide. Any line of thrust is itself a concordant profile.The designer is then faced with a slightly simpler problem; a cable profile has to be chosen which not only satisfies the eccentricity limits (3) but is also concordant. That in itself is not a trivial operation, but is helped by the fact that the bending moment diagram that results from any load applied to a beam will itself be a concordant profile for a cable of constant force. Such loads are termed notional loads to distinguish them from the real loads on the structure. Superposition can be used to progressively build up a set of notional loads whose bending moment diagram gives the desired concordant profile.3.2 Temperature effectsTemperature variations apply to all structures but the effect on prestressed concrete beams can be more pronounced than in other structures. The temperature profile through the depth of a beam (Emerson 1973) can be split into three components for the purposes of calculation (Hambly 1991). The first causes a longitudinal expansion, which is normally released by the articulation of the structure; the second causes curvature which leads to deflection in all beams and reactant moments in continuous beams, while the third causes a set of self-equilibrating set of stresses across the cross-section.The reactant moments can be calculated and allowed-for, but it is the self- equilibrating stresses that cause the main problems for prestressed concrete beams. These beams normally have high thermal mass which means that daily temperature variations do not penetrate to the core of the structure. The result is a very non-uniform temperature distribution across the depth which in turn leads to significant self-equilibrating stresses. If the core of the structure is warm, while the surface is cool, such as at night, then quite large tensile stresses can be developed on the top and bottom surfaces. However, they only penetrate a very short distance into the concrete and the potential crack width is very small. It can be very expensive to overcome the tensile stress by changing the section or the prestress。
外文文献及翻译

((英文参考文献及译文)二〇一六年六月本科毕业论文 题 目:STATISTICAL SAMPLING METHOD, USED INTHE AUDIT学生姓名:王雪琴学 院:管理学院系 别:会计系专 业:财务管理班 级:财管12-2班 学校代码: 10128 学 号: 201210707016Statistics and AuditRomanian Statistical Review nr. 5 / 2010STATISTICAL SAMPLING METHOD, USED IN THE AUDIT - views, recommendations, fi ndingsPhD Candidate Gabriela-Felicia UNGUREANUAbstractThe rapid increase in the size of U.S. companies from the earlytwentieth century created the need for audit procedures based on the selectionof a part of the total population audited to obtain reliable audit evidence, tocharacterize the entire population consists of account balances or classes oftransactions. Sampling is not used only in audit – is used in sampling surveys,market analysis and medical research in which someone wants to reach aconclusion about a large number of data by examining only a part of thesedata. The difference is the “population” from which the sample is selected, iethat set of data which is intended to draw a conclusion. Audit sampling appliesonly to certain types of audit procedures.Key words: sampling, sample risk, population, sampling unit, tests ofcontrols, substantive procedures.Statistical samplingCommittee statistical sampling of American Institute of CertifiedPublic Accountants of (AICPA) issued in 1962 a special report, titled“Statistical sampling and independent auditors’ which allowed the use ofstatistical sampling method, in accordance with Generally Accepted AuditingStandards (GAAS). During 1962-1974, the AICPA published a series of paperson statistical sampling, “Auditor’s Approach to Statistical Sampling”, foruse in continuing professional education of accountants. During 1962-1974,the AICPA published a series of papers on statistical sampling, “Auditor’sApproach to Statistical Sampling”, for use in continuing professional educationof accountants. In 1981, AICPA issued the professional standard, “AuditSampling”, which provides general guidelines for both sampling methods,statistical and non-statistical.Earlier audits included checks of all transactions in the period coveredby the audited financial statements. At that time, the literature has not givenparticular attention to this subject. Only in 1971, an audit procedures programprinted in the “Federal Reserve Bulletin (Federal Bulletin Stocks)” includedseveral references to sampling such as selecting the “few items” of inventory.Statistics and Audit The program was developed by a special committee, which later became the AICPA, that of Certified Public Accountants American Institute.In the first decades of last century, the auditors often applied sampling, but sample size was not in related to the efficiency of internal control of the entity. In 1955, American Institute of Accountants has published a study case of extending the audit sampling, summarizing audit program developed by certified public accountants, to show why sampling is necessary to extend the audit. The study was important because is one of the leading journal on sampling which recognize a relationship of dependency between detail and reliability testing of internal control.In 1964, the AICPA’s Auditing Standards Board has issued a report entitled “The relationship between statistical sampling and Generally Accepted Auditing Standards (GAAS)” which illustrated the relationship between the accuracy and reliability in sampling and provisions of GAAS.In 1978, the AICPA published the work of Donald M. Roberts,“Statistical Auditing”which explains the underlying theory of statistical sampling in auditing.In 1981, AICPA issued the professional standard, named “Audit Sampling”, which provides guidelines for both sampling methods, statistical and non-statistical.An auditor does not rely solely on the results of a single procedure to reach a conclusion on an account balance, class of transactions or operational effectiveness of the controls. Rather, the audit findings are based on combined evidence from several sources, as a consequence of a number of different audit procedures. When an auditor selects a sample of a population, his objective is to obtain a representative sample, ie sample whose characteristics are identical with the population’s characteristics. This means that selected items are identical with those remaining outside the sample.In practice, auditors do not know for sure if a sample is representative, even after completion the test, but they “may increase the probability that a sample is representative by accuracy of activities made related to design, sample selection and evaluation” [1]. Lack of specificity of the sample results may be given by observation errors and sampling errors. Risks to produce these errors can be controlled.Observation error (risk of observation) appears when the audit test did not identify existing deviations in the sample or using an inadequate audit technique or by negligence of the auditor.Sampling error (sampling risk) is an inherent characteristic of the survey, which results from the fact that they tested only a fraction of the total population. Sampling error occurs due to the fact that it is possible for Revista Română de Statistică nr. 5 / 2010Statistics and Auditthe auditor to reach a conclusion, based on a sample that is different from the conclusion which would be reached if the entire population would have been subject to audit procedures identical. Sampling risk can be reduced by adjusting the sample size, depending on the size and population characteristics and using an appropriate method of selection. Increasing sample size will reduce the risk of sampling; a sample of the all population will present a null risk of sampling.Audit Sampling is a method of testing for gather sufficient and appropriate audit evidence, for the purposes of audit. The auditor may decide to apply audit sampling on an account balance or class of transactions. Sampling audit includes audit procedures to less than 100% of the items within an account balance or class of transactions, so all the sample able to be selected. Auditor is required to determine appropriate ways of selecting items for testing. Audit sampling can be used as a statistical approach and a non- statistical.Statistical sampling is a method by which the sample is made so that each unit consists of the total population has an equal probability of being included in the sample, method of sample selection is random, allowed to assess the results based on probability theory and risk quantification of sampling. Choosing the appropriate population make that auditor’ findings can be extended to the entire population.Non-statistical sampling is a method of sampling, when the auditor uses professional judgment to select elements of a sample. Since the purpose of sampling is to draw conclusions about the entire population, the auditor should select a representative sample by choosing sample units which have characteristics typical of that population. Results will not extrapolate the entire population as the sample selected is representative.Audit tests can be applied on the all elements of the population, where is a small population or on an unrepresentative sample, where the auditor knows the particularities of the population to be tested and is able to identify a small number of items of interest to audit. If the sample has not similar characteristics for the elements of the entire population, the errors found in the tested sample can not extrapolate.Decision of statistical or non-statistical approach depends on the auditor’s professional judgment which seeking sufficient appropriate audits evidence on which to completion its findings about the audit opinion.As a statistical sampling method refer to the random selection that any possible combination of elements of the community is equally likely to enter the sample. Simple random sampling is used when stratification was not to audit. Using random selection involves using random numbers generated byRomanian Statistical Review nr. 5 / 2010Statistics and Audit a computer. After selecting a random starting point, the auditor found the first random number that falls within the test document numbers. Only when the approach has the characteristics of statistical sampling, statistical assessments of risk are valid sampling.In another variant of the sampling probability, namely the systematic selection (also called random mechanical) elements naturally succeed in office space or time; the auditor has a preliminary listing of the population and made the decision on sample size. “The auditor calculated a counting step, and selects the sample element method based on step size. Step counting is determined by dividing the volume of the community to sample the number of units desired. Advantages of systematic screening are its usability. In most cases, a systematic sample can be extracted quickly and method automatically arranges numbers in successive series.”[2].Selection by probability proportional to size - is a method which emphasizes those population units’recorded higher values. The sample is constituted so that the probability of selecting any given element of the population is equal to the recorded value of the item;Stratifi ed selection - is a method of emphasis of units with higher values and is registered in the stratification of the population in subpopulations. Stratification provides a complete picture of the auditor, when population (data table to be analyzed) is not homogeneous. In this case, the auditor stratifies a population by dividing them into distinct subpopulations, which have common characteristics, pre-defined. “The objective of stratification is to reduce the variability of elements in each layer and therefore allow a reduction in sample size without a proportionate increase in the risk of sampling.” [3] If population stratification is done properly, the amount of sample size to come layers will be less than the sample size that would be obtained at the same level of risk given sample with a sample extracted from the entire population. Audit results applied to a layer can be designed only on items that are part of that layer.I appreciated as useful some views on non-statistical sampling methods, which implies that guided the selection of the sample selecting each element according to certain criteria determined by the auditor. The method is subjective; because the auditor selects intentionally items containing set features him.The selection of the series is done by selecting multiple elements series (successive). Using sampling the series is recommended only if a reasonable number of sets used. Using just a few series there is a risk that the sample is not representative. This type of sampling can be used in addition to other samples, where there is a high probability of occurrence of errors. At the arbitrary selection, no items are selected preferably from the auditor, Revista Română de Statistică nr. 5 / 2010Statistics and Auditthat regardless of size or source or characteristics. Is not the recommended method, because is not objective.That sampling is based on the auditor’s professional judgment, which may decide which items can be part or not sampled. Because is not a statistical method, it can not calculate the standard error. Although the sample structure can be constructed to reproduce the population, there is no guarantee that the sample is representative. If omitted a feature that would be relevant in a particular situation, the sample is not representative.Sampling applies when the auditor plans to make conclusions about population, based on a selection. The auditor considers the audit program and determines audit procedures which may apply random research. Sampling is used by auditors an internal control systems testing, and substantive testing of operations. The general objectives of tests of control system and operations substantive tests are to verify the application of pre-defined control procedures, and to determine whether operations contain material errors.Control tests are intended to provide evidence of operational efficiency and controls design or operation of a control system to prevent or detect material misstatements in financial statements. Control tests are necessary if the auditor plans to assess control risk for assertions of management.Controls are generally expected to be similarly applied to all transactions covered by the records, regardless of transaction value. Therefore, if the auditor uses sampling, it is not advisable to select only high value transactions. Samples must be chosen so as to be representative population sample.An auditor must be aware that an entity may change a special control during the course of the audit. If the control is replaced by another, which is designed to achieve the same specific objective, the auditor must decide whether to design a sample of all transactions made during or just a sample of transactions controlled again. Appropriate decision depends on the overall objective of the audit test.Verification of internal control system of an entity is intended to provide guidance on the identification of relevant controls and design evaluation tests of controls.Other tests:In testing internal control system and testing operations, audit sample is used to estimate the proportion of elements of a population containing a characteristic or attribute analysis. This proportion is called the frequency of occurrence or percentage of deviation and is equal to the ratio of elements containing attribute specific and total number of population elements. WeightRomanian Statistical Review nr. 5 / 2010Statistics and Audit deviations in a sample are determined to calculate an estimate of the proportion of the total population deviations.Risk associated with sampling - refers to a sample selection which can not be representative of the population tested. In other words, the sample itself may contain material errors or deviations from the line. However, issuing a conclusion based on a sample may be different from the conclusion which would be reached if the entire population would be subject to audit.Types of risk associated with sampling:Controls are more effective than they actually are or that there are not significant errors when they exist - which means an inappropriate audit opinion. Controls are less effective than they actually are that there are significant errors when in fact they are not - this calls for additional activities to establish that initial conclusions were incorrect.Attributes testing - the auditor should be defining the characteristics to test and conditions for misconduct. Attributes testing will make when required objective statistical projections on various characteristics of the population. The auditor may decide to select items from a population based on its knowledge about the entity and its environment control based on risk analysis and the specific characteristics of the population to be tested.Population is the mass of data on which the auditor wishes to generalize the findings obtained on a sample. Population will be defined compliance audit objectives and will be complete and consistent, because results of the sample can be designed only for the population from which the sample was selected.Sampling unit - a unit of sampling may be, for example, an invoice, an entry or a line item. Each sample unit is an element of the population. The auditor will define the sampling unit based on its compliance with the objectives of audit tests.Sample size - to determine the sample size should be considered whether sampling risk is reduced to an acceptable minimum level. Sample size is affected by the risk associated with sampling that the auditor is willing to accept it. The risk that the auditor is willing to accept lower, the sample will be higher.Error - for detailed testing, the auditor should project monetary errors found in the sample population and should take into account the projected error on the specific objective of the audit and other audit areas. The auditor projects the total error on the population to get a broad perspective on the size of the error and comparing it with tolerable error.For detailed testing, tolerable error is tolerable and misrepresentations Revista Română de Statistică nr. 5 / 2010Statistics and Auditwill be a value less than or equal to materiality used by the auditor for the individual classes of transactions or balances audited. If a class of transactions or account balances has been divided into layers error is designed separately for each layer. Design errors and inconsistent errors for each stratum are then combined when considering the possible effect on the total classes of transactions and account balances.Evaluation of sample results - the auditor should evaluate the sample results to determine whether assessing relevant characteristics of the population is confirmed or needs to be revised.When testing controls, an unexpectedly high rate of sample error may lead to an increase in the risk assessment of significant misrepresentation unless it obtained additional audit evidence to support the initial assessment. For control tests, an error is a deviation from the performance of control procedures prescribed. The auditor should obtain evidence about the nature and extent of any significant changes in internal control system, including the staff establishment.If significant changes occur, the auditor should review the understanding of internal control environment and consider testing the controls changed. Alternatively, the auditor may consider performing substantive analytical procedures or tests of details covering the audit period.In some cases, the auditor might not need to wait until the end audit to form a conclusion about the effectiveness of operational control, to support the control risk assessment. In this case, the auditor might decide to modify the planned substantive tests accordingly.If testing details, an unexpectedly large amount of error in a sample may cause the auditor to believe that a class of transactions or account balances is given significantly wrong in the absence of additional audit evidence to show that there are not material misrepresentations.When the best estimate of error is very close to the tolerable error, the auditor recognizes the risk that another sample have different best estimate that could exceed the tolerable error.ConclusionsFollowing analysis of sampling methods conclude that all methods have advantages and disadvantages. But the auditor is important in choosing the sampling method is based on professional judgment and take into account the cost / benefit ratio. Thus, if a sampling method proves to be costly auditor should seek the most efficient method in view of the main and specific objectives of the audit.Romanian Statistical Review nr. 5 / 2010Statistics and Audit The auditor should evaluate the sample results to determine whether the preliminary assessment of relevant characteristics of the population must be confirmed or revised. If the evaluation sample results indicate that the relevant characteristics of the population needs assessment review, the auditor may: require management to investigate identified errors and likelihood of future errors and make necessary adjustments to change the nature, timing and extent of further procedures to take into account the effect on the audit report.Selective bibliography:[1] Law no. 672/2002 updated, on public internal audit[2] Arens, A şi Loebbecke J - Controve …Audit– An integrate approach”, 8th edition, Arc Publishing House[3] ISA 530 - Financial Audit 2008 - International Standards on Auditing, IRECSON Publishing House, 2009- Dictionary of macroeconomics, Ed C.H. Beck, Bucharest, 2008Revista Română de Statistică nr. 5 / 2010Statistics and Audit摘要美国公司的规模迅速增加,从第二十世纪初创造了必要的审计程序,根据选定的部分总人口的审计,以获得可靠的审计证据,以描述整个人口组成的帐户余额或类别的交易。
品牌管理参考文献及外文文献翻译-中字4865

品牌管理参考文献及外文文献翻译-中字4865 品牌管理参考文献及外文文献翻译-英语论文英文:2799字中文:4685字:此版本为WORD格式,下载后可随便修改,:品牌管理参考文献及外文文献翻译-英语论文绪论新经济时代的大型企业面临的最主要问题是如何建立和管理企业的品牌。
谁拥有强有力的品牌,谁就拥有了竞争的资本,未来企业间的竞争是品牌的战争。
企业的品牌从默默无闻发展成为一个著名的成功品牌,是一个从小到大的过程,是和企业成长的生命周期密切相关的。
企业处在不同的成长阶段,面临的经营环境、经营管理重点各不相同,相应的品牌管理的重点和特色也有所不同。
1 品牌促进企业成长的机理认识作为企业声誉与信息的组合体,品牌是企业及其产品所包含的技术、质量、功能、文化、市场地位等引发形成的信息系统,是企业及其产品识别的符号系统。
品牌通过其内涵的信息系统及市场对它的反应评价,影响市场的行为,产生有利于该企业的行为偏好,并进一步区别于有形要素的存在,成为企业的无形资产而发挥功能,实现其经济价值,推动企业的成长。
大量中外企业的实践证明,品牌是促进企业成长的主要动力,而且品牌对企业的贡献随着企业的成长日益扩大。
1.1 通过影响顾客的购买心理和购买选择偏好,扩大产品的销售品牌在顾客心目中是企业和产品的标志,代表着产品的品质、特色,代表着企业的经营特色、质量管理要求等。
顾客通过品牌可以非常容易地获取和辨别有关的信息,获取信息成本的下降意味着顾客购买成本的下降。
而顾客熟悉的品牌或者知名度较高的品牌,又使顾客的购买风险感觉系数下降。
这两个方面的综合作用使顾客的购买心理和购买行为形成了对某种品牌产品的选择偏好,从而扩大了产品的销售。
1.2 通过提高品牌的认知,创造产品的附加价值品牌知名度可以通过广告迅际利润,在价格上升时,消费者反应缺乏弹性,价格下降时则富有弹性,这也是名牌产品之所以能够获得比一般产品更高利润空间的原因所在。
1.3 通过强化顾客对品牌的联想和忠诚,提高产品的竞争能力当顾客对品牌有了整体认知了以后,企业可以通过强化顾客对品牌的联想和忠诚进一步推动品牌对企业成长的促进作用。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
基于4G LTE技术的高速铁路移动通信系统KS Solanki教授,Kratika ChouhanUjjain工程学院,印度Madhya Pradesh的Ujjain摘要:随着时间发展,高速铁路(HSR)要求可靠的,安全的列车运行和乘客通信。
为了实现这个目标,HSR的系统需要更高的带宽和更短的响应时间,而且HSR的旧技术需要进行发展,开发新技术,改进现有的架构和控制成本。
为了满足这一要求,HSR采用了GSM的演进GSM-R技术,但它并不能满足客户的需求。
因此采用了新技术LTE-R,它提供了更高的带宽,并且在高速下提供了更高的客户满意度。
本文介绍了LTE-R,给出GSM-R与LTE-R之间的比较结果,并描述了在高速下哪种铁路移动通信系统更好。
关键词:高速铁路,LTE,GSM,通信和信令系统一介绍高速铁路需要提高对移动通信系统的要求。
随着这种改进,其网络架构和硬件设备必须适应高达500公里/小时的列车速度。
HSR还需要快速切换功能。
因此,为了解决这些问题,H SR需要一种名为LTE-R的新技术,基于LTE-R的HSR提供高数据传输速率,更高带宽和低延迟。
LTE-R能够处理日益增长的业务量,确保乘客安全并提供实时多媒体信息。
随着列车速度的不断提高,可靠的宽带通信系统对于高铁移动通信至关重要。
HSR的应用服务质量(QO S)测量,包括如数据速率,误码率(BER)和传输延迟。
为了实现HSR的运营需求,需要一个能够与LTE保持一致的能力的新系统,提供新的业务,但仍能够与GSM-R长时间共存。
HSR系统选择合适的无线通信系统时,需要考虑性能,服务,属性,频段和工业支持等问题。
4G LTE系统与第三代(3G)系统相比,它具有简单的扁平架构, 高数据速率和低延迟。
在LTE的性能和成熟度水平上,LTE- railway(LTE-R)将可能成为下一代H SR通信系统。
二LTE-R系统描述考虑LTE-R的频率和频谱使用,对为高速铁路(HSR)通信提供更高效的数据传输非常重要。
高铁是重要的战略基础设施,因此需要专门为其分配大量的频谱块。
包括欧洲铁路局(ERA),中国铁路在内的一些行业机构正试图对高铁使用的频谱进行划分。
目前,尽管在一些国家也使用700-900MHz频段,但大多数LTE系统工作在1GHz以上的频段,例如 1.8,2.1,2.3和2.6GHz。
较高的频带提供较大的带宽,提供较高的数据速率,而较低的频带对应较长的距离。
由于高频段传播损耗较大,衰落更严重,因此建议LTE-R小区的半径保持小于2km[由于HSR中信噪比(SNR)的严格要求],但是这导致频繁切换和为满足更高的BS密度而投资更多资金。
因此,450-470 MHz,800MHz和1.4GHz等低频带已被广泛考虑。
450-470MHz频段已经被铁路行业所采用。
此外,LTE的载波聚合能力将允许使用不同的频带来克服容量问题。
图1(b)给出了中国450〜470 MHz频率的详细分配[12],在该频带内为LTE-R分配足够的带宽是可行的。
在欧洲,UIC的FRMCS 希望通过在当前的GSM-R投资基础之上,重复使用现有的无线站点。
这可以节省高达80-90%的网络成本。
铁路也考虑继续使用GSM-R无线,因此,1 GHz以下的频谱划分在欧洲更具成本效益。
但是,频段的选择取决于政府政策,因国家而异。
标准LTE包括演进分组核心(EPC)的核心网络和演进通用陆地无线电的无线电接入网络(E-UTRAN)。
基于互联网协议(IP)的EPC支持语音和数据到小区塔的无缝切换,并且每个E-UTRAN小区将通过高速分组接入(HSPA)支持高数据和语音容量。
作为HSR下一代通信系统的候选者。
LTE-R继承了LTE的所有重要特性,并提供额外的无线接入系统与车载设备(OBU)交换无线信号,与HSR特定需求相匹配。
图2给出了根据[4]的LTE-R的未来架构,它表明LTE-R的核心网络以后兼容GSM-R。
与公众LTE网络相比,LTE-R在架构,系统参数,网络布局,业务和QoS 等方面存在很多差异。
列车信息由RB C和机载无线电设备检测。
这提高了列车跟踪的准确性和列车调度的效率。
LTER也可用于为未来的自动驾驶系统提供信息传输。
基于未来HSR通信的QoS要求,LTE-R的优选参数总结在表1中。
请注意,LTE-R的可靠性将超过容量。
该网络必须能够在复杂的铁路环境中以500km/h的速度运行。
因此,正交相移键控(QPSK)调制是优选的,并且必须尽可能减少重传的分组数量。
三LTE-R服务HSR通信打算使用一个完善的/现成的系统,这些系统需要对某些特定需求的服务级别定义进行。
正如E-Train项目[6]所建议的, LTE-R应提供一系列服务来提高安全性,服务质量和效率。
与GSM-R的传统业务相比,描述了LTE-R的一些特性。
1)控制系统的信息传输:为了与ETCS-3或中国列车控制系统第4级(CTCS-4)兼容,LTE-R通过无线通信实现控制信息的实时信息传输,延迟时间<50 毫秒.当ETCS-2 / CTCS-3中的轨道电路检测到列车的位置信息时,在ETCS-3 / CTCS-4和LTE -R中,火车的位置信息被RBC和机载无线电设备检测到。
这提高了列车跟踪的准确性和列车调度的效率。
LTE-R也可用于为未来的自动驾驶系统提供信息传输。
2) 实时监控:LTE-R提供前轨道,驾驶室和汽车连接器状态的视频监控;轨道状况的实时信息监测(如温度和探伤);对铁路基础设施(如桥梁和隧道)视频监控以避免自然灾害;和对交叉轨道的视频监控以检测低温下的冻结。
监控信息将与控制中心和高速列车实时共享,延时小于300毫秒。
尽管上述一些监视可以通过有线通信进行,但基于无线的LTER系统对部署和维护而言更具成本效益。
3) 培训多媒体调度:LTE-R为调度员提供驾驶员和车场的全部调度信息(包括文本,数据,语音,图像,视频等),提高调度效率。
它支持丰富的功能,如语音中继,动态分组,临时组呼,短消息和多媒体消息。
4) 铁路应急通信:当发生自然灾害,事故或其他紧急情况时,需要在事故现场和救援中心之间建立即时通信,以提供语音,视频,数据和图像传输。
铁路应急通信系统使用铁路专用网络,以确保与GSM-R相比快速部署和更快的响应速度(延迟<100ms)。
5) 铁路物联网(IOT):LTE-R提供铁路物联网服务,例如实时查询和跟踪火车和货物。
它有助于提高运输效率并延长服务范围。
此外,铁路物联网还可以提高列车的安全性。
今天的大多数列车依靠位于偏远地区的轨道交换机。
借助IOT和远程监控,可以将交换机的轨道基础设施重新转换为电力线,可以自动执行许多常规安全检查并降低维护成本。
除了之前列出的功能之外,还应包括LTE-R的其他一些服务,例如动态座位预留,移动电子票务和乘客信息的无线互动。
基于UIC,中国铁路和ERA的技术报告,图3总结了LTE-R未来可能提供的服务。
值得注意的是,高速列车内的乘客的宽带无线接入不由LTE-R提供因为其带宽有限。
一些针对火车乘客的宽带无线接入的候选者已经被探讨过,例如Wi-Fi,全球微波接入互操作性(WiMAX), 3G / 4G / 5G,卫星通信以及光纤无线电(RoF) 技术[13]。
四LTE-R挑战与LTE-R相关的挑战有以下几个。
1) 特定于HSR的场景:在的LTE 标准中,提出了HSR的信道模型,其仅包括两种场景,即开放空间和隧道,并且在两种场景中都使用非衰落信道模型。
但是,如[15]所示,HSR的严格要求(高速,铁轨平坦度等)导致许多特定于HSR的环境,例如高架桥,岩屑和隧道。
这些情景中的传播特性是不同于传统的蜂窝通信,可能会明显影响GSM-R 和LTE-R的系统性能。
过去,为了描述GSM-R频带的HSR信道,进行了一些测量,并且已经提出了基于情景的路径损耗和阴影衰落模型,用于930MHz的GSM-R。
然而,这项工作仍在进行中,许多科学问题尚未在LTE-R频段得到解决,例如传播损耗,多径分量(MPC)的几何分布以及这些HSR特有的二维/三维角度估计环境。
有必要LTE-R的链路预算和网络设计开发一系列信道模型,并且需要广泛的信道测量。
2) 高机动性:高速列车通常以350公里/小时的速度运行,而LTE-R 则支持500公里/小时。
高速度导致一系列问题。
首先,高速度导致信道的不平稳,因为在短时间段内,列车通过MPC发生显著变化的大区域。
非平稳性的表征特别重要,因为它影响单载波和多载波系统的BER(误码率)。
其次,高速度导致接收频率的偏移,称为多普勒频移。
例如,如果频率为2.6 GHz,则350km/h的最大多普勒频移为843Hz,而10km/h的行人移动速度仅为24 Hz。
严重的多普勒频移会导致信号的相移,并可能损害角度调制信号的接收。
然而,由于高速列车大多以已知速度沿着预定线路移动,因此可以通过使用实时记录的速度和位置信息来跟踪和补偿多普勒频移。
第三,由于高速,HSR环境中会出现大的多普勒扩展。
对于LTE-R(宽带系统),多普勒扩展通常会导致信号与干扰加噪声比的损失,并会妨碍载波恢复和同步。
多普勒扩展也是正交频分复用(OFDM)系统特别关心的问题,因为它会破坏OFDM子载波的正交性。
应考虑几种方法,如频域均衡和载波间干扰自消除方案[18]。
3) 延迟扩展:延迟扩散会导致OF DM子载波之间的正交性的丢失,并且应该使用称为循环前缀(CP)的特殊类型的保护间隔。
延迟弥散决定了所需的CP长度。
LTE支持短(4.76毫秒)和长(16.67毫秒)CP方案。
对于短CP方案,两个MPC之间相应的最大路径长度差为1.4公里。
由于铁路通信的目标是提供线性覆盖,因此广泛使用沿轨道具有主瓣的定向BS天线,因此发射功率集中在窄条形区域。
直观地说,我们预计短CP方案对于LTE-R是足够的。
这主要是因为高速列车主要在(半)郊区/郊区环境中行驶,而在这些环境中,散射点很少。
但是,在一些具有丰富多次反射的特殊环境中,如岩屑,预计会出现较大的延迟差异(注意这是需要基于测量的验证),并应使用较长的CP方案。
另一个大型延误传播的例子发生在沿着铁轨[19]的山区,特别是在火车进入和离开隧道前后。
需要进行更多测量来解决HSR环境中延迟传播的现象,并且需要像使用通用LTE一样根据环境调整CP。
4) 线性覆盖:在HSR中,使用沿轨道定向天线的线性覆盖,其中定向BS 天线将它们的主瓣沿着轨道定位,以使其功率效率更高。
线性覆盖带来一些好处,例如,与火车的已知位置相比,可以设计具有良好性能的基于距离/时间的波束形成算法。
然而,值得注意的是,线性覆盖的链路预算和性能分析与蜂窝系统的环形小区不同,例如用于确定覆盖区域的百分比。