外文翻译原文一
塑料注塑模具中英文对照外文翻译文献
外文翻译及原文(文档含英文原文和中文翻译)【原文一】CONCURRENT DESIGN OF PLASTICS INJECTION MOULDS AbstractThe plastic product manufacturing industry has been growing rapidly in recent years. One of the most popular processes for making plastic parts is injection moulding. The design of injection mould is critically important to product quality and efficient product processing.Mould-making companies, who wish to maintain the competitive edge, desire to shorten both design and manufacturing leading times of the by applying a systematic mould design process. The mould industry is an important support industry during the product development process, serving as an important link between the product designer and manufacturer. Product development has changed from the traditional serial process of design, followed by manufacture, to a more organized concurrent process where design and manufacture are considered at a very early stage of design. The concept of concurrent engineering (CE) is no longer new and yet it is still applicable and relevant in today’s manuf acturing environment. Team working spirit, management involvement, total design process and integration of IT tools are still the essence of CE. The application of The CE process to the design of an injection process involves the simultaneous consideration of plastic part design, mould design and injection moulding machine selection, production scheduling and cost as early as possible in the design stage.This paper presents the basic structure of an injection mould design. The basis of this system arises from an analysis of the injection mould design process for mould design companies. This injection mould design system covers both the mould design process and mould knowledge management. Finally the principle of concurrent engineering process is outlined and then its principle is applied to the design of a plastic injection mould.Keywords :Plastic injection mould design, Concurrent engineering, Computer aided engineering, Moulding conditions, Plastic injection moulding, Flow simulation1.IntroductionInjection moulds are always expensive to make, unfortunately without a mould it can not be possible ho have a moulded product. Every mould maker has his/her own approach to design a mould and there are many different ways of designing and building a mould. Surely one of the most critical parameters to be considered in the design stage of the mould is the number of cavities, methods of injection, types of runners, methods of gating, methods of ejection, capacity and features of the injection moulding machines. Mould cost, mould quality and cost of mould product are inseparableIn today’s completive environment, computer aided mould filling simulation packages can accurately predict the fill patterns of any part. This allows for quick simulations of gate placements and helps finding the optimal location. Engineers can perform moulding trials on the computer before the part design is completed. Process engineers can systematically predict a design and process window, and can obtain information about the cumulative effect of the process variables that influence part performance, cost, and appearance.2.Injection MouldingInjection moulding is one of the most effective ways to bring out the best in plastics. It is universally used to make complex, finished parts, often in a single step, economically, precisely and with little waste. Mass production of plastic parts mostly utilizes moulds. The manufacturing process and involving moulds must be designed after passing through the appearance evaluation and the structure optimization of the product design. Designers face a hugenumber of options when they create injection-moulded components. Concurrent engineering requires an engineer to consider the manufacturing process of the designed product in the development phase. A good design of the product is unable to go to the market if its manufacturing process is impossible or too expensive. Integration of process simulation, rapid prototyping and manufacturing can reduce the risk associated with moving from CAD to CAM and further enhance the validity of the product development.3. Importance of Computer Aided Injection Mould DesignThe injection moulding design task can be highly complex. Computer Aided Engineering (CAE) analysis tools provide enormous advantages of enabling design engineers to consider virtually and part, mould and injection parameters without the real use of any manufacturing and time. The possibility of trying alternative designs or concepts on the computer screen gives the engineers the opportunity to eliminate potential problems before beginning the real production. Moreover, in virtual environment, designers can quickly and easily asses the sensitivity of specific moulding parameters on the quality and manufacturability of the final product. All theseCAE tools enable all these analysis to be completed in a meter of days or even hours, rather than weeks or months needed for the real experimental trial and error cycles. As CAE is used in the early design of part, mould and moulding parameters, the cost savings are substantial not only because of best functioning part and time savings but also the shortens the time needed to launch the product to the market.The need to meet set tolerances of plastic part ties in to all aspects of the moulding process, including part size and shape, resin chemical structure, the fillers used, mould cavity layout, gating, mould cooling and the release mechanisms used. Given this complexity, designers often use computer design tools, such as finite element analysis (FEA) and mould filling analysis (MFA), to reduce development time and cost. FEA determines strain, stress and deflection in a part by dividing the structure into small elements where these parameters can be well defined. MFA evaluates gate position and size to optimize resin flow. It also defines placement of weld lines, areas of excessive stress, and how wall and rib thickness affect flow. Other finite element design tools include mould cooling analysis for temperature distribution, and cycle time and shrinkage analysis for dimensional control and prediction of frozen stress and warpage.The CAE analysis of compression moulded parts is shown in Figure 1. The analysis cycle starts with the creation of a CAD model and a finite element mesh of the mould cavity. After the injection conditions are specified, mould filling, fiber orientation, curing and thermal history, shrinkage and warpage can be simulated. The material properties calculated by the simulation can be used to model the structural behaviour of the part. If required, part design, gate location and processing conditions can be modified in the computer until an acceptable part is obtained. After the analysis is finished an optimized part can be produced with reduced weldline (known also knitline), optimized strength, controlled temperatures and curing, minimized shrinkage and warpage.Machining of the moulds was formerly done manually, with a toolmaker checking each cut. This process became more automated with the growth and widespread use of computer numerically controlled or CNC machining centres. Setup time has also been significantly reduced through the use of special software capable of generating cutter paths directly from a CAD data file. Spindle speeds as high as 100,000 rpm provide further advances in high speed machining. Cutting materials have demonstrated phenomenal performance without the use of any cutting/coolant fluid whatsoever. As a result, the process of machining complex cores and cavities has been accelerated. It is good news that the time it takes to generate a mould is constantly being reduced. The bad news, on the other hand, is that even with all these advances, designing and manufacturing of the mould can still take a long time and can be extremely expensive.Figure 1 CAE analysis of injection moulded partsMany company executives now realize how vital it is to deploy new products to market rapidly. New products are the key to corporate prosperity. They drive corporate revenues, market shares, bottom lines and share prices. A company able to launch good quality products with reasonable prices ahead of their competition not only realizes 100% of the market before rival products arrive but also tends to maintain a dominant position for a few years even after competitive products have finally been announced (Smith, 1991). For most products, these two advantages are dramatic. Rapid product development is now a key aspect of competitive success. Figure 2 shows that only 3–7% of the product mix from the average industrial or electronics company is less than 5 years old. For companies in the top quartile, the number increases to 15–25%. For world-class firms, it is 60–80% (Thompson, 1996). The best companies continuously develop new products. AtHewlett-Packard, over 80% of the profits result from products less than 2 years old! (Neel, 1997)Figure 2. Importance of new product (Jacobs, 2000)With the advances in computer technology and artificial intelligence, efforts have been directed to reduce the cost and lead time in the design and manufacture of an injection mould. Injection mould design has been the main area of interest since it is a complex process involving several sub-designs related to various components of the mould, each requiring expert knowledge and experience. Lee et. al. (1997) proposed a systematic methodology and knowledge base for injection mould design in a concurrent engineering environment.4.Concurrent Engineering in Mould DesignConcurrent Engineering (CE) is a systematic approach to integrated product development process. It represents team values of co-operation, trust and sharing in such a manner that decision making is by consensus, involving all per spectives in parallel, from the very beginning of the productlife-cycle (Evans, 1998). Essentially, CE provides a collaborative, co-operative, collective and simultaneous engineering working environment. A concurrent engineering approach is based on five key elements:1. process2. multidisciplinary team3. integrated design model4. facility5. software infrastructureFigure 3 Methodologies in plastic injection mould design, a) Serial engineering b) Concurrent engineeringIn the plastics and mould industry, CE is very important due to the high cost tooling and long lead times. Typically, CE is utilized by manufacturing prototype tooling early in the design phase to analyze and adjust the design. Production tooling is manufactured as the final step. The manufacturing process and involving moulds must be designed after passing through the appearance evaluation and the structure optimization of the product design. CE requires an engineer to consider the manufacturing process of the designed product in the development phase.A good design of the product is unable to go to the market if its manufacturing process is impossible. Integration of process simulation and rapid prototyping and manufacturing can reduce the risk associated with moving from CAD to CAM and further enhance the validity of the product development.For years, designers have been restricted in what they can produce as they generally have todesign for manufacture (DFM) – that is, adjust their design intent to enable the component (or assembly) to be manufactured using a particular process or processes. In addition, if a mould is used to produce an item, there are therefore automatically inherent restrictions to the design imposed at the very beginning. Taking injection moulding as an example, in order to process a component successfully, at a minimum, the following design elements need to be taken into account:1. . geometry;. draft angles,. Non re-entrants shapes,. near constant wall thickness,. complexity,. split line location, and. surface finish,2. material choice;3. rationalisation of components (reducing assemblies);4. cost.In injection moulding, the manufacture of the mould to produce the injection-moulded components is usually the longest part of the product development process. When utilising rapid modelling, the CAD takes the longer time and therefore becomes the bottleneck.The process design and injection moulding of plastics involves rather complicated and time consuming activities including part design, mould design, injection moulding machine selection, production scheduling, tooling and cost estimation. Traditionally all these activities are done by part designers and mould making personnel in a sequential manner after completing injection moulded plastic part design. Obviously these sequential stages could lead to long product development time. However with the implementation of concurrent engineering process in the all parameters effecting product design, mould design, machine selection, production scheduling,tooling and processing cost are considered as early as possible in the design of the plastic part. When used effectively, CAE methods provide enormous cost and time savings for the part design and manufacturing. These tools allow engineers to virtually test how the part will be processed and how it performs during its normal operating life. The material supplier, designer, moulder and manufacturer should apply these tools concurrently early in the design stage of the plastic parts in order to exploit the cost benefit of CAE. CAE makes it possible to replace traditional, sequential decision-making procedures with a concurrent design process, in which all parties can interact and share information, Figure 3. For plastic injection moulding, CAE and related design data provide an integrated environment that facilitates concurrent engineering for the design and manufacture of the part and mould, as well as material selection and simulation of optimal process control parameters.Qualitative expense comparison associated with the part design changes is shown in Figure 4 , showing the fact that when design changes are done at an early stages on the computer screen, the cost associated with is an order of 10.000 times lower than that if the part is in production. These modifications in plastic parts could arise fr om mould modifications, such as gate location, thickness changes, production delays, quality costs, machine setup times, or design change in plastic parts.Figure 4 Cost of design changes during part product development cycle (Rios et.al, 2001)At the early design stage, part designers and moulders have to finalise part design based on their experiences with similar parts. However as the parts become more complex, it gets rather difficult to predict processing and part performance without the use of CAE tools. Thus for even relatively complex parts, the use of CAE tools to prevent the late and expensive design changesand problems that can arise during and after injection. For the successful implementation of concurrent engineering, there must be buy-in from everyone involved.5.Case StudyFigure 5 shows the initial CAD design of plastics part used for the sprinkler irrigation hydrant leg. One of the essential features of the part is that the part has to remain flat after injection; any warping during the injection causes operating problems.Another important feature the plastic part has to have is a high bending stiffness. A number of feeders in different orientation were added to the part as shown in Figure 5b. These feeders should be designed in a way that it has to contribute the weight of the part as minimum aspossible.Before the design of the mould, the flow analysis of the plastic part was carried out with Moldflow software to enable the selection of the best gate location Figure 6a. The figure indicates that the best point for the gate location is the middle feeder at the centre of the part. As the distortion and warpage of the part after injection was vital from the functionality point of view and it has to be kept at a minimum level, the same software was also utilised to yiled the warpage analysis. Figure 5 b shows the results implying the fact that the warpage well after injection remains within the predefined dimensional tolerances.6. ConclusionsIn the plastic injection moulding, the CAD model of the plastic part obtained from commercial 3D programs could be used for the part performance and injection process analyses. With the aid ofCEA technology and the use of concurrent engineering methodology, not only the injection mould can be designed and manufactured in a very short of period of time with a minimised cost but also all potential problems which may arise from part design, mould design and processing parameters could be eliminated at the very beginning of the mould design. These two tools help part designers and mould makers to develop a good product with a better delivery and faster tooling with less time and money.References1. Smith P, Reinertsen D, The time-to-market race, In: Developing Products in Half the Time. New York, Van Nostrand Reinhold, pp. 3–13, 19912.Thompson J, The total product development organization. Proceedings of the SecondAsia–Pacific Rapid Product Development Conference, Brisbane, 19963.Neel R, Don’t stop after the prototype, Seventh International Conference on Rapid Prototyping, San Francisco, 19974.Jacobs PF, “Chapter 3: Rapid Product Development” in Rapid Tooling: Technologies and Industrial Applications , Ed. Peter D. Hilton; Paul F. Jacobs, Marcel Decker, 20005.Lee R-S, Chen, Y-M, and Lee, C-Z, “Development of a concurrent mould design system: a knowledge based approach”, Computer Integrated Manufacturing Systems, 10(4), 287-307, 19976.Evans B., “Simultaneous Engineering”, Mechanical Engi neering , V ol.110, No.2, pp.38-39, 19987.Rios A, Gramann, PJ and Davis B, “Computer Aided Engineering in Compression Molding”, Composites Fabricators Association Annual Conference , Tampa Bay, 2001【译文一】塑料注塑模具并行设计塑料制品制造业近年迅速成长。
外文翻译1
译文(一)THE ACCOUNTING REVIEWV ol. 83, No. 3 2008pp. 823–853市场参与者的杜邦分析的使用马克•t•Soliman华盛顿大学文摘:杜邦分析,一种常见的财务报表分析,依靠于净营业资产收益率的两个乘法组件:利润率和资产周转率。
这两个会计比率衡量不同的构造。
因此,有不同的属性。
之前的研究已经发现,资产周转率的变化是未来收益的变化正相关。
本文全面探讨了杜邦组件和沿着三个维度有助于文学。
首先,本文有助于财务报表分析文献,发现在这个会计信息信号实际上是增量学习会计信号在先前的研究在预测未来收益。
其次,它有助于文学在股票市场上使用的会计信息通过检查眼前和未来的股本回报投资者应对这些组件。
最后,它增加了分析师的文献处理会计信息的再次测试直接和延迟反应的分析师通过同期预测修正以及未来预测错误。
一致的跨市场加入者的两组,结果表明是有用的信息就是明证杜邦组件和股票收益之间的联系以及维度分析师预测。
然而,我发现预测未来预测错误和异常返回信息处理表明似乎没有完成。
平均水平,分析表明杜邦组件代表增量和可行的操作特征信息的公司。
关键词:财务报表分析、杜邦分析、市场回报、分析师预估。
数据可用性:在这项研究中使用的数据是公开的来源显示的文本。
在本文中,我分析杜邦分析中包含的信息是否与股市回报相关和分析师预测。
之前的研究文档组件从杜邦分析,分解的净营业资产收益率为利润率和资产周转率,有解释力对未来盈利能力的变化。
本文增加了文献综合研究投资者和分析师反应杜邦组件三个维度。
首先,它复制先前记录的预测能力和检查是否健壮和增量其他预测已经考虑在文学的存在。
其次,它探讨了使用这些组件的股市投资者通过观察同生和未来收益。
在同时代的长窗协会和短时期限信息测试,结果显示积极联系杜邦组件和股本回报率。
但小未来异常返回交易策略显示的信息可能不完整的处理。
最后,检查当前预测修正由卖方分析师和未来的预测错误。
外文文献翻译(图片版)
本科毕业论文外文参考文献译文及原文学院经济与贸易学院专业经济学(贸易方向)年级班别2007级 1 班学号3207004154学生姓名欧阳倩指导教师童雪晖2010 年 6 月 3 日目录1 外文文献译文(一)中国银行业的改革和盈利能力(第1、2、4部分) (1)2 外文文献原文(一)CHINA’S BANKING REFORM AND PROFITABILITY(Part 1、2、4) (9)1概述世界银行(1997年)曾声称,中国的金融业是其经济的软肋。
当一国的经济增长的可持续性岌岌可危的时候,金融业的改革一直被认为是提高资金使用效率和消费型经济增长重新走向平衡的必要(Lardy,1998年,Prasad,2007年)。
事实上,不久前,中国的国有银行被视为“技术上破产”,它们的生存需要依靠充裕的国家流动资金。
但是,在银行改革开展以来,最近,强劲的盈利能力已恢复到国有商业银行的水平。
但自从中国的国有银行在不久之前已经走上了改革的道路,它可能过早宣布银行业的改革尚未取得完全的胜利。
此外,其坚实的财务表现虽然强劲,但不可持续增长。
随着经济增长在2008年全球经济衰退得带动下已经开始软化,银行预计将在一个比以前更加困难的经济形势下探索。
本文的目的不是要评价银行业改革对银行业绩的影响,这在一个完整的信贷周期后更好解决。
相反,我们的目标是通过审查改革的进展和银行改革战略,并分析其近期改革后的强劲的财务表现,但是这不能完全从迄今所进行的改革努力分离。
本文有三个部分。
在第二节中,我们回顾了中国的大型国有银行改革的战略,以及其执行情况,这是中国银行业改革的主要目标。
第三节中分析了2007年的财务表现集中在那些在市场上拥有浮动股份的四大国有商业银行:中国工商银行(工商银行),中国建设银行(建行),对中国银行(中银)和交通银行(交通银行)。
引人注目的是中国农业银行,它仍然处于重组上市过程中得适当时候的后期。
第四节总结一个对银行绩效评估。
注塑模具设计外文翻译
毕业设计(论文)外文资料翻译及原文(2012届)题目电话机三维造型与注塑模具设计指导教师院系工学院班级学号姓名二〇一一年十二月六日【译文一】塑料注塑模具并行设计Assist.Prof.Dr. A. Y AYLA /Prof.Dr. Paş a YAYLA摘要塑料制品制造业近年迅速成长。
其中最受欢迎的制作过程是注塑塑料零件。
注塑模具的设计对产品质量和效率的产品加工非常重要。
模具公司想保持竞争优势,就必须缩短模具设计和制造的周期。
模具是工业的一个重要支持行业,在产品开发过程中作为一个重要产品设计师和制造商之间的联系。
产品开发经历了从传统的串行开发设计制造到有组织的并行设计和制造过程中,被认为是在非常早期的阶段的设计。
并行工程的概念(CE)不再是新的,但它仍然是适用于当今的相关环境。
团队合作精神、管理参与、总体设计过程和整合IT工具仍然是并行工程的本质。
CE过程的应用设计的注射过程包括同时考虑塑件设计、模具设计和注塑成型机的选择、生产调度和成本中尽快设计阶段。
介绍了注射模具的基本结构设计。
在该系统的基础上,模具设计公司分析注塑模具设计过程。
该注射模设计系统包括模具设计过程及模具知识管理。
最后的原则概述了塑料注射模并行工程过程并对其原理应用到设计。
关键词:塑料注射模设计、并行工程、计算机辅助工程、成型条件、塑料注塑、流动模拟1、简介注塑模具总是昂贵的,不幸的是没有模具就不可能生产模具制品。
每一个模具制造商都有他/她自己的方法来设计模具,有许多不同的设计与建造模具。
当然最关键的参数之一,要考虑到模具设计阶段是大量的计算、注射的方法,浇注的的方法、研究注射成型机容量和特点。
模具的成本、模具的质量和制件质量是分不开的在针对今天的计算机辅助充型模拟软件包能准确地预测任何部分充填模式环境中。
这允许快速模拟实习,帮助找到模具的最佳位置。
工程师可以在电脑上执行成型试验前完成零件设计。
工程师可以预测过程系统设计和加工窗口,并能获得信息累积所带来的影响,如部分过程变量影响性能、成本、外观等。
外文文献翻译译稿
外文文献翻译译稿1可用性和期望值来自Willliam S.Green, Patrick W.Jordan.产品的愉悦:超越可用性根据人机工程学会(HFES)的观点,人机工程学着眼于“发现和共享可用于各种系统和设备设计的、关于人的特点的知识”。
人们通常只是把它作为生物力学和人体测量所关注的内容,实际上它是从更广泛的意义上的一种对人(产品用户)的全面和综合的理解。
HFES从二战中有军方从事的系统分析中发展而来。
其中的三种主要研究的是人体测量、复杂信息的解释和管理,以及在部队和装备调配中应用的系统分析。
系统分析在尺度和复杂性方面跨度很大,大的系统分析有类似于诺曼底登陆准备的大型系统规划,小到去理解如何从合理性和规模的角度才最佳的布置和装备人员。
诺曼底登陆是20世纪最复杂的事件之一。
他要求建立一个在战斗开始之前还不确定的庞大的人员和物资的合理分配系统。
在更小的规模上,装备和军事人物的布置意味着如何去组织、训练和安排战士,最大限度的发挥他们的长处。
士兵必须迅速地接受训练,并且能够有效地使用和维护在二战中发展起来的一系列技术装备。
其中,对于飞行员、潜艇人员和坦克驾驶员有神采的限制。
复杂的新装备的开发要求找到最好的税收、密码便医院、破译人员、雷达和声纳操作员、轰炸机驾驶员和机组人员。
在战后,随着公司及其产品在尺度、领域和复杂性方面的增长,很多系统分析人员在商用领域找到了发展机会。
尽管是战后的发展才导致了1957年人机工程协会(HFES)的建立,但人机研究的起源可以追溯到大批量生产方式的成型阶段,是当时提高生产效率的要求。
随着工作方式从手工生产和农业生产中的转移,新的工厂工作的概念逐步发展起来。
福特的流水生产线和泰勒的效率理论开始对生产的规划和教育产生影响。
即使在家庭生活中,妇女们也开始接受了现代家庭管理理论,并运用这些理论来组织和规划家庭。
在20世纪末,一种涵盖面更广的人机工程正在发展之中。
新的人机工程学是为了适应已经被广泛意识到的对用户行为模式更深入的需求而诞生的,它开始应用定型研究方法,并探索人的情感和认知因素。
外文文献翻译译稿和原文【范本模板】
外文文献翻译译稿1卡尔曼滤波的一个典型实例是从一组有限的,包含噪声的,通过对物体位置的观察序列(可能有偏差)预测出物体的位置的坐标及速度。
在很多工程应用(如雷达、计算机视觉)中都可以找到它的身影。
同时,卡尔曼滤波也是控制理论以及控制系统工程中的一个重要课题。
例如,对于雷达来说,人们感兴趣的是其能够跟踪目标.但目标的位置、速度、加速度的测量值往往在任何时候都有噪声。
卡尔曼滤波利用目标的动态信息,设法去掉噪声的影响,得到一个关于目标位置的好的估计.这个估计可以是对当前目标位置的估计(滤波),也可以是对于将来位置的估计(预测),也可以是对过去位置的估计(插值或平滑).命名[编辑]这种滤波方法以它的发明者鲁道夫。
E。
卡尔曼(Rudolph E. Kalman)命名,但是根据文献可知实际上Peter Swerling在更早之前就提出了一种类似的算法。
斯坦利。
施密特(Stanley Schmidt)首次实现了卡尔曼滤波器。
卡尔曼在NASA埃姆斯研究中心访问时,发现他的方法对于解决阿波罗计划的轨道预测很有用,后来阿波罗飞船的导航电脑便使用了这种滤波器。
关于这种滤波器的论文由Swerling(1958)、Kalman (1960)与Kalman and Bucy(1961)发表。
目前,卡尔曼滤波已经有很多不同的实现.卡尔曼最初提出的形式现在一般称为简单卡尔曼滤波器。
除此以外,还有施密特扩展滤波器、信息滤波器以及很多Bierman, Thornton开发的平方根滤波器的变种.也许最常见的卡尔曼滤波器是锁相环,它在收音机、计算机和几乎任何视频或通讯设备中广泛存在。
以下的讨论需要线性代数以及概率论的一般知识。
卡尔曼滤波建立在线性代数和隐马尔可夫模型(hidden Markov model)上.其基本动态系统可以用一个马尔可夫链表示,该马尔可夫链建立在一个被高斯噪声(即正态分布的噪声)干扰的线性算子上的。
系统的状态可以用一个元素为实数的向量表示.随着离散时间的每一个增加,这个线性算子就会作用在当前状态上,产生一个新的状态,并也会带入一些噪声,同时系统的一些已知的控制器的控制信息也会被加入。
外文翻译--创业板市场
外文文献翻译译文一、外文原文原文:China's Second BoardI. Significance of and events leading to the establishment of a Second BoardOn 31 March 2009 the China Securities Regulatory Commission (CSRC issued Interim Measures on the Administration of Initial Public Offerings and Listings of Shares on the ChiNext [i.e., the Second Board, also called the Growth Enterprise Market] ("Interim Measures"), which came into force on 1 May 2009. This marked the creation by the Shenzhen Stock Exchange of the long-awaited market for venture businesses. As the original plan to establish such a market in 2001 had come to nothing when the dotcom bubble burst, the market's final opening came after a delay of nearly 10 years.Ever since the 1980s, when the Chinese government began to foster the development of science and technology, venture capital has been seen in China as a means of supporting the development of high-tech companies financially. The aim, as can be seen from the name of the 1996 Law of the People's Republic of China on Promoting the Conversion of Scientific and Technological Findings into Productivity ,was to support the commercialization of scientific and technological developments. Venture capital funds developed gradually in the late 1990s, and between then and 2000 it looked increasingly likely that a Second Board would be established. When the CSRC published a draft plan for this in September 2000, the stage was set. However, when the dotcom bubble (and especially the NASDAQ bubble) burst, this plan was shelved. Also, Chinese investors and venture capitalists were probably not quite ready for such a move.As a result, Chinese venture businesses sought to list on overseas markets (a so-called "red chip listing") from the late 1990s. However, as these listings increased, so did the criticism that valuable Chinese assets were being siphoned overseas.On thepolicy front, in 2004 the State Council published Some Opinions on Reform, Opening and Steady Growth of Capital Markets ("the Nine Opinions"), in which the concept of a "multi-tier capital market" was presented for the first time. A first step in this direction was made in the same year, when an SME Board was established as part of the Main Board. Although there appear to have been plans to eventually relax the SME Board's listing requirements, which were the same as those for companies listed on the Main Board, and to make it a market especially for venture businesses, it was decided to establish a separate market (the Second Board) for this purpose and to learn from the experience of the SME Board.As well as being part of the process of creating a multi-tier capital market, the establishment of the Second Board was one of the measures included in the policy document Several Opinions of the General Office of the State Council on Providing Financing Support for Economic Development ("the 30 Financial Measures"), published in December 2008 in response to the global financial crisis and intended as a way of making it easier for SMEs to raise capital.It goes without saying that the creation of the Second Board was also an important development in that it gives private equity funds the opportunity to exit their investments. The absence of such an exit had been a disincentive to such investment, with most funds looking for a red chip listing as a way of exiting their investments. However, with surplus savings at home, the Chinese authorities began to encourage companies to raise capital on the domestic market rather than overseas. This led, in September 2006, to a rule making it more difficult for Chinese venture businesses to list their shares on overseas markets. The corollary of this was that it increased the need for a means whereby Chinese private equity funds could exit their investments at an early opportunity and on their own market. The creation of the Second Board was therefore a belated response to this need.II. Rules and regulations governing the establishment of the Second BoardWe now take a closer look at some of the rules and regulations governing the establishment of the Second Board.First , the Interim Measures on the Administration of Initial Public Offerings andListings of Shares on the ChiNext, issued by the CSRC on 31 March 2009 with effect from 1 May 2009. The Interim Measures consist of six chapters and 58 articles, stipulating issue terms and procedures, disclosure requirements, regulatory procedures, and legal responsibilities.First, the General Provisions chapter. The first thing this says (Article 1) is: "These Measures are formulated for the purposes of promoting the development of innovative enterprises and other growing start-ups" This shows that one of the main listing criteria is a company's technological innovativeness and growth potential. The Chinese authorities have actually made it clear that, although the Second Board and the SME Board are both intended for SMEs of similar sizes, the Second Board is specifically intended for SMEs at the initial (rather than the growth or mature) stage of their development with a high degree of technological innovativeness and an innovative business model while the SME Board is specifically intended for companies with relatively stable earnings at the mature stage of their development. They have also made it clear that the Second Board is not simply a "small SME Board." This suggests to us that the authorities want to see technologically innovative companies listing on the Second Board and SMEs in traditional sectors listing on the SME Board.Next, Article 7 says: "A market access system that is commensurate with the risk tolerance of investors shall be established for investors on the ChiNext and investment risk shall be fully disclosed to investors." One noteworthy feature is the adoption of the concept of the "qualified investor" in an attempt to improve risk control.Furthermore, Article 8 says: "China Securities Regulatory Commission (hereinafter, CSRC) shall, in accordance with law, examine and approve the issuer’s IPO application and supervise the issuer’s IPO activities. The stock exchange shall formulate rules in accordance with law, provide an open, fair and equitable market environment and ensure the normal operation of the ChiNext." Until the Second Board was established, it was thought by some that the stock exchange had the right to approve new issues. Under the Interim Measures, however, it is the CSRC that examines and approves applications.First, offering conditions. Article 10 stipulates four numerical conditions for companies applying for IPOs.Second, offering procedures. The Interim Measures seek to make sponsoring securities companies more responsible by requiring them to conduct due diligence investigations and make prudential judgment on the issuer’s growth and render special opinions thereon.Third, information disclosure. Article 39 of the Interim Measures stipulates that the issuer shall make a statement in its prospectus pointing out the risks of investing in Second Board companies: namely, inconsistent performance, high operational risk, and the risk of delisting. Similarly,Fourth, supervision. Articles 51 and 52 stipulate that the stock exchange (namely, the Shenzhen Stock Exchange) shall establish systems for listing, trading and delisting Second Board stocks, urge sponsors to fulfill their ongoing supervisory obligations, and establish a market risk warning system and an investor education system.1. Amendments to the Interim Measures on Securities Issuance and Listing Sponsor System and the Provisional Measures of the Public Offering Review Committee of the China Securities Regulatory Commission2. Rules Governing the Listing of Shares on the ChiNext of Shenzhen Stock Exchange Next, the Shenzhen Stock Exchange published Rules Governing the Listing of Shares on the ChiNext of Shenzhen Stock Exchange on 6 June (with effect from 1 July).3. Checking investor eligibility As the companies listed on the Second Board are more risky than those listed on the Main Board and are subject to more rigorous delisting rules (see above), investor protection requires that checks be made on whether Second Board shares are suitable for all those wishing to invest in them.4. Rules governing (1) application documents for listings on the ChiNext and (2) prospectuses of ChiNext companies On 20 July the CSRC published rules governing Application Documents for Initial Public Offerings and Listings of Shares on the ChiNext and Prospectuses of ChiNext Companies, and announced that it would begin processing listing applications on 26 July.III. Future developmentsAs Its purpose is to "promote the development of innovative enterprises and other growing start-ups",the Second Board enables such companies to raise capital by issuing shares. That is why its listing requirements are less demanding than those of the Main Board but also why it has various provisions to mitigate risk. For one thing, the Second Board has its own public offering review committee to check how technologically specialized applicant companies are, reflecting the importance attached to this. For another, issuers and their controlling shareholders, de facto controllers, and sponsoring securities companies are subject to more demanding accountability requirements. The key factor here is, not surprisingly, disclosure. Also, the qualified investor system is designed to mitigate the risks to retail investors.Once the rules and regulations governing the Second Board were published, the CSRC began to process listing applications from 26 July 2009. It has been reported that 108 companies initially applied. As of mid-October, 28 of these had been approved and on 30 October they were listed on the Second Board.As of 15 December, there are 46 companies whose listing application has been approved by CSRC (including the above-mentioned 28 companies). They come from a wide range of sectors, especially information technology, services, and biopharmacy. Thus far, few companies in which foreign private equity funds have a stake have applied. This is because these funds have tended to go for red-chip listings.Another point is movement between the various tiers of China's multi-tier capital market. As of early September, four companies that are traded on the new Third Board had successfully applied to list on the Second Board. As 22 new Third Board companies meet the listing requirements of the Second Board on the basis of their interim reports for the first half of fiscal 2009, a growing number of companies may transfer their listing from the new Third Board to the Second Board. We think this is likely to make the new Third Board a more attractive market for private equity investors.The applicants include companies that were in the process of applying for a listing on the SME Board. The CSRC has also made it clear that it does not see theSecond Board simply as a "small SME Board" and attaches great importance to the companies' innovativeness and growth potential. Ultimately, whether or not such risks can be mitigated will depend on whether the quality of the companies that list on the Second Board improves and disclosure requirements are strictly complied with. For example, according to the rules governing Prospectuses of ChiNext Companies, companies are required to disclose the above-mentioned supplementary agreements as a control right risk. The point is whether such requirements will be complied with.Since there is a potentially large number of high-tech companies in China in the long term, whether or not the Second Board becomes one of the world's few successful venture capital markets will depend on whether all these rules and regulations succeed in shaping its development and the way in which it is run.The authorities clearly want to avoid a situation where the Second Board attracts a large number of second-rate companies and becomes a vehicle for market abuse as it would then run the risk of becoming an illiquid market shunned by investors who have lost trust in it. Indeed, such has been the number of companies applying to list on the Second Board that some observers have expressed concern about their quality.There has also been some concern about investor protection. For example, supplementary agreements between private equity funds and issuers pose a risk to retail investors in that they may suddenly be faced with a change in the controlling shareholder. This is because such agreements can result in a transfer of shares from the founder or controlling shareholder to a private equity fund if the company fails to meet certain agreed targets or in a shareholding structure that is different from the apparent one, for example. The problem of low liquidity, which has long faced the new Third Board market, where small-cap high-tech stocks are also traded, also needs to be addressed.Meanwhile, the Second Board's Public Offering Review Committee was officially established on 14 August. It has 35 members. A breakdown reveals that the number of representatives of the CSRC and the Shenzhen Stock Exchange has been limited to three and two, respectively, to ensure that the committee has the necessary number of technology specialists. Of the remainder, 14 are accountants, six lawyers,three from the Ministry of Science and Technology, three from the China Academy of Sciences, two from investment trust companies, one from an asset evaluation agency, and one from the National Development and Reform Commission (NDRC). It has been reported that the members include specialists in the six industry fields the CSRC considers particularly important for Second Board companies (namely, new energy, new materials, biotechnology and pharmaceuticals, energy conservation and environmental protection, services and IT).Source: Takeshi Jingu.2009.“China's Second Board”. Nomura Journal of Capital Markets Winter 2009 V ol.1 No.4.pp.1-15.二、翻译文章译文:中国创业板市场一、建立创业板市场及其意义2009年3月31日中国证券监督管理委员会(以下简称“中国证监会”)发行《中国证监会管理暂行办法》,首次在创业板市场上[即,第二个板,也叫创业板市场](“暂行办法”) 公开募股,从 2009年的5月1日开始生效,这标志着深圳证券交易所市场这个人们期待已久的合资企业即将诞生。
外文翻译原文
DOI10.1007/s10711-012-9699-zORIGINAL PAPERParking garages with optimal dynamicsMeital Cohen·Barak WeissReceived:19January2011/Accepted:22January2012©Springer Science+Business Media B.V.2012Abstract We construct generalized polygons(‘parking garages’)in which the billiard flow satisfies the Veech dichotomy,although the associated translation surface obtained from the Zemlyakov–Katok unfolding is not a lattice surface.We also explain the difficulties in constructing a genuine polygon with these properties.Keywords Active vitamin D·Parathyroid hormone-related peptide·Translation surfaces·Parking garages·Veech dichotomy·BilliardsMathematics Subject Classification(2000)37E351Introduction and statement of resultsA parking garage is an immersion h:N→R2,where N is a two dimensional compact connected manifold with boundary,and h(∂N)is afinite union of linear segments.A parking garage is called rational if the group generated by the linear parts of the reflections in the boundary segments isfinite.If h is actually an embedding,the parking garage is a polygon; thus polygons form a subset of parking garages,and rationals polygons(i.e.polygons all of whose angles are rational multiples ofπ)form a subset of rational parking garages.The dynamics of the billiardflow in a rational polygon has been intensively studied for over a century;see[7]for an early example,and[5,10,13,16]for recent surveys.The defi-nition of the billiardflow on a polygon readily extends to a parking garage:on the interior of N the billiardflow is the geodesicflow on the unit tangent bundle of N(with respect to the pullback of the Euclidean metric)and at the boundary,theflow is defined by elastic reflection (angle of incidence equals the angle of return).Theflow is undefined at thefinitely many M.Cohen·B.Weiss(B)Ben Gurion University,84105Be’er Sheva,Israele-mail:barakw@math.bgu.ac.ilM.Cohene-mail:comei@bgu.ac.ilpoints of N which map to‘corners’,i.e.endpoints of boundary segments,and hence at thecountable union of codimension1submanifolds corresponding to points in the unit tangentbundle for which the corresponding geodesics eventually arrive at corners in positive or neg-ative time.Since the direction of motion of a trajectory changes at a boundary segment viaa reflection in its side,for rational parking garages,onlyfinitely many directions of motionare assumed.In other words,the phase space of the billiardflow decomposes into invarianttwo-dimensional subsets corresponding tofixing the directions of motion.Veech[12]discovered that the billiardflow in some special polygons exhibits a strikingly he found polygons for which,in any initial direction,theflow is eithercompletely periodic(all orbits are periodic),or uniquely ergodic(all orbits are equidistrib-uted).Following McMullen we will say that a polygon with these properties has optimaldynamics.We briefly summarize Veech’s strategy of proof.A standard unfolding construc-tion usually attributed to Zemlyakov and Katok[15]1,associates to any rational polygon Pa translation surface M P,such that the billiardflow on P is essentially equivalent to thestraightlineflow on M P.Associated with any translation surface M is a Fuchsian group M,now known as the Veech group of M,which is typically trivial.Veech found M and P forwhich this group is a non-arithmetic lattice in SL2(R).We will call these lattice surfaces and lattice polygons respectively.Veech investigated the SL2(R)-action on the moduli space of translation surfaces,and building on earlier work of Masur,showed that lattice surfaces haveoptimal dynamics.From this it follows that lattice polygons have optimal dynamics.This chain of reasoning remains valid if one starts with a parking garage instead of apolygon;namely,the unfolding construction associates a translation surface to a parkinggarage,and one may define a lattice parking garage in an analogous way.The arguments ofVeech then show that the billiardflow in a lattice parking garage has optimal dynamics.Thisgeneralization is not vacuous:lattice parking garages,which are not polygons,were recentlydiscovered by Bouw and Möller[2].The term‘parking garage’was coined by Möller.A natural question is whether Veech’s result admits a converse,i.e.whether non-latticepolygons or parking garages may also have optimal dynamics.In[11],Smillie and the sec-ond-named author showed that there are non-lattice translation surfaces which have optimaldynamics.However translation surfaces arising from billiards form a set of measure zero inthe moduli space of translation surfaces,and it was not clear whether the examples of[11]arise from polygons or parking garages.In this paper we show:Theorem1.1There are non-lattice parking garages with optimal dynamics.An example of such a parking garage is shown in Fig.1.Veech’s work shows that for lattice polygons,the directions in which all orbits are periodicare precisely those containing a saddle connection,i.e.a billiard path connecting corners ofthe polygon which unfold to singularities of the corresponding surface.Following Cheunget al.[3],if a polygon P has optimal dynamics,and the periodic directions coincide with thedirections of saddle connections,we will say that P satisfies strict ergodicity and topologicaldichotomy.It is not clear to us whether our example satisfies this stronger property.As weexplain in Remark3.2below,this would follow if it were known that the center of the regularn-gon is a‘connection point’in the sense of Gutkin,Hubert and Schmidt[8]for some nwhich is an odd multiple of3.Veech also showed that for a lattice polygon P,the number N P(T)of periodic strips on P of length at most T satisfies a quadratic growth estimate of the form N P(T)∼cT2for a positive constant c.As we explain in Remark3.3,our examples also satisfy such a quadratic growth estimate.1But dating back at least to Fox and Kershner[7].Fig.1A non-lattice parkinggarage with optimal dynamics.(Here 2/n represents angle 2π/n )It remains an open question whether there is a genuine polygon which has optimal dynam-ics and is not a lattice polygon.Although our results make it seem likely that such a polygon exists,in her M.Sc.thesis [4],the first-named author obtained severe restrictions on such a polygon.In particular she showed that there are no such polygons which may be constructed from any of the currently known lattice examples via the covering construction as in [11,13].We explain these results and prove a representative special case in §4.2PreliminariesIn this section we cite some results which we will need,and deduce simple consequences.For the sake of brevity we will refer the reader to [10,11,16]for definitions of translation surfaces.Suppose S 1,S 2are compact orientable surfaces and π:S 2→S 1is a branched cover.That is,πis continuous and surjective,and there is a finite 1⊂S 1,called the set of branch points ,such that for 2=π−1( 1),the restriction of πto S 2 2is a covering map of finite degree d ,and for any p ∈ 1,#π−1(p )<d .A ramification point is a point q ∈ 2for which there is a neighborhood U such that {q }=U ∩π−1(π(q ))and for all u ∈U {q },# U ∩π−1(π(u )) ≥2.If M 1,M 2are translation surfaces,a translation map is a surjective map M 2→M 1which is a translation in charts.It is a branched cover.In contrast to other authors (cf.[8,13]),we do not require that the set of branch points be distinct from the singularities of M 1,or that they be marked.It is clear that the ramification points of the cover are singularities on M 2.If M is a lattice surface,a point p ∈M is called periodic if its orbit under the group of affine automorphisms of M is finite.A point p ∈M is called a connection point if any seg-ment joining a singularity with p is contained in a saddle connection (i.e.a segment joining singularities)on M .The following proposition summarizes results discussed in [7,9–11]:Proposition 2.1(a)A non-minimal direction on a translation surface contains a saddle connection.(b)If M 1is a lattice surface,M 2→M 1is translation map with a unique branch point,then any minimal direction on M 2is uniquely ergodic.(c)If M2→M1is a translation map such that M1is a lattice surface,then all branchpoints are periodic if and only if M2is a lattice surface.(d)If M2→M1is a translation map with a unique branch point,such that M1is a latticesurface and the branch point is a connection point,then any saddle connection direction on M2is periodic.Corollary2.2Let M2→M1be a translation map such that M1is a lattice surface with a unique branch point p.Then:(1)M2has optimal dynamics.(2)If p is a connection point then M2satisfies topological dichotomy and strict ergodicity.(3)If p is not a periodic point then M2is not a lattice surface.Proof To prove(1),by(b),the minimal directions are uniquely ergodic,and we need to prove that the remaining directions are either completely periodic or uniquely ergodic. By(a),in any non-minimal direction on M2there is a saddle connectionδ,and there are three possibilities:(i)δprojects to a saddle connection on M1.(ii)δprojects to a geodesic segment connecting the branch point p to itself.(iii)δprojects to a geodesic segment connecting p to a singularity.In case(i)and(ii)since M1is a lattice surface,the direction is periodic on M1,hence on M2as well.In case(iii),there are two subcases:ifδprojects to a part of a saddle connec-tion on M1,then it is also a periodic direction.Otherwise,in light of Proposition2.1(a),the direction must be minimal in M1,and hence,by Proposition2.1(b),uniquely ergodic in M2. This proves(1).Note also that if p is a connection point then the last subcase does not arise, so all directions which are non-minimal on M2are periodic.This proves(2).Statement(3) follows from(c).We now describe the unfolding construction[7,15],extended to parking garages.Let P=(h:N→R2).An edge of P is a connected subset L of∂N such that h(L)is a straight segment and L is maximal with these properties(with respect to inclusion).A vertex of P is any point which is an endpoint of an edge.The angle at a vertex is the total interior angle, measured via the pullback of the Euclidean metric,at the vertex.By convention we always choose the positive angles.Note that for polygons,angles are less than2π,but for parking garages there is no apriori upper bound on the angle at a vertex.Since our parking garages are rational,all angles are rational multiples ofπ,and we always write them as p/q,omitting πfrom the notation.Let G P be the dihedral group generated by the linear parts of reflections in h(L),for all edges L.For the sake of brevity,if there is a reflection with linear part gfixing a line parallel to L,we will say that gfixes L.Let S be the topological space obtained from N×G P by identifying(x,g1)with(x,g2)whenever g−11g2fixes an edge containing h(x).Topologically S is a compact orientable surface,and the immersions g◦h on each N×{g}induce an atlas of charts to R2which endows S with a translation surface structure.We denote this translation surface by M P,and writeπP for the map N×G P→M P.We will be interested in a‘partial unfolding’which is a variant of this construction,in which we reflect a parking garage repeatedly around several of its edges to form a larger parking garage.Formally,suppose P=(h:N→R2)and Q=(h :N →R2)are parking garages.For ≥1,we say that P tiles Q by reflections,and that is the number of tiles,if the following holds.There are maps h 1,...h :N→N and g1,...,g ∈G P(not necessarily distinct)satisfying:(A)The h i are homeomorphisms onto their images,and N = h i (N ).(B)For each i ,the linear part of h ◦h i ◦h −1is everywhere equal to g i .(C)For each 1≤i <j ≤ ,let L i j =h i (N )∩h j (N )and L =(h i )−1(L i j ).Then (h j )−1◦h i is the identity on L ,and L is either empty,or a vertex,or an edge of P .If L is an edge then h i (N )∪h j (N )is a neighborhood of L i j.If L i j is a vertex then there is a finite set of i =i 1,i 2,...,i k =j such that h i s (N )contains a neighborhood of L i j ,and each consecutive pair h i t (N ),h i t +1(N )intersect along an edge containing L i j .V orobets [13]realized that a tiling of parking garages gives rise to a branched cover.More precisely:Proposition 2.3Suppose P tiles Q by reflections with tiles,M P ,M Q are the correspond-ing translation surfaces obtained via the unfolding construction,and G P ,G Q are the cor-responding reflection groups.Then there is a translation map M Q →M P ,such that the following hold:(1)G Q ⊂G P .(2)The branch points are contained in the G P -orbit of the vertices of P .(3)The degree of the cover is [G P :G Q ].(4)Let z ∈M P be a point which is represented (as an element of N ×{1,...,r })by(x ,k )with x a vertex in P with angle m n (where gcd (m ,n )=1).Let (y i )⊂M Q be the pre-images of z,with angles k i m n in Q .Then z is a branch point of the cover if and only if k i n for some i.Proof Assertion (1)follows from the fact that Q is tiled by P .Since this will be impor-tant in the sequel,we will describe the covering map M Q →M P in detail.We will map (x ,g )∈N ×G Q to πP (x ,gg i )∈M P ,where x =h i (x ).We now check that this map is independent of the choice of x ,i ,and descends to a well-defined map M Q →M P ,which is a translation in charts.If x =h i (x 1)=h j (x 2)then x 1=x 2since (h i )−1◦h j is the identity.If x is in the relative interior of an edge L i j thenπP (x ,gg i )=πP (x ,gg j )(1)since (gg i )−1gg j =g −1i g j fixes an edge containing h (x 1).If x 1is a vertex of P then one proves (1)by an induction on k ,where k is as in (C).This shows that the map is well-defined.We now show that it descends to a map M Q →M P .Suppose (x ,g ),(x ,g )are two points in N ×G Q which are identified in M Q ,i.e.x ∈∂N is in the relative interior of an edge fixed by g −1g .By (C)there is a unique i such that x is in the image of h i .Thus (x ,g )maps to (x ,gg i )and (x ,g )maps to (x ,g g i ),and g −1i g −1g g i fixes the edge through x =g −1i (x ).It remains to show that the map we have defined is a translation in charts.This follows immediately from the chain rule and (B).Assertion (2)is simple and left to the reader.For assertion (3)we note that M P (resp.M Q )is made of |G P |(resp. |G Q |)copies of P .The point z will be a branch point if and only if the total angle around z ∈M P differs from the total angle around one of the pre-images y i ∈M Q .The total angle at a singularity corresponding to a vertex with angle r /s (where gcd (r ,s )=1)is 2r π,thus the total angle at z is 2m πand the total angle at y i is 2k i m πgcd (k i ,n ).Assertion (4)follows.3Non-lattice dynamically optimal parking garagesIn this section we prove the following result,which immediately implies Theorem1.1: Theorem3.1Let n≥9be an odd number divisible by3,and let P be an isosceles triangle with equal angles1/n.Let Q be the parking garage made of four copies of P glued as in Fig.1, so that Q has vertices(in cyclic order)with angles1/n,2/n,3/n,(n−2)/n,2/n,3(n−2)/n. Then M P is a lattice surface and M Q→M P is a translation map with one aperiodic branchpoint.In particular Q is a non-lattice parking garage with optimal dynamics.Proof The translation surface M P is the double n-gon,one of Veech’s original examples of lattice surfaces[12].The groups G P and G Q are both equal to the dihedral group D n.Thus by Proposition2.3,the degree of the cover M Q→M P is four.Again by Proposition2.3, since n is odd and divisible by3,the only vertices which correspond to branch points are the two vertices z1,z2with angle2/n(they correspond to the case k i=2while the other vertices correspond to1or3).In the surface M P there are two points which correspond to vertices of equal angle in P(the centers of the two n-gons),and these points are known to be aperiodic [9].We need to check that z1and z2both map to the same point in M P.This follows from the fact that both are opposite the vertex z3with angle3/n,which also corresponds to the center of an n-gon,so in M P project to a point which is distinct from z3. Remark3.2As of this writing,it is not known whether the center of the regular n-gon is a connection point on the double n-gon surface.If this turns out to be the case for some n which is an odd multiple of3,then by Corollary2.2(2),our construction satisfies strict ergodicity and topological dichotomy.See[1]for some recent related results.Remark3.3Since our examples are obtained by taking branched covers over lattice surfaces, a theorem of Eskin et al.[6,Thm.8.12]shows that our examples also satisfy a quadratic growth estimate of the form N P(T)∼cT2;moreover§9of[6]explains how one may explicitly compute the constant c.4Non-lattice optimal polygons are hard tofindIn this section we present results indicating that the above considerations will not easily yield a non-lattice polygon with optimal dynamics.Isolating the properties necessary for our proof of Theorem3.1,we say that a pair of polygons(P,Q)is suitable if the following hold:•P is a lattice polygon.•P tiles Q by reflections.•The corresponding cover M Q→M P as in Proposition2.3has a unique branch point which is aperiodic.In her M.Sc.thesis at Ben Gurion University,thefirst-named author conducted an exten-sive search for a suitable pair of polygons.By Corollary2.2,such a pair will have yielded a non-lattice polygon with optimal dynamics.The search begins with a list of candidates for P,i.e.a list of currently known lattice polygons.At present,due to work of many authors, there is a fairly large list of known lattice polygons but there is no classification of all lattice polygons.In[4],the full list of lattice polygons known as of this writing is given,and the following is proved:Theorem4.1(M.Cohen)Among the list of lattice surfaces given in[4],there is no P for which there is Q such that(P,Q)is a suitable pair.The proof of Theorem4.1contains a detailed case-by-case analysis for each of the differ-ent possible P.These cases involve some common arguments which we will illustrate in this section,by proving the special case in which P is any of the obtuse triangles investigated byWard[14]:Theorem4.2For n≥4,let P=P n be the(lattice)triangle with angles1n,12n,2n−32n.Then there is no polygon Q for which(P,Q)is a suitable pair.Our proof relies on some auxiliary statements which are of independent interest.In all of them,M Q→M P is the branched cover with unique branch point corresponding to a suitable pair(P,Q).These statements are also valid in the more general case in which P,Q are parking garages.Recall that an affine automorphism of a translation surface is a homeomorphism which is linear in charts.We denote by Aff(M)the group of affine automorphisms of M and by D:Aff(M)→GL2(R)the homomorphism mapping an affine automorphism to its linear part.Note that we allow orientation-reversing affine automorphisms,i.e.detϕmay be1 or−1.We now explain how G P acts on M P by translation equivalence.LetπP:N×G P→M P and S be as in the discussion preceding Proposition2.3,and let g∈G P.Since the left action of g on G is a permutation and preserves the gluing ruleπP,the map N×G P→N×G P sending(x,g )to(x,g−1g )induces a homeomorphismϕ:S→S and g◦h◦ϕis a translation in charts.Thus g∈G P gives a translation isomorphism of M P,and similarly g∈G P gives a translation isomorphism of M Q.Lemma4.3The branch point of the cover p:M Q→M P isfixed by G Q.Proof Since G Q⊂G P,any g∈G Q induces translation isomorphisms of both M P and M Q.We denote both by g.The definition of p given in thefirst paragraph of the proof of Proposition2.3shows that p◦g=g◦p;namely both maps are induced by sending (x ,g )∈N ×G Q toπP(x,gg g i),where x =h i(x).Since the cover p has a unique branch point,any g∈G Q mustfix it. Lemma4.4If an affine automorphismϕof a translation surface has infinitely manyfixed points then Dϕfixes a nonzero vector,in its linear action on R2.Proof Suppose by contradiction that the linear action of Dϕon the plane has zero as a uniquefixed point,and let Fϕbe the set offixed points forϕ.For any x∈Fϕwhich is not a singularity,there is a chart from a neighborhood U x of x to R2with x→0,and a smaller neighborhood V x⊂U x,such thatϕ(V x)⊂U x and when expressed in this chart,ϕ|V x is given by the linear action of Dϕon the plane.In particular x is the onlyfixed point in V x. Similarly,if x∈Fϕis a singularity,then there is a neighborhood U x of x which maps to R2 via afinite branched cover ramified at x→0,such that the action ofϕin V x⊂U x covers the linear action of Dϕ.Again we see that x is the onlyfixed point in V x.By compactness wefind that Fϕisfinite,contrary to hypothesis. Lemma4.5Suppose M is a lattice surface andϕ∈Aff(M)has Dϕ=−Id.Then afixed point forϕis periodic.Proof LetF1={σ∈Aff(M):Dσ=−Id}.Thenϕ∈F1and F1isfinite,since it is a coset for the group ker D which is known to be finite.Let A⊂M be the set of points which arefixed by someσ∈F1.By Lemma4.4this is afinite set,which contains thefixed points forϕ.Thus in order to prove the Lemma,it suffices to show that A is Aff(M)-invariant.Letψ∈Aff(M),and let x∈A,so that x=σ(x)with Dσ=−Id.Since-Id is central in GL2(R),D(σψ)=D(ψσ),so there is f∈ker D such thatψσ=fσψ.Thereforeψ(x)=ψσ(x)=fσψ(x),and fσ∈F1.This proves thatψ(x)∈A.Remark4.6This improves Theorem10of[8],where a similar conclusion is obtained under the additional assumptions that M is hyperelliptic and Aff(M)is generated by elliptic ele-ments.The following are immediate consequences:Corollary4.7Suppose(P,Q)is a suitable pair.Then•−Id/∈D(G Q).•None of the angles between two edges of Q are of the form p/q with gcd(p,q)=1and q even.Proof of Theorem4.2We will suppose that Q is such that(P,Q)are a suitable pair and reach a contradiction.If n is even,then Aff(M P)contains a rotation byπwhichfixes the points in M P coming from vertices of P.Thus by Lemma4.5all vertices of P give rise to periodic points,contradicting Proposition2.1(c).So n must be odd.Let x1,x2,x3be the vertices of P with corresponding angles1/n,1/2n,(2n−3)/2n. Then x3gives rise to a singularity,hence a periodic point.Also using Lemma4.5and the rotation byπ,one sees that x2also gives rise to a periodic point.So the unique branch point must correspond to the vertex x1.The images of the vertex x1in P give rise to two regular points in M P,marked c1,c2in Fig.2.Any element of G P acts on{c1,c2}by a permutation, so by Lemma4.3,G Q must be contained in the subgroup of index twofixing both of the c i. Let e1be the edge of P opposite x1.Since the reflection in e1,or any edge which is an image of e1under G P,swaps the c i,we have:e1is not a boundary edge of Q.(2) We now claim that in Q,any vertex which corresponds to the vertex x3from P is alwaysdoubled,i.e.consists of an angle of(2n−3)/n.Indeed,for any polygon P0,the group G P0 is the dihedral group D N where N is the least common multiple of the denominators of theangles at vertices of P0.In particular it contains-Id when N is even.Writing(2n−3)/2n in reduced form we have an even denominator,and since,by Corollary4.7,−Id/∈G Q,in Q the angle at vertex x3must be multiplied by an even integer2k.Since2k(2n−3)/2n is bigger than2if k>1,and since the total angle at a vertex of a polygon is less than2π,we must have k=1,i.e.any vertex in Q corresponding to the vertex x3is always doubled.This establishes the claim.It is here that we have used the assumption that Q is a polygon and not a parking garage.Fig.2Ward’s surface,n=5Fig.3Two options to start the construction ofQThere are two possible configurations in which a vertex x3is doubled,as shown in Fig.3. The bold lines indicate lines which are external,i.e.boundary edges of Q.By(2),the con-figuration on the right cannot occur.Let us denote the polygon on the left hand side of Fig.3by Q0.It cannot be equal to Q,since it is a lattice polygon.We now enlarge Q0by adding copies of P step by step,as described in Fig.4.Without loss of generality wefirst add triangle number1.By(2),the broken line indicates a side which must be internal in Q.Therefore,we add triangle number 2.We denote the resulting polygon by Q1.One can check by computing angles,using thefact that n is odd,and using Proposition2.3(4)that the cover M Q1→M P will branch overthe points a corresponding to vertex x2.Since the allowed branching is only over the points corresponding to x1,we must have Q1 Q,so we continue the construction.Without loss of generality we add triangle number3.Again,by(2),the broken line indicates a side which must be internal in Q.Therefore,we add triangle number4,obtaining Q2.Now,using Prop-osition2.3(4)again,in the cover M Q2→M P we have branching over two vertices u andv which are both of type x1and correspond to distinct points c1and c2in M P.This implies Q2 Q.Fig.4Steps of the construction of QSince both vertices u and v are delimited by2external sides,we cannot change the angle to prevent the branching over one of these points.This means that no matter how we continue to construct Q,the branching in the cover M Q→M P will occur over at least two points—a contradiction.Acknowledgments We are grateful to Yitwah Cheung and Patrick Hooper for helpful discussions,and to the referee for a careful reading and helpful remarks which improved the presentation.This research was supported by the Israel Science Foundation and the Binational Science Foundation.References1.Arnoux,P.,Schmidt,T.:Veech surfaces with non-periodic directions in the tracefield.J.Mod.Dyn.3(4),611–629(2009)2.Bouw,I.,Möller,M.:Teichmüller curves,triangle groups,and Lyapunov exponents.Ann.Math.172,139–185(2010)3.Cheung,Y.,Hubert,P.,Masur,H.:Topological dichotomy and strict ergodicity for translation surfaces.Ergod.Theory Dyn.Syst.28,1729–1748(2008)4.Cohen,M.:Looking for a Billiard Table which is not a Lattice Polygon but satisfies the Veech dichotomy,M.Sc.thesis,Ben-Gurion University(2010)/pdf/1011.32175.DeMarco,L.:The conformal geometry of billiards.Bull.AMS48(1),33–52(2011)6.Eskin,A.,Marklof,J.,Morris,D.:Unipotentflows on the space of branched covers of Veech surfaces.Ergod.Theorm Dyn.Syst.26(1),129–162(2006)7.Fox,R.H.,Kershner,R.B.:Concerning the transitive properties of geodesics on a rational polyhe-dron.Duke Math.J.2(1),147–150(1936)8.Gutkin,E.,Hubert,P.,Schmidt,T.:Affine diffeomorphisms of translation surfaces:Periodic points,Fuchsian groups,and arithmeticity.Ann.Sci.École Norm.Sup.(4)36,847–866(2003)9.Hubert,P.,Schmidt,T.:Infinitely generated Veech groups.Duke Math.J.123(1),49–69(2004)10.Masur,H.,Tabachnikov,S.:Rational billiards andflat structures.In:Handbook of dynamical systems,vol.1A,pp.1015–1089.North-Holland,Amsterdam(2002)11.Smillie,J.,Weiss,B.:Veech dichotomy and the lattice property.Ergod.Theorm.Dyn.Syst.28,1959–1972(2008)Geom Dedicata12.Veech,W.A.:Teichmüller curves in moduli space,Eisenstein series and an application to triangularbilliards.Invent.Math.97,553–583(1989)13.V orobets,Y.:Planar structures and billiards in rational polygons:the Veech alternative.(Russian);trans-lation in Russian Math.Surveys51(5),779–817(1996)14.Ward,C.C.:Calculation of Fuchsian groups associated to billiards in a rational triangle.Ergod.TheoryDyn.Syst.18,1019–1042(1998)15.Zemlyakov,A.,Katok,A.:Topological transitivity of billiards in polygons,Math.Notes USSR Acad.Sci:18:2291–300(1975).(English translation in Math.Notes18:2760–764)16.Zorich,A.:Flat surfaces.In:Cartier,P.,Julia,B.,Moussa,P.,Vanhove,P.(eds.)Frontiers in numbertheory,physics and geometry,Springer,Berlin(2006)123。
毕业论文文献外文翻译----危机管理:预防,诊断和干预文献翻译-中英文文献对照翻译
第1页 共19页中文3572字毕业论文(设计)外文翻译标题:危机管理-预防,诊断和干预一、外文原文标题:标题:Crisis management: prevention, diagnosis and Crisis management: prevention, diagnosis andintervention 原文:原文:The Thepremise of this paper is that crises can be managed much more effectively if the company prepares for them. Therefore, the paper shall review some recent crises, theway they were dealt with, and what can be learned from them. Later, we shall deal with the anatomy of a crisis by looking at some symptoms, and lastly discuss the stages of a crisis andrecommend methods for prevention and intervention. Crisis acknowledgmentAlthough many business leaders will acknowledge thatcrises are a given for virtually every business firm, many of these firms do not take productive steps to address crisis situations. As one survey of Chief Executive officers of Fortune 500 companies discovered, 85 percent said that a crisisin business is inevitable, but only 50 percent of these had taken any productive action in preparing a crisis plan(Augustine, 1995). Companies generally go to great lengths to plan their financial growth and success. But when it comes to crisis management, they often fail to think and prepare for those eventualities that may lead to a company’s total failure.Safety violations, plants in need of repairs, union contracts, management succession, and choosing a brand name, etc. can become crises for which many companies fail to be prepared untilit is too late.The tendency, in general, is to look at the company as a perpetual entity that requires plans for growth. Ignoring the probabilities of disaster is not going to eliminate or delay their occurrences. Strategic planning without inclusion ofcrisis management is like sustaining life without guaranteeinglife. One reason so many companies fail to take steps to proactively plan for crisis events, is that they fail to acknowledge the possibility of a disaster occurring. Like an ostrich with its head in the sand, they simply choose to ignorethe situation, with the hope that by not talking about it, it will not come to pass. Hal Walker, a management consultant, points out “that decisions will be more rational and better received, and the crisis will be of shorter duration, forcompanies who prepare a proactive crisis plan” (Maynard, 1993) .It is said that “there are two kinds of crises: those that thatyou manage, and those that manage you” (Augustine, 1995). Proactive planning helps managers to control and resolve a crisis. Ignoring the possibility of a crisis, on the other hand,could lead to the crisis taking a life of its own. In 1979, theThree-Mile Island nuclear power plant experienced a crisis whenwarning signals indicated nuclear reactors were at risk of a meltdown. The system was equipped with a hundred or more different alarms and they all went off. But for those who shouldhave taken the necessary steps to resolve the situation, therewere no planned instructions as to what should be done first. Hence, the crisis was not acknowledged in the beginning and itbecame a chronic event.In June 1997, Nike faced a crisis for which they had no existi existing frame of reference. A new design on the company’s ng frame of reference. A new design on the company’s Summer Hoop line of basketball shoes - with the word air writtenin flaming letters - had sparked a protest by Muslims, who complained the logo resembled the Arabic word for Allah, or God.The council of American-Islamic Relations threatened aa globalNike boycott. Nike apologized, recalled 38,000 pairs of shoes,and discontinued the line (Brindley, 1997). To create the brand,Nike had spent a considerable amount of time and money, but hadnever put together a general framework or policy to deal with such controversies. To their dismay, and financial loss, Nike officials had no choice but to react to the crisis. This incident has definitely signaled to the company that spending a little more time would have prevented the crisis. Nonetheless,it has taught the company a lesson in strategic crisis management planning.In a business organization, symptoms or signals can alert the strategic planners or executives of an eminent crisis. Slipping market share, losing strategic synergy anddiminishing productivity per man hour, as well as trends, issues and developments in the socio-economic, political and competitive environments, can signal crises, the effects of which can be very detrimental. After all, business failures and bankruptcies are not intended. They do not usually happen overnight. They occur more because of the lack of attention to symptoms than any other factor.Stages of a crisisMost crises do not occur suddenly. The signals can usuallybe picked up and the symptoms checked as they emerge. A company determined to address these issues realizes that the real challenge is not just to recognize crises, but to recognize themin a timely fashion (Darling et al., 1996). A crisis can consistof four different and distinct stages (Fink, 1986). The phasesare: prodromal crisis stage, acute crisis stage, chronic crisisstage and crisis resolution stage.Modern organizations are often called “organic” due tothe fact that they are not immune from the elements of their surrounding environments. Very much like a living organism, organizations can be affected by environmental factors both positively and negatively. But today’s successfulorganizations are characterized by the ability to adapt by recognizing important environmental factors, analyzing them, evaluating the impacts and reacting to them. The art of strategic planning (as it relates to crisis management)involves all of the above activities. The right strategy, in general, provides for preventive measures, and treatment or resolution efforts both proactively and reactively. It wouldbe quite appropriate to examine the first three stages of acrisis before taking up the treatment, resolution or intervention stage.Prodromal crisis stageIn the field of medicine, a prodrome is a symptom of the onset of a disease. It gives a warning signal. In business organizations, the warning lights are always blinking. No matter how successful the organization, a number of issues andtrends may concern the business if proper and timely attentionis paid to them. For example, in 1995, Baring Bank, a UK financial institution which had been in existence since 1763,ample opportunitysuddenly and unexpectedly failed. There wasfor the bank to catch the signals that something bad was on thehorizon, but the company’s efforts to detect that were thwarted by an internal structure that allowed a single employee both to conduct and to oversee his own investment trades, and the breakdown of management oversight and internalcontrol systems (Mitroff et al., 1996). Likewise, looking in retrospect, McDonald’s fast food chain was given the prodromalsymptoms before the elderly lady sued them for the spilling ofa very hot cup of coffee on her lap - an event that resulted in a substantial financial loss and tarnished image of thecompany. Numerous consumers had complained about thetemperature of the coffee. The warning light was on, but the company did not pay attention. It would have been much simplerto pick up the signal, or to check the symptom, than facing the consequences.In another case, Jack in the Box, a fast food chain, had several customers suffer intestinal distress after eating at their restaurants. The prodromal symptom was there, but the company took evasive action. Their initial approach was to lookaround for someone to blame. The lack of attention, the evasiveness and the carelessness angered all the constituent groups, including their customers. The unfortunate deaths thatptoms,occurred as a result of the company’s ignoring thesymand the financial losses that followed, caused the company to realize that it would have been easier to manage the crisis directly in the prodromal stage rather than trying to shift theblame.Acute crisis stageA prodromal stage may be oblique and hard to detect. The examples given above, are obvious prodromal, but no action wasWebster’s New Collegiate Dictionary, an acute stage occursacutewhen a symptom “demands urgent attention.” Whether the acutesymptom emerges suddenly or is a transformation of a prodromalstage, an immediate action is required. Diverting funds and other resources to this emerging situation may cause disequilibrium and disturbance in the whole system. It is onlythose organizations that have already prepared a framework forthese crises that can sustain their normal operations. For example, the US public roads and bridges have for a long time reflected a prodromal stage of crisis awareness by showing cracks and occasionally a collapse. It is perhaps in light of the obsessive decision to balance the Federal budget that reacting to the problem has been delayed and ignored. This situation has entered an acute stage and at the time of this writing, it was reported that a bridge in Maryland had just collapsed.The reason why prodromes are so important to catch is thatit is much easier to manage a crisis in this stage. In the caseof most crises, it is much easier and more reliable to take careof the problem before it becomes acute, before it erupts and causes possible complications (Darling et al., 1996). In andamage. However, the losses are incurred. Intel, the largest producer of computer chips in the USA, had to pay an expensiveprice for initially refusing to recall computer chips that proved unreliable o n on certain calculations. The f irmfirm attempted to play the issue down and later learned its lesson. At an acutestage, when accusations were made that the Pentium Chips were not as fast as they claimed, Intel quickly admitted the problem,apologized for it, and set about fixing it (Mitroff et al., 1996). Chronic crisis stageDuring this stage, the symptoms are quite evident and always present. I t isIt is a period of “make or break.” Being the third stage, chronic problems may prompt the company’s management to once and for all do something about the situation. It may be the beginning of recovery for some firms, and a deathknell for others. For example, the Chrysler Corporation was only marginallysuccessful throughout the 1970s. It was not, however, until the company was nearly bankrupt that amanagement shake-out occurred. The drawback at the chronic stage is that, like in a human patient, the company may get used to “quick fixes” and “band “band--aid”approaches. After all, the ailment, the problem and the crisis have become an integral partoverwhelmed by prodromal and acute problems that no time or attention is paid to the chronic problems, or the managers perceive the situation to be tolerable, thus putting the crisison a back burner.Crisis resolutionCrises could be detected at various stages of their development. Since the existing symptoms may be related todifferent problems or crises, there is a great possibility thatthey may be misinterpreted. Therefore, the people in charge maybelieve they have resolved the problem. However, in practicethe symptom is often neglected. In such situations, the symptomwill offer another chance for resolution when it becomes acute,thereby demanding urgent care. Studies indicate that today anincreasing number of companies are issue-oriented and searchfor symptoms. Nevertheless, the lack of experience in resolvinga situation and/or inappropriate handling of a crisis can leadto a chronic stage. Of course, there is this last opportunityto resolve the crisis at the chronic stage. No attempt to resolve the crisis, or improper resolution, can lead to grim consequences that will ultimately plague the organization or even destroy it.It must be noted that an unsolved crisis may not destroy the company. But, its weakening effects can ripple through the organization and create a host of other complications.Preventive effortsThe heart of the resolution of a crisis is in the preventiveefforts the company has initiated. This step, similar to a humanbody, is actually the least expensive, but quite often the mostoverlooked. Preventive measures deal with sensing potential problems (Gonzales-Herrero and Pratt, 1995). Major internalfunctions of a company such as finance, production, procurement, operations, marketing and human resources are sensitive to thesocio-economic, political-legal, competitive, technological, demographic, global and ethical factors of the external environment. What is imminently more sensible and much more manageable, is to identify the processes necessary forassessing and dealing with future crises as they arise (Jacksonand Schantz, 1993). At the core of this process are appropriate information systems, planning procedures, anddecision-making techniques. A soundly-based information system will scan the environment, gather appropriate data, interpret this data into opportunities and challenges, and provide a concretefoundation for strategies that could function as much to avoid crises as to intervene and resolve them.Preventive efforts, as stated before, require preparations before any crisis symptoms set in. Generally strategic forecasting, contingency planning, issues analysis, and scenario analysis help to provide a framework that could be used in avoiding and encountering crises.出处:出处:Toby TobyJ. Kash and John R. Darling . Crisis management: prevention, diagnosis 179-186二、翻译文章标题:危机管理:预防,诊断和干预译文:本文的前提是,如果该公司做好准备得话,危机可以更有效地进行管理。
工程管理外文翻译(原文+译文)
Concrete Construction matterT. Pauly, M. J. N. PriestleyAbstractViewed in terms of accepted practices, concrete construction operations leave much to be desired with respect to the quality, serviceability, and safety of completed structures. The shortcomings of these operations became abundantly clear when a magnitude 7.6 earthquake struck northern Paki-stan on October 8, 2005, destroying thousands of buildings, damaging bridges, and killing an esti-mated 79,000 people. The unusually low quality of construction operations prevalent was a major cause of the immense devastation.Keywords: Concrete Placing Curing Construction TechnologyPlacing ConcreteIf concrete is placed in the surface, the sur-face should be filled with water sufficiently to prevent it from absorbing the concrete of its water. If fresh concrete is to be placed on or nearby to concrete that has solidified, the surface of the placed concrete should be cleaned absolutely, preferably with a high-pressure air or water jet or steel-wire brushes. The surface should be wet, but there should be no much water. A little quantity of cement grout should be brushed over the whole area, and then followed immediately with the application of a 1/2-in Layer of mortar. The fresh concrete should be placed on or against the mortar.In order to decrease the disintegration re-sulting from carriage after it is placed. The con-crete should be placed as nearly as probably in itsfinal point. It should be placed in layers to permit uniform compaction. The time interval between the placing of layers should be limited to assure perfect bond between the fresh and previously placed concrete.In placing concrete in deeper patters, a ves-sel should be used to limit the free fall to not over 3 or 4 ft, in order to prevent concrete disintegra-tion. The vessel is a pipe made of lightweight metal, having adjustable lengths and attached to the bottom of a hopper into which the concrete is deposited. As the patters are filled, sections of the pipe may be removed.Immediately after the concrete is placed, it should be compacted by hand pudding or a me-chanical vibrator to eliminate voids. The vibrator should be left in one position only long enough to reduce the concrete around it to a plastic mass; then the vibrator should be moved, or disintegra-tion of the aggregate will occur. In general, the vibrator should not be permitted to penetrate concrete in the prior lift.The mainly advantage of vibrating is that it permits the use of a drier concrete, which has a higher strength because of the reduced water content. Among the advantages of vibrating con-crete are the following:1.The decreased water permits a reduction in the cement and fine aggregate because less cement paste is needed.2.The lower water content decreases shrinkage and voids.3.The drier concrete decreases the cost of finishing the surface.4.Mechanical vibration may replace three to eight hand puddles.5.The lower water content increases the strength of the concrete.6.The drier mixture permits theremoval of some patters more quickly, which may reduce the cost of patters.Curing ConcreteIf concrete is to gain its maximum strength and other desirable properties, it should be cured with adequate moisture and at a favorable tem-perature. Failure to provide these conditions may result in an inferior concrete.The initial moisture in concrete is adequate to hydrate all the cement, provided it is not should replace the moisture that does evaporate. This may be accomplished by many methods, such as leaving the patters in place, keeping the surface wet, or covering the surface with a liquid curing compound, which comes being to a water-tight membrane that prevents the escape of the initial water. Curing compounds may be applied by brushes or pressure sprayers. A gallon will cover 200 to 300 sq ft.Concrete should be placed at a temperature not less than 40 or more than 80°F.A lower tem-perature will decrease the rate of setting, while ahigher temperature will decrease the ultimate strength.Placing Concrete in Cold WeatherWhen the concrete is placed during cold weather, it is usually necessary to preheat the water, the aggregate, or both in order that the ini-tial temperature will assure an initial set and gain in strength .Preheating the water is the most ef-fective method of providing the necessary tem-perature. For this purpose a water reservoir should be equipped with pipe coils through which steam can be passed, or steam may bedischarged directly into the water, several outlets being used to given better distribution of the heat.When the temperatures of the mixtures are known, some specific charts may be used to cal-culate the temperature of concrete. A straight line pass all three scales, passing through every two known temperatures, will assure the determina-tion of the third temperature. If the surface of sand isdry, the fact lines of the scales giving the temperature of concrete should be used. However, if the sand contains about 3 percent moisture, the dotted lines should be used.Specifications usually demand that freshly placed concrete shall be kept at a temperature of not less than 70°F for 3 days or 50°F for 5 days after it is placed. Some proper method must be provided to keep the demanded temperature when the cold weather is estimated.Reinforcing steels for concreteCompared with concrete, steel is a high strength material. The useful strength of ordinary reinforcing steels in tension as well as compres-sion, i.e., the yield strength, is about 15 times the compressive strength of common structural con-crete, and well over 100 times its tensile strength. On the other hand, steel is a high-cost material compared with concrete. It follow that the two materials are the best used in combination if theconcrete is made to resist the compressive stresses and the compressive force, longitudinal steel reinforcing bars are located close to the ten-sion face to resist the tension force., and usually additional steel bars are so disposed that they re-sist the inclined tension stresses that are caused by the shear force in the beams. However, rein-forcement is also used for resisting compressive forces primarily where it is desired to reduce the cross-sectional dimensions of compression members, as in the lower-floor columns of multi-story buildings. Even if no such necessity exits , a minimum amount of reinforce- ment is placed in all compression members to safeguard them against the effects of small accidental bending moments that might crack and even fail an unre-inforced member.For most effective reinforcing action, it is essential that steel and concrete deform together, i. e., that there be a sufficiently strong bond be-tween the two materials to ensure that no relative movements of the steel bars and the surrounding concrete occur. This bond is provided by the rela-tively large chemical adhesion which develops at the steel-concrete interface, by the natural roughness of the mill scale of hot-rolled rein-forcing bars , and by the closely spaced rib-shap-ed surface deformations with which reinforcing bars are furnished in order to provide a high de-gree of interlocking of the two materials.Steel is used in two different ways in con-crete structures: as reinforcing steel and as prestressing steel .reinforcing steel is placed in the forms prior to casting of the concrete. Stresses in the steel, as in the hardened concrete, are caused only by the loads on the structure, except for possible parasitic stresses from shrinkage or similar causes. In contrast, in priestesses concrete structures large tension forces are applied to the reinforcement prior to letting it act jointly with the concrete in resistingexternal.The most common type of reinforcing steel is in the form of round bars, sometimes called rebars, available in a large range of diameters,from 10 to 35 mm for ordinary applications and in two heavy bar sizes off 44 and 57 mm these bars are furnished with surface deformations for the purpose of increasing resistance to slip be-tween steel and concrete minimum requirements for these deformations have been developed in experimental research. Different bar producers use different patterns, all of which satisfy these requirements.Welding of rebars in making splices, or for convenience in fabricating reinforcing cages for placement in the forms, may result in metal-lurgical changes that reduce both strength and ductility, and special restrictions must be placed both strength and ductility, and special restric-tions must be placed both on the type of steel used and the welding procedures the provisions of ASTM A706 relatespecifically to welding.In reinforced concrete a long-time trend is evident toward the use of higher strength materi-als, both steel and concrete.Reinforcing bars with 40ksi yield stress , almost standard 20 years ago , have largely been replaced by bars with 60ksi yield stress , both because they are more economical and because their use tends to reduce congestion of steel in the forms .The ACI Code permits reinforcing steels up to Fy=80ksi. Such high strength steels usually yield gradually but have no yield plateau in this situation the ACI Code requires that at the speci-fied minimum yield strength the total strain shall not exceed 0.0035 this is necessary to make cur-rent design methods, which were developed for sharp-yielding steels with a yield plateau, appli-cable to such higher strength steels. there is no ASTM specification for deformed bars may be used , according to the ACI Code , providing they meet the requirements stated under special circumstances steel in this higher strength range has its place, e.g., in lower-story columns of high-rise buildings.In order to minimize corrosion of rein-forcement and consequent spelling of concrete under sever exposure conditions such as in bridge decks subjected to deicing chemicals , galvanized or epoxy-coated rebars may be specified.Repair of Concrete StructuresReinforced concrete is generally a very du-rable structural material and very little repair work is usually needed. However, its durability can be affected by a variety of causes, including those of design and construction faults, use of inferior materials and exposure to aggressive en-vironment. The need for a repair is primarily dic-tated by the severity of the deterioration as de-termined from the diagnosis. Good workmanship is essential if any thing more than just a cosmetic treatment to the creation is required.1. performance requirements of repair systemHaving established the causes of the defect by carefully diagnosing the distress, the next step should be to consider the requirements of the re-pair method that will offer an effective solution to the problem (see fig.).①DurabilityIt is important to select repair materials that provide adequate durability. Materials used for the repair job should be at least as durable as the substrate concrete to which it is applied.②Protection of steelThe mechanism of protection provided to the reinforcing depends on the type of repair ma-terials used. For example, cementations materials can protect the steel from further corrosion by their inhibitive effect of increasing the alkalinity of the concrete, whereas epoxy resin mortars can give protection against the ingress of oxygen,moisture and other harmful agents.③Bond with substrateThe bond with the substrate must produce an integral repair to prevent entry of moisture and atmospheric gases at the interface. With most re-pair materials, the bond is greatly enhanced with the use of a suitable bonding aid such as an un-filled epoxy resin systems and slurry of Portland cement, plus any latex additives for a Portland cement-based repair system. Precautions should also be takento remove all loose and friable ma-terials from the surfaces to be bonded.④Dimensional StabilityShrinkage of materials during curing should be kept to a minimum. Subsequent dimensional change should be very close in the substrate in order to prevent failure⑤Initial Resistance to Environmentally In-duced DamageSome initial exposure conditions may lead to premature damage lo repairs. For example, partially cured Portland cement repairs can dete-riorate from hot weather preventing full hydration of the cement. To prevent this from happening extra protection during curing time may be nec-essary.⑥Ease of ApplicationMaterials should be easily mixed and ap-plied so that they can be worked readily into small crevices and voids. Ideally, the material should not stick to tools, and should not shear while being trowel led nor slump after placement.⑦AppearanceThe degree to which the repair material should match the existing concrete will depend on the use of the structure and the client' s re-quirements. A surface coating may be required when appearance is important or when cover to reinforcement is small.2. Selection of Repair MethodsA suitable repair counteracts all the defi-ciencies which are relevant to the use of the structure.The selection of tile correct method and material for a particular, application requires careful consideration, whether to meet special requirements for placing strength, durability or other short-or long-term properties. These con-siderations include:1. Nature of the DistressIf alive crack is filled with a rigid material, then either the repair material will eventually fail or some new cracking will occur adjacent to the original crack. Repairs to live cracks must either use flexible materials to accommodate move-ments or else steps must be taken prior to the re-pair to eliminate the movement.2. Position of the CrackTechniques which rely on gravity to intro-duce the material into the crack are more suc-cessfully carried out on horizontal surfaces but are rarely effective on vertical ones.3. EnvironmentIf moisture, water or contaminants are found in the crack, then it is necessary to rectify the leaks Repair to slop leaks may be further com-plicated by the need to make the repairs while the structure is in service and the environment is damp.4. WorkmanshipThe skill the operatives available to carry put the repairs is another relevant factors. Some-times this can mean the difference between a permanent repair and premature failure of the re-pair material.5. CostThe cost of repair materials is usually small compared with the costs of providing access, preparation and actual labor.6. AppearanceThe repair surface may be unsightly, par-ticularly when it appears on a prominent part of the building. In this case, the repair system will include some form of treatment over the entire surface.Reference[1]Philip Jodidio, Contemporary European Architecture, Taschen, Koln, pp.148-153[2]Ann Breen & Dick Rigby, Waterfronts, McGraw-Hill, Inc. New York, 1994, pp.297-300[3]Ann Breen & Dick Rigby, The New Waterfront, Thames and Hudson, London, 1996, pp.118-120[4]Ann Breen & Dick Rigby, The New Waterfront, Thames and Hudson, London, 1996, pp.52-55[5]Robert Holden, International Landscape Design, Laurence King Publishing, London, 1996, pp.10-27[6] A new concept in refrigerant control for heat pumps ,J.R.Harnish,IIR Conference Pa-per,Cleveland,Ohio.May,1996[7]Carrier Corporation-Catalog 523 848,1997[8]Waste Heat Management Handbook, Na-tional Bureau of Standardc Handbook 121, Pub-lica-tion PB 264959, February,1997Ten design principles for air to air heat pumps,Allen Trask,ASHRAE Journal,July,1997重庆科技学院学生毕业设计(论文)外文译文学院建建筑工程学院专业班级工管103学生姓名李学号201044241附件1:外文资料翻译译文混凝土施工事项T.Pauly, M.J.N.Priestley摘要:根据一般承认的惯例看,巴基斯坦的混凝土结构建筑物在结构上的质量,效用和安全需要上都留下了很多值得关注的问题。
外文翻译原文
Applications of magnesium alloys1 the automotive industryWith the rapid development of science and technology, modern automobile manufacturing material composition changed, the proportion of dense material, low density materials have dramatically increases.In the 1990s, the car from the lightweight, environment-friendly materials to the development trend of the increasingly clear direction, aluminum, magnesium alloys are materials become more widely used.According to the calculation, car every hundred kilometres 100kg, reduced quality can save petrol 0.3 L, every 10% lower quality of the car, he can reduce automobile emissions of 10%.And the related car lightweight technology into the 21st century automobile development frontier and hot.Auto lightweight aims to increase automobile fuel economy and improve the safety car.One of the important ways to lightweight is required in order to ensure the intensity and rigidity of lightweight materials selection, magnesium alloy under is the most ideal lightest metal structure material, thus become automobile reduce weight to improve its energy conservation and environmental protection material of choice.At present, the application of magnesium in the bus is: every car between changes in 17kg 0.5 ~, average 3kg/car is about.This number may be in the future is increasing rapidly, magnesium alloy will become the main components of the car.In the U.S., many automobile parts using magnesium alloy casting production, in some models for magnesium consumption is big 4 / 26.3-5.8 kg.The famous auto companies such as general motors and ford, Chrysler, in the past ten years has been committed to the new magnesium alloys and steering clutch, intake, and lighting grippers automotive components etc, development and application effect is very obvious.German vw created the world's economy, one of the car body 1L car than aluminum alloys using space frame, in addition to its 13kg light body shell, gearbox shell, pump seat is made with magnesium alloy frame, etc.Audi car company first launch magnesium alloy die casting instrument. The use of AudiA62.8 Audimultitronic stepless/manual transmission using magnesium alloy enclosure, one-piece total weight than Tiptronic 27kg light.Currently V olkswagen (VW) Passat, AudiA2, A4 and magnesium consumption reached A6 14kg/Taiwan. Japan's Toyota first create magnesium alloy wheels of automobiles, CAM cover parts.Toyota automobile steering with airbag, after using AM60B quality magnesium alloy die-casting, quality of steel products than in the past, stock respectively, and reduced by 15% and 45% steering system vibration.But with the Australian industry ministry mitsubishi, develop deluxe engine. Magnesium applications in automotive industry in China. Shanghai, faw, no.2, changan auto company in research and development and application of magnesium in auto field play an important role.Magnesium in the car, has the following advantages:1) used to be the lightest metal magnesium metal: the proportion of aluminum, zinc 2/3 of less than 1/4, steel or iron 1/4, containing 30% for the glass fiber and magnesium, polycarbonate composite proportion of no more than 10% of it. Thus the use of magnesium alloys can effectively reduce automobile quality.2) than high strength: experiments showed that the intensity of magnesium alloy and stainless steel than than high. So don't reduce parts under the condition of magnesium alloy castings and strength than aluminum casting about 25% weight.3)thermal conductivity:Although the thermal conductivity aluminum alloys, but less than steel double than plastic materials, 10 times higher. Therefore magnesium alloys are widely used in casting automobile wheel, can effectively, provide brake friction heat out braking stability.4) easy machining:Magnesium is very easy, machining, without the cooling liquid lubricant, under the condition of high load processing, can get clean processing.5) to vibration and impact absorption performance is good:The vibration energy absorption of magnesium alloys using ability, in the drive and transmission components can reduce vibration. At the same time, in the steering wheel and seat in the car using magnesium alloy after collision can absorb the impact energy is very good.6) fight depression and other properties: magnesium metal distortion ability, compared to the impact of less than other metals.7) good welding and casting of magnesium alloys melting performance: the latent heat and heat capacity and low alloy melt, than less energy and curing speed, low viscosity, dynamics, casting mould filling better than aluminum alloy die-casting production cycle is short, easily forming thin-walled structure. The affinity with iron and magnesium, quenching iron capacity is low, service mode, mould easily than aluminium alloy high 2.3 times.8) easy recycling: magnesium recovery can be directly melting, casting and then reduce its mechanical properties.Compared with other metal alloys, low melting point, heat fuses in small, renewable energy is consumed during the new materials consumed energy of 4%. In addition, the electromagnetic shielding alloys, good appearance. Magnesium is the most potential auto structural materials, the 21st century will be a magnesium applications with the rapid development of The Times, the significant weight loss of energy saving effect, reproducible and can be recycled and dimension stability of casting and aseismic characteristics, automobile industry of the world.2 communication electronics industryElectronic information industry due to the development of the market for digital electronic communication, and highly integrated products, light and recycling of increasing demands. When the window is more and more big, the function, the more hours more volume of mobile phone housing demand higher intensity, magnesium, and replace plastic under-voltage senior texture. For power is more and more big, the thickness of the thinner notebook computer, fever is a big problem.And the cooling performance, magnesium good soon become this kind of product selection of material shell. For example, thin cast magnesium alloys have excellent performance and its die-casting wall thickness of 0.8-1.5 mm, and maintain the strength, stiffness and impact resistance. Therefore, in the parenchyma, lightweight, impact resistance, electromagnetic shielding, cooling and environmental protection requirement, magnesium became the best choice of the manufacturer. In recent years, the electronicinformation industry of magnesium consumption increase sharply, it has become the global magnesium consumption of another important factor.At present, in the areas of 3C magnesium applications are almost cast magnesium alloys. Magnesium die-cast by AZ (Mg - A1 - Zn) series, this series has good comprehensive mechanical properties, casting properties and corrosion resistance, the highest yield strength, is the most widely used in die casting Mg, one of the most widely used 3C products of AZ91D for. Japan's first will die for 3C product, early in 1991-1995, IBM Japanese companies after years of effort, successfully in seven kinds of notebook computers used on magnesium alloy die-casting shell. This method is the most mature and 3C product in the most widely used methods.3 weapons and equipmentQuality is the impact weapons achieve rapid reaction ability of the battlefield, so, one of the main factors of modern high-tech war for weapon and equipment quality index proposed extremely stringent requirements. Developed countries all expensive, lightweight materials and structure of research of advanced manufacturing technology, in order to reduce weapons, improve the quality of weapon equipment, increase flexibility with elastic and field support system, improve the weapons and dosage of survival and combat soldiers battlefield.Low density of magnesium alloys are the lightest, metal structure material, it has the strength, stiffness and fatigue strength is high, high damping suspension performance, can withstand the tremendous impact load and electric conductivity and thermal conductivity, magnetic shielding, machining and forming technology, excellent polishing performance, easy to recycle and so on a series of advantages, is the ideal weapon lightweight materials.In recent years, the United States and western military power has begun to magnesium.Alloy material used to weapons equipment and related platforms, developing very rapidly. Magnesium alloy material within a period of time in the future, there may be more than aluminum alloy material, become the weapon equipment in one of the most widely used metal structural materials.In the weapon equipment, the applications of magnesium alloys mainly confined in aviation, ing magnesium alloy production of aerospace, main components: aircraft parts, plane wall panels, rigging, gasoline and fuel oil system components, partition, fuselage rack, plane truss, ribs, long CangTi every box, fighter aircraft cockpit cabin, pilot seat, operating system and bearings, aircraft landing rocker outside, the wheel hub, rim, aircraft motor shell, pump shell, instrument shell of castings, high air-sealed members and various accessories, organized the wing aircraft engine parts, rib, casting, centrifuge magazine, front after reducer, supporting shell and magazine cover, the shell after the helicopter engine speed reducer, computer magazine, magazine before the jet engine turbine compressor magazine, supporting shell, magazine, centrifuge cartridge into the qi, hydraulic device shell, Satellites and space probe, satellite Angle, bracket, bushings, beams and t-tube stent, fittings, and satellite antenna structure, space station frames on the pieces.In recent years, except in the military and civil aircraft and aircraft engine on the magnesium alloy material, abroad, in other applications of weapons are increasing. In the light, America has been used in portable firearm stents will magnesium, solo withcommunications equipment.U.S. manufacturing Racegun (strong propellant, advanced combat with handguns) trigger parts using magnesium alloys, quality, reduce rf time reduced by 45%, American ideal solo OICW combat system (comprehensive) shell adopts magnesium, quality components GMBH &co 6.37 8.17 from below, Hanter Firearms LNC M39M1 pistol with magnesium alloy QiangJi parts. Russia's production POSP6 x 12 guns with zoom lens shell adopts magnesium alloys, observations of magnesium alloy by Germany, the bottom, pistols gunstocks sub-machine gun, France WK50 type anti-tank QiangLiuDan application of magnesium alloys, an all-missile quality 800g only.In the gun and ammunition, Germany AMX magnesium alloy by the CN105F1-30 XianTang type of gun tube sheath, British 120mmBATL6Wombat recoilless antitank guns used alloys, greatly reduced, and the quality of the M8-0.5 the rifle, total quality 308kg before, Israel has played for times caliber shelling magnesium missile shell, either.In a chariot, American land raider buggies AAA V WE43A magnesium alloy by dynamic transmission gears, case. W274A1 type magnesium alloy by military jeep body and bridge shell, significantly reducing the quality, good flexibility and YueYeXing, four soldiers can lift it, some modifications are mounted cannon, became the most without recoil pocket of self-propelled guns.。
5、外文文献翻译(附原文)产业集群,区域品牌,Industrial cluster ,Regional brand
外文文献翻译(附原文)外文译文一:产业集群的竞争优势——以中国大连软件工业园为例Weilin Zhao,Chihiro Watanabe,Charla-Griffy-Brown[J]. Marketing Science,2009(2):123-125.摘要:本文本着为促进工业的发展的初衷探讨了中国软件公园的竞争优势。
产业集群深植于当地的制度系统,因此拥有特殊的竞争优势。
根据波特的“钻石”模型、SWOT模型的测试结果对中国大连软件园的案例进行了定性的分析。
产业集群是包括一系列在指定地理上集聚的公司,它扎根于当地政府、行业和学术的当地制度系统,以此获得大量的资源,从而获得产业经济发展的竞争优势。
为了成功驾驭中国经济范式从批量生产到开发新产品的转换,持续加强产业集群的竞争优势,促进工业和区域的经济发展是非常有必要的。
关键词:竞争优势;产业集群;当地制度系统;大连软件工业园;中国;科技园区;创新;区域发展产业集群产业集群是波特[1]也推而广之的一个经济发展的前沿概念。
作为一个在全球经济战略公认的专家,他指出了产业集群在促进区域经济发展中的作用。
他写道:集群的概念,“或出现在特定的地理位置与产业相关联的公司、供应商和机构,已成为了公司和政府思考和评估当地竞争优势和制定公共决策的一种新的要素。
但是,他至今也没有对产业集群做出准确的定义。
最近根据德瑞克、泰克拉[2]和李维[3]检查的关于产业集群和识别为“地理浓度的行业优势的文献取得了进展”。
“地理集中”定义了产业集群的一个关键而鲜明的基本性质。
产业由地区上特定的众多公司集聚而成,他们通常有共同市场、,有着共同的供应商,交易对象,教育机构和其它像知识及信息一样无形的东西,同样地,他们也面临相似的机会和威胁。
在全球产业集群有许多种发展模式。
比如美国加州的硅谷和马萨诸塞州的128鲁特都是知名的产业集群。
前者以微电子、生物技术、和风险资本市场而闻名,而后者则是以软件、计算机和通讯硬件享誉天下[4]。
外文翻译--外文原文
外文翻译--外文原文MCU DescriptionSCM is also known as micro-controller (Microcontroller Unit), commonly used letters of the acronym MCU MCU that it was first used in industrial control. Only a single chip by the CPU chip developed from a dedicated processor. The first design is by a large number of peripherals and CPU on a chip in the computer system, smaller, more easily integrated into a complex and demanding on the volume control device which. INTEL's Z80 is the first designed in accordance with this idea processor, then on the development of microcontroller and dedicated processors have parted ways.Are 8-bit microcontroller early or 4 bits. One of the most successful is the INTEL 8031, for a simple, reliable and good performance was a lot of praise. Then developed in 8031 out of MCS51 MCU Systems. SCM systems based on this system until now is still widely used. With the increased requirements of industrial control field, began a 16-bit microcontroller, because the cost is not satisfactory but have not been very widely used. After 90 years with the great development of consumer electronics, microcontroller technology has been a huge increase. With INTEL i960 series, especially the later series of widely used ARM, 32-bit microcontroller quickly replace high-end 16-bit MCU status and enter the mainstream market. The traditional 8-bit microcontroller performance have been the rapid increase capacity increase compared to 80 the number of times. Currently, high-end 32-bit microcontroller clocked over 300MHz, the performance catching the mid-90's dedicated processor, while the average model prices fall to one U.S. dollars, the most high-end [1] model only 10 dollars. Modern SCM systems are no longer only in the development and use of bare metal environment, a large number of proprietary embedded operating system is widely used in the full range of SCM. The handheld computers and cell phones as the core processing of high-end microcontroller can even use a dedicated Windows and Linux operating systems.SCM is more suitable than the specific processor used in embedded systems, so it was up to the application. In fact the number of SCM is the world's largest computer. Modern human life used in almost every piece of electronic and mechanical products will be integrated single chip. Phone, telephone, calculator, home appliances, electronic toys, handheld computers and computer accessories such as a mouse with a 1-2 in both the Department of SCM. Personal computer will have a large number ofSCM in the work. General car with more than 40 SCM, complex industrial control systems may even have hundreds of SCM in the same time work! SCM is not only far exceeds the number of PC and other computing the sum, or even more than the number of human beingsSingle chip, also known as single-chip microcontroller, it is not complete a certain logic chips, but to a computer system integrated into a chip. Equivalent to a micro-computer, and computer than just the lack of a microcontroller I / O devices. General talk: a chip becomes a computer. Its small size, light weight, cheap, for the study, application and development of facilities provided. At the same time, learning to use the MCU is to understand the principle and structure of the computer the best choice.SCM and the computer functions internally with similar modules, such as CPU, memory, parallel bus, the same effect as well, and hard disk memory devices, and different is its performance of these components were relatively weak many of our home computer, but the price is low , usually not more than 10 yuan you can do with it ...... some control for a class is not very complicated electrical work is enough of. We are using automatic drum washing machine, smoke hood, VCD and so on appliances which could see its shadow! ...... It is primarily as a control section of the core componentsIt is an online real-time control computer, control-line is that the scene is needed is a stronger anti-jamming ability, low cost, and this is, and off-line computer (such as home PC), the main difference.Single chipMCU is through running, and can be modified. Through different procedures to achieve different functions, in particular special unique features, this is another device much effort needs to be done, some great efforts are very difficult to do. A not very complex functions if the 50's with the United States developed 74 series, or the 60's CD4000 series of these pure hardware buttoned, then the circuit must be a large PCB board! But if the United States if the 70's with a series of successful SCM market, the result will be a drastic change! Just because you are prepared by microcomputer programs can achieve high intelligence, high efficiency and high reliability!As the microcontroller on the cost-sensitive, so now the dominant software or the lowest level assembly language, which is the lowest level in addition to more than binary machine code language, and as so low why is the use? Many high-levellanguage has reached the level of visual programming Why is not it? The reason is simply that there is no home computer as a single chip CPU, not as hard as a mass storage device. A visualization of small high-level language program which even if only one button, will reach tens of K of size! For the home PC's hard drive in terms of nothing, but in terms of the MCU is not acceptable. SCM in the utilization of hardware resources to be very high for the job so although the original is still in the compilation of a lot of use. The same token, if the giant computer operating system and applications run up to get home PC, home PC, also can not afford to.Can be said that the twentieth century across the three "power" era, that is, the age of electricity, the electronic age and has entered into the computer age. However, this computer, usually refers to the personal computer, referred to as PC. It consists of the host, keyboard, monitor and other components. Another type of computer, most people do not know how. This computer is to give all kinds of intelligent machines single chip (also known as micro-controller). As the name suggests, this computer system took only a minimal integrated circuit, can be a simple operation and control. Because it is small, usually hidden in the charged mechanical "stomach" in. It is in the device, like the human brain plays a role, it goes wrong, the whole plant was paralyzed. Now, this microcontroller has a very broad field of use, such as smart meters, real-time industrial control, communications equipment, navigation systems, and household appliances. Once all kinds of products were using SCM, can serve to upgrade the effectiveness of products, often in the product name preceded by the adjective - "intelligent," such as intelligent washing machines. Now some technical personnel of factories or other amateur electronics developers to engage in out of certain products, not the circuit is too complicated, that function is too simple and can easily be copied. The reason may be stuck in the product did not use a microcontroller or other programmable logic device.SCM historySCM was born in the late 20th century, 70, experienced SCM, MCU, SoC three stages.First model1.SCM the single chip microcomputer (Single Chip Microcomputer) stage, mainly seeking the best of the best single form of embedded systems architecture. "Innovation model" success, laying the SCM and general computer completely different path of development. In the open road of independent development ofembedded systems, Intel Corporation contributed.2.MCU the micro-controller (Micro Controller Unit) stage, the main direction of technology development: expanding to meet the embedded applications, the target system requirements for the various peripheral circuits and interface circuits, highlight the object of intelligent control. It involves the areas associated with the object system, therefore, the development of MCU's responsibility inevitably falls on electrical, electronics manufacturers. From this point of view, Intel faded MCU development has its objective factors. In the development of MCU, the most famous manufacturers as the number of Philips Corporation.Philips company in embedded applications, its great advantage, the MCS-51 single-chip micro-computer from the rapid development of the micro-controller. Therefore, when we look back at the path of development of embedded systems, do not forget Intel and Philips in History.Embedded SystemsEmbedded system microcontroller is an independent development path, the MCU important factor in the development stage, is seeking applications to maximize the solution on the chip; Therefore, the development of dedicated single chip SoC trend of the natural form. As the microelectronics, IC design, EDA tools development, application system based on MCU SoC design have greater development. Therefore, the understanding of the microcontroller chip microcomputer can be, extended to the single-chip micro-controller applications.MCU applicationsSCM now permeate all areas of our lives, which is almost difficult to find traces of the field without SCM. Missile navigation equipment, aircraft, all types of instrument control, computer network communications and data transmission, industrial automation, real-time process control and data processing, extensive use of various smart IC card, civilian luxury car security system, video recorder, camera, fully automatic washing machine control, and program-controlled toys, electronic pet, etc., which are inseparable from the microcontroller. Not to mention the area of robot control, intelligent instruments, medical equipment was. Therefore, the MCU learning, development and application of the large number of computer applications and intelligent control of the scientists, engineers.SCM is widely used in instruments and meters, household appliances, medical equipment, aerospace, specialized equipment, intelligent management and processcontrol fields, roughly divided into the following several areas:1. In the application of Intelligent InstrumentsSCM has a small size, low power consumption, controlling function, expansion flexibility, the advantages of miniaturization and ease of use, widely used instrument, combining different types of sensors can be realized Zhuru voltage, power, frequency, humidity, temperature, flow, speed, thickness, angle, length, hardness, elemental, physical pressure measurement. SCM makes use of digital instruments, intelligence, miniaturization, and functionality than electronic or digital circuits more powerful. Such as precision measuring equipment (power meter, oscilloscope, various analytical instrument).2. In the industrial control applicationWith the MCU can constitute a variety of control systems, data acquisition system. Such as factory assembly line of intelligent control3. In Household AppliancesCan be said that the appliances are basically using SCM, praise from the electric rice, washing machines, refrigerators, air conditioners, color TV, and other audio video equipment, to the electronic weighing equipment, varied, and omnipresent.4. In the field of computer networks and communications applicationsMCU general with modern communication interface, can be easy with the computer data communication, networking and communications in computer applications between devices had excellent material conditions, are basically all communication equipment to achieve a controlled by MCU from mobile phone, telephone, mini-program-controlled switchboards, building automated communications call system, train radio communication, to the daily work can be seen everywhere in the mobile phones, trunked mobile radio, walkie-talkies, etc.5. Microcomputer in the field of medical device applicationsSCM in the use of medical devices is also quite extensive, such as medical respirator, the various analyzers, monitors, ultrasound diagnostic equipment and hospital beds, etc. call system.6. In a variety of major appliances in the modular applicationsDesigned to achieve some special single specific function to be modular in a variety of circuit applications, without requiring the use of personnel to understand its internal structure. If music integrated single chip, seemingly simple function, miniature electronic chip in the net (the principle is different from the tape machine),you need a computer similar to the principle of the complex. Such as: music signal to digital form stored in memory (like ROM), read by the microcontroller, analog music into electrical signals (similar to the sound card).In large circuits, modular applications that greatly reduce the volume, simplifies the circuit and reduce the damage, error rate, but also easy to replace.7. Microcontroller in the application field of automotive equipmentSCM in automotive electronics is widely used, such as a vehicle engine controller, CAN bus-based Intelligent Electronic Control Engine, GPS navigation system, abs anti-lock braking system, brake system, etc..In addition, the MCU in business, finance, research, education, national defense, aerospace and other fields has a very wide range of applications.Application of six important part of learningMCU learning an important part of the six applications1, Bus:We know that a circuit is always made by the devices connected by wires, in analog circuits, the connection does not become a problem because the device is a serial relationship between the general, the device is not much connection between the , but the computer is not the same circuit, it is a microprocessor core, the device must be connected with the microprocessor, the device must be coordination between, so they need to connect on a lot, as if still analog circuit like the microprocessor and devices in the connection between the individual, the number of lines will be a little more surprising, therefore the introduction of the microprocessor bus Zhong Each device Gongtong access connections, all devices 8 Shuju line all received eight public online, that is the equivalent of all devices together in parallel, but only this does not work, if there are two devices send data at the same time, a 0, a 1, then, whether the receiver received what is it? This situation is not allowed, so to be controlled by controlling the line, time-sharing the device to work at any time only one device to send data (which can have multiple devices to receive both). Device's data connection is known as the data bus, the device is called line of control all the control bus. Internal or external memory in the microcontroller and other devices have memory cells, the memory cell to be assigned addresses, you can use, distribution, of course, to address given in the form of electrical signals, and as more memory cells, so, for the address allocation The line is also more of these lines is called the address bus.Second, data, address, command:The reason why these three together because of the nature of these three are the same - the number, or are a string of '0 'and '1' form the sequence. In other words, addresses, instructions are also data. Instruction: from single chip designer provides a number of commonly used instructions with mnemonic we have a strict correspondence between the developer can not be changed by the MCU. Address: the search for MCU internal, external storage units, input and output port based on the address of the internal unit value provided by the chip designer is good, can not be changed, the external unit can be single chip developers to decide, but there are a number of address units is a must (see procedures for the implementation of the process).Third, P0 port, P2 and P3 of the second function I use:Beginners often on the P0 port, P2 and P3 port I use the second function puzzled that the second function and have a switch between the original function of the process, or have a directive, in fact, the port The second feature is automatic, do not need instructions to convert. Such as P3.6, P3.7 respectively WR, RD signal, when the microchip processing machines external RAM or external I / O port, they are used as a second function, not as a general-purpose I / O port used, so long as a A microprocessor implementation of the MOVX instruction, there will be a corresponding signal sent from the P3.6 or P3.7, no prior use of commands. In fact 'not as a general-purpose I / O port use' is also not a 'no' but (user) 'not' as a general-purpose I / O port to use. You can arrange the order of a SETB P3.7's instructions, and when the MCU execution to the instruction, the also make P3.7 into a high, but users will not do so because this is usually will cause the system to collapse.Fourth, the program's implementation:Reduction in power after the 8051 microcontroller within the program counter (PC) in the value of 0000 ', the process is always from the 0000' units started, that is: the system must exist in ROM 0000 'this unit , and in 0000 'unit must be stored in a single instruction.5, the stack:Stack is a region, is used to store data, there is no special about the region itself is a part of internal RAM, special access to its data storage and the way that the so-called 'advanced post out backward first out ', and the stack has a special data transmission instructions that' PUSH 'and' POP ', has a special expertise in its servicesunit, that is, the stack pointer SP, whenever a PUSH instruction execution, SP on (in the Based on the original value) automatically add 1, whenever the implementation of a POP instruction, SP will (on the basis of the original value) automatically by 1. As the SP values can be changed with the instructions, so long as the beginning of the process to change the value of the SP, you can set the stack memory unit required, such as the program begins, with an MOV SP, # 5FH instructions When set on the stack starting from the memory unit 60H unit. There is always the beginning of the general procedure with such a directive to set the stack pointer, because boot, SP initial value of 07H, 08H This unit from the beginning to stack next, and 08H to 1FH 8031 is the second in the region, three or four working register area, often used, this will lead to confusion of data. Different authors when writing programs, initialize the stack is not exactly the same directive, which is the author's habit. When set up the stack zone, does not mean that the region become a special memory, it can still use the same memory region as normal, but generally the programmer does not regard it as an ordinary memory used.From the world of radio in the world to a single chipModern computer technology, industrial revolution, the world economy from the capital into the economy to knowledge economy. Field in the electronic world, from the 20th century into the era of radio to computer technology in the 21st century as the center of the intelligent modern era of electronic systems. The basic core of modern electronic systems are embedded computer systems (referred to as embedded systems), while the microcontroller is the most typical and most extensive and most popular embedded systems.First, radio has created generations of excellence in the worldFifties and sixties in the 20th century, the most representative of the advanced electronic technology is wireless technology, including radio broadcasting, radio, wireless communications (telegraph), Amateur Radio, radio positioning, navigation and other telemetry, remote control, remote technology. Early that these electronic technology led many young people into the wonderful digital world, radio show was a wonderful life, the prospects for science and technology. Electronics began to form a new discipline. Radio electronics, wireless communications began e-world journey. Radio technology not only as a representative of advanced science and technology at that time, but also from popular to professional fields of science, attracting the young people and enable them to find a lot of fun. Ore from the bedside to thesuperheterodyne radio radio; report issued from the radio amateur radio stations; from the telephone, electric bell to the radio control model. Became popular youth radio technology, science and technology education is the most popular and most extensive content. So far, many of the older generation of engineers, experts, Professor of the year are radio enthusiasts. Fun radio technology, radio technology, comprehensive training, from basic principles of electronics, electronic components to the radio-based remote control, telemetry, remote electronic systems, has trained several generations of technological excellence.Second, from the popularity of the radio era to era of electronic technologyThe early radio technology to promote the development of electronic technology, most notably electronic vacuum tube technology to semiconductor electronic technology. Semiconductor technology to realize the active device miniaturization and low cost, so more popular with radio technology and innovation, and to greatly broaden the number of non-radio-control areas.The development of semiconductor technology lead to the production of integrated circuit, forming the modern electronic technology leap from discrete electronics into the era of era of integrated circuits. Electronic design engineers no longer use the discrete electronic components designed circuit modules, and direct selection of integrated circuit components constitute a single system. They freed the design of the circuit unit dedicated to system design, greatly liberating the productive forces of science and technology, promote the wider spread of electronic systems.Semiconductor integrated circuits in the basic digital logic circuits first breakthrough.A large number of digital logic circuits, such as gates, counters, timers, shift registers, and analog switches, comparators, etc., for the electronic digital control provides excellent conditions for the traditional mechanical control to electronic control. Power electronic devices and sensor technology to make the original to the radio as the center of electronic technology turned to mechanical engineering in the field of digital control systems, testing in the field of information collection, movement of electrical mechanical servo drive control object.Semiconductor and integrated circuit technology will bring us a universal age of electronic technology, wireless technology as the field of electronic technology a part of.70 years into the 20th century, large scale integrated circuit appeared to promotethe conventional electronic circuit unit-specific electronic systems development. Many electronic systems unit into a dedicated integrated devices such as radios, electronic clocks, calculators, electronic engineers in these areas from the circuit, the system designed to debug into the device selection, peripheral device adapter work. Electronic technology, and electronic products enriched, electronic engineers to reduce the difficulty, but at the same time, radio technology, electronic technology has weakened the charm. The development of semiconductor integrated circuits classical electronic systems are maturing, remain in the large scale integrated circuit other than the shrinking of electronic technology, electronic technology is not the old days of radio fun times and comprehensive engineering training.Third, from the classic era of electronic technology to modern electronic technology of the times80 years into the 20th century, the century of economic change is the most important revolution in the computer. The computer revolution in the most important sign is the birth of the computer embedded applications. Modern computer numerical requirements should be born. A long period of time, is to develop the massive computer numerical duty. But the computer shows the logic operation, processing, control, attracting experts in the field of electronic control, they want development to meet the control object requirements of embedded applications, computer systems. If you meet the massive data-processing computer system known as general-purpose computer system, then the system can be the embedded object (such as ships, aircraft, motorcycles, etc.) in a computer system called the embedded computer. Clearly, both the direction of technology development are different. The former requires massive data storage, handling, processing and analysis of high-speed data transmission; while the latter requires reliable operation in the target environment, the external physical parameters on high-speed acquisition, analysis and processing logic and the rapid control of external objects. It will add an early general-purpose computer data acquisition unit, the output driver circuit reluctance to form a heat treatment furnace temperature control system. This general-purpose computer system is not possible for most of the electronic system used, and to make general-purpose computer system meets the requirements of embedded applications, will inevitably affect the development of high-speed numeric processing. In order to solve the contradiction between the development of computer technology, in the 20th century 70s, semiconductor experts another way, in full accordance with the electronic systemembedded computer application requirements, a micro-computer's basic system on a chip, the formation of the early SCM (Single Chip Microcomputer). After the advent of single chip in the computer industry began to appear in the general-purpose computer systems and embedded systems the two branches. Since then, both the embedded system, or general-purpose computer systems have been developed rapidly.Although the early general-purpose computer converted the embedded computer systems, and real embedded system began in the emergence of SCM. Because the microcontroller is designed specifically for embedded applications, the MCU can only achieve embedded applications. MCU embedded applications that best meet environmental requirements, for example, chip-level physical space, large-scale integrated circuits low-cost, good peripheral interface bus and outstanding control of instruction.A computer system microcontroller core, embedded electronic systems, intelligent electronic systems for the foundation. Therefore, the current single chip electronic system in widespread use of electronic systems to enable rapid transition to the classical modern intelligent electronic systems.4, single chip to create the modern era of electronic systemsA microcontroller and embedded systemsEmbedded computer systems from embedded applications, embedded systems for early general-purpose computer adapted to the object system embedded in a variety of electronic systems, such as the ship's autopilot, engine monitoring systems. Embedded system is primarily a computer system, followed by it being embedded into the object system, objects in the object system to achieve required data collection, processing, status display, the output control functions, as embedded in the object system, embedded system computer does not have an independent form and function of the computer. SCM is entirely in accordance with the requirements of embedded system design, so SCM is the most typical embedded systems. SCM is the early application of technical requirements in accordance with the design of embedded computer chip integration, hence the name single chip. Subsequently, the MCU embedded applications to meet the growing demands of its control functions and peripheral interface functions, in particular, highlight the control function, so has international name the single chip microcontroller (MCU, Microcontroller Unit).2 MCU modern electronic systems consisting of electronic systems will become mainstream。
外文文献翻译原文+译文
外文文献翻译原文Analysis of Con tin uous Prestressed Concrete BeamsChris BurgoyneMarch 26, 20051、IntroductionThis conference is devoted to the development of structural analysis rather than the strength of materials, but the effective use of prestressed concrete relies on an appropriate combination of structural analysis techniques with knowledge of the material behaviour. Design of prestressed concrete structures is usually left to specialists; the unwary will either make mistakes or spend inordinate time trying to extract a solution from the various equations.There are a number of fundamental differences between the behaviour of prestressed concrete and that of other materials. Structures are not unstressed when unloaded; the design space of feasible solutions is totally bounded;in hyperstatic structures, various states of self-stress can be induced by altering the cable profile, and all of these factors get influenced by creep and thermal effects. How were these problems recognised and how have they been tackled?Ever since the development of reinforced concrete by Hennebique at the end of the 19th century (Cusack 1984), it was recognised that steel and concrete could be more effectively combined if the steel was pretensioned, putting the concrete into compression. Cracking could be reduced, if not prevented altogether, which would increase stiffness and improve durability. Early attempts all failed because the initial prestress soon vanished, leaving the structure to be- have as though it was reinforced; good descriptions of these attempts are given by Leonhardt (1964) and Abeles (1964).It was Freyssineti’s observations of the sagging of the shallow arches on three bridges that he had just completed in 1927 over the River Allier near Vichy which led directly to prestressed concrete (Freyssinet 1956). Only the bridge at Boutiron survived WWII (Fig 1). Hitherto, it had been assumed that concrete had a Young’s modulus which remained fixed, but he recognised that the de- ferred strains due to creep explained why the prestress had been lost in the early trials. Freyssinet (Fig. 2) also correctly reasoned that high tensile steel had to be used, so that some prestress would remain after the creep had occurred, and alsothat high quality concrete should be used, since this minimised the total amount of creep. The history of Freyssineti’s early prestressed concrete work is written elsewhereFigure1:Boutiron Bridge,Vic h yFigure 2: Eugen FreyssinetAt about the same time work was underway on creep at the BRE laboratory in England ((Glanville 1930) and (1933)). It is debatable which man should be given credit for the discovery of creep but Freyssinet clearly gets the credit for successfully using the knowledge to prestress concrete.There are still problems associated with understanding how prestressed concrete works, partly because there is more than one way of thinking about it. These different philosophies are to some extent contradictory, and certainly confusing to the young engineer. It is also reflected, to a certain extent, in the various codes of practice.Permissible stress design philosophy sees prestressed concrete as a way of avoiding cracking by eliminating tensile stresses; the objective is for sufficient compression to remain after creep losses. Untensionedreinforcement, which attracts prestress due to creep, is anathema. This philosophy derives directly from Freyssinet’s logic and is primarily a working stress concept.Ultimate strength philosophy sees prestressing as a way of utilising high tensile steel as reinforcement. High strength steels have high elastic strain capacity, which could not be utilised when used as reinforcement; if the steel is pretensioned, much of that strain capacity is taken out before bonding the steel to the concrete. Structures designed this way are normally designed to be in compression everywhere under permanent loads, but allowed to crack under high live load. The idea derives directly from the work of Dischinger (1936) and his work on the bridge at Aue in 1939 (Schonberg and Fichter 1939), as well as that of Finsterwalder (1939). It is primarily an ultimate load concept. The idea of partial prestressing derives from these ideas.The Load-Balancing philosophy, introduced by T.Y. Lin, uses prestressing to counter the effect of the permanent loads (Lin 1963). The sag of the cables causes an upward force on the beam, which counteracts the load on the beam. Clearly, only one load can be balanced, but if this is taken as the total dead weight, then under that load the beam will perceive only the net axial prestress and will have no tendency to creep up or down.These three philosophies all have their champions, and heated debates take place between them as to which is the most fundamental.2、Section designFrom the outset it was recognised that prestressed concrete has to be checked at both the working load and the ultimate load. For steel structures, and those made from reinforced concrete, there is a fairly direct relationship between the load capacity under an allowable stress design, and that at the ultimate load under an ultimate strength design. Older codes were based on permissible stresses at the working load; new codes use moment capacities at the ultimate load. Different load factors are used in the two codes, but a structure which passes one code is likely to be acceptable under the other.For prestressed concrete, those ideas do not hold, since the structure is highly stressed, even when unloaded. A small increase of load can cause some stress limits to be breached, while a large increase in load might be needed to cross other limits. The designer has considerable freedom to vary both the working load and ultimate load capacities independently; both need to be checked.A designer normally has to check the tensile and compressive stresses, in both the top and bottom fibre of the section, for every load case. The critical sections are normally, but not always, the mid-span and the sections over piers but other sections may become critical ,when the cable profile has to be determined.The stresses at any position are made up of three components, one of which normally has a different sign from the other two; consistency of sign convention is essential.If P is the prestressing force and e its eccentricity, A and Z are the area of the cross-section and its elastic section modulus, while M is the applied moment, then where ft and fc are the permissible stresses in tension and compression.c e t f ZM Z P A P f ≤-+≤Thus, for any combination of P and M , the designer already has four in- equalities to deal with.The prestressing force differs over time, due to creep losses, and a designer isusually faced with at least three combinations of prestressing force and moment;• the applied moment at the time the prestress is first applied, before creep losses occur,• the maximum applied moment after creep losses, and• the minimum applied moment after creep losses.Figure 4: Gustave MagnelOther combinations may be needed in more complex cases. There are at least twelve inequalities that have to be satisfied at any cross-section, but since an I-section can be defined by six variables, and two are needed to define the prestress, the problem is over-specified and it is not immediately obvious which conditions are superfluous. In the hands of inexperienced engineers, the design process can be very long-winded. However, it is possible to separate out the design of the cross-section from the design of the prestress. By considering pairs of stress limits on the same fibre, but for different load cases, the effects of the prestress can be eliminated, leaving expressions of the form:rangestress e Perm issibl Range Mom entZ These inequalities, which can be evaluated exhaustively with little difficulty, allow the minimum size of the cross-section to be determined.Once a suitable cross-section has been found, the prestress can be designed using a construction due to Magnel (Fig.4). The stress limits can all be rearranged into the form:()M fZ PA Z e ++-≤1 By plotting these on a diagram of eccentricity versus the reciprocal of the prestressing force, a series of bound lines will be formed. Provided the inequalities (2) are satisfied, these bound lines will always leave a zone showing all feasible combinations of P and e. The most economical design, using the minimum prestress, usually lies on the right hand side of the diagram, where the design is limited by the permissible tensile stresses.Plotting the eccentricity on the vertical axis allows direct comparison with the crosssection, as shown in Fig. 5. Inequalities (3) make no reference to the physical dimensions of the structure, but these practical cover limits can be shown as wellA good designer knows how changes to the design and the loadings alter the Magnel diagram. Changing both the maximum andminimum bending moments, but keeping the range the same, raises and lowers the feasible region. If the moments become more sagging the feasible region gets lower in the beam.In general, as spans increase, the dead load moments increase in proportion to the live load. A stage will be reached where the economic point (A on Fig.5) moves outside the physical limits of the beam; Guyon (1951a) denoted the limiting condition as the critical span. Shorter spans will be governed by tensile stresses in the two extreme fibres, while longer spans will be governed by the limiting eccentricity and tensile stresses in the bottom fibre. However, it does not take a large increase in moment ,at which point compressive stresses will govern in the bottom fibre under maximum moment.Only when much longer spans are required, and the feasible region moves as far down as possible, does the structure become governed by compressive stresses in both fibres.3、Continuous beamsThe design of statically determinate beams is relatively straightforward; the engineer can work on the basis of the design of individual cross-sections, as outlined above. A number of complications arise when the structure is indeterminate which means that the designer has to consider, not only a critical section,but also the behaviour of the beam as a whole. These are due to the interaction of a number of factors, such as Creep, Temperature effects and Construction Sequence effects. It is the development of these ideas whichforms the core of this paper. The problems of continuity were addressed at a conference in London (Andrew and Witt 1951). The basic principles, and nomenclature, were already in use, but to modern eyes concentration on hand analysis techniques was unusual, and one of the principle concerns seems to have been the difficulty of estimating losses of prestressing force.3.1 Secondary MomentsA prestressing cable in a beam causes the structure to deflect. Unlike the statically determinate beam, where this motion is unrestrained, the movement causes a redistribution of the support reactions which in turn induces additional moments. These are often termed Secondary Moments, but they are not always small, or Parasitic Moments, but they are not always bad.Freyssinet’s bridge across the Marne at Luzancy, started in 1941 but not completed until 1946, is often thought of as a simply supported beam, but it was actually built as a two-hinged arch (Harris 1986), with support reactions adjusted by means of flat jacks and wedges which were later grouted-in (Fig.6). The same principles were applied in the later and larger beams built over the same river.Magnel built the first indeterminate beam bridge at Sclayn, in Belgium (Fig.7) in 1946. The cables are virtually straight, but he adjusted the deck profile so that the cables were close to the soffit near mid-span. Even with straight cables the sagging secondary momentsare large; about 50% of the hogging moment at the central support caused by dead and live load.The secondary moments cannot be found until the profile is known but the cablecannot be designed until the secondary moments are known. Guyon (1951b) introduced the concept of the concordant profile, which is a profile that causes no secondary moments; es and ep thus coincide. Any line of thrust is itself a concordant profile.The designer is then faced with a slightly simpler problem; a cable profile has to be chosen which not only satisfies the eccentricity limits (3) but is also concordant. That in itself is not a trivial operation, but is helped by the fact that the bending moment diagram that results from any load applied to a beam will itself be a concordant profile for a cable of constant force. Such loads are termed notional loads to distinguish them from the real loads on the structure. Superposition can be used to progressively build up a set of notional loads whose bending moment diagram gives the desired concordant profile.3.2 Temperature effectsTemperature variations apply to all structures but the effect on prestressed concrete beams can be more pronounced than in other structures. The temperature profile through the depth of a beam (Emerson 1973) can be split into three components for the purposes of calculation (Hambly 1991). The first causes a longitudinal expansion, which is normally released by the articulation of the structure; the second causes curvature which leads to deflection in all beams and reactant moments in continuous beams, while the third causes a set of self-equilibrating set of stresses across the cross-section.The reactant moments can be calculated and allowed-for, but it is the self- equilibrating stresses that cause the main problems for prestressed concrete beams. These beams normally have high thermal mass which means that daily temperature variations do not penetrate to the core of the structure. The result is a very non-uniform temperature distribution across the depth which in turn leads to significant self-equilibrating stresses. If the core of the structure is warm, while the surface is cool, such as at night, then quite large tensile stresses can be developed on the top and bottom surfaces. However, they only penetrate a very short distance into the concrete and the potential crack width is very small. It can be very expensive to overcome the tensile stress by changing the section or the prestress。
毕业论文外文翻译范例
外文原文(一)Savigny and his Anglo-American Disciple s*M. H. HoeflichFriedrich Carl von Savigny, nobleman, law reformer, champion of the revived German professoriate, and founder of the Historical School of jurisprudence, not only helped to revolutionize the study of law and legal institutions in Germany and in other civil law countries, but also exercised a profound influence on many of the most creative jurists and legal scholars in England and the United States. Nevertheless, tracing the influence of an individual is always a difficult task. It is especially difficult as regards Savigny and the approach to law and legal sources propounded by the Historical School. This difficulty arises, in part, because Savigny was not alone in adopting this approach. Hugo, for instance, espoused quite similar ideas in Germany; George Long echoed many of these concepts in England during the 1850s, and, of course, Sir Henry Sumner Maine also espoused many of these same concepts central to historical jurisprudence in England in the 1860s and 1870s. Thus, when one looks at the doctrinal writings of British and American jurists and legal scholars in the period before 1875, it is often impossible to say with any certainty that a particular idea which sounds very much the sort of thing that might, indeed, have been derived from Savigny's works, was, in fact, so derived. It is possible, nevertheless, to trace much of the influence of Savigny and his legal writings in the United States and in Great Britain during this period with some certainty because so great was his fame and so great was the respect accorded to his published work that explicit references to him and to his work abound in the doctrinal writing of this period, as well as in actual law cases in the courts. Thus, Max Gutzwiller, in his classic study Der einfluss Savignys auf die Entwicklung des International privatrechts, was able to show how Savigny's ideas on conflict of laws influenced such English and American scholars as Story, Phillimore, Burge, and Dicey. Similarly, Andreas Schwarz, in his "Einflusse Deutscher Zivilistik im Auslande," briefly sketched Savigny's influence upon John Austin, Frederick Pollock, and James Bryce. In this article I wish to examine Savigny's influence over a broader spectrum and to draw a picture of his general fame and reputation both in Britain and in the United States as the leading Romanist, legal historian, and German legal academic of his day. The picture of this Anglo-American respect accorded to Savigny and the historical school of jurisprudence which emerges from these sources is fascinating. It sheds light not only upon Savigny’s trans-channel, trans-Atlantic fame, but also upon the extraordinarily*M.H.Hoeflich, Savigny and his Anglo-American Disciples, American Journal of Comparative Law, vol.37, No.1, 1989.cosmopolitan outlook of many of the leading American and English jurists of the time. Of course, when one sets out to trace the influence of a particular individual and his work, it is necessary to demonstrate, if possible, precisely how knowledge of the man and his work was transmitted. In the case of Savigny and his work on Roman law and ideas of historical jurisprudence, there were three principal modes of transmission. First, there was the direct influence he exercised through his contacts with American lawyers and scholars. Second, there was the influence he exercised through his books. Third, there was the influence he exerted indirectly through intermediate scholars and their works. Let us examine each mode separately.I.INFLUENCE OF THE TRANSLATED WORKSWhile American and British interest in German legal scholarship was high in the antebellum period, the number of American and English jurists who could read German fluently was relatively low. Even those who borrowed from the Germans, for instance, Joseph Story, most often had to depend upon translations. It is thus quite important that Savigny’s works were amongst the most frequently translated into English, both in the United States and in Great Britain. His most influential early work, the Vom Beruf unserer Zeitfur Rechtsgeschichte und Gestzgebung, was translated into English by Abraham Hayward and published in London in 1831. Two years earlier the first volume of his History of Roman Law in the Middle Ages was translated by Cathcart and published in Edinburgh. In 1830, as well, a French translation was published at Paris. Sir Erskine Perry's translation of Savigny's Treatise on Possession was published in London in 1848. This was followed by Archibald Brown's epitome of the treatise on possession in 1872 and Rattigan's translation of the second volume of the System as Jural Relations or the Law of Persons in 1884. Guthrie published a translation of the seventh volume of the System as Private International Law at Edinburgh in 1869. Indeed, two English translations were even published in the far flung corners of the British Raj. A translation of the first volume of the System was published by William Holloway at Madras in 1867 and the volume on possession was translated by Kelleher and published at Calcutta in 1888. Thus, the determined English-speaking scholar had ample access to Savigny's works throughout the nineteenth century.Equally important for the dissemination of Savigny's ideas were those books and articles published in English that explained and analyzed his works. A number of these must have played an important role in this process. One of the earliest of these is John Reddie's Historical Notices of the Roman law and of the Progress of its Study in Germany, published at Edinburgh in 1826. Reddie was a noted Scots jurist and held the Gottingen J.U.D. The book, significantly, is dedicated to Gustav Hugo. It is of that genre known as an external history of Roman law-not so much a history of substantive Roman legal doctrine but rather a historyof Roman legal institutions and of the study of Roman law from antiquity through the nineteenth century. It is very much a polemic for the study of Roman law and for the Historical School. It imparts to the reader the excitement of Savigny and his followers about the study of law historically and it is clear that no reader of the work could possibly be left unmoved. It is, in short, the first work of public relations in English on behalf of Savigny and his ideas.Having mentioned Reddie's promotion of Savigny and the Historical School, it is important to understand the level of excitement with which things Roman and especially Roman law were greeted during this period. Many of the finest American jurists were attracted-to use Peter Stein's term-to Roman and Civil law, but attracted in a way that, at times, seems to have been more enthusiastic than intellectual. Similarly, Roman and Civil law excited much interest in Great Britain, as illustrated by the distinctly Roman influence to be found in the work of John Austin. The attraction of Roman and Civil law can be illustrated and best understood, perhaps, in the context of the publicity and excitement in the English-speaking world surrounding the discovery of the only complete manuscript of the classical Roman jurist Gaius' Institutes in Italy in 1816 by the ancient historian and German consul at Rome, B.G. Niebuhr. Niebuhr, the greatest ancient historian of his time, turned to Savigny for help with the Gaius manuscript (indeed, it was Savigny who recognized the manuscript for what it was) and, almost immediately, the books and journals-not just law journals by any means-were filled with accounts of the discovery, its importance to legal historical studies, and, of course, what it said. For instance, the second volume of the American Jurist contains a long article on the civil law by the scholarly Boston lawyer and classicist, John Pickering. The first quarter of the article is a gushing account of the discovery and first publication of the Gaius manuscript and a paean to Niebuhr and Savigny for their role in this. Similarly, in an article published in the London Law Magazine in 1829 on the civil law, the author contemptuously refers to a certain professor who continued to tell his students that the text of Gaius' Institutes was lost for all time. What could better show his ignorance of all things legal and literary than to be unaware of Niebuhr's great discovery?Another example of this reaction to the discovery of the Gaius palimpsest is to be found in David Irving's Introduction to the Study of the Civil Law. This volume is also more a history of Roman legal scholarship and sources than a study of substantive Roman law. Its pages are filled with references to Savigny's Geschichte and its approach clearly reflects the influence of the Historical School. Indeed, Irving speaks of Savigny's work as "one of the most remarkable productions of the age." He must have been truly impressed with German scholarship and must also have been able to convince the Faculty of Advocates, forwhom he was librarian, of the worth of German scholarship, for in 1820 the Faculty sent him to Gottingen so that he might study their law libraries. Irving devotes several pages of his elementary textbook on Roman law to the praise of the "remarkable" discovery of the Gaius palimpsest. He traces the discovery of the text by Niebuhr and Savigny in language that would have befitted an adventure tale. He elaborates on the various labors required to produce a new edition of the text and was particularly impressed by the use of a then new chemical process to make the under text of the palimpsest visible. He speaks of the reception of the new text as being greeted with "ardor and exultation" strong words for those who spend their lives amidst the "musty tomes" of the Roman law.This excitement over the Verona Gaius is really rather strange. Much of the substance of the Gaius text was already known to legal historians and civil lawyers from its incorporation into Justinian's Institutes and so, from a substantive legal perspective, the find was not crucial. The Gaius did provide new information on Roman procedural rules and it did also provide additional information for those scholars attempting to reconstruct pre-Justinianic Roman law. Nevertheless, these contributions alone seem hardly able to justify the excitement the discovery caused. Instead, I think that the Verona Gaius discovery simply hit a chord in the literary and legal community much the same as did the discovery of the Rosetta Stone or of Schliemann’s Troy. Here was a monument of a great civilization brought newly to light and able to be read for the first time in millenia. And just as the Rosetta Stone helped to establish the modern discipline of Egyptology and Schliemann's discoveries assured the development of classical archaeology as a modern academic discipline, the discovery of the Verona Gaius added to the attraction Roman law held for scholars and for lawyers, even amongst those who were not Romanists by profession. Ancillary to this, the discovery and publication of the Gaius manuscript also added to the fame of the two principals involved in the discovery, Niebuhr and Savigny. What this meant in the English-speaking world is that even those who could not or did not wish to read Savigny's technical works knew of him as one of the discoverers of the Gaius text. This fame itself may well have helped in spreading Savigny's legal and philosophical ideas, for, I would suggest, the Gaius "connection" may well have disposed people to read other of Savigny's writings, unconnected to the Gaius, because they were already familiar with his name.Another example of an English-speaking promoter of Savigny is Luther Stearns Cushing, a noted Boston lawyer who lectured on Roman law at the Harvard Law School in 1848-49 and again in 1851- 1852.Cushing published his lectures at Boston in 1854 under the title An Introduction to the Study of Roman Law. He devoted a full chapter to a description of the historical school and to the controversy betweenSavigny and Thibaut over codification. While Cushing attempted to portray fairly the arguments of both sides, he left no doubt as to his preference for Savigny's approach:The labors of the historical school have established an entirely new and distinct era in the study of the Roman jurisprudence; and though these writers cannot be said to have thrown their predecessors into the shade, it seems to be generally admitted, that almost every branch of the Roman law has received some important modification at their hands, and that a knowledge of their writings, to some extent, at least, is essentially necessary to its acquisition.译文(一)萨维尼和他的英美信徒们*M·H·豪弗里奇弗雷德里奇·卡尔·冯·萨维尼出身贵族,是一位出色的法律改革家,也是一位倡导重建德国教授协会的拥护者,还是历史法学派的创建人之一。
外文翻译--在组奖励中的激励强度的决定因素
外文文献翻译译文一、外文原文:原文:DETERMINANTS OF INCENTIVE INTENSITY IN GROUPBASED REWARDSTHEORY AND HYPOTHESESAgency Theory and Incentive IntensityA fundamental argument in the agency theory literature and in much of the compensation literature is that the incentive intensity of rewards—often measured as the variable portion of pay—enhances employee contributions to performance. Incentive-intensive pay increases effort and may increase the talent level of those attracted to a compensation plan. Higher incentive intensity increases the marginal gains in income that employees receive from increased effort. If increased effort has physical or psychological costs, agents will choose levels of effort whereby the marginal gain from effort equals its marginal cost. Therefore, when pay plans are more incentive-intensive, employees reach higher levels of effort before deciding that these increases fail to compensate for their personal costs. Research in a variety of fields confirms this relationship between incentive intensity and effort (e.g., Ehrenberg & Bognanno, 1990; Landau & Leventhal, 1976; Zenger, 1992).Higher incentive intensity may also help companies lure and keep talented workers (Lazear, 1986; Rynes, 1987; Zenger, 1994). Given the randomness of measured performance, as incentive intensity rises, so does the uncertainty of an individual's pay. The higher the incentive intensity, the more likely it is that only the very best performers (those who have the highest probability of generating strong measured performance) will find it efficient to assume the risk of an incentive-intensive contract.As suggested in empirical studies, employees with lower ability—those unlikely to generate high performance—will prefer contracts that place less emphasis on performance (Cable & Judge, 1994; U.S. Office of Personnel Management, 1988;Zenger, 1994).Incentive' intensity in group rewards should function much like incentive intensity in individual rewards: higher levels should motivate effort, lure talent, and thereby enhance performance. As Kruse argued in regard to profit-sharing plans, "The size of the profit share in relation to other employee compensation should clearly be an important factor in the impact of profit sharing upon workplace relations and performance.A profit share that, forexample, averages less than 1 percent of employee compensation is unlikely to be taken seriously by employees as an incentive for increased effort, monitoring, and cooperation with workers" (1993:81). By escalating the incentive intensity of group rewards (the incentive portion of pay), managers enhance the individual benefit from increased group effort and promote desirable self-selection. Although group incentive pay is less attractive to top talent than individual incentive pay (see Cable & Judge, 1994; Weiss, 1987), top talent should prefer highly incentive-intensive group pay to weakly incentive-intensive group pay. Kruse (1993) provided some empirical evidence of a relationship between incentive intensity and performance in group rewards. Thus, our motivation for exploring the determinants of incentive intensity stemmed from the underlying assumption that higher incentive intensity triggers higher effort, lures superior talent, and generally yields higher performance levels.Costs of Increasing Incentive IntensityThe rather low incentive intensity characteristic of rewards in many firms suggests significant impediments to raising incentive intensity. Agency theorists point to four impediments. First, incentive intensityis constrained by agents' inability to control performance measures (Lai & Srinivasan, 1993; Milgrom & Roberts, 1992; Weitzman, 1980). If agents cannot control performance measures, then imposing high levels of incentive intensity imposes substantial uncertainty on employees and provides rather modest motivational benefits.Second, incentive intensity is constrained by the inaccuracy of performance measures, or the weakness of the link between true and measured performance (Holmstrom & Milgrom, 1991; Milgrom & Roberts, 1992). If measured performance is only weakly correlated with true performance, aggressively rewarding measured performance may encourage agents to neglect unmeasured performance dimensions, thereby lowering true performance.Third, some agency theorists and scholars outside economics have argued that processes of pay comparison constrain incentive intensity within organizations (Lazear, 1989; Milgrom & Roberts, 1988; Pfeffer & Langton, 1993; Zenger, 1992). Higher incentive intensity generates greater variance in pay and magnifies the negative effects of comparison processes (Lazear, 1989; Pfeffer & Langton, 1993). Employees reduce their effort, leave a firm, or even sabotage its activities when they perceive pay differences as inequitable(Adams, 1965; Deutsch, 1985). Lowering incentive intensity reduces pay variance and thus diminishes the effects of these comparisons.Fourth, incentive intensity is constrained by "intertemporal" problems of incentive ratcheting and output restriction. Managers have an incentive to strategically alter incentive structures, adjusting payouts downward (or performance hurdles upward) once employees reveal their capacity to perform (Gibbons, 1987; Miller, 1992). Recognizing this managerial incentive, employees have an incentive to restrict output in anticipation of downward ratcheting of payouts should they reveal theircapacity for hard work (Mathewson, 1931; Whyte,1955). Such concerns may prompt the reduction or elimination of incentive intensity in rewards. Determinants of Incentive Intensity in Group-Based RewardsAlthough group rewards partially circumvent the impediments to incentive intensity detailed above, designers of group pay plans nonetheless confrontsimilar impediments.Control of performance measures. A primary advantage of group rewards is the capacity to link participants' pay to a performance measure over which they have rather complete control. However, this control is collective, with each individual having only a limited capacity to control the outcome. Consistent with agency theory, this inability to individually control observable performance measures encourages lower levels of incentive intensity(Lai & Srinivasan, 1993; Milgrom & Roberts, 1992;Weitzman, 1980).A group's size strongly infiuences its members' capacity to individually control their group's performance and subsequent individual pay. Clearly, a group member has more direct control over the performance of a small group than over that of a large group. Thus, to make a large portion of individual pay contingent on a large group's performance attaches pay to a measure over which any given employee has little control. Consequently, when groups are large, incentive intensity will be lower, refiecting low ability to control measured performance.The direct incentives triggered by group rewards may dissipate rather rapidly as groups increase in size. However, group rewards trigger mutual monitoring and "concertive" control (Barker, 1993); employees monitor and encourage their peers' performance and tightly screen new applicants (Welbourne,Balkin, & Gomez-Mejia, 1993). Such mutual monitoring or concertive control may extend the effectiveness of group rewards to size levels at which direct financial performance incentives are quite minimal.The effectiveness of mutual monitoring should, nonetheless, remain closely linked to group size and the level of performance measurement. Mutual monitoring should be most prevalent within small group plans where the contributions of specific colleagues more powerfully affect individual pay. Therefore, a greater capacity to control performance both directly and indirectly enables plan designers to escalate incentive intensity in small groups while imposing more modest levels of uncertainty.The effects of group size on incentive intensity,however, may diminish with size. Thus, although increasing a group from 10 to 20 members has a large effect on members' direct and indirect control of performance measures, increasing from 1,000 to1,010 members has rather limited bearing on such control. Hence,Hypothesis 1. Increases in group size have a negative, but diminishing, effect on the incentive intensity of group-based pay plans.The capacity to control group performance measures may also be influenced by the organizational level at which performance is measured, in part because organizational level is closely related to unit size. Our focus was on employee incentive plans, rather than managerial incentive plans. With the latter excluded, plans with lower-level measures,such as work team performance, are clearly more easily controlled by individual group members than plans that measure performance at higher organizational levels (such as divisions). Thus, Hypothesis 2. Group-based pay plans linked to performance measures at a lower organizational level will have greater incentive intensity than plans linked to performance measures at a higher organizational level.The composition of a group may also influence group members' capacity to control performance measures and thereby the optimal level of incentive intensity. Arguably, employees in managerial positions have greaterinfluence over individual performance than employees in lower-level positions (Baiman, Larker, & Rajan, 1993; Gerhart & Milkovich,1990). Managers and professionals have the authority to infiuence a broader set of performance determining decisions than do lower-level employees. Similar arguments can be made for professional employees having greater control over organizational outcomes than nonprofessional, nonmanagerial employees. Consistent witb agency theory reasoning, such increased control over outcomes should enable an increase in incentive intensity (Baiman et al., 1993). Empirical studies of managerial pay plans by Gerhart and Milkovich (1990) and Bushman, Indejejikian, and Smith (1994) have confirmed a positive relationship between incentive intensity and hierarchical level. Our focus was on employee-level pay plans, but many such plans also encompass management personnel. Hence, where managers and professionals comprise a significant percentage of plan participants, the enhanced ability of those groups to control performance should trigger the use of more incentive-intensive rewards. Thus, Hypothesis 3. Group incentive plans attached to groups in which a high proportion of participants are managers and professionals will have higher levels of incentive intensity than group incentive plans attached to groups in which a low proportion of participants are managers and professionals. Measurement accuracy and measurement complexity.The incentive intensity of group rewards should also depend on the complexity and accuracy of group performance measurement. If critical dimensions of performance are neglected, then aggressively rewarding measured performance yields dysfunctional outcomes. As discussed in writing on both agency theory and organizational behavior, performance measurement is particularly problematic when employees are assigned multiple tasks or when a single task has multiple performance dimensions (Holmstrom & Milgrom, 1991; Kerr,1975). Individuals respond to what is measured and rewarded andneglect other dimensions of performance(Kerr, 1975). As Holmstrom and Milgrom(1991) noted, simply adding measures to address the full spectrum of performance dimensions does not ensure optimal attention to all dimensions. Variability in the accuracy with which differing performance dimensions are evaluated, or variability in their ability to be controlled, creates incentives for employees to attend selectively to those dimensions more easily measured or controlled. Incentive intensity should, thus, be lower when jobs have dimensions or tasks that are difficult to measure. Doing otherwise only encourages allocations of effort that are less than optimal for a firm.Productivity and output volume are primary performance measures in many organizations. However, as has been argued in the total quality management literature, a focus on these primary performance indicators often leads to neglect of quality (Deming, 1993; Ishikawa, 1985). Although occasionally quality is quite easily measured, typically it is a performance dimension that is more difficult to accurately measure and control than other performance dimensions such as cost, output volume, or profitability. Consequently, when employees confront incentives that compensate attention both to quality and to other more accurately measured and more easily controlled performance attributes, they rationally neglect quality. Therefore, when quality is an important performance dimension, firms limit incentive intensity to increase attention to quality (Holmstrom & Milgrom,1991; Laffont & Tirole, 1989). Not surprisingly.leaders in the quality movement have recommended the avoidance of performance-based rewards (Deming, 1993; Ishikawa, 1985: 26-27].Thus, Hypothesis 4. Group incentive plans that reward quality will have lower incentive intensity than plans that do not measure and reward quality.Numerous performance measures in a group pay plan may further indicatecomplex and difficult performance measurement. Having numerous performance measures implies a broad range of important performance dimensions, some of which are potentially problematic to measure. Adding performance measures to a group incentive plan may focus attention on dimensions that would otherwise be neglected, but such additions cannot trigger optimal allocations of effort, as previously discussed. Escalating incentive intensity in work settings with such complex measurement may heighten neglect of those dimensions that are difficult or impossible to measure. Thus, in response to measurement complexity, plan designers may restrict incentive intensity to limit misallocation of effort. Hence, Hypothesis 5. Group incentive plans with a large number of performance measures will have lower incentive intensity than group incentive plans with few performance measures.Comparison processes and firm size.Groupbased pay plans are implemented within a broad organizational setting—a setting in which employees actively engage in processes of social comparison around the topic of pay. Agency theorists, such as Lazear (1989) and Milgrom and Roberts (1988), and psychologists and sociologists, such as Deutsch (1985) and Pfeffer and Langton (1993), have noted the potential desirability of pay equality as a means of promoting harmony and avoiding costly pay comparisons. Highly exaggerated self-perceptions (Meyer, 1975) ensure that pay differences are viewed as inequitable. Given individuals' costly responses to perceived inequity—such as departure and reduced effort (Adams, 1965; Deutsch, 1985)—firms choose to reduce performance-based variance in individual pay (Lazear, 1989; Zenger, 1992,1994).TODD R.ZENGER,C.R.MARSHALL.Determinants of incentive intensity in groupbased rewards[J].Academy of Management Joumal.2000, Vol. 43. No.2. 149-163.二、翻译文章译文:在组奖励中的激励强度的决定因素理论和假设代理理论和激励强度在代理理论文学和文学补偿多少中的一个基本论点是,奖励的激励强度经常是衡量薪酬,提高员工的贡献业绩的表现。
机械类外文文献翻译(中英文翻译)
机械类外文文献翻译(中英文翻译)英文原文Mechanical Design and Manufacturing ProcessesMechanical design is the application of science and technology to devise new or improved products for the purpose of satisfying human needs. It is a vast field of engineering technology which not only concerns itself with the original conception of the product in terms of its size, shape and construction details, but also considers the various factors involved in the manufacture, marketing and use of the product.People who perform the various functions of mechanical design are typically called designers, or design engineers. Mechanical design is basically a creative activity. However, in addition to being innovative, a design engineer must also have a solid background in the areas of mechanical drawing, kinematics, dynamics, materials engineering, strength of materials and manufacturing processes.As stated previously, the purpose of mechanical design is to produce a product which will serve a need for man. Inventions, discoveries and scientific knowledge by themselves do not necessarily benefit people; only if they are incorporated into a designed product will a benefit be derived. It should be recognized, therefore, that a human need must be identified before a particular product is designed.Mechanical design should be considered to be an opportunity to use innovative talents to envision a design of a product, to analyze the systemand then make sound judgments on how the product is to be manufactured. It is important to understand the fundamentals of engineering rather than memorize mere facts and equations. There are no facts or equations which alone can be used to provide all the correct decisions required to produce a good design.On the other hand, any calculations made must be done with the utmost care and precision. For example, if a decimal point is misplaced, an otherwise acceptable design may not function.Good designs require trying new ideas and being willing to take a certain amount of risk, knowing that if the new idea does not work the existing method can be reinstated. Thus a designer must have patience, since there is no assurance of success for the time and effort expended. Creating a completely new design generally requires that many old and well-established methods be thrust aside. This is not easy since many people cling to familiar ideas, techniques and attitudes. A design engineer should constantly search for ways to improve an existing product and must decide what old, proven concepts should be used and what new, untried ideas should be incorporated.New designs generally have "bugs" or unforeseen problems which must be worked out before the superior characteristics of the new designs can be enjoyed. Thus there is a chance for a superior product, but only at higher risk. It should be emphasized that, if a design does not warrant radical new methods, such methods should not be applied merely for the sake of change.During the beginning stages of design, creativity should be allowedto flourish without a great number of constraints. Even though many impractical ideas may arise, it is usually easy to eliminate them in the early stages of design before firm details are required by manufacturing. In this way, innovative ideas are not inhibited. Quite often, more than one design is developed, up to the point where they can be compared against each other. It is entirely possible that the design which is ultimately accepted will use ideas existing in one of the rejected designs that did not show as much overall promise.Psychologists frequently talk about trying to fit people to the machines they operate. It is essentially the responsibility of the design engineer to strive to fit machines to people. This is not an easy task, since there is really no average person for which certain operating dimensions and procedures are optimum.Another important point which should be recognized is that a design engineer must be able to communicate ideas to other people if they are to be incorporated. Communicating the design to others is the final, vital step in the design process. Undoubtedly many great designs, inventions, and creative works have been lost to mankind simply because the originators were unable or unwilling to explain their accomplishments to others. Presentation is a selling job. The engineer, when presenting a new solution to administrative, management, or supervisory persons, is attempting to sell or to prove to them that this solution is a better one. Unless this can be done successfully, the time and effort spent on obtaining the solution have been largely wasted.Basically, there are only three means of communication available tous. These are the written, the oral, and the graphical forms. Therefore the successful engineer will be technically competent and versatile in all three forms of communication. A technically competent person who lacks ability in any one of these forms is severely handicapped. If ability in all three forms is lacking, no one will ever know how competent that person is!The competent engineer should not be afraid of the possibility of not succeeding in a presentation. In fact, occasional failure should be expected because failure or criticism seems to accompany every really creative idea. There is a great deal to be learned from a failure, and the greatest gains are obtained by those willing to risk defeat. In the final analysis, the real failure would lie in deciding not to make the presentation at all. To communicate effectively, the following questions must be answered:(1) Does the design really serve a human need?(2) Will it be competitive with existing products of rival companies?(3) Is it economical to produce?(4) Can it be readily maintained?(5) Will it sell and make a profit?Only time will provide the true answers to the preceding questions, but the product should be designed, manufactured and marketed only with initial affirmative answers. The design engineer also must communicate the finalized design to manufacturing through the use of detail and assembly drawings.Quite often, a problem will occur during the manufacturing cycle [3].It may be that a change is required in the dimensioning or tolerancing of a part so that it can be more readily produced. This fails in the category of engineering changes which must be approved by the design engineer so that the product function will not be adversely affected. In other cases, a deficiency in the design may appear during assembly or testing just prior to shipping. These realities simply bear out the fact that design is a living process. There is always a better way to do it and the designer should constantly strive towards finding that better way.Designing starts with a need, real or imagined. Existing apparatus may need improvements in durability, efficiently, weight, speed, or cost. New apparatus may be needed to perform a function previously done by men, such as computation, assembly, or servicing. With the objective wholly or partly defined, the next step in design is the conception of mechanisms and their arrangements that will perform the needed functions.For this, freehand sketching is of great value, not only as a record of one's thoughts and as an aid in discussion with others, but particularly for communication with one's own mind, as a stimulant for creative ideas.When the general shape and a few dimensions of the several components become apparent, analysis can begin in earnest. The analysis will have as its objective satisfactory or superior performance, plus safety and durability with minimum weight, and a competitive east. Optimum proportions and dimensions will be sought for each critically loaded section, together with a balance between the strength of the several components. Materials and their treatment will be chosen. These important objectives can be attained only by analysis based upon the principles ofmechanics, such as those of statics for reaction forces and for the optimumutilization of friction; of dynamics for inertia, acceleration, and energy; of elasticity and strength of materials for stress。
应收账款【外文翻译】
外文文献翻译一、外文原文原文:Accounts Receivable IssuesFor many companies, the accounts receivable portfolio is its largest asset. Thus, it deserves special care and attention. Effective handling of the portfolio can add to the bottom line, while neglect can cost companies in unseen losses.Accounts Receivable Strategies to Energize the Bottom LineDon’t be surprised to find the big shots from finance suddenly looking over your shoulder questioning the ways your credit department operates. Accounts receivable has become the darling of those executives desperate to optimize working capital and improve their balance sheet.Here’s a roundup of some of the tactics that have been collected from the best credit managers to squeeze every last cent out of their accounts receivable portfolio: ·Have invoices printed and mailed as quickly as possible. Most customers start the clock ticking when the invoice arrives in their offices. The sooner you can get the invoice to them, the sooner they will pay you. While this strategy will not affect days sales outstanding(DSO),it will improve the bottom line.·Look for ways to improve invoice accuracy without delaying the mail date.·Offer more stringent terms where appropriate in your annual credit reviews and with new customers. Consider whether shorter terms might be better for you company.·Offer financial inducements to customers who agree to pay your invoices electronically.·If you have not had a lockbox study performed in the last few years, have one done to determine your optimal lockbox location.·With customers who have a history of paying late, begin your collection effortsbefore the due date. Call to inquire whether they have the invoice and if everything is in order. Resolve any problems quickly at this point.·If you have been giving a grace period to those taking discounts after the discount period, reduce or eliminate it.·Resolve all discrepancies quickly so payment can be made promptly.·If a customer indicates it has a problem with part of an invoice, authorize partial payments.·Keep a log of customer problems and analyze it once a month to discover weaknesses in your procedures that cause these quandaries.·Apply cash the same day the payment is received. Collectors can then spend their time with customers who have not paid rather than annoying ones who have already sent their payment.·Deal with a bank that makes lockbox information available immediately by fax, or preferably, online. Then when a customer claims it has made a payment , the collector will be able to verify this.·Look into ways to accept P-cards from customers placing small orders and those who cannot be extended credit on open account terms.·Benchmark department and individual collectors’ performance to pinpoint those areas and individuals in need of additional training.Review your own policies and procedures to determine if there are any areas that could be tweaked to improve cash flow. Then, when the call comes from executive quarters, you will be ready, and they will be hard pressed to find ways that you fell down on the job.Dealing with Purchase OrdersLeading credit managers have learned to pay attention to the purchase orders that their companies receive. Specifically, they want to ensure that the purchase order accepted by the salesperson does not include clause that will ultimately cause trouble for their companies, or even legal difficulties later on. Realistically, the salesperson should have caught the problem, but he or she rarely does. When the customer doesn’tpay due to one of these techn icalities, it’s not the salesperson who will get blamed.To help avoid a purchase order disaster, credit professionals can take the following steps:1.Simply read the purchase order. Vendors often slip clauses into purchase orders that you would never agree to. One favorite is to include a statementsaying the seller will be paid as soon as its customer pays the buyer. This is arisk few companies are willing to tolerate.2.Prioritize attachments. Typically, buyers write purchase orders that contain attachments. These include drawings, specifications, supplementary termsand conditions for work done on company premises, or safety rules for thesupplier.When including attachments, it is recommended that one of them be a listof priorities to guard against any inconsistencies in the documents. Thepurchase order should “clearly reference all the attachments, and there shouldbe a recitation as to which attachments are controlling over the others.” In theevent of any inconsistency between or among these documents, the purchaseorder shall be controlling over any attachments, and the attachments shall beinterpreted using the priority listed.3.Take care when reference is made to a buyer’s documents in the purchase order. There are likely to be both helpful and harmful statements in thosedocuments that reference the buyer’s material. The buyer may have printedits own terms and conditions on the back of a document. By referring to thedocument in the purchase order, you may inadvertently refer not only to theprice, but also to terms and conditions, which may include warrantydisclaimers and limitations of remedies that your company does not intend togive.Instead, the recommendation is not to refer to the buyers’ documents.Insist that the information is specified in the purchase order. If this is notpractical, the following language might work:” Any reference to thepurchaser’s quotation contained in this purchase order is a reference forconvenience only, and no such reference shall be deemed to include any ofthe purchaser’s standard terms and conditions of sale. The seller expresslyrejects anything in any of the buyer’s documents that is inconsistent with theseller’s standard terms and conditions.”Another favorite is to include terms and conditions on the back of thepurchase order written in very small print and a pale (almost undecipherable)color.4.Be careful of confirming purchase orders. Often, buyers will place orders via telephone, only to later confirm them with a written purchase order. In oralcontracts, the buyer will often want the purchase order to be more than justan offer. Therefore, the buyer will try to show on the purchase order that it isa confirming purchase order and cement the oral contract made over thephone. If the buyer does so, the confirming purchase order will satisfy theUniform Commerical Code (UCC) requirement of a written confirmationunless the other side objects to it within ten days.More than one cunning purchaser has slipped terms into a confirmingpurchase order that were nothing like those agreed to orally. Don’t fall intothe trap of assuming that the confirming purchase order confirms what wasactually said on the phone.Credit professionals who take these few extra steps with regard to purchase orders will limit their troubles.Quality of Accounts Receivable: Days Sales OutstandingMany credit professionals are measured on their effectiveness by reviewing the accounts receivable portfolio. The most common measurement is the length of time a sale stays outstanding before being paid. The Credit Research Foundation (CRF) defines DSO as the average time in days that receivables are outstanding. It helps determine if a change in receivables is due to a change in sales, or to another factor such as a change in selling terms. An analyst might compare the day’s sales in receivables with the company’s credit terms as an indication of how efficiently thecompany manages its receivables. Days sales outstanding is occasionally referred to as days receivable outstanding, as well. The formula to calculate DSO is:365e Re Sales t N Annual ceivablesGrossQuality of Accounts Receivable: Collection Effectiveness IndexSome feel that the quality of the portfolio is dependent to a large extent on the efforts of the collection staff. This is measured by the collection effectiveness index (CEI). The CRF says this percentage expresses the effectiveness of collection efforts over time. The closer to 100% the ratio gets, the more effective the collection effort. It is a measure of the quality of collection of receivables, not of time. Here’s the formula to calculate the CEI:Daysor Months of Number N ceivables Current Ending N Sales Credit ceivables Beginning ceivablesTotal Ending N Sales Credit ceivables Beginning =⨯-+-+100Re )(Re Re )(ReQuality of Accounts Receivable: Best Possible Days Sales OutstandingMany credit professionals find fault with using DSO to measure theirperformance. They feel that a better measure is one based on average terms based on customer payment patterns. The CRF says that this figure expresses the best possible level of receivables. The CRF believes this measure should be used together with DSO. The closer the overall DSO is to the average terms based on customer payment patterns (best possible DSO [BPDSO]),the closer the receivables are to the optimal level. The formula for calculating BPDSO is:AnalyzedPeriod for Sales Credit Analyzed Period in Days of Number ceivables Current ⨯ReBad-Debt ReservesInevitably, no matter how good the credit professional, a company will have a customer that does not pay its debts. Most companies understand that bad debts are simply part of doing business and reserve for bad debts. In fact, many believe that acompany with no bad debts is not doing a good job. The reason that being that if the company loosened its credit terms slightly, the company would greatly increase its sales and, even after accounting for the bad debts, its profits. Thus, most companies plan for bad debt, monitor it, and periodically, depending on the company’s outlook, revise projections and credit policy to allow for an increase or decrease.For example, as the economy goes into a recession, most companies will experience an increase in bad debts if their credit policy remains static. So, in light of declining economic conditions, companies should either increase their bad-debt reserves or tighten the credit policy. Similarly, if the economy is improving, a company would take reverse actions, either decreasing the reserve for bad debts or loosening the credit policy.Many companies take advantage or a favorable economy to expand their customer base. They might simultaneously increase the bad-debt reserve and loosen credit policy. Obviously, these decisions are typically made at a fairly high level. Other factors will also come into play in establishing a bad-debt reserve. Industry conditions are key and can often be quite different than the state of the economy. This is especially true when competition comes from foreign markets.There is no one set way to calculate the reserve for bad debts. Many simply take a percentage of sales or outstanding accounts receivable, or they make some other relatively uncomplicated calculation.How to Reduce Your Bad-Debt Write-OffsMost credit and collection professionals would love to be able to brag about having no bad-debt write-offs. Few can. While a goal of reducing the amount of bad debt write-offs to zero might be unrealistic in most industries, keeping that number as low as possible is something within the control of today’s credit managers. The following seven techniques will help you keep your numbers as low as possible:1.Call early. Don’t wait until the ac count goes 30 or even 60 days past duebefore calling customers about late payments. Such delays can mean that, in the case of a financially unstable company, a second and perhaps even a thirdshipment will be made to a customer who ultimately will pay for naught. Some professionals even call a few days before the payment is due to ensure that everything is in order and the customer has everything it needs to make a timely payment. By beginning your calling campaign as early as possible, it is possible o uncover shaky situations. Even if payment is not received for the first delivery, future order are not accepted, effectively reducing bad-debt write-offs.municate, communicate, communicate. Keep the dialogue open with everyone involved. This not only includes your customers, but the sales force as well. In many cases, they are in a better position than the credit manager to know when a customer is on thin ice. With good lines of communication between sales and credit, it is possible to avoid taking some of those orders that will ultimately have to be written off.3.Follow up, follow up, follow up. Continual follow up with customers is important, whether you’re trying to collect on a timely basis or attempting to avoid a bad-debt write-off. If the customer knows you will call every few days or will be calling to track the status of promises made, it is much more likely to pay. This can also be the case of the squeaky wheel getting the grease, or in this case the money, when cash is tight.4.Systematize. Many collection professionals keep track of promises and deadlines by hand, on a pad or calendar. Items tend to fall through the cracks with this approach. Invest some money either in prepackaged software or in developing your own in-house, and the likelihood of losing track of customers diminishes. Some accounting programs have a tracking capability that many have not taken the time to learn. If your software has such a facility, use it.5.Specialize. Set up a group of one or more individuals who do nothing but try to collect receivables that are overdue. By having experts on staff to handle such work, you will improve your collection rate and speed.6.Credit hold. Putting customers on credit hole early in the picture will sometimes entice a payment from someone who really had no intention of paying you. This technique is particularly effective with customers who rely heavily onyour product and would be hard put to get it elsewhere. Of course, if you sell something that many other vendors sell as well, putting a potentially good customer on hold could backfire.7.Small claims court. Some credit professionals have had great success incollecting smaller amounts by taking the customer to small claims court. The limits for such actions vary by state but can be as high as $10,000.While these techniques will not necessarily squeeze money from a bankrupt client, they will help you get as much as possible as soon as possible from as many of your customers as possible. This can be especially important in avoiding preference actions with clients who eventually do file. The quicker you get the clock ticking, the more likely you are to be able to avoid preference claims.Source: Mary S. Schaeffer, 2002, Essentials of Credit, Collections, and Accounts Receivable, John Wiley & Sons, Inc.( October 01, 2002 ):pp81-102.二、翻译文章译文:应收账款对许多公司来说,应收账款是其最大的资产。
土木工程毕业论文中英文翻译
外文翻译班级:xxx学号:xxx姓名:xxx一、外文原文:Structural Systems to resist lateral loads Commonly Used structural SystemsWith loads measured in tens of thousands kips, there is little room in the design of high-rise buildings for excessively complex thoughts. Indeed, the better high-rise buildings carry the universal traits of simplicity of thought and clarity of expression.It does not follow that there is no room for grand thoughts. Indeed, it is with such grand thoughts that the new family of high-rise buildings has evolved. Perhaps more important, the new concepts of but a few years ago have become commonplace in today’ s technology.Omitting some concepts that are related strictly to the materials of construction, the most commonly used structural systems used in high-rise buildings can be categorized as follows:1.Moment-resisting frames.2.Braced frames, including eccentrically braced frames.3.Shear walls, including steel plate shear walls.4.Tube-in-tube structures.5.Core-interactive structures.6.Cellular or bundled-tube systems.Particularly with the recent trend toward more complex forms, but in response also to the need for increased stiffness to resist the forces from wind and earthquake, most high-rise buildings have structural systems built up of combinations of frames, braced bents, shear walls, and related systems. Further, for the taller buildings, the majorities are composed of interactive elements in three-dimensional arrays.The method of combining these elements is the very essence of the design process for high-rise buildings. These combinations need evolve in response to environmental, functional, and cost considerations so as to provide efficient structures that provoke the architectural development to new heights. This is not to say that imaginative structural design can create great architecture. To the contrary, many examples of fine architecture have been created with only moderate support from the structural engineer, while only fine structure, not great architecture, can be developed without the genius and the leadership of a talented architect. In any event, the best of both is needed to formulate a truly extraordinary design of a high-rise building.While comprehensive discussions of these seven systems are generally available in the literature, further discussion is warranted here .The essence of the design process is distributed throughout the discussion.Moment-Resisting FramesPerhaps the most commonly used system in low-to medium-rise buildings, the moment-resisting frame, is characterized by linear horizontal and vertical members connected essentially rigidly at their joints. Such frames are used as a stand-alone system or in combination with other systems so as to provide the needed resistance to horizontal loads. In the taller of high-rise buildings, the system is likely to be found inappropriate for a stand-alone system, this because of the difficulty in mobilizing sufficient stiffness under lateral forces.Analysis can be accomplished by STRESS, STRUDL, or a host of other appropriate computer programs; analysis by the so-called portal method of the cantilever method has no place in today’s technology.Because of the intrinsic flexibility of the column/girder intersection, and because preliminary designs should aim to highlight weaknesses of systems, it is not unusual to use center-to-center dimensions for the frame in the preliminary analysis. Of course, in the latter phases of design, a realistic appraisal in-joint deformation is essential.Braced Frame sThe braced frame, intrinsically stiffer than the moment –resisting frame, finds also greater application to higher-rise buildings. The system is characterized by linear horizontal, vertical, and diagonal members, connected simply or rigidly at their joints. It is used commonly inconjunction with other systems for taller buildings and as a stand-alone system in low-to medium-rise buildings.While the use of structural steel in braced frames is common, concrete frames are more likely to be of the larger-scale variety.Of special interest in areas of high seismicity is the use of the eccentric braced frame.Again, analysis can be by STRESS, STRUDL, or any one of a series of two –or three dimensional analysis computer programs. And again, center-to-center dimensions are used commonly in the preliminary analysis. Shear wallsThe shear wall is yet another step forward along a progression of ever-stiffer structural systems. The system is characterized by relatively thin, generally but not always concrete elements that provide both structural strength and separation between building functions.In high-rise buildings, shear wall systems tend to have a relatively high aspect ratio, that is, their height tends to be large compared to their width. Lacking tension in the foundation system, any structural element is limited in its ability to resist overturning moment by the width of the system and by the gravity load supported by the element. Limited to a narrow overturning, One obvious use of the system, which does have the needed width, is in the exterior walls of building, where the requirement for windows is kept small.Structural steel shear walls, generally stiffened against buckling by a concrete overlay, have found application where shear loads are high. The system, intrinsically more economical than steel bracing, is particularly effective in carrying shear loads down through the taller floors in the areas immediately above grade. The system has the further advantage of having high ductility a feature of particular importance in areas of high seismicity.The analysis of shear wall systems is made complex because of the inevitable presence of large openings through these walls. Preliminary analysis can be by truss-analogy, by the finite element method, or by making use of a proprietary computer program designed to consider the interaction, or coupling, of shear walls.Framed or Braced TubesThe concept of the framed or braced or braced tube erupted into the technology with the IBM Building in Pittsburgh, but was followed immediately with the twin 110-story towers of the World Trade Center, New York and a number of other buildings .The system is characterized by three –dimensional frames, braced frames, or shear walls, forming a closed surface more or less cylindrical in nature, but of nearly any plan configuration. Because those columns that resist lateral forces are placed as far as possible from the cancroids of the system, the overall moment of inertia is increased and stiffness is very high.The analysis of tubular structures is done using three-dimensional concepts, or by two- dimensional analogy, where possible, whichever method is used, it must be capable of accounting for the effects of shear lag.The presence of shear lag, detected first in aircraft structures, is a serious limitation in the stiffness of framed tubes. The concept has limited recent applications of framed tubes to the shear of 60 stories. Designers have developed various techniques for reducing the effects of shear lag, most noticeably the use of belt trusses. This system finds application in buildings perhaps 40stories and higher. However, except for possible aesthetic considerations, belt trusses interfere with nearly every building function associated with the outside wall; the trusses are placed often at mechanical floors, mush to the disapproval of the designers of the mechanical systems. Nevertheless, as a cost-effective structural system, the belt truss works well and will likely find continued approval from designers. Numerous studies have sought to optimize the location of these trusses, with the optimum location very dependent on the number of trusses provided. Experience would indicate, however, that the location of these trusses is provided by the optimization of mechanical systems and by aesthetic considerations, as the economics of the structural system is not highly sensitive to belt truss location.Tube-in-Tube StructuresThe tubular framing system mobilizes every column in the exterior wallin resisting over-turning and shearing forces. The term‘tube-in-tube’is largely self-explanatory in that a second ring of columns, the ring surrounding the central service core of the building, is used as an inner framed or braced tube. The purpose of the second tube is to increase resistance to over turning and to increase lateral stiffness. The tubes need not be of the same character; that is, one tube could be framed, while the other could be braced.In considering this system, is important to understand clearly the difference between the shear and the flexural components of deflection, the terms being taken from beam analogy. In a framed tube, the shear component of deflection is associated with the bending deformation of columns and girders , the webs of the framed tube while the flexural component is associated with the axial shortening and lengthening of columns , the flanges of the framed tube. In a braced tube, the shear component of deflection is associated with the axial deformation of diagonals while the flexural component of deflection is associated with the axial shortening and lengthening of columns.Following beam analogy, if plane surfaces remain plane , the floor slabs,then axial stresses in the columns of the outer tube, being farther form the neutral axis, will be substantially larger than the axial stresses in the inner tube. However, in the tube-in-tube design, when optimized, the axial stresses in the inner ring of columns may be as high, or evenhigher, than the axial stresses in the outer ring. This seeming anomaly is associated with differences in the shearing component of stiffness between the two systems. This is easiest to under-stand where the inner tube is conceived as a braced , shear-stiff tube while the outer tube is conceived as a framed , shear-flexible tube.Core Interactive StructuresCore interactive structures are a special case of a tube-in-tube wherein the two tubes are coupled together with some form of three-dimensional space frame. Indeed, the system is used often wherein the shear stiffness of the outer tube is zero. The United States Steel Building, Pittsburgh, illustrates the system very well. Here, the inner tube is a braced frame, the outer tube has no shear stiffness, and the two systems are coupled if they were considered as systems passing in a straight line from the “hat” structure. Note that the exterior columns would be improperly modeled if they were considered as systems passing in a straight line from the “hat” to the foundations; these columns are perhaps 15% stiffer as they follow the elastic curve of the braced core. Note also that the axial forces associated with the lateral forces in the inner columns change from tension to compression over the height of the tube, with the inflection point at about 5/8 of the height of the tube. The outer columns, of course, carry the same axial force under lateral load for the full height of the columns because the columns because the shearstiffness of the system is close to zero.The space structures of outrigger girders or trusses, that connect the inner tube to the outer tube, are located often at several levels in the building. The AT&T headquarters is an example of an astonishing array of interactive elements:1.The structural system is 94 ft wide, 196ft long, and 601ft high.2.Two inner tubes are provided, each 31ft by 40 ft , centered 90 ft apartin the long direction of the building.3.The inner tubes are braced in the short direction, but with zero shearstiffness in the long direction.4.A single outer tube is supplied, which encircles the buildingperimeter.5.The outer tube is a moment-resisting frame, but with zero shearstiffness for the center50ft of each of the long sides.6.A space-truss hat structure is provided at the top of the building.7.A similar space truss is located near the bottom of the building8.The entire assembly is laterally supported at the base on twinsteel-plate tubes, because the shear stiffness of the outer tube goes to zero at the base of the building.Cellular structuresA classic example of a cellular structure is the Sears Tower, Chicago,a bundled tube structure of nine separate tubes. While the Sears Towercontains nine nearly identical tubes, the basic structural system has special application for buildings of irregular shape, as the several tubes need not be similar in plan shape, It is not uncommon that some of the individual tubes one of the strengths and one of the weaknesses of the system.This special weakness of this system, particularly in framed tubes, has to do with the concept of differential column shortening. The shortening of a column under load is given by the expression△=ΣfL/EFor buildings of 12 ft floor-to-floor distances and an average compressive stress of 15 ksi 138MPa, the shortening of a column under load is 15 1212/29,000 or per story. At 50 stories, the column will have shortened to in. 94mm less than its unstressed length. Where one cell of a bundled tube system is, say, 50stories high and an adjacent cell is, say, 100stories high, those columns near the boundary between .the two systems need to have this differential deflection reconciled.Major structural work has been found to be needed at such locations. In at least one building, the Rialto Project, Melbourne, the structural engineer found it necessary to vertically pre-stress the lower height columns so as to reconcile the differential deflections of columns in close proximity with the post-tensioning of the shorter column simulatingthe weight to be added on to adjacent, higher columns.二、原文翻译:抗侧向荷载的结构体系常用的结构体系若已测出荷载量达数千万磅重,那么在高层建筑设计中就没有多少可以进行极其复杂的构思余地了;确实,较好的高层建筑普遍具有构思简单、表现明晰的特点;这并不是说没有进行宏观构思的余地;实际上,正是因为有了这种宏观的构思,新奇的高层建筑体系才得以发展,可能更重要的是:几年以前才出现的一些新概念在今天的技术中已经变得平常了;如果忽略一些与建筑材料密切相关的概念不谈,高层建筑里最为常用的结构体系便可分为如下几类:1.抗弯矩框架;2.支撑框架,包括偏心支撑框架;3.剪力墙,包括钢板剪力墙;4.筒中框架;5.筒中筒结构;6.核心交互结构;7.框格体系或束筒体系;特别是由于最近趋向于更复杂的建筑形式,同时也需要增加刚度以抵抗几力和地震力,大多数高层建筑都具有由框架、支撑构架、剪力墙和相关体系相结合而构成的体系;而且,就较高的建筑物而言,大多数都是由交互式构件组成三维陈列;将这些构件结合起来的方法正是高层建筑设计方法的本质;其结合方式需要在考虑环境、功能和费用后再发展,以便提供促使建筑发展达到新高度的有效结构;这并不是说富于想象力的结构设计就能够创造出伟大建筑;正相反,有许多例优美的建筑仅得到结构工程师适当的支持就被创造出来了,然而,如果没有天赋甚厚的建筑师的创造力的指导,那么,得以发展的就只能是好的结构,并非是伟大的建筑;无论如何,要想创造出高层建筑真正非凡的设计,两者都需要最好的;虽然在文献中通常可以见到有关这七种体系的全面性讨论,但是在这里还值得进一步讨论;设计方法的本质贯穿于整个讨论;设计方法的本质贯穿于整个讨论中;抗弯矩框架抗弯矩框架也许是低,中高度的建筑中常用的体系,它具有线性水平构件和垂直构件在接头处基本刚接之特点;这种框架用作独立的体系,或者和其他体系结合起来使用,以便提供所需要水平荷载抵抗力;对于较高的高层建筑,可能会发现该本系不宜作为独立体系,这是因为在侧向力的作用下难以调动足够的刚度;我们可以利用STRESS,STRUDL 或者其他大量合适的计算机程序进行结构分析;所谓的门架法分析或悬臂法分析在当今的技术中无一席之地,由于柱梁节点固有柔性,并且由于初步设计应该力求突出体系的弱点,所以在初析中使用框架的中心距尺寸设计是司空惯的;当然,在设计的后期阶段,实际地评价结点的变形很有必要;支撑框架支撑框架实际上刚度比抗弯矩框架强,在高层建筑中也得到更广泛的应用;这种体系以其结点处铰接或则接的线性水平构件、垂直构件和斜撑构件而具特色,它通常与其他体系共同用于较高的建筑,并且作为一种独立的体系用在低、中高度的建筑中;尤其引人关注的是,在强震区使用偏心支撑框架;此外,可以利用STRESS,STRUDL,或一系列二维或三维计算机分析程序中的任何一种进行结构分析;另外,初步分析中常用中心距尺寸;剪力墙剪力墙在加强结构体系刚性的发展过程中又前进了一步;该体系的特点是具有相当薄的,通常是而不总是混凝土的构件,这种构件既可提供结构强度,又可提供建筑物功能上的分隔;在高层建筑中,剪力墙体系趋向于具有相对大的高宽经,即与宽度相比,其高度偏大;由于基础体系缺少应力,任何一种结构构件抗倾覆弯矩的能力都受到体系的宽度和构件承受的重力荷载的限制;由于剪力墙宽度狭狭窄受限,所以需要以某种方式加以扩大,以便提从所需的抗倾覆能力;在窗户需要量小的建筑物外墙中明显地使用了这种确有所需要宽度的体系;钢结构剪力墙通常由混凝土覆盖层来加强以抵抗失稳,这在剪切荷载大的地方已得到应用;这种体系实际上比钢支撑经济,对于使剪切荷载由位于地面正上方区域内比较高的楼层向下移特别有效;这种体系还具有高延性之优点,这种特性在强震区特别重要;由于这些墙内必然出同一些大孔,使得剪力墙体系分析变得错综复杂;可以通过桁架模似法、有限元法,或者通过利用为考虑剪力墙的交互作用或扭转功能设计的专门计处机程序进行初步分析框架或支撑式筒体结构:框架或支撑式筒体最先应用于IBM公司在Pittsburgh的一幢办公楼,随后立即被应用于纽约双子座的110层世界贸易中心摩天大楼和其他的建筑中;这种系统有以下几个显着的特征:三维结构、支撑式结构、或由剪力墙形成的一个性质上差不多是圆柱体的闭合曲面,但又有任意的平面构成;由于这些抵抗侧向荷载的柱子差不多都被设置在整个系统的中心,所以整体的惯性得到提高,刚度也是很大的;在可能的情况下,通过三维概念的应用、二维的类比,我们可以进行筒体结构的分析;不管应用那种方法,都必须考虑剪力滞后的影响;这种最先在航天器结构中研究的剪力滞后出现后,对筒体结构的刚度是一个很大的限制;这种观念已经影响了筒体结构在60层以上建筑中的应用;设计者已经开发出了很多的技术,用以减小剪力滞后的影响,这其中最有名的是桁架的应用;框架或支撑式筒体在40层或稍高的建筑中找到了自己的用武之地;除了一些美观的考虑外,桁架几乎很少涉及与外墙联系的每个建筑功能,而悬索一般设置在机械的地板上,这就令机械体系设计师们很不赞成;但是,作为一个性价比较好的结构体系,桁架能充分发挥它的性能,所以它会得到设计师们持续的支持;由于其最佳位置正取决于所提供的桁架的数量,因此很多研究已经试图完善这些构件的位置;实验表明:由于这种结构体系的经济性并不十分受桁架位置的影响,所以这些桁架的位置主要取决于机械系统的完善,审美的要求,筒中筒结构:筒体结构系统能使外墙中的柱具有灵活性,用以抵抗颠覆和剪切力;“筒中筒”这个名字顾名思义就是在建筑物的核心承重部分又被包围了第二层的一系列柱子,它们被当作是框架和支撑筒来使用;配置第二层柱的目的是增强抗颠覆能力和增大侧移刚度;这些筒体不是同样的功能,也就是说,有些筒体是结构的,而有些筒体是用来支撑的;在考虑这种筒体时,清楚的认识和区别变形的剪切和弯曲分量是很重要的,这源于对梁的对比分析;在结构筒中,剪切构件的偏角和柱、纵梁例如:结构筒中的网等的弯曲有关,同时,弯曲构件的偏角取决于柱子的轴心压缩和延伸例如:结构筒的边缘等;在支撑筒中,剪切构件的偏角和对角线的轴心变形有关,而弯曲构件的偏角则与柱子的轴心压缩和延伸有关;根据梁的对比分析,如果平面保持原形例如:厚楼板,那么外层筒中柱的轴心压力就会与中心筒柱的轴心压力相差甚远,而且稳定的大于中心筒;但是在筒中筒结构的设计中,当发展到极限时,内部轴心压力会很高的,甚至远远大于外部的柱子;这种反常的现象是由于两种体系中的剪切构件的刚度不同;这很容易去理解,内筒可以看成是一个支撑或者说是剪切刚性的筒,而外筒可以看成是一个结构或者说是剪切弹性的筒;核心交互式结构:核心交互式结构属于两个筒与某些形式的三维空间框架相配合的筒中筒特殊情况;事实上,这种体系常用于那种外筒剪切刚度为零的结构;位于Pittsburgh的美国钢铁大楼证实了这种体系是能很好的工作的;在核心交互式结构中,内筒是一个支撑结构,外筒没有任何剪切刚度,而且两种结构体系能通过一个空间结构或“帽”式结构共同起作用;需要指出的是,如果把外部的柱子看成是一种从“帽”到基础的直线体系,这将是不合适的;根据支撑核心的弹性曲线,这些柱子只发挥了刚度的15%;同样需要指出的是,内柱中与侧向力有关的轴向力沿筒高度由拉力变为压力,同时变化点位于筒高度的约5/8处;当然,外柱也传递相同的轴向力,这种轴向力低于作用在整个柱子高度的侧向荷载,因为这个体系的剪切刚度接近于零;把内外筒相连接的空间结构、悬臂梁或桁架经常遵照一些规范来布置;美国电话电报总局就是一个布置交互式构件的生动例子;1、结构体系长米,宽米,高米;2、布置了两个筒,每个筒的尺寸是米×米,在长方向上有米的间隔;3、在短方向上内筒被支撑起来,但是在长方向上没有剪切刚度;4、环绕着建筑物布置了一个外筒;5、外筒是一个瞬时抵抗结构,但是在每个长方向的中心米都没有剪切刚度;6、在建筑的顶部布置了一个空间桁架构成的“帽式”结构;7、在建筑的底部布置了一个相似的空间桁架结构;8、由于外筒的剪切刚度在建筑的底部接近零,整个建筑基本上由两个钢板筒来支持;框格体系或束筒体系结构:位于美国芝加哥的西尔斯大厦是箱式结构的经典之作,它由九个相互独立的筒组成的一个集中筒;由于西尔斯大厦包括九个几乎垂直的筒,而且筒在平面上无须相似,基本的结构体系在不规则形状的建筑中得到特别的应用;一些单个的筒高于建筑一点或很多是很常见的;事实上,这种体系的重要特征就在于它既有坚固的一面,也有脆弱的一面;这种体系的脆弱,特别是在结构筒中,与柱子的压缩变形有很大的关系,柱子的压缩变形有下式计算:△=ΣfL/E对于那些层高为米左右和平均压力为138MPa的建筑,在荷载作用下每层柱子的压缩变形为1512/29000或毫米;在第50层柱子会压缩94毫米,小于它未受压的长度;这些柱子在50层的时候和100层的时候的变形是不一样的,位于这两种体系之间接近于边缘的那些柱需要使这种不均匀的变形得以调解;主要的结构工作都集中在布置中;在Melbourne的Rialto项目中,结构工程师发现至少有一幢建筑,很有必要垂直预压低高度的柱子,以便使柱不均匀的变形差得以调解,调解的方法近似于后拉伸法,即较短的柱转移重量到较高的邻柱上;。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
Reforming Agricultural Trade: Not Just for the Wealthy CountriesIn the early 1990s, Mozambique removed a ban on raw cashew exports, which was originally imposed to guarantee a source of raw nuts to its local processing industry and to prevent a drop in exports of processed nuts. As a result, a million cashew farmers received higher prices for the nuts in the domestic market. But, at least half the higher prices received for exports of these nuts went to traders, and not to farmers, so there was no increase in production in response to the higher prices. At the same time, Mozambique’s nut-processing industry lost its guaranteed supply of raw nuts, and was forced to shut down processing plants and lay off 7,000 workers (FAO 2003).In Zambia, before liberalization, maize producers benefited from subsidies to the mining sector, which lowered the price of fertilizer. A State buyer further subsidized small farmers. When these subsidies were removed, and the para-State privatized, larger farmers close to international markets saw few changes, but small farmers in remote areas were left without a formal market for their maize.In Vietnam, trade liberalization was accompanied by tax reductions, land reforms, and marketing reforms that allowed farmers to benefit from increased sales to the market. As Vietnam made these investments, it began to phase out domestic subsidies and reduce border protection against imports. An aggressive program of targeted rural investments accompanied these reforms. During this liberalization, Vietnam’s overall economy grew at 7% annually, agricultural output grew by 6%, and the proportion of undernourished people fell from 27% to 19% of the population. Vietnam moved from being a net importer of food to a net exporter (FAO 2003).Similarly, in Zimbabwe, before liberalization of the cotton sector, the government was the single buyer of cotton from farmers, offering low prices to subsidized textile firms. Facing lower prices, commercial farmers diversified into other crops (tobacco, horticulture) but smaller farmers who could not diversify suffered. Internal liberalization eliminated price controls and privatized the marketing board. The result was higher cotton prices and competition among the three principal buyers. Poorer farmers benefited through increased market opportunities, as well as better extension and services. As a result, agricultural employment rose by 40%, with production of traditional and non-traditional crops increasing.Policy reforms can decrease employment in the short run, but in general, changes in employment caused by trade liberalization are small relative to the overall size of the economy and the natural dynamics of the labor market. But, for some countries that rely heavily on one sector and do not have flexible economies, the transition can be difficult. Even though there are long-term and economy-wide benefits to trade liberalization, there may be short-term disruptions and economic shocks which may be hard for the poor to endure.Once a government decides to undertake a reform, the focus should be on easing the impact of reforms on the losers –either through education, retraining, or income assistance. Government policy should also focus on helping those who will be able to compete in the new environment to take advantage of new opportunities. Even though trade on balance has a positive impact on growth, and therefore on poverty alleviation, developing countries should pursue trade liberalization with a pro-poor strategy. In other words, they should focus on liberalizing those sectors that will absorb non-skilled labor from rural areas, as agriculture becomes more competitive. The focus should be on trade liberalization that will enhanceeconomic sectors that have the potential to employ people in deprived areas. Trade liberalization must be complemented by policies to improve education, rural roads, communications, etc., so that liberalization can be positive for people living in rural areas, not just in urban centers or favored areas. These underlying issues need to be addressed if trade (or any growth) is to reach the poorest; or the reforms and liberalization need to be directed toward smallholders, and landless and unskilled labor.BUT THE POOR IN D EVELOPING COUNTRIES DON’T BENEFIT EQUALLY All policies create winners and losers. Continuing the status quo simply maintains the current cast of winners and losers. Too often in developing countries, the winners from current policies are not the poor living in rural areas. Policy reforms (whether in trade or in other areas) simply create a different set of winners and losers.Notwithstanding the overall positive analyses of the impact of trade liberalization on developing countries as a group, there are significant variations by country, commodity, and different sectors within developing countries. Most analysts combine all but the largest developing countries into regional groupings, so it is difficult to determine the precise impacts on individual countries. Even those studies that show long-term or eventual gains for rural households or for the poor do not focus on the costs imposed during the transition from one regime to another. It is even more difficult to evaluate the impact on different types of producers within different countries, such as smallholders and subsistence farmers. Also, economic models cannot evaluate how trade policies will affect poverty among different households, or among women and children within households.Allen Winters (2002) has proposed a useful set of questions that policy-makers should ask when they consider trade reforms:1. Will the effects of changed border prices be passed through the economy? If not, the effects –positive or negative – on poverty will be muted.2. Is reform likely to destroy or create markets? Will it allow poor consumers to obtain new goods?3. Are reforms likely to affect different household members – women, children – differently?4. Will spillovers be concentrated on areas/activities that are relevant to the poor?5. What factors – land, labor, and capital – are used in what sectors? How responsive is the supply of those factors to changes in prices?6. Will reform reduce or increase government revenue? By how much?7. Will reforms allow people to combine their domestic and international activities, or will it require them to switch from one to another?8. Does the reform depend on or affect the ability of poor people to assume risks?9. Will reforms cause major shocks for certain regions within the country?10. Will transitional unemployment be concentrated among the poor?Although trade liberalization is often blamed for increasing poverty in developing countries, the links between trade liberalization and poverty are more complex. Clearly, more open trade regimes lead to higher rates of economic growth, and without economic growth any effort to alleviate poverty, hunger, and malnutrition will be unproductive. But, without accompanying national policies in education, health, land reforms, micro-credit, infrastructure, and governance, economic growth (whether derived from trade or other sources) is much less likely to alleviate poverty, hunger, and malnutrition in the poorest developing countries.CONCLUSIONSThe imperative to dismantle unjust structures and to halt injurious actions is enshrined in the Millennium Development Goals, and in the goals of the Doha Development Round. This imperative has been primarily directed at the OECD countries that maintain high levels of agricultural subsidies and protection against many commodities that are vital to the economic well-being of developing countries. The OECD countries must reduce their trade barriers, reduce and reform their domestic subsidies; but, as this chapter makes clear, the OECD reforms must be accompanied by trade policy reforms in the developing countries as well.Open trade is one of the strongest forces for economic development and growth. Developing countries and civil society groups who oppose these trade reforms i n order to ‘protect’ subsistence farmers are doing these farmers a disservice. Developing countries and civil society are correct in the narrow view that markets cannot solve every problem, and that there is a role for government and for public policies. As the Doha negotiators get down to business, their energies would be better used in ensuring that developing countries begin to prepare for a more open trade regime by enacting policies that promote overall economic growth and that promote agricultural development. Their energies would be better spent convincing the population (taxpayers and consumers) in developed countries of the need for agricultural trade reform, and in convincing the multilateral aid agencies to help developing countries invest in public goods and public policies to ensure that trade policy reforms are pro-poor.It is clear from an examination of the evidence that trade reform, by itself, does not exacerbate poverty in developing countries. Rather, the failure of trade reforms to alleviate poverty lies in the underlying economic structures, adverse domestic policies, and the lack of strong flanking measures. To ensure that trade reform is pro-poor, the key is not to seek additional exemptions from trade disciplines for developing countries, which will only be met with counter-demands for other exemptions by developed countries, but to ensure that the WTO agreement is strong and effective in disciplining subsidies and reducing barriers to trade by all countries.Open trade is a key determinant of economic growth, and economic growth is the only path to poverty alleviation. This is equally true in agriculture as in other sectors of the economy. In most cases, trade reforms in agriculture will benefit the poor in developing countries. In cases where the impact of trade reforms is ambiguous or negative, the answer is not to postpone trade reform. Rather, trade reforms must be accompanied by flanking policies that make needed investments or that provide needed compensation, so that trade-led growth can benefit the poor.。