North University of China Graduation Project: Foreign Literature and Translation

Numerical simulation of aerodynamic heating and stress: chemical vapor deposition zinc sulfide infrared windows for hypersonic vehicles

Abstract: Hypersonic vehicles place stringent requirements on infrared window design with respect to the intensity of aerodynamic forces and the severity of aerodynamic heating. This paper uses finite element analysis to study the thermal stress field in the infrared window of a hypersonic vehicle, based on the distribution of the flow field. The evaluation provides theoretical guidance on the effects of aerodynamic heating and forces on infrared window materials. The aerodynamic heat flux for Mach 3-6 flight at an altitude of 15 km in the standard atmosphere is obtained through flow field analysis. The temperature and stress responses are then investigated under constant heat transfer coefficient boundary conditions for the different Mach numbers. The numerical results show that the maximum stress exceeds the material strength at Mach 6, which means the material may fail. In the other cases, the maximum stress is below the material strength and the temperature is below the melting point, so the material is safe.
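As a rough cross-check on the flight conditions in the abstract, the stagnation (total) temperature gives an upper bound on the recovery temperature the window can see at each Mach number. The sketch below assumes a calorically perfect gas (γ = 1.4) and the ISA static temperature at 15 km; the paper's flow field analysis is far more detailed, so these are order-of-magnitude figures only.

```python
# Stagnation temperature T0 = T_inf * (1 + (gamma - 1)/2 * M^2) for Mach 3-6
# flight at 15 km. Perfect-gas gamma = 1.4 is an assumption; real-gas and
# boundary-layer effects (handled by the paper's CFD) are ignored here.
GAMMA = 1.4
T_INF = 216.65  # K, ISA static temperature at 15 km altitude

def stagnation_temperature(mach, t_inf=T_INF, gamma=GAMMA):
    """Total temperature of the free stream brought to rest adiabatically."""
    return t_inf * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

for mach in range(3, 7):
    print(f"Mach {mach}: T0 ≈ {stagnation_temperature(mach):.0f} K")
```

The steep rise with Mach number (roughly threefold between Mach 3 and Mach 6) is consistent with the abstract's finding that only the Mach 6 case threatens the material.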
Keywords: chemical vapor deposition (CVD) zinc sulfide, infrared window material, thermal stress response, hypersonic vehicle
doi: 10.1631/jzus.A1300341    Document code: A    CLC number: V211

1 Introduction

Aircraft and spacecraft structures designed for supersonic and hypersonic flight are subjected to severe aerodynamic heating during the launch and re-entry phases of operation, which is caused by the gradual slowing of air in the boundary layer (Ruan et al., 2010).
As a result, all external surfaces of the vehicle are heated. This leads to a non-uniform transient temperature field, producing dynamic thermal stresses and deformation. Hence, the high heating associated with shocks at leading edges is an important issue in vehicle design. Besides surface melting and ablation, the aircraft's aerodynamics can be perturbed, leading to unacceptable deviations of the flight trajectory. Another problem is the refraction of signals passing through the shocked hot gas layer in front of the vehicle's head (Saravanan et al., 2009). In recent years, there has been significant investment in the development of hypersonic vehicle technology. Hypersonic flight began in February 1949, when a WAC (Women's Army Corps) Corporal rocket was ignited after being fired from a captured V-2 rocket (Sun and Wu, 2003). Later, extensive numerical analyses (Jain and Hayes, 2004; Di Clemente et al., 2009; Gerdroodbary and Hosseinalipour, 2010) examined the pressure, heat transfer, and surface temperature of steady or unsteady heat-transfer boundary layers in hypersonic flow. Although some flight experiments have also been carried out (Di Clemente et al., 2007; Marini et al., 2007) to acquire aerodynamic heating data under flight conditions for evaluating these prediction methods, the flight data are not sufficient for complete validation.

A Comparison of Soft Start Mechanisms for Mining Belt Conveyors
1800 Washington Road, Pittsburgh, PA 15241

Belt conveyors are an important method for transportation of bulk materials in the mining industry. The control of the application of the starting torque from the belt drive system to the belt fabric affects the performance, life cost, and reliability of the conveyor. This paper examines applications of each starting method within the coal mining industry.

INTRODUCTION

The force required to move a belt conveyor must be transmitted by the drive pulley via friction between the drive pulley and the belt fabric. In order to transmit power there must be a difference in the belt tension as it approaches and leaves the drive pulley. These conditions hold for steady-state running, starting, and stopping. Traditionally, belt designs are based on static calculations of running forces. Since starting and stopping are not examined in detail, safety factors are applied to static loadings (Harrison, 1987). This paper will primarily address the starting or acceleration duty of the conveyor. The belt designer must control starting acceleration to prevent excessive tension in the belt fabric and forces in the belt drive system (Surtees, 1986). High acceleration forces can adversely affect the belt fabric, belt splices, drive pulleys, idler pulleys, shafts, bearings, speed reducers, and couplings. Uncontrolled acceleration forces can cause belt conveyor system performance problems with vertical curves, excessive belt take-up movement, loss of drive pulley friction, spillage of materials, and festooning of the belt fabric. The belt designer is confronted with two problems: the belt drive system must produce a minimum torque powerful enough to start the conveyor, and it must be controlled such that the acceleration forces are within safe limits.
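The friction-transmission limit described above is classically captured by Euler's capstan relation: the drive pulley can sustain a tight-side to slack-side tension ratio of at most e^(μθ) before slipping. The sketch below illustrates this; the friction coefficient and wrap angle are illustrative assumptions, not values from the paper.

```python
import math

# Euler capstan relation for a drive pulley: slip is avoided while the
# tight/slack tension ratio T1/T2 stays below e^(mu * theta), where mu is
# the pulley-to-belt friction coefficient and theta the wrap angle (rad).
def max_tension_ratio(mu, wrap_deg):
    """Largest T1/T2 the pulley can transmit without belt slip."""
    return math.exp(mu * math.radians(wrap_deg))

# Illustrative values: lagged pulley (mu ~ 0.35) with a 210 degree wrap.
ratio = max_tension_ratio(mu=0.35, wrap_deg=210)
print(f"max T1/T2 without slip ≈ {ratio:.2f}")
```

This is why losing drive pulley friction (mentioned above as a failure mode of uncontrolled starts) is so sensitive to lagging condition and wrap angle: both enter the limit exponentially.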
Smooth starting of the conveyor can be accomplished by the use of drive torque control equipment, either mechanical or electrical, or a combination of the two (CEMA, 1979).

SOFT START MECHANISM EVALUATION CRITERIA

What is the best belt conveyor drive system? The answer depends on many variables. The best system is one that provides acceptable control for starting, running, and stopping at a reasonable cost and with high reliability (Lewdly and Sugarcane, 1978).

Belt Drive System

For the purposes of this paper we will assume that belt conveyors are almost always driven by electrical prime movers (Goodyear Tire and Rubber, 1982). The belt "drive system" shall consist of multiple components including the electrical prime mover, the electrical motor starter with control system, the motor coupling, the speed reducer, the low speed coupling, the belt drive pulley, and the pulley brake or holdback (Cur, 1986). It is important that the belt designer examine the applicability of each system component to the particular application. For the purpose of this paper, we will assume that all drive system components are located in the fresh-air, non-permissible areas of the mine, or in non-hazardous or explosion-proof (National Electrical Code, Article 500) areas of the surface of the mine.

Belt Drive Component Attributes

Size. Certain drive components are available and practical in different size ranges. For this discussion, we will assume that belt drive systems range from fractional horsepower to multiples of thousands of horsepower. Small drive systems are often below 50 horsepower. Medium systems range from 50 to 1000 horsepower. Large systems can be considered above 1000 horsepower. Division of sizes into these groups is entirely arbitrary. Care must be taken to resist the temptation to over-motor or under-motor a belt flight to enhance standardization.
An over-motored drive results in poor efficiency and the potential for high torques, while an under-motored drive could result in destructive overspeeding on regeneration, or overheating with shortened motor life (Lords, et al., 1978).

Torque Control. Belt designers try to limit the starting torque to no more than 150% of the running torque (CEMA, 1979; Goodyear, 1982). The limit on the applied starting torque is often set by the rating of the belt carcass, belt splice, pulley lagging, or shaft deflections. On larger belts and belts with optimally sized components, torque limits of 110% through 125% are common (Elberton, 1986). In addition to a torque limit, the belt starter may be required to limit torque increments that would stretch belting and cause traveling waves. An ideal starting control system would apply a pretension torque to the belt at rest up to the point of breakaway, or movement of the entire belt, then a torque equal to the movement requirements of the belt with load, plus a constant torque to accelerate the inertia of the system components from rest to final running speed. This would minimize system transient forces and belt stretch (Shultz, 1992). Different drive systems exhibit varying ability to control the application of torque to the belt at rest and at different speeds. Also, the conveyor itself exhibits two extremes of loading. An empty belt normally presents the smallest required torque for breakaway and acceleration, while a fully loaded belt presents the highest required torque. A mining drive system must be capable of scaling the applied torque from a 2/1 ratio for a horizontal simple belt arrangement, up to a 10/1 range for an inclined or complex belt profile.

Thermal Rating. During starting and running, each drive system may dissipate waste heat. The waste heat may be liberated in the electrical motor, the electrical controls, the couplings, the speed reducer, or the belt braking system.
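The torque ceiling above directly sets how long a start takes: with a constant applied torque, the net torque left over after overcoming the load accelerates the referred inertia. A minimal sizing sketch, where the 150% figure comes from the text but the inertia, running torque, and final speed are illustrative assumptions:

```python
# Acceleration time under a constant starting torque: t = J * w / (Ts - Tl),
# with everything referred to the drive shaft. The 150% starting-torque
# ceiling is from the text (CEMA, 1979); all numeric values are assumed.
def accel_time(j_total, t_start, t_load, w_final):
    """Seconds to reach full speed with constant net torque."""
    t_net = t_start - t_load
    if t_net <= 0:
        raise ValueError("starting torque cannot break the load away")
    return j_total * w_final / t_net

T_RUN = 10_000.0       # N*m, steady running torque (assumed)
T_START = 1.5 * T_RUN  # starting torque at the 150% limit
J = 5_000.0            # kg*m^2, inertia referred to the drive shaft (assumed)
W_FINAL = 62.8         # rad/s, final drive speed (assumed)

print(f"acceleration time ≈ {accel_time(J, T_START, T_RUN, W_FINAL):.1f} s")
```

Tightening the limit toward 110% shrinks the net torque margin, which is why optimally sized belts trade longer, gentler starts for lower peak tension.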
The thermal load of each start is dependent on the amount of belt load and the duration of the start. The designer must fulfill the application requirements for repeated starts after running the conveyor at full load. Typical mining belt starting duties vary from 3 to 10 starts per hour equally spaced, or 2 to 4 starts in succession. Repeated starting may require the derating or oversizing of system components. There is a direct relationship between thermal rating for repeated starts and cost.

Variable Speed. Some belt drive systems are suitable for controlling the starting torque and speed, but only run at constant speed. Some belt applications would require a drive system capable of running for extended periods at less than full speed. This is useful when the drive load must be shared with other drives, the belt is used as a process feeder for rate control of the conveyed material, the belt speed is optimized for the haulage rate, the belt is used at slower speeds to transport men or materials, or the belt is run at a slow inspection or inching speed for maintenance purposes (Hager, 1991). The variable speed belt drive will require a control system based on some algorithm to regulate operating speed.

Regeneration or Overhauling Load. Some belt profiles present the potential for overhauling loads, where the belt system supplies energy to the drive system. Not all drive systems have the ability to accept regenerated energy from the load. Some drives can accept energy from the load and return it to the power line for use by other loads. Other drives accept energy from the load and dissipate it into designated dynamic or mechanical braking elements. Some belt profiles switch from motoring to regeneration during operation. Can the drive system accept regenerated energy of a certain magnitude for the application? Does the drive system have to control or modulate the amount of retarding force during overhauling? Does the overhauling occur during running and starting?
Maintenance and Supporting Systems. Each drive system will require periodic preventive maintenance. Replaceable items include motor brushes, bearings, brake pads, dissipation resistors, oils, and cooling water. If the drive system is conservatively engineered and operated, the lower stress on consumables will result in lower maintenance costs. Some drives require supporting systems such as circulating oil for lubrication, cooling air or water, environmental dust filtering, or computer instrumentation. The maintenance of the supporting systems can affect the reliability of the drive system.

Cost. The drive designer will examine the cost of each drive system. The total cost is the sum of the first capital cost to acquire the drive, the cost to install and commission the drive, the cost to operate the drive, and the cost to maintain the drive. The cost of power to operate the drive may vary widely between locations. The designer strives to meet all system performance requirements at the lowest total cost. Often more than one drive system may satisfy all system performance criteria at competitive costs.

Complexity. The preferred drive arrangement is the simplest, such as a single motor driving through a single head pulley. However, mechanical, economic, and functional requirements often necessitate the use of complex drives. The belt designer must balance the need for sophistication against the problems that accompany complex systems. Complex systems require additional design engineering for successful deployment. An often-overlooked cost in a complex system is the cost of training onsite personnel, or the cost of downtime as a result of insufficient training.

SOFT START DRIVE CONTROL LOGIC

Each drive system will require a control system to regulate the starting mechanism. The most common type of control used on small to medium sized drives with simple profiles is termed "Open Loop Acceleration Control".
In open loop control, the control system is configured in advance to sequence the starting mechanism in a prescribed manner, usually based on time. Drive operating parameters such as current, torque, or speed do not influence sequence operation. This method presumes that the control designer has adequately modeled drive system performance on the conveyor. For larger or more complex belts, "Closed Loop" or "Feedback" control may be utilized. In closed loop control, during starting, the control system monitors, via sensors, drive operating parameters such as motor current, belt speed, or force on the belt, and modifies the starting sequence to control, limit, or optimize one or more parameters. Closed loop control systems modify the applied starting force between an empty and a fully loaded conveyor. The constants in the mathematical model relating the measured variables to the system drive response are termed the tuning constants. These constants must be properly adjusted for successful application to each conveyor. The most common schemes for closed loop control of conveyor starts are tachometer feedback for speed control, and load cell force or drive force feedback for torque control. On some complex systems, it is desirable to have the closed loop control system adjust itself for the various conveyor conditions encountered. This is termed "Adaptive Control". These conditions can involve vast variations in loading, temperature of the belting, location of the loading on the profile, or multiple drive options on the conveyor. There are three common adaptive methods. The first involves decisions made before the start, or "Restart Conditioning". If the control system could know that the belt is empty, it would reduce the initial force and lengthen the application of acceleration force to full speed.
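The tachometer-feedback scheme above can be sketched as a PI loop that trims drive torque so the measured belt speed tracks a preset ramp. This is a minimal sketch: the rigid-body plant, the gains, and all numeric values are illustrative assumptions; a real conveyor controller would need a belt-dynamics model, torque limiting, and anti-windup logic.

```python
# Closed-loop (tachometer feedback) soft start sketch: a PI controller
# drives a rigid-body plant (J * dw/dt = net torque) along a linear speed
# ramp. Gains and parameters are illustrative, not from the paper.
def simulate_soft_start(kp=8000.0, ki=2000.0, j=2000.0, t_load=3000.0,
                        v_final=1.0, ramp_time=20.0, settle=15.0, dt=0.01):
    """Return belt speed after ramping to v_final plus a settling period."""
    w = integ = t = 0.0
    while t < ramp_time + settle:
        setpoint = min(v_final, v_final * t / ramp_time)  # linear speed ramp
        err = setpoint - w
        integ += err * dt
        torque = kp * err + ki * integ + t_load  # PI + load-torque feed-forward
        w += (torque - t_load) / j * dt          # rigid-body plant response
        t += dt
    return w

print(f"belt speed after start ≈ {simulate_soft_start():.3f}")
```

The feed-forward of the assumed load torque mirrors the "pretension up to breakaway" idea in the text: the PI terms only have to supply the accelerating torque, not the whole load.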
If the belt is loaded, the control system would apply pretension forces under stall for less time and supply sufficient torque to accelerate the belt in a timely manner. Since the belt can only have become loaded during previous running, which loads the drive, the average drive current can be sampled when running and retained in a first-in-first-out (FIFO) buffer memory whose depth reflects the belt conveyance time. Then at shutdown the FIFO average may be used to precondition some open loop and closed loop set points for the next start. The second method involves decisions based on drive observations made during initial starting, or "Motion Proving". This usually involves a comparison in time of the drive current or force versus the belt speed. If the drive current or force required early in the sequence is low and motion is initiated, the belt must be unloaded. If the drive current or force required is high and motion is slow in starting, the conveyor must be loaded. This decision can be divided into zones and used to modify the middle and finish of the start sequence control. The third method involves a comparison of the belt speed versus time for the current start against historical limits of belt acceleration, or "Acceleration Envelope Monitoring". At start, the belt speed is measured versus time. This is compared with two limiting belt speed curves retained in control system memory. The first curve profiles the empty belt when accelerated, and the second the fully loaded belt. Thus, if the current speed versus time is lower than the loaded profile, it may indicate that the belt is overloaded or impeded, or that the drive has malfunctioned. If the current speed versus time is higher than the empty profile, it may indicate a broken belt, a broken coupling, or a drive malfunction.
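The "Restart Conditioning" FIFO described above can be sketched in a few lines: a fixed-depth buffer of drive-current samples, sized to the belt conveyance time, whose average at shutdown estimates how much material remains on the belt. Buffer depth, sample period, and current values are illustrative assumptions.

```python
from collections import deque

# Restart-conditioning sketch: a FIFO of drive-current samples sized to the
# belt conveyance time. At shutdown, the buffered average reflects material
# still on the belt and can precondition the next start's set points.
class LoadEstimator:
    def __init__(self, conveyance_time_s, sample_period_s):
        # deque with maxlen discards the oldest sample automatically (FIFO).
        self.samples = deque(maxlen=int(conveyance_time_s / sample_period_s))

    def record(self, drive_current):
        self.samples.append(drive_current)

    def average(self):
        """FIFO average of drive current over one belt conveyance time."""
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

est = LoadEstimator(conveyance_time_s=300, sample_period_s=10)  # 30 samples
for amps in [40, 42, 44, 80, 82, 84]:  # loading ramps up before shutdown
    est.record(amps)
print(f"average drive current at shutdown ≈ {est.average():.1f} A")
```

A high average at shutdown would select the "loaded" start profile (shorter pretension, more torque); a low average the "empty" one.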
In either case, the current start is aborted and an alarm issued.

CONCLUSION

The best belt starting system is one that provides acceptable performance under all belt load conditions at a reasonable cost with high reliability. No one starting system meets all needs. The belt designer must define the starting system attributes required for each belt. In general, the AC induction motor with full voltage starting is confined to small belts with simple profiles. The AC induction motor with reduced voltage SCR starting is the base case mining starter for underground belts from small to medium sizes. With recent improvements, the AC motor with fixed fill fluid couplings is the base case for medium to large conveyors with simple profiles. The wound rotor induction motor drive is the traditional choice for medium to large belts with repeated starting duty or complex profiles that require precise torque control. The DC motor drive, variable fill hydrokinetic drive, and variable mechanical transmission drive compete for application on belts with extreme profiles or variable running-speed requirements. The choice depends on the location environment, competitive price, operating energy losses, speed response, and user familiarity. AC variable frequency drive and brushless DC applications are limited to small to medium sized belts that require precise speed control, due to their higher present costs and complexity. However, with continuing competitive and technical improvements, the use of synthesized waveform electronic drives will expand.

REFERENCES
[1] Michael L. Nave, P.E., 1989. CONSOL Inc.

Fundamental information, including the effects of porosity, water-to-cement ratio, cement paste characteristics, volume fraction of coarse aggregates, and size of coarse aggregates on pervious concrete strength, has been studied [3, 9−12]. However, because porosity plays a key role in the functional and structural performance of pervious concretes [13−14], there is still a need to understand more about the mechanical responses of pervious concretes proportioned for desired levels of porosity. Although it is possible to have widely different pore structure features for a given porosity, or similar pore structure features for varied porosities in pervious concrete, it is imperative to focus on the mechanical responses of pervious concrete at different designed porosities. However, compared with the related research on conventional concrete, very limited study has been conducted on the fracture and fatigue behaviors of pervious concrete, which are especially important for pavement concrete subjected to heavy traffic and severe seasonal temperature change. The present work outlines the raw materials and mixing proportions used to produce high-strength supplementary cementitious material (SCM) modified pervious concrete (SPC) and polymer-intensified pervious concrete (PPC) at porosities within the range of 15%−25%. Then, the mechanical properties of the pervious concrete, including the compressive and flexural strengths, fracture energy, and fatigue properties, are investigated in detail.

CHANGING ROLES OF THE CLIENTS, ARCHITECTS AND CONTRACTORS THROUGH BIM

Abstract

Purpose – This paper aims to present a general review of the practical implications of building information modelling (BIM) based on literature and case studies. It seeks to address the necessity for applying BIM and re-organising the processes and roles in hospital building projects. This type of project is complex due to complicated functional and technical requirements, decision making involving a large number of stakeholders, and long-term development processes.

Design/methodology/approach – Through desk research and reference to the ongoing European research project InPro, the framework for integrated collaboration and the use of BIM are analysed.

Findings – One of the main findings is the identification of the main factors for a successful collaboration using BIM, which can be recognised as "POWER": product information sharing (P), organisational roles synergy (O), work processes coordination (W), environment for teamwork (E), and reference data consolidation (R).

Originality/value – This paper contributes to the current discussion in science and practice on the changing roles and processes that are required to develop and operate sustainable buildings with the support of integrated ICT frameworks and tools. It presents the state of the art of European research projects and some of the first real cases of BIM application in hospital building projects.

Keywords: Europe, Hospitals, The Netherlands, Construction works, Response flexibility, Project planning

Paper type: General review

1. Introduction

Hospital building projects are of key importance; they involve significant investment and usually take a long-term development period. Hospital building projects are also very complex due to the complicated requirements regarding hygiene, safety, special equipment, and handling of a large amount of data.
The building process is very dynamic and comprises iterative phases and intermediate changes. Many actors with shifting agendas, roles and responsibilities are actively involved, such as: the healthcare institutions, national and local governments, project developers, financial institutions, architects, contractors, advisors, facility managers, and equipment manufacturers and suppliers. Such building projects are very much influenced by healthcare policy, which changes rapidly in response to medical, societal and technological developments, and varies greatly between countries (World Health Organization, 2000). In The Netherlands, for example, the way a building project in the healthcare sector is organised is undergoing a major reform due to a fundamental change in the Dutch health policy that was introduced in 2008. The rapidly changing context poses a need for buildings with flexibility over their lifecycle. In order to incorporate life-cycle considerations in the building design, construction technique, and facility management strategy, multidisciplinary collaboration is required. Despite attempts at establishing integrated collaboration, healthcare building projects still face serious problems in practice, such as: budget overrun, delay, and sub-optimal quality in terms of flexibility, end-users' dissatisfaction, and energy inefficiency. It is evident that the lack of communication and coordination between the actors involved in the different phases of a building project is among the most important reasons behind these problems. The communication between different stakeholders becomes critical, as each stakeholder possesses a different set of skills. As a result, the processes for extraction, interpretation, and communication of complex design information from drawings and documents are often time-consuming and difficult.
Advanced visualisation technologies, like 4D planning, have tremendous potential to increase the communication efficiency and interpretation ability of the project team members. However, their use as an effective communication tool is still limited and not fully explored. There are also other barriers to information transfer and integration, for instance: many existing ICT systems do not support the openness of data and structure that is a prerequisite for effective collaboration between different building actors or disciplines. Building information modelling (BIM) offers an integrated solution to the previously mentioned problems. Therefore, BIM is increasingly used as an ICT support in complex building projects. An effective multidisciplinary collaboration supported by an optimal use of BIM requires changing roles of the clients, architects, and contractors; new contractual relationships; and re-organised collaborative processes. Unfortunately, there are still gaps in the practical knowledge of how to manage the building actors so that they collaborate effectively in their changing roles, and of how to develop and utilise BIM as an optimal ICT support for the collaboration. This paper presents a general review of the practical implications of building information modelling (BIM) based on literature review and case studies. In the next sections, based on literature and recent findings from the European research project InPro, the framework for integrated collaboration and the use of BIM are analysed. Subsequently, through the observation of two ongoing pilot projects in The Netherlands, the changing roles of clients, architects, and contractors through BIM application are investigated. In conclusion, the critical success factors as well as the main barriers to successful integrated collaboration using BIM are identified.
2. Changing roles through integrated collaboration and life-cycle design approaches

A hospital building project involves various actors, roles, and knowledge domains. In The Netherlands, the changing roles of clients, architects, and contractors in hospital building projects are inevitable due to the new healthcare policy. Previously, under the Healthcare Institutions Act (WTZi), healthcare institutions were required to obtain both a license and a building permit for new construction projects and major renovations. The permit was issued by the Dutch Ministry of Health. The healthcare institutions were then eligible to receive financial support from the government. Since 2008, new legislation on the management of hospital building projects and real estate has come into force. Under this new legislation, a permit for a hospital building project under the WTZi is no longer obligatory, nor obtainable (Dutch Ministry of Health, Welfare and Sport, 2008). This change allows more freedom from the state-directed policy, and correspondingly allocates more responsibility to the healthcare organisations to deal with the financing and management of their real estate. The new policy implies that the healthcare institutions are fully responsible for managing and financing their building projects and real estate. The government's support for the costs of healthcare facilities will no longer be given separately, but will be included in the fee for healthcare services. This means that healthcare institutions must earn back their investment in real estate through their services. This new policy is intended to stimulate sustainable innovations in the design, procurement and management of healthcare buildings, which will contribute to effective and efficient primary healthcare services. The new strategy for building projects and real estate management endorses an integrated collaboration approach.
In order to assure sustainability during construction, use, and maintenance, the end-users, facility managers, contractors and specialist contractors need to be involved in the planning and design processes. The implications of the new strategy are reflected in the changing roles of the building actors and in the new procurement method. In the traditional procurement method, the design and its details are developed by the architect and design engineers. Then, the client (the healthcare institution) sends an application to the Ministry of Health to obtain approval of the building permit and the financial support from the government. Following this, a contractor is selected through a tender process that emphasises the search for the lowest-price bidder. During the construction period, changes often take place due to constructability problems of the design and new requirements from the client. Because of the high level of technical complexity, and moreover, decision-making complexities, the whole process from initiation until delivery of a hospital building project can take up to ten years. After the delivery, the healthcare institution is fully in charge of the operation of the facilities. Redesigns and changes also take place in the use phase to cope with new functions and developments in the medical world. Integrated procurement introduces a new contractual relationship between the parties involved in a building project. Instead of a relationship between the client and architect for design, and the client and contractor for construction, in integrated procurement the client only holds a contractual relationship with the main party that is responsible for both design and construction. The traditional borders between tasks and occupational groups become blurred, since architects, consulting firms, contractors, subcontractors, and suppliers all stand on the supply side of the building process while the client stands on the demand side.
Such a configuration puts the architect, engineer and contractor in a very different position, one that influences not only their roles, but also their responsibilities, tasks and communication with the client, the users, the team and other stakeholders. The transition from the traditional to the integrated procurement method requires a shift of mindset of the parties on both the demand and supply sides. It is essential for the client and contractor to have a fair and open collaboration in which both can optimally use their competencies. The effectiveness of integrated collaboration is also determined by the client's capacity and strategy to organise innovative tendering procedures. A new challenge emerges when an architect is positioned in a partnership with the contractor instead of with the client. If the architect enters a partnership with the contractor, an important issue is how to ensure the realisation of the architectural values as well as innovative engineering through an efficient construction process. In another case, the architect can stand at the client's side in a strategic advisory role instead of being the designer. In this case, the architect's responsibility is to translate the client's requirements and wishes into the architectural values to be included in the design specification, and to evaluate the contractor's proposal against these. In any of these new roles, the architect holds the responsibilities of stakeholder interest facilitator, custodian of customer value, and custodian of design models. The transition from the traditional to the integrated procurement method also has consequences for the payment schemes. In the traditional building process, the honorarium for the architect is usually based on a percentage of the project costs; this may simply mean that the more expensive the building is, the higher the honorarium will be. The engineer receives an honorarium based on the complexity of the design and the intensity of the assignment.
A highly complex building, which takes a number of redesigns, is usually favourable for the engineers in terms of honorarium. A traditional contractor usually receives the commission based on the tender to construct the building at the lowest price while meeting the minimum specifications given by the client. Extra work due to modifications is charged separately to the client. After the delivery, the contractor is no longer responsible for the long-term use of the building. In the traditional procurement method, all risks are placed with the client. In the integrated procurement method, the payment is based on the achieved building performance; thus, the payment is non-adversarial. Since the architect, engineer and contractor have a wider responsibility for the quality of the design and the building, the payment is linked to a measurement system of the functional and technical performance of the building over a certain period of time. The honorarium becomes an incentive to achieve the optimal quality. If the building actors succeed in delivering a higher added value that exceeds the minimum client requirements, they will receive a bonus in accordance with the client's extra gain. The level of transparency is also improved. Open book accounting is an excellent instrument, provided that the stakeholders agree on the information to be shared and its level of detail (InPro, 2009). Next to the adoption of the integrated procurement method, the new real estate strategy for hospital building projects addresses innovative product development and life-cycle design approaches. A sustainable business case for the investment and exploitation of hospital buildings relies on dynamic life-cycle management that includes consideration and analysis of the market development over time, next to the building life-cycle costs (investment/initial cost, operational cost, and logistic cost).
Compared to the conventional life-cycle costing method, dynamic life-cycle management encompasses a shift from focusing only on minimising the costs to maximising the total benefit that can be gained. One of the determining factors for a successful implementation of dynamic life-cycle management is the sustainable design of the building and building components, which means that the design carries sufficient flexibility to accommodate possible changes in the long term (Prins, 1992). Designing based on the principles of life-cycle management affects the role of the architect, as he needs to be well informed about the usage scenarios and related financial arrangements, the changing social and physical environments, and new technologies. Design needs to integrate people's activities and business strategies over time. In this context, the architect is required to align the design strategies with the organisational, local and global policies on finance, business operations, health and safety, environment, etc. The combination of process and product innovation, and the changing roles of the building actors, can be accommodated by integrated project delivery or IPD (AIA California Council, 2007). IPD is an approach that integrates people, systems, business structures and practices into a process that collaboratively harnesses the talents and insights of all participants to reduce waste and optimise efficiency through all phases of design, fabrication and construction. IPD principles can be applied to a variety of contractual arrangements. IPD teams will usually include members well beyond the basic triad of client, architect, and contractor. At a minimum, though, an Integrated Project should include a tight collaboration between the client, the architect, and the main contractor ultimately responsible for construction of the project, from the early design until project handover.
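The life-cycle cost components mentioned above (investment/initial, operational, and logistic costs) are conventionally compared on a discounted basis. A minimal sketch of such a net present cost calculation; the discount rate, horizon, and cash flows below are illustrative assumptions, not figures from the paper:

```python
# Discounted life-cycle cost: initial investment plus the present value of
# annual operational and logistic outlays. All numeric inputs are assumed
# for illustration; dynamic life-cycle management (per the text) would add
# market-development analysis and benefit maximisation on top of this.
def life_cycle_cost(investment, annual_costs, rate):
    """Net present cost = investment + sum of discounted annual outlays."""
    return investment + sum(
        cost / (1.0 + rate) ** year
        for year, cost in enumerate(annual_costs, start=1)
    )

lcc = life_cycle_cost(
    investment=10_000_000,        # initial/investment cost (assumed)
    annual_costs=[600_000] * 20,  # operational + logistic cost, 20 years (assumed)
    rate=0.04,                    # discount rate (assumed)
)
print(f"life-cycle cost ≈ {lcc:,.0f}")
```

The shift the text describes is from minimising this figure alone to weighing it against the discounted benefits the building can generate over the same horizon.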
The key to a successful IPD is assembling a team that is committed to collaborative processes and is capable of working together effectively. IPD is built on collaboration. As a result, it can only be successful if the participants share and apply common values and goals.

3. Changing roles through BIM application

A building information model (BIM) comprises ICT frameworks and tools that can support integrated collaboration based on a life-cycle design approach. BIM is a digital representation of the physical and functional characteristics of a facility. As such it serves as a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its life cycle from inception onward (National Institute of Building Sciences NIBS, 2007). BIM facilitates time- and place-independent collaborative working. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of those stakeholders. BIM in its ultimate form, as a shared digital representation founded on open standards for interoperability, can become a virtual information model to be handed from the design team to the contractor and subcontractors and then to the client.

BIM is not the same as the earlier known computer-aided design (CAD). BIM goes further than an application to generate digital (2D or 3D) drawings. BIM is an integrated model in which all process and product information is combined, stored, elaborated, and interactively distributed to all relevant building actors. As a central model for all involved actors throughout the project life cycle, BIM develops and evolves as the project progresses. Using BIM, the proposed design and engineering solutions can be measured against the client's requirements and expected building performance.
The functionalities of BIM to support the design process extend to multiple dimensions (nD), including: three-dimensional visualisation and detailing, clash detection, material schedules, planning, cost estimates, production and logistic information, and as-built documents. During the construction process, BIM can support the communication between the building site, the factory and the design office, which is crucial for effective and efficient prefabrication and assembly processes, as well as for preventing or solving problems related to unforeseen errors or modifications. When the building is in use, BIM can be used in combination with intelligent building systems to provide and maintain up-to-date information on the building performance, including the life-cycle cost.

To unleash the full potential of more efficient information exchange in the AEC/FM industry in collaborative working using BIM, both high-quality open international standards and high-quality implementations of these standards must be in place. The IFC open standard is generally agreed to be of high quality and is widely implemented in software. Unfortunately, the certification process allows poor-quality implementations to be certified, which essentially renders the certified software useless for any practical usage with IFC. IFC-compliant BIM is actually used less than manual drafting by architects and contractors, and shows about the same usage by engineers. A recent survey shows that CAD (as a closed system) is still the major technique used in design work (over 60 per cent), while BIM is used in around 20 per cent of projects by architects and in around 10 per cent of projects by engineers and contractors.

The application of BIM to support an optimal cross-disciplinary and cross-phase collaboration opens a new dimension in the roles and relationships between the building actors.
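Clash detection, one of the nD functionalities listed above, can be reduced to an overlap test on element geometry. The sketch below is a minimal illustration using axis-aligned bounding boxes; the element names and coordinates are hypothetical, and production BIM tools operate on full IFC geometry rather than boxes:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box of a building element (min/max corners in metres)."""
    name: str
    min_pt: tuple  # (x, y, z)
    max_pt: tuple

def clash(a: Box, b: Box) -> bool:
    """Two boxes clash if their extents overlap on all three axes."""
    return all(a.min_pt[i] < b.max_pt[i] and b.min_pt[i] < a.max_pt[i]
               for i in range(3))

# Hypothetical elements: a ventilation duct passing through a structural beam.
duct = Box("duct", (0.0, 0.0, 2.8), (6.0, 0.4, 3.2))
beam = Box("beam", (2.0, -0.5, 3.0), (2.4, 1.5, 3.5))
door = Box("door", (8.0, 0.0, 0.0), (9.0, 0.2, 2.1))

print(clash(duct, beam))  # overlapping volumes -> True
print(clash(duct, door))  # disjoint volumes -> False
```

In practice such pairwise tests are only the first, coarse pass; a detected box overlap is then checked against the elements' exact solids.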
Several of the most relevant issues are: the new role of a model manager; the agreement on access rights and Intellectual Property Rights (IPR); the liability and payment arrangement according to the type of contract and in relation to integrated procurement; and the use of open international standards.

Collaborative working using BIM demands a new expert role of a model manager who possesses ICT as well as construction process know-how (InPro, 2009). The model manager deals with the system as well as with the actors. He provides and maintains the technological solutions required for BIM functionalities, manages the information flow, and improves the ICT skills of the stakeholders. The model manager does not take decisions on design and engineering solutions, nor on the organisational processes, but his roles in the chain of decision making are focused on: the development of BIM, the definition of the structure and detail level of the model, and the deployment of relevant BIM tools, such as for model checking, merging, and clash detection; the contribution to collaboration methods, especially decision-making and communication protocols, task planning, and risk management; and the management of information, in terms of data flow and storage, identification of communication errors, and decision or process (re-)tracking.

Regarding the legal and organisational issues, one of the actual questions is: "In what way does the intellectual property right (IPR) in collaborative working using BIM differ from the IPR in traditional teamwork?" In terms of combined work, the IPR of each element is attached to its creator. Although it seems to be a fully integrated design, BIM actually results from a combination of works/elements; for instance, the outline of the building design is created by the architect, the design for the electrical system is created by the electrical contractor, etc. Thus, in the case of BIM as a combined work, the IPR is similar to traditional teamwork.
Working with BIM with authorship registration functionalities may actually make it easier to keep track of the IPR.

How does collaborative working using BIM affect the contractual relationship? On the one hand, collaborative working using BIM does not necessarily change the liability position in the contract, nor does it obligate an alliance contract. The General Principles of BIM Addendum confirms: "This does not effectuate or require a restructuring of contractual relationships or shifting of risks between or among the Project Participants other than as specifically required per the Protocol Addendum and its Attachments" (ConsensusDOCS, 2008). On the other hand, changes in terms of payment schemes can be anticipated. Collaborative processes using BIM will lead to the shifting of activities to the early design phase. Much, if not all, of the activity in the detailed engineering and specification phase will be done in the earlier phases. This means that a significant payment for the engineering phase, which may count for up to 40 per cent of the design cost, can no longer be expected. As engineering work is done concurrently with the design, a new proportion of the payment in the early design phase is necessary.

4. Review of ongoing hospital building projects using BIM

In The Netherlands, the changing roles in hospital building projects are part of the strategy, which aims at achieving sustainable real estate in response to the changing healthcare policy. Referring to the literature and previous research, the main factors that influence the success of the changing roles can be summarised as: the implementation of an integrated procurement method and a life-cycle design approach for a sustainable collaborative process; the agreement on the BIM structure and the intellectual rights; and the integration of the role of a model manager. The preceding sections have discussed the conceptual thinking on how to deal with these factors effectively.
This section observes two actual projects and compares the actual practice with the conceptual view. The main issues observed in the case studies are: the selected procurement method and the roles of the involved parties within this method; the implementation of the life-cycle design approach; the type, structure, and functionalities of BIM used in the project; the openness in data sharing and transfer of the model, and the intended use of BIM in the future; and the roles and tasks of the model manager.

The pilot experience of hospital building projects using BIM in the Netherlands can be observed at University Medical Centre St Radboud (further referred to as UMC) and Maxima Medical Centre (further referred to as MMC). At UMC, the new building project for the Faculty of Dentistry in the city of Nijmegen has been designated as a BIM pilot project. At MMC, BIM is used in designing new buildings for the Medical Simulation and Mother-and-Child Centre in the city of Veldhoven.

The first case is a project at the University Medical Centre (UMC) St Radboud. UMC is more than just a hospital; it combines medical services, education and research. More than 8500 staff and 3000 students work at UMC. As a part of its innovative real estate strategy, UMC has considered using BIM for its building projects.
The new development of the Faculty of Dentistry and the surrounding buildings on the Kapittelweg in Nijmegen has been chosen as a pilot project to gather practical knowledge and experience of collaborative processes with BIM support. The main ambitions to be achieved through the use of BIM in the building projects at UMC can be summarised as follows: using 3D visualisation to enhance the coordination and communication among the building actors, and the user participation in design; integrating the architectural design with structural analysis, energy analysis, cost estimation, and planning; interactively evaluating the design solutions against the programme of requirements and specifications; reducing redesign/remake costs through clash detection during the design process; and optimising the management of the facility through the registration of medical installations and equipment, fixed and flexible furniture, product and output specifications, and operational data.

The second case is a project at the Maxima Medical Centre (MMC). MMC is a large hospital resulting from a merger between the Diaconessenhuis in Eindhoven and St Joseph Hospital in Veldhoven. Annually, the 3,400 staff of MMC provide medical services to more than 450,000 visitors and patients. A large-scale extension project of the hospital in Veldhoven is a part of its real estate strategy. A medical simulation centre and a women-and-children medical centre are among the most important new facilities within this extension project. The design has been developed using 3D modelling with several functionalities of BIM.

The findings from both cases and the analysis are as follows. Both UMC and MMC opted for a traditional procurement method, in which the client directly contracted an architect, a structural engineer, and a mechanical, electrical and plumbing (MEP) consultant in the design team. Once the design and detailed specifications are finished, a tender procedure will follow to select a contractor.
Despite the choice of this traditional method, many attempts have been made towards a closer and more effective multidisciplinary collaboration. UMC dedicated a relatively long preparation phase with the architect, structural engineer and MEP consultant before the design commenced. This preparation phase was aimed at creating a common vision of the optimal way of collaborating using BIM as ICT support. Some results of this preparation phase are: a document that defines the common ambition for the project and the collaborative working process, and a semi-formal agreement that states the commitment of the building actors to collaboration. Unlike UMC, MMC selected an architecture firm with an in-house engineering department; thus, the collaboration between the architect and structural engineer can take place within the same firm using the same software application.

Regarding the life-cycle design approach, the main attention is given to life-cycle costs, maintenance needs, and facility management. Using BIM, both hospitals intend to get a much better insight into these aspects over the life-cycle period. The life-cycle sustainability criteria are included in the assignments for the design teams. Multidisciplinary designers and engineers are asked to collaborate more closely and to interact with the end-users to address life-cycle requirements. However, ensuring that the building actors engage in an integrated collaboration to generate sustainable design solutions that meet the life-cycle…
Sample Graduation Project Foreign Literature Translation

Graduation Project Foreign Literature Translation
Major / Student name / Class / Student ID / Supervisor — 优集学院
Title of foreign material: Knowledge-Based Engineering Design Methodology
Source of foreign material: Int. J. Engng Ed., Vol. 16, No. 1
Attachments: 1. Translation of the foreign material; 2. Original foreign text

Knowledge-Based Engineering (KBE) Design Methodology
D. E. CALKINS

1. Background

The development of complex systems requires extensive engineering and management knowledge and decision making, and must satisfy many competing requirements.
Design is regarded as the primary factor determining a product's final form, cost, reliability, and market acceptance. The high-level engineering design and analysis process (the conceptual design phase) is especially important, because most of the life-cycle cost and the overall system quality are committed during this stage. Product cost reduction is most achievable in the earliest stages of design. About seventy per cent of the total life-cycle cost is committed by the end of the conceptual design phase, so the key to shortening the design cycle is to shorten the conceptual design phase, which also reduces the engineering redesign effort. Conceptual design relies on good estimates and informal heuristics during the engineering trade-off process. Traditional CAD tools offer very limited support for the conceptual design phase. Communication and cooperation across multiple disciplines are necessary for rapid design analysis (covering performance, cost, reliability, etc.). Finally, it must be possible to manage large amounts of domain-specific knowledge. The solution is to bring more resources into the conceptual design phase, shortening the overall product cycle by eliminating redesign. All of these factors argue for integrated design tools and environments that provide support in the early, integrated design phase. Such an integrated design tool enables engineers and designers from different disciplines to reach a common understanding of the design intent in the face of complex requirements and constraints. The design tool lets the design team study more configurations at a higher level of detail. The problem, then, is to architect a design tool that meets all of these requirements.
2. The virtual (digital) prototype

What is needed now is a way of representing the product design that yields a true virtual prototype, allowing early development and evaluation of the product. The virtual prototype will replace the traditional physical prototype and allow design engineers to explore "what-if" scenarios while iteratively updating their designs. A true virtual prototype represents not only shape and form, i.e., the geometry; it also represents non-geometric attributes such as weight, materials, performance, and manufacturing processes.
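The idea of a prototype that couples geometry with non-geometric attributes can be sketched as a simple data structure. The component names, materials, and cost figures below are illustrative only, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One element of the virtual prototype: geometry plus non-geometric attributes."""
    name: str
    volume_m3: float        # geometric attribute
    material: str           # non-geometric attributes follow
    density_kg_m3: float
    cost_per_kg: float      # illustrative cost model

    @property
    def mass_kg(self) -> float:
        return self.volume_m3 * self.density_kg_m3

@dataclass
class VirtualPrototype:
    components: list = field(default_factory=list)

    def total_mass(self) -> float:
        return sum(c.mass_kg for c in self.components)

    def total_cost(self) -> float:
        return sum(c.mass_kg * c.cost_per_kg for c in self.components)

# Illustrative two-component prototype
proto = VirtualPrototype([
    Component("hull", 0.02, "aluminium", 2700.0, 4.0),
    Component("frame", 0.01, "steel", 7850.0, 1.5),
])
print(proto.total_mass())  # 54.0 + 78.5 = 132.5 kg
```

Because weight and cost are derived from the same model as the geometry, a design change propagates to every evaluation, which is exactly the early "what-if" capability the text describes.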
Graduation Project Foreign Literature Original Text and Translation

Thermal analysis for the feed drive system of a CNC machine

Abstract

A high-speed drive system generates more heat through friction at contact areas, such as the ball-screw and the nut, thereby causing thermal expansion which adversely affects machining accuracy. Therefore, the thermal deformation of a ball-screw is one of the most important objects to consider for high-accuracy and high-speed machine tools. The objective of this work was to analyze the temperature increase and the thermal deformation of a ball-screw feed drive system. The temperature increase was measured using thermocouples, while a laser interferometer and a capacitance probe were applied to measure the thermal error of the ball-screw. The finite element method was used to analyze the thermal behavior of the ball-screw. The measured data were compared with numerical simulation results. Inverse analysis was applied to estimate the strength of the heat source from the measured temperature profile. The generated heat sources for different feed rates were investigated.

Keywords: Machine tool; Ball-screw; Thermal error; Finite element method; Thermocouple

1. Introduction

Precise positioning systems with high speed, high resolution and long stroke become more important in ultra-precision machining. The development of high-speed feed drive systems has been a major issue in the machine-tool industry. A high-speed feed drive system reduces the necessary non-cutting time. However, due to the backlash and friction force between the ball-screw and the nut, it is difficult to provide a highly precise feed drive system. Most current research is focused on the thermal error compensation of whole machine tools. Thermally induced error is a time-dependent nonlinear process caused by nonuniform temperature variation in the machine structure. The interaction between the heat source location, its intensity, the thermal expansion coefficient and the machine system configuration creates complex thermal behavior.
Researchers have employed various techniques, namely finite element methods, coordinate transformation methods, neural networks, etc., in modelling the thermal characteristics. A high-speed drive system generates more heat through friction at contact areas, such as the ball-screw and the nut, thereby causing thermal expansion which adversely affects machining accuracy. Therefore, the thermal deformation of a ball-screw is one of the most important objects to consider for high-accuracy and high-speed machine tools [5]. In order to achieve high-precision positioning, pre-load on the ball-screw is necessary to eliminate backlash. Ball-screw pre-load also plays an important role in improving the rigidity, noise, accuracy and life of the positioning stage [6]. However, pre-load also produces significant friction between the ball-screw and the nut that generates greater heat, leading to large thermal deformation of the ball-screw and causing low positioning accuracy. Consequently, the accuracy of the main system, such as a machine tool, is affected. Therefore, an optimum pre-load of the ball-screw is one of the most important things to consider for machine tools with high accuracy and great rigidity.

Only a few researchers have tackled this problem with some success. Huang used the multiple regression method to analyze the thermal deformation of a ball-screw feed drive system. Three temperature increases, at the front bearing, the nut and the back bearing, were selected as independent variables of the analysis model. The multiple-regression model may be used to predict the thermal deformation of the ball-screw. Kim et al. analyzed the temperature distribution along a ball-screw system using finite element methods with a bilinear type of elements. Heat induced due to friction is the main source of deformation in a ball-screw system, the heat generated being dependent on the pre-load, the lubrication of the nut and the assembly conditions.
The proposed FEM model was based on the assumption that the screw shaft and the nut are a solid and a hollow shaft, respectively. Yun et al. used the modified lumped capacitance method and a genetic algorithm to analyze the linear positioning error of the ball-screw.

The objective of this work was to analyze the temperature increase and the thermal deformation of a ball-screw feed drive system. The temperature increase was measured using thermocouples, while a laser interferometer and a capacitance probe were applied to measure the thermal error of the ball-screw. The finite element method was also applied to simulate the thermal behavior of the ball-screw. Measured data were compared with numerical simulation results. Inverse analysis was applied to estimate the strength of the heat source from the measured temperature profile. Generated heat sources for different feed rates were investigated.

2. Experimental set-up and procedure

In this study, the object used to investigate the thermal characteristics of a ball-screw system is a machining center as shown in Fig. 1. The maximum rapid speed along the x-axis of the machining center is 40 m/min and the x-axis travel is 800 mm. The table repeatedly moved along the x-axis with a stroke of 600 mm.

Fig. 1. Photograph of machine center.

The main heat source of the ball-screw system is the friction caused by the moving nut and the rotating bearings. The produced temperature increase and thermal deformation were measured to study the thermal characteristics of the ball-screw system. In order to measure the temperature increase and the thermal deformation of a ball-screw system under long-term movement of the nut, experiments were performed with the arrangement shown in Fig. 2. Temperatures at nine points were measured as shown in Fig. 2a. Two thermocouples (numbered 1 and 8) were located on the rear and front bearing surfaces, respectively. They were used to measure the surface temperatures of these two support bearings.
The last one (numbered 9) was used to measure the room temperature; the recorded room temperature was used to eliminate the effect of environmental variation. These three thermocouples were used for continuous acquisition under moving conditions. The other six thermocouples (numbered 2–7) were used to measure the surface temperatures of the ball-screw. Because the moving nut covered most of the ball-screw, these thermocouples could not be permanently fixed on the ball-screw. When a temperature measurement was necessary, the ball-screw stopped running and the six thermocouples were quickly attached to specified locations on the ball-screw. Having collected the required data, the thermocouples were quickly removed.

Thermal deformation errors were simultaneously measured with two methods. Because a thrust bearing is used on the driving side of the ball-screw, this end is considered to be fixed. A capacitance probe was installed next to the driven side of the ball-screw, with its direction perpendicular to the side surface as shown in Fig. 2b. This probe was used to record the whole thermal deformation of the ball-screw; its values can be collected continuously during running conditions. The second method was used to measure the thermal error distribution at specified times. Before the feed drive system started to operate, the original positional error distribution was measured with a laser interferometer (HP5528A). The table moved step by step (the increment of each step was 100 mm) and the positioning error was recorded at each step. Then the feed drive system started to operate and generate heat. After a certain time interval, the feed drive system was stopped to measure thermal errors. In the same way, the positioning error distribution was again collected with the laser interferometer. Subtracting the actual error from the original error at each x-coordinate gives the net thermal errors.

Fig. 2. Locations of measured points for (a) temperatures and (b) thermal errors.
Having collected the temperature increases (with thermocouples) and the deformation distribution, the feed drive system starts running again. In this study, three feed rates (10, 15 and 20 m/min) along the x-axis and three different pre-loads (0, 150 and 300 kgf·cm) were used. The table moved along the x-axis in a reciprocating motion and the stroke was 600 mm. The point temperatures and thermal errors were measured at sampling intervals of 10 min. Each stopping time was only about 10 s. These procedures were repeated until the temperature reached a steady state.

3. Experimental results and discussion

The developed experimental setup was utilized for three constant feed rates (running at 10, 15 and 20 m/min, respectively). The table reciprocated until the point temperatures and thermal errors reached a steady state. Firstly, the ball-screw pre-load was zero and its thermal characteristics were studied. In Fig. 3, temperature variations and thermal errors of the feed drive system are shown over time for a feed rate of 10 m/min. Measurements were also made for feed rates of 15 and 20 m/min. The measured data at a steady state are shown in Tables 1 and 2.

Fig. 3. (a) Measured temperature increase and (b) thermal error over time for feed rate of 10 m/min and zero pre-load.

A brief discussion can be made as follows.
1. A higher feed rate produces larger frictional heat at the interface between the ball-screw and the nut. The frictional heat generated by the support bearings and the motor also increases with the feed rate. Therefore, the temperature of the ball-screw increases with the feed rate.
2. The table travels over the middle part with a 600 mm stroke. The central part of the ball-screw therefore shows a higher temperature increase. The support bearings do not show a high temperature increase because the bearing pre-load is zero.
3. A higher rotational speed brings a larger thermal expansion of the ball-screw.
The middle part of the screw has a slightly larger thermal expansion because of its higher temperature increase; however, this phenomenon is not obvious. The thermal error at a specified point of the ball-screw is approximately proportional to the distance between this point and the front end (the motor-driving side of the screw).

Secondly, the ball-screw pre-load was set at 150 kgf·cm and its thermal characteristics were studied. In Figs. 4 and 5, temperature variations around the feed drive system and thermal errors are shown over time for feed rates of 10 and 15 m/min. Measured data are shown in Tables 1 and 2. The results reveal two interesting phenomena:
1. The temperature increases at the measured points grow gradually until the ball-screw reaches a steady state, except for the temperature increase of the bearing on the driven side. The temperature of this bearing quickly reaches a maximum value and then gradually drops.
2. The thermal errors of P6, P7 and P8 are negative at the steady state. This means that these three points move toward the driving side due to thermal expansion, while the other points move toward the driven side. Furthermore, the thermal errors of P4 to P8 show a gradual decrease after 60 min.

These phenomena are different from the previous results with no pre-load. Some experiments were carried out to study them. We found that the two bearing stands bent if the ball-screw was pre-loaded. After the pre-load was applied to the ball-screw, the original positional error distribution was measured using a laser interferometer. At this moment, the bending effects on the error distribution were included in the measured positioning error.

Table 1. Temperature distribution at steady state with different pre-loads and feed rates (unit: °C)
Table 2. Thermal error distribution at steady state with different pre-loads and feed rates (unit: µm)
Fig. 4. (a) Measured temperature increase and (b) thermal error over time for feed rate of 10 m/min and pre-load of 150 kgf·cm.
Fig. 5. (a) Measured temperature increase and (b) thermal error over time for feed rate of 15 m/min and pre-load of 150 kgf·cm.

The feed drive system starts to run and the ball-screw expands. The expansion relaxes the pre-load of the ball-screw and the bending deformation of the two bearing stands. Therefore, the points on the driving side move closer to the motor, so their thermal errors are negative; nevertheless, the points on the driven side move toward the free end, so their thermal errors are positive.

The temperature change of the rear bearing was also investigated. A journal bearing was applied on the driven side and a thrust bearing was applied on the driving side. The pre-load of the ball-screw increases the pre-load of the bearing on the driven side. When the feed drive system runs, the bearing temperature on the driven side sharply increases due to the rising pre-load. However, the thermal expansion of the ball-screw relaxes the ball-screw and decreases the pre-load of the bearing on the driven side. Therefore, the temperature gradually decreases to a steady state.

Finally, the ball-screw pre-load was set to 300 kgf·cm and its thermal characteristics were studied. In Figs. 6 and 7, temperature variations around the feed drive system and thermal errors are shown over time for feed rates of 10 and 15 m/min. The tendency with a 300 kgf·cm pre-load is similar to that with 150 kgf·cm. Measured data are shown in Tables 1 and 2.

Fig. 6. (a) Measured temperature increase and (b) thermal error over time for feed rate of 10 m/min and pre-load of 300 kgf·cm.
Fig. 7. (a) Measured temperature increase and (b) thermal error over time for feed rate of 15 m/min and pre-load of 300 kgf·cm.

4. Numerical simulation

The main heat source of a ball-screw system is the friction caused by the moving nut and the support bearings.
In this study, the temperature distribution was calculated using the FEM, based on the following assumptions:
1. The screw shaft is a solid cylinder.
2. Frictional heat generation between the moving nut and the screw shaft is uniform over the contacting surface and is proportional to the contacting time.
3. Heat generation at the support bearings is also constant per unit area and unit time.
4. Convective heat coefficients are constant during movement. The radiation term is neglected.

The problem is defined as transient heat conduction in a non-deforming medium without radiation. A classical form of the initial/boundary value problem is:

\rho c \frac{\partial T}{\partial t} = k \nabla^2 T + \dot{Q} \quad (1)
-k \frac{\partial T}{\partial \vec{n}} = q \quad (2)
-k \frac{\partial T}{\partial \vec{n}} = h (T - T_\infty) \quad (3)

where \dot{Q} is the internal heat generation rate, q the entering heat flux, \vec{n} a unit outward normal vector, T_\infty the ambient temperature and h the convective heat transfer coefficient at a given boundary. A simplified heat transfer model of the ball-screw system is described in Fig. 8 along with the boundary conditions. The nut moves reciprocally with a stroke s. The length of the nut is w. According to assumption No. 2, the frictional heat fluxes on the ball-screw are as shown in Fig. 8b. Both ends of the ball-screw are subjected to frictional heat fluxes q1 and q3 caused by the support bearings, on the rear and front ends, respectively. The other surfaces are subjected to convective heat transfer as shown in Fig. 8c.

To obtain an approximate solution, Eqs. (1)–(3) may be transformed through discretization into algebraic expressions that may be solved for the unknowns. In order to allow the replacement of the continuous system by an equivalent discrete system, the original domain is divided into elements. Four-node tetrahedral elements are chosen in this study. The elements and nodes of the ball-screw for the FEM are shown in Fig. 9. Once the temperature distribution is obtained, the thermal expansion of the ball-screw may be predicted.
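The boundary value problem above can be illustrated in one dimension with an explicit finite-difference sketch: a shaft heated by a frictional boundary flux at one end and cooled by convection along its surface. All material values, the flux magnitude, and the geometry below are illustrative, not the values identified in the paper (which uses a 3D tetrahedral FEM model):

```python
# Illustrative steel-like properties: conductivity k, density rho, specific heat c
k, rho, c = 50.0, 7800.0, 500.0
h, T_inf = 20.0, 25.0            # convection coefficient (W/m^2 K), ambient (deg C)
L, n = 1.0, 26                   # shaft length (m) and number of nodes
dx = L / (n - 1)
alpha = k / (rho * c)            # thermal diffusivity
dt = 0.4 * dx**2 / alpha         # stable explicit step (< dx^2 / (2 alpha))
q_end = 2000.0                   # entering frictional flux at x=0 (W/m^2), like a bearing
perim_over_area = 4.0 / 0.04     # perimeter/cross-section ratio for a 0.04 m dia shaft

T = [T_inf] * n
for _ in range(5000):            # march far enough to approach steady state
    Tn = T[:]
    for i in range(1, n - 1):
        cond = alpha * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx**2  # Eq. (1), 1D
        conv = h * perim_over_area * (T_inf - T[i]) / (rho * c)  # Eq. (3) as a sink
        Tn[i] = T[i] + dt * (cond + conv)
    Tn[0] = Tn[1] + q_end * dx / k   # entering-flux boundary, Eq. (2)
    Tn[-1] = Tn[-2]                  # insulated far end
    T = Tn

print(T[0], T[-1])  # the heated end is hottest; temperature decays along the shaft
```

The same ingredients (conduction, boundary fluxes, surface convection) appear in the paper's FEM discretization; the 1D sketch simply makes the roles of Eqs. (1)–(3) concrete.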
In the case of a linearly elastic, isotropic, three-dimensional solid, the stress–strain relations are given by Hooke's law as [9]:

\{\sigma\} = [C]\,(\{\varepsilon\} - \{\varepsilon_0\}) \quad (4)

where [C] is a matrix of elastic coefficients and \{\varepsilon_0\} is the vector of initial strains. In the case of heating of an isotropic material, the initial strain vector is given by:

\{\varepsilon_0\} = \alpha\,\Delta T\,[1\;1\;1\;0\;0\;0]^T \quad (5)

where \alpha is the coefficient of thermal expansion and \Delta T is the temperature change.

Fig. 9. Elements and nodes of ball-screw for FEM.

Three unknowns, q1, q2 and q3, are to be determined by inverse analysis. Firstly, initial guesses of these heat fluxes are applied in the FEM simulation to obtain the temperature distribution of the ball-screw. If the numerical results do not agree with the measured temperature distribution, the values of q1, q2 and q3 are adjusted iteratively until the numerical and measured results are in good agreement.

The calculated values of q1, q2 and q3 for an un-pre-loaded ball-screw are listed in Table 3. Measured and simulated temperature distributions for feed rates of 10, 15 and 20 m/min are shown in Fig. 10. For each feed rate, there is good agreement between the measured and simulated temperature distributions. The numerical program can also be used to simulate the thermal expansion of the ball-screw based on the calculated heat fluxes. Measured and simulated thermal expansions of the ball-screw are compared in Table 4 and also show good agreement with each other. From Table 3, the heat flux increases with the feed rate; an approximately linear relation can be found between the heat flux and the feed rate under the same operating conditions.

Table 3. Values of heat flux at different locations (unit: W/m²)
Fig. 10. Temperature increase from experimental measurement and numerical simulation for feed rate of (a) 10 m/min, (b) 15 m/min and (c) 20 m/min.

5. Conclusions

This paper proposes a systematic method to investigate the thermal characteristics of a feed drive system.
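The inverse-analysis loop, adjusting an assumed heat-source strength until the simulation matches the measurement, can be sketched with a toy forward model. The linear flux-to-temperature model and its sensitivity value below are purely illustrative; in the paper the forward model is the FEM solver itself:

```python
SENSITIVITY = 0.004  # deg C of temperature rise per (W/m^2) of flux, illustrative

def forward_model(q):
    """Toy forward model: steady-state temperature rise at a probe point,
    assumed proportional to the heat flux q. Stands in for the FEM solver."""
    return SENSITIVITY * q

def inverse_estimate(measured_rise, q0=1000.0, gain=0.5, tol=1e-4, max_iter=100):
    """Iteratively correct the flux guess until simulation matches measurement."""
    q = q0
    for _ in range(max_iter):
        residual = measured_rise - forward_model(q)  # measurement minus simulation
        if abs(residual) < tol:
            break
        q += gain * residual / SENSITIVITY           # scale residual back to flux units
    return q

q_hat = inverse_estimate(measured_rise=8.0)
print(round(q_hat))  # -> 2000, since 8.0 deg C / 0.004 = 2000 W/m^2
```

With a gain below 1 the residual shrinks geometrically each pass, mirroring the paper's "adjust q1, q2, q3 until numerical and measured results agree" procedure, only with three coupled unknowns instead of one.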
The approach measures the temperature increase and the thermal deformation under long-term movement of the working table. A simplified FEM model of the ball-screw was developed. The FEM model, combined with the measured temperature distribution, was used to determine the strength of the frictional heat source by inverse analysis. The strength of the heat source was then applied to the FEM model to calculate the thermal errors of the feed drive system. Calculated and measured thermal errors were found to agree with each other. From the results, the following conclusions can be drawn:
1. The positional accuracy is higher closer to the driving side of the ball-screw. The thermal error increases with the distance from the driving side of the ball-screw. The maximum thermal error occurs at the driven side of the ball-screw (the free end). This value can be taken as the total thermal error of the ball-screw and may be measured with a capacitance probe.
2. The ball-screw pre-load raises the temperature increase of both support bearings, especially the bearing on the driven side. The surface temperature of the ball-screw decreases because the thermal effects relax the pre-load, thereby decreasing the friction between the nut and the ball-screw.
3. The thermal expansion of the ball-screw increases with the feed rate, thereby increasing the positional error. However, an increasing pre-load reduces the thermal errors and improves the positional accuracy of the feed drive system.
4. The two bearing stands may bend if the ball-screw is pre-loaded. The thermal expansion relaxes the pre-load of the ball-screw and the bending deformation of the two bearing stands. Therefore, the points on the motor side move closer to the motor and their thermal errors are negative; nevertheless, the points on the free side move toward the free end and their thermal errors are positive.

Table 4. Thermal errors at different feed rates

Thermal analysis for the feed drive system of a CNC machining center

Abstract: A high-speed drive system generates a great deal of heat through friction at contact areas such as the ball-screw and the nut, causing thermal expansion that seriously affects machining accuracy.
Mobile Robot Miniaturization: A Tool for Investigation in Control Algorithms
Francesco Mondada, Edoardo Franzi, and Paolo Ienne

Abstract: The interaction of an autonomous mobile robot with the real world depends critically on the robot's morphology and on its environment. Modeling these aspects is extremely complex, and simulation alone is insufficient to validate control algorithms accurately. Whereas simulation environments can be quite efficient, the tools available for experiments with real robots are often inadequate. Traditional programming languages and tools seldom provide enough support for real-world experiments, which hinders the understanding of control algorithms and makes experimentation complicated and time-consuming.

We describe a miniature cylindrical robot, 55 mm in diameter and 30 mm high. Because of its small size, experiments can be performed quickly and efficiently in a small working area. Miniature peripheral modules can be designed and attached to the basic module through a standard communication scheme. A serial link allows the control algorithm to run on a workstation during debugging, so that the user has access to all the available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experiments with groups of robots are difficult to carry out with existing hardware; the size and price of the robot described here make cost-effective investigation of collective behavior possible. Such investigations motivated the design of the robot described in this paper, and experiments with about twenty units are planned for the near future.

Keywords: mobile robots, miniaturization, Khepera robot, control algorithms, plotter

I. Introduction

The field of mobile robotics is currently receiving considerable attention. Fully autonomous mobile robots have a wide range of industrial applications, including automatic cleaning of buildings and factories, mobile surveillance systems, transport of parts in factories without fixed installations, and fruit collection. These applications lie beyond the current state of the art and expose the shortcomings of traditional design methodologies. New control approaches are being explored to improve the robot's interaction with the real world and to achieve task autonomy. Brooks proposed the subsumption architecture as one example; it supports parallel processing and strong modularity.
Source text: Embedded Systems Design Using the TI MSP430 Series (selection)

This book is intended for the embedded engineer who is new to the field, and as an introduction and reference for those experienced with micro-controller development but new to the MSP430 family of devices. I assume some previous exposure to embedded development, either professionally or academically. As an example, the book describes interrupt functionality in detail, but assumes that you, the reader, already know what an interrupt is. Much of the information in this book is identical to that which is available from the TI documentation; this book is intended to supplement, not replace, that valuable source of information. The User's Guides and Application Notes together offer a depth and breadth of technical information that would be difficult to replicate in a single source.

The MSP430 family is built around a 16-bit, RISC-type, von Neumann CPU core. The '430 is competitive in price with the 8-bit controller market, and supports both 8- and 16-bit instructions, allowing migration from most similarly sized platforms. The family of devices ranges from the very small (1k ROM, 128 bytes of RAM, sub-dollar) up to larger (60k ROM, 2k RAM, with prices in the $10 range) devices. Currently, there are at least 40 flavors available, with more being added regularly. The devices are split into three families: the MSP430x3xx, which is the basic unit; the MSP430x1xx, which is a more feature-rich family; and the MSP430x4xx, which is similar to the '1xx with a built-in LCD driver. You will find these referred to as '1xx, '3xx, and '4xx devices throughout this book.

Part Numbering Convention

Part numbers for MSP430 devices are determined based on their capabilities. All device part numbers follow the template MSP430-Mt-FaFb-Mc, where:

Mt: Memory Type
C: ROM
F: Flash
P: OTP
E: EPROM (for developmental use.
There are few of these.)

FaFb: Family and Features
10, 11: Basic
12, 13: Hardware UART
14: Hardware UART, Hardware Multiplier
31, 32: LCD Controller
33: LCD Controller, Hardware UART, Hardware Multiplier
41: LCD Controller
43: LCD Controller, Hardware UART
44: LCD Controller, Hardware UART, Hardware Multiplier

Mc: Memory Capacity
0: 1kb ROM, 128b RAM
1: 2kb ROM, 128b RAM
2: 4kb ROM, 256b RAM
3: 8kb ROM, 256b RAM
4: 12kb ROM, 512b RAM
5: 16kb ROM, 512b RAM
6: 24kb ROM, 1kb RAM
7: 32kb ROM, 1kb RAM
8: 48kb ROM, 2kb RAM
9: 60kb ROM, 2kb RAM

Example: The MSP430F435 is a Flash memory device with an LCD controller and a hardware UART (the '43 family), with 16kb of ROM and 512b of RAM. Note that some features are not consistently represented in the part number (type of ADC, number of timers, etc.), and there are some other inconsistencies in the numbering scheme, so always confirm a device's capabilities against its datasheet rather than its part number alone.

The CPU

As mentioned in Chapter 1, the MSP430 utilizes a 16-bit RISC architecture, which is capable of processing instructions on either bytes or words. The CPU is identical for all members of the '430 family. It consists of a 3-stage instruction pipeline, instruction decoding, a 16-bit ALU, four dedicated-use registers, and twelve working (or scratchpad) registers. The CPU is connected to its memory through two 16-bit busses, one for addressing and the other for data. All memory, including RAM, ROM, information memory, special function registers, and peripheral registers, is mapped into a single, contiguous address space.

This architecture is unique for several reasons. First, the designers at Texas Instruments have left an awful lot of space for future development: only a fraction of the available special function registers are implemented. Second, there are plenty of working registers; register-based operations are much more efficient than the memory accesses required on most other small processors. But, beyond that, this architecture is simple, efficient and clean.
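The part-numbering convention above lends itself to a small decoder. The following Python sketch encodes only the tables given in the text; the helper name and dictionary layout are illustrative assumptions, not anything from TI.

```python
# Hypothetical helper that decodes an MSP430 part number using the
# memory-type, family/feature, and memory-capacity tables from the text.
MEMORY_TYPE = {"C": "ROM", "F": "Flash", "P": "OTP", "E": "EPROM"}
FAMILY = {
    "10": "Basic", "11": "Basic",
    "12": "Hardware UART", "13": "Hardware UART",
    "14": "Hardware UART, Hardware Multiplier",
    "31": "LCD Controller", "32": "LCD Controller",
    "33": "LCD Controller, Hardware UART, Hardware Multiplier",
    "41": "LCD Controller",
    "43": "LCD Controller, Hardware UART",
    "44": "LCD Controller, Hardware UART, Hardware Multiplier",
}
CAPACITY = {
    "0": ("1kb ROM", "128b RAM"), "1": ("2kb ROM", "128b RAM"),
    "2": ("4kb ROM", "256b RAM"), "3": ("8kb ROM", "256b RAM"),
    "4": ("12kb ROM", "512b RAM"), "5": ("16kb ROM", "512b RAM"),
    "6": ("24kb ROM", "1kb RAM"), "7": ("32kb ROM", "1kb RAM"),
    "8": ("48kb ROM", "2kb RAM"), "9": ("60kb ROM", "2kb RAM"),
}

def decode_msp430(part: str) -> dict:
    """Split e.g. 'MSP430F435' into memory type, features, and capacity."""
    suffix = part[len("MSP430"):]          # e.g. 'F435'
    mem, family, cap = suffix[0], suffix[1:3], suffix[3]
    return {
        "memory_type": MEMORY_TYPE[mem],
        "features": FAMILY[family],
        "rom_ram": CAPACITY[cap],
    }

print(decode_msp430("MSP430F435"))
```

Running the decoder on the text's own example, MSP430F435, reproduces the stated reading: a Flash device in the '43 family (LCD controller plus hardware UART) with 16kb ROM and 512b RAM.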
There are two busses, a single linear memory space, a rather vanilla processor core, and all peripherals are memory-mapped.

CPU Features

The ALU

The '430 processor includes a pretty typical ALU (arithmetic logic unit). The ALU handles addition, subtraction, comparison and logical (AND, OR, XOR) operations. ALU operations can affect the overflow, zero, negative, and carry flags. The hardware multiplier, which is not available on all devices, is implemented as a peripheral device and is not part of the ALU (see Chapter 6).

Working Registers

The '430 gives the developer twelve 16-bit working registers, R4 through R15. (R0 through R3 are used for other functions, as described later.) They are used for register mode operations (see Addressing Modes, Chapter 8), which are much more efficient than operations which require memory access. Some guidelines for their use:

Use these registers as much as possible. Any variable which is accessed often should reside in one of these locations, for the sake of efficiency.

Generally speaking, you may select any of these registers for any purpose, either data or address. However, some development tools will reserve R4 and R5 for debug information, and different compilers will use these registers in different fashions. Understand your tools.

Be consistent about use of the working registers, and clearly document their use. I wrote a routine about 8 months ago that performs extensive operations on R8, R9, and R15. Unfortunately, I don't know today what the values in R8, R9 and R15 represent. This was code I wrote to quickly validate an algorithm, rather than production code, so I didn't document it sufficiently. Now, it is relative gibberish. Don't let this happen to you.

Constant Generators

R2 and R3 serve as constant generators, so that register mode may be used instead of immediate mode for some common constants. (R2 is a dual-use register; it serves as the Status Register as well.) Generated constants include some common single-bit values (0001h, 0002h, 0004h, and 0008h), zero (0000h), and an all-1s field (0FFFFh).
Generation is based on the W(S) value in the instruction word, and is described by the table below.

W(S)    Value in R2             Value in R3
00      ——                      0000h
01      (0) (absolute mode)     0001h
10      0004h                   0002h
11      0008h                   0FFFFh

Program Counter

The Program Counter is located in R0. Since individual memory location addresses are 8-bit, but all instructions are 16-bit, the PC is constrained to even numbers (i.e., the LSB of the PC is always zero). Generally speaking, it is best to avoid direct manipulation of the PC. One exception to this rule of thumb is the implementation of a switch, where the code jumps to a spot dependent on a given value (i.e., if value=0, jump to location0; if value=1, jump to location1; etc.). This process is shown in Example 3.1.

Example 3.1 Switch Statement via Manual PC Control

    Mov  value,R15   ;put the switch value into R15
    Cmp  R15,#8      ;range checking
    Jge  outofrange  ;if R15>7, do not use PC switch
    Cmp  #0,R15      ;more range checking
    Jn   outofrange  ;
    Rla  R15         ;multiply R15 by two, since PC is always even
    Rla  R15         ;double R15 again, since symbolic jmp is 2 words long
    Add  R15,PC      ;PC goes to proper jump
    Jmp  value0
    Jmp  value1
    Jmp  value2
    Jmp  value3
    Jmp  value4
    Jmp  value5
    Jmp  value6
    Jmp  value7
Outofrange
    Jmp  RangeError

This is a relatively common approach, and most C compilers will implement switch statements with something similar. When implementing this manually (i.e., in assembly language), the programmer needs to keep several things in mind:

Always do proper range checking. In the example, we checked for conditions outside both ends of the valid range. If this is not performed correctly, the code can jump to an unintended location.

Pay close attention to the addressing modes of the jump statements. The second doubling of R15, prior to the add statement, is added because the jump statement requires two words when symbolic mode addressing is used.

Be careful that none of your interrupt handlers disturb the register used for the jump (R15 in the example).
If an interrupt handler must use such a register, the procedure is to push the register to the stack at the beginning of the ISR, and to pop the register at the end of the ISR. (See Example 3.2.)

Example 3.2 Push/Pop Combination in ISR

Timer_A_Hi_Interrupt
    Push R12         ;We will use R12
    Mov  P1IN,R12    ;use R12 as we please
    Rla  R12
    Rla  R12
    Mov  R12,&BAR    ;Done with R12
    Pop  R12         ;Restore previous value to R12
    Reti             ;return from interrupt

    ORG  0FFF0h
    DW   Timer_A_Hi_Interrupt

Status Register

The Status Register is implemented in R2, and is comprised of various system flags. The flags are all directly accessible by code, and all but three of them are changed automatically by the processor itself. The 7 most significant bits are undefined. The bits of the SR are:

• The Carry Flag (C)
Location: SR(0) (the LSB)
Function: Identifies when an operation results in a carry. Can be set or cleared by software, or automatically.
1 = Carry occurred
0 = No carry occurred

• The Zero Flag (Z)
Location: SR(1)
Function: Identifies when an operation results in a zero. Can be set or cleared by software, or automatically.
1 = Zero result occurred
0 = Nonzero result occurred

• The Negative Flag (N)
Location: SR(2)
Function: Identifies when an operation results in a negative. Can be set or cleared by software, or automatically. This flag reflects the value of the MSB of the operation result (bit 7 for byte operations, and bit 15 for word operations).
1 = Negative result occurred
0 = Positive result occurred

• The Global Interrupt Enable (GIE)
Location: SR(3)
Function: Enables or disables all maskable interrupts. Can be set or cleared by software, or automatically. Interrupts automatically reset this bit, and the reti instruction automatically sets it.
1 = Interrupts enabled
0 = Interrupts disabled

• The CPU off bit (CPUOff)
Location: SR(4)
Function: Enables or disables the CPU core. Can be cleared by software, and is reset by enabled interrupts. None of the memory, peripherals, or clocks are affected by this bit.
This bit is used as a power saving feature.
1 = CPU is off
0 = CPU is on

• The Oscillator off bit (OSCOff)
Location: SR(5)
Function: Enables or disables the crystal oscillator circuit (LFXT1). Can be cleared by software, and is reset by enabled external interrupts. OSCOff shuts down everything, including peripherals. RAM and register contents are preserved. This bit is used as a power saving feature.
1 = LFXT1 is off
0 = LFXT1 is on

• The System Clock Generator bits (SCG1, SCG0)
Location: SR(7), SR(6)
Function: These bits, along with OSCOff and CPUOff, define the power mode of the device.

• The Overflow Flag (V)
Location: SR(8)
Function: Identifies when an operation results in an overflow. Can be set or cleared by software, or automatically. Overflow occurs when two positive numbers are added together and the result is negative, or when two negative numbers are added together and the result is positive.
1 = Overflow result occurred
0 = No overflow result occurred

Four of these flags (Overflow, Negative, Carry, and Zero) drive program control, via instructions such as cmp (compare) and jz (jump if Zero flag is set). You will see these flags referred to often in this book, as their function represents a fundamental building block. The instruction set is detailed in Chapter 9, and each base instruction description there details the interaction between flags and instructions. As a programmer, you need to understand this interaction.

Stack Pointer

The Stack Pointer is implemented in R1. Like the Program Counter, the LSB is fixed at zero, so the value is always even. The stack is implemented in RAM, and it is common practice to start the SP at the top of RAM. A push decrements the SP by one word (SP = SP − 2) and puts the value to be pushed at the new SP. Pop does the reverse. Call statements and interrupts push the PC, and ret and reti statements pop the value from the TOS (top of stack) back into the PC. Initialize the SP once at startup, and don't fiddle with it manually after that.
As long as you are wary of two stack conditions, the stack pointer manages itself. These two conditions are:

Asymmetric push/pop combinations. Every push should have a matching pop. If you pop past an empty stack, the SP moves out of RAM, and the program will fail.

Stack encroachment. Remember, the stack is implemented in RAM. If your program data grows into the region used by the stack, or vice versa, the two will silently corrupt each other.

Special Function Registers

Special function registers are, as you might expect, registers with dedicated system functions. Sixteen bytes are reserved for these registers, at memory locations 0000h through 000Fh. However, only the first six are used. Locations 0000h and 0001h contain interrupt enables, and locations 0002h and 0003h contain interrupt flags. These are described in Chapter 3. Locations 0004h and 0005h contain module enable flags. Currently, only two bits are implemented in each byte. These bits are used for the USARTs.

Peripheral Registers

All on-chip peripheral registers are mapped into memory, immediately after the special function registers. There are two types of peripheral registers: byte-addressable, which are mapped in the space from 010h to 0FFh, and word-addressable, which are mapped from 0100h to 01FFh.

RAM

RAM always begins at location 0200h, and is contiguous up to its final address. RAM is used for all scratchpad variables, global variables, and the stack. Some rules of thumb for RAM usage: the developer needs to be careful that scratchpad allocation and stack usage do not encroach on each other, or on global variables. Accidental sharing of RAM is a very common bug, and can be difficult to chase down. You need to clearly understand how much RAM you need, and always deallocate as quickly as is reasonable.

Boot Memory (flash devices only)

Boot memory is found in flash devices only, located in memory locations 0C00h through 0FFFh. It is the only ROM in the flash devices. This memory contains the bootstrap loader, which is used for programming of flash blocks via a USART module.

Information Memory (flash devices only)

Flash devices in the '430 family contain a block of information memory. This information memory acts as onboard EEPROM, allowing critical variables to be preserved through power down. It is divided into two 128-byte segments.
The first of these segments is located at addresses 01000h through 0107Fh, and the second is at 01080h through 010FFh.

Code Memory

Code memory is always contiguous at the end of the address space (i.e., it always runs to location 0FFFFh). So, for 8k devices, code runs from 0E000h to 0FFFFh, and for the 60k devices, the code runs from 01100h to 0FFFFh. All code, tables, and constants reside in this memory space.

Interrupt Vectors

Interrupt vectors are located at the very end of memory space, in locations 0FFE0h through 0FFFEh. Programming and use of these are described in detail in Chapter 3.

Memory Types

The MSP430 is available with any one of several different memory types. The memory type is identified by the letter immediately following "MSP430" in the part number. (Example: all MSP430Fxxx parts are flash devices.)

ROM

ROM devices, also known as masked devices, are identified by the letter "C" in the part number. They are strict ROM devices, shipped pre-programmed. Because of NRE (non-recurring engineering) costs, masked ROM is only cost-efficient at high production volumes.

OTP

OTP is an acronym for "one time programmable", which pretty well describes the functionality of these devices. Identified by the letter "P" in the part number, OTP parts are a good compromise between ROM and flash parts. OTPs are shipped blank, and can be programmed at any time. They are typically more expensive than ROM, and they also require programming, which can be a useful intermediate step when you are still uncertain about the stability of the design.

EPROM

TI offers windowed EPROM versions of several devices, intended for use in development. They are identified by the letter "E" in the part number. These devices are electrically programmable and UV-erasable. EPROM versions are only available for a few devices, and typically cost on the order of $50 each.
They are not intended for production use, but make ideal platforms for emulating ROM devices in development.

Flash

Flash devices, identified by the letter "F" in the part number, have become popular over the past few years. They are more expensive, but code space can be erased and reprogrammed, thousands of times if necessary. This capability allows for features such as downloadable firmware, and lets the developer substitute code space for an external EEPROM.

[Chinese translation] Embedded Systems Design Using the TI MSP430 Series (excerpt). This book is written for embedded engineers who are new to the field, and as an introduction and reference for those with microcontroller development experience who are new to the MSP430 family of devices.
Translated foreign-language material (II). Source: Jules Houde, "Sustainable development slowed down by bad construction practices and natural and technological disasters".

2. Translation: Durability of Concrete Structures

Even concrete, which engineers regard as the most durable and rational of construction materials, is vulnerable under certain conditions to cracking, reinforcement corrosion, chemical attack, and a series of other adverse factors. Various cases of inadequate durability of concrete structures have been reported in recent years. Particularly alarming are the growing signs of premature deterioration of concrete structures. The annual cost of maintaining the durability of concrete keeps rising: recent domestic and international surveys reveal that these costs doubled during the eighties and were set to triple during the nineties.

The growing number of durability failures has caught the concrete industry off guard. Concrete structures represent not only an enormous social investment, but also the costs that may be incurred if durability problems are not resolved in time; moreover, since concrete is the dominant construction material, durability problems can lead to unfair global competition and damage to the industry's reputation. The international concrete industry therefore faces the dual challenge of developing and implementing sound measures to solve the current durability problems, namely:

Finding effective measures to counter the threat that premature deterioration poses to the remaining service life of existing structures.

Incorporating new structural knowledge, experience and research results so that durability can be monitored, thereby ensuring the required service performance of future concrete structures.

Everyone involved in the planning, design and construction process should be able to acquire at least a minimum understanding of the possible deterioration processes and the decisive influencing parameters. This basic knowledge is a prerequisite for making the right decisions at the right time to ensure the durability requirements of concrete structures.

Reinforcement protection. Reinforcement in concrete is protected from corrosion by an alkaline passive layer (pH greater than 12.5). This passive layer prevents the steel from dissolving. Thus, even if all other conditions for corrosion are met (chiefly oxygen and moisture), corrosion of the reinforcement remains impossible. Carbonation of the concrete or the action of chloride ions can lower the pH locally or over larger areas. When the pH at the reinforcement falls below 9, or the chloride content exceeds a critical value, the passive layer and its corrosion protection fail, and corrosion of the reinforcement becomes possible.
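The passivation criteria just described (passive layer lost when pH drops below 9 or chloride content exceeds a critical value) amount to a simple two-condition check. The sketch below encodes it in Python; the chloride threshold of 0.4 and its unit (% of cement mass) are assumed illustrative values, not figures from the source text.

```python
# Hedged sketch of the passivation criteria described above. The chloride
# threshold and its unit (% of cement mass) are assumed for illustration.
def passivation_intact(ph: float, chloride_pct: float,
                       chloride_critical: float = 0.4) -> bool:
    """Return True while the passive layer protects the reinforcement:
    pH at the steel must stay at or above 9 and the chloride content must
    stay at or below the critical value."""
    return ph >= 9.0 and chloride_pct <= chloride_critical

print(passivation_intact(12.5, 0.1))  # fresh, alkaline concrete
print(passivation_intact(8.5, 0.1))   # carbonated cover: protection lost
```

A real durability assessment would of course also weigh the other corrosion prerequisites the text mentions (oxygen and moisture supply); this check covers only the loss of passivation.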
Graduation project: English literature and Chinese translation. Class: 10210A02; Name: Liang Zhuoyue; Student ID: 1021010633; School: School of Software; Major: Software Engineering (Software Development and Testing); Advisors: Han Tao, Chang Xuqing; June 2014. The English text is from IBM System Journal, 2006, 44(2): 33-37; author: Malcolm Davis.

Struts — An Open-source MVC Implementation

This article introduces Struts, a Model-View-Controller implementation that uses servlets and JavaServer Pages (JSP) technology. Struts can help you control change in your Web project and promote specialization. Even if you never implement a system with Struts, you may get some ideas for your future servlets and JSP page implementations.

Introduction

Kids in grade school put HTML pages on the Internet. However, there is a monumental difference between a grade-school page and a professionally developed Web site. The page designer (or HTML developer) must understand colors, the customer, product flow, page layout, browser compatibility, image creation, JavaScript, and more. Putting a great-looking site together takes a lot of work, and most Java developers are more interested in creating a great-looking object interface than a user interface. JavaServer Pages (JSP) technology provides the glue between the page designer and the Java developer.

If you have worked on a large-scale Web application, you understand the term change. Model-View-Controller (MVC) is a design pattern put together to help control change. MVC decouples interface from business logic and data. Struts is an MVC implementation that uses Servlets 2.2 and JSP 1.1 tags, from the J2EE specifications, as part of the implementation. You may never implement a system with Struts, but looking at Struts may give you some ideas on your future Servlets and JSP implementations.

Model-View-Controller (MVC)

JSP tags solved only part of our problem. We still have issues with validation, flow control, and updating the state of the application. This is where MVC comes to the rescue.
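MVC's decoupling of interface from business logic and data can be sketched generically. This is a minimal, language-agnostic illustration of the pattern in Python, not Struts' actual Java API; the class and method names are illustrative only.

```python
# Minimal generic MVC sketch (not the Struts Java API).
class Model:
    """Holds application state; knows nothing of view or controller."""
    def __init__(self):
        self._count = 0

    def increment(self):
        self._count += 1

    @property
    def count(self):          # views read state through getters only
        return self._count

class View:
    """Presents the model; reads getters, never mutates state."""
    def render(self, model: Model) -> str:
        return f"count = {model.count}"

class Controller:
    """Reacts to user input and updates the model."""
    def __init__(self, model: Model):
        self.model = model

    def handle(self, event: str):
        if event == "click":
            self.model.increment()

model = Model()
view = View()
controller = Controller(model)
controller.handle("click")
print(view.render(model))  # count = 1
```

The point of the separation is visible even at this scale: the view never calls a setter, and the model has no reference to either collaborator, so each part can change independently.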
MVC helps resolve some of the issues with the single-module approach by dividing the problem into three categories:

Model
The model contains the core of the application's functionality. The model encapsulates the state of the application. Sometimes the only functionality it contains is state. It knows nothing about the view or controller.

View
The view provides the presentation of the model. It is the look of the application. The view can access the model getters, but it has no knowledge of the setters. In addition, it knows nothing about the controller. The view should be notified when changes to the model occur.

Controller
The controller reacts to the user input. It creates and sets the model.

MVC Model 2

The Web brought some unique challenges to software developers, most notably the stateless connection between the client and the server. This stateless behavior made it difficult for the model to notify the view of changes. On the Web, the browser has to re-query the server to discover modifications to the state of the application. Another noticeable change is that the view uses different technology for implementation than the model or controller. Of course, we could use Java (or Perl, C/C++, or whatever) code to generate HTML. There are several disadvantages to that approach:

Java programmers should develop services, not HTML.
Changes to layout would require changes to code.
Customers of the service should be able to create pages to meet their specific needs.
The page designer isn't able to have direct involvement in page development.
HTML embedded into code is ugly.

For the Web, the classical form of MVC needed to change. Figure 4 displays the Web adaptation of MVC, also commonly known as MVC Model 2 or MVC2.

Struts details

Displayed in Figure 6 is a stripped-down UML diagram of the org.apache.struts.action package. Figure 6 shows the minimal relationships among ActionServlet (Controller), ActionForm (Form State), and Action (Model Wrapper).

The ActionServlet class

Do you remember the days of function mappings?
You would map some input event to a pointer to a function. If you were slick, you would place the configuration information into a file and load the file at run time. Function pointer arrays were the good old days of structured programming in C. Life is better now that we have Java technology, XML, J2EE, and all that. The Struts Controller is a servlet that maps events (an event generally being an HTTP post) to classes. And guess what -- the Controller uses a configuration file so you don't have to hard-code the values. Life changes, but stays the same.

ActionServlet is the Command part of the MVC implementation and is the core of the framework. ActionServlet (Command) creates and uses Action, ActionForm, and ActionForward. As mentioned earlier, the struts-config.xml file configures the Command. During the creation of the Web project, Action and ActionForm are extended to solve the specific problem space. The file struts-config.xml instructs ActionServlet on how to use the extended classes. There are several advantages to this approach:

The entire logical flow of the application is in a hierarchical text file. This makes it easier to view and understand, especially with large applications.
The page designer does not have to wade through Java code to understand the flow of the application.
The Java developer does not need to recompile code when making flow changes.
Command functionality can be added by extending ActionServlet.

The ActionForm class

ActionForm maintains the session state for the Web application. ActionForm is an abstract class that is sub-classed for each input form model.
When I say input form model, I am saying ActionForm represents a general concept of data that is set or updated by an HTML form. For instance, you may have a UserActionForm that is set by an HTML form. The Struts framework will:

Check to see if a UserActionForm exists; if not, it will create an instance of the class.
Set the state of the UserActionForm using corresponding fields from the HttpServletRequest. No more dreadful request.getParameter() calls. For instance, the Struts framework will take fname from the request stream and call UserActionForm.setFname(). The framework updates the state of the UserActionForm before passing it to the business wrapper UserAction.
Before passing it to the Action class, conduct form state validation by calling the validate() method on UserActionForm. Note: this is not always wise to do. There might be ways of using UserActionForm in other pages or business objects, where the validation might be different; validation of the state might be better placed in the UserAction class.

The UserActionForm can be maintained at a session level. Notes:

The struts-config.xml file controls which HTML form request maps to which ActionForm.
Multiple requests can be mapped to UserActionForm.
UserActionForm can be mapped over multiple pages for things such as wizards.

The Action class

The Action class is a wrapper around the business logic. The purpose of the Action class is to translate the HttpServletRequest to the business logic.
To use Action, subclass and overwrite the process() method. The ActionServlet (Command) passes the parameterized classes to ActionForm using the perform() method. Again, no more dreadful request.getParameter() calls. By the time the event gets here, the input form data (or HTML form data) has already been translated out of the request stream and into an ActionForm class.

Struts, an MVC2 implementation

Struts is a set of cooperating classes, servlets, and JSP tags that make up a reusable MVC2 design. This definition implies that Struts is a framework, rather than a library, but Struts also contains an extensive tag library and utility classes that work independently of the framework. Figure 5 displays an overview of Struts.

Struts overview

Client browser
An HTTP request from the client browser creates an event. The Web container will respond with an HTTP response.

Controller
The Controller receives the request from the browser, and makes the decision where to send the request. With Struts, the Controller is a command design pattern implemented as a servlet. The struts-config.xml file configures the Controller.

Business logic
The business logic updates the state of the model and helps control the flow of the application. With Struts this is done with an Action class as a thin wrapper to the actual business logic.

Model state
The model represents the state of the application. The business objects update the application state. An ActionForm bean represents the Model state at a session or request level, and not at a persistent level. The JSP file reads information from the ActionForm bean using JSP tags.

View
The view is simply a JSP file. There is no flow logic, no business logic, and no model information -- just tags. Tags are one of the things that make Struts unique compared to other frameworks like Velocity.

Note: "Think thin" when extending the Action class.
The Action class should control the flow and not the logic of the application. By placing the business logic in a separate package or EJB, we allow flexibility and reuse. Another way of thinking about the Action class is as the Adapter design pattern. The purpose of the Action is to "convert the interface of a class into another interface the clients expect. Adapter lets classes work together that couldn't otherwise because of incompatible interfaces" (from Design Patterns: Elements of Reusable Object-Oriented Software by the GoF). The client in this instance is the ActionServlet, which knows nothing about our specific business class interface. Therefore, Struts provides a business interface it does understand, Action. By extending Action, we make our business interface compatible with the Struts business interface. (An interesting observation is that Action is a class and not an interface. Action started as an interface and changed into a class over time. Nothing's perfect.)

The Error classes

The UML diagram (Figure 6) also includes ActionError and ActionErrors. ActionError encapsulates an individual error message. ActionErrors is a container of ActionError classes that the View can access using tags. ActionErrors is Struts' way of keeping up with a list of errors.

The ActionMapping class

An incoming event is normally in the form of an HTTP request, which the servlet container turns into an HttpServletRequest. The Controller looks at the incoming event and dispatches the request to an Action class. The struts-config.xml file determines what Action class the Controller calls. The struts-config.xml configuration information is translated into a set of ActionMapping objects, which are put into a container of ActionMappings. (If you have not noticed it, classes that end with "s" are containers.) The ActionMapping contains the knowledge of how a specific event maps to specific Actions. The ActionServlet (Command) passes the ActionMapping to the Action class via the perform() method.
This allows Action to access the information to control flow.

ActionMappings
ActionMappings is a collection of ActionMapping objects.

Struts pros

Use of the JSP tag mechanism
The tag feature promotes reusable code and abstracts Java code from the JSP file. This feature allows nice integration into JSP-based development tools that allow authoring with tags.

Tag library
Why re-invent the wheel, or a tag library? If you cannot find something you need in the library, contribute. In addition, Struts provides a starting point if you are learning JSP tag technology.

Open source
You have all the advantages of open source, such as being able to see the code and having everyone else using the library reviewing the code. Many eyes make for great code review.

Sample MVC implementation
Struts offers some insight if you want to create your own MVC implementation.

Manage the problem space
Divide and conquer is a nice way of solving the problem and making the problem manageable. Of course, the sword cuts both ways: the problem is more complex and needs more management.

Struts cons

Youth
Struts development is still in preliminary form. They are working toward releasing a version 1.0, but as with any 1.0 version, it does not provide all the bells and whistles.

Change
The framework is undergoing a rapid amount of change. A great deal of change has occurred between Struts 0.5 and 1.0. You may want to download the most current Struts nightly distributions, to avoid deprecated methods. In the last 6 months, I have seen the Struts library grow from 90K to over 270K. I had to modify my examples several times because of changes in Struts, and I am not going to guarantee my examples will work with the version of Struts you download.

Correct level of abstraction
Does Struts provide the correct level of abstraction? What is the proper level of abstraction for the page designer? That is the $64K question. Should we allow a page designer access to Java code in page development?
Some frameworks like Velocity say no, and provide yet another language to learn for Web development. There is some validity to limiting Java code access in UI development. Most importantly, give a page designer a little bit of Java, and he will use a lot of Java. I saw this happen all the time in Microsoft ASP development. In ASP development, you were supposed to create COM objects and then write a little ASP script to glue it all together. Instead, the ASP developers would go crazy with ASP script. I would hear "Why wait for a COM developer to create it when I can program it directly with VBScript?" Struts helps limit the amount of Java code required in a JSP file via tag libraries. One such library is the Logic Tag, which manages conditional generation of output, but this does not prevent the UI developer from going nuts with Java code. Whatever type of framework you decide to use, you should understand the environment in which you are deploying and maintaining the framework. Of course, this task is easier said than done.

Limited scope
Struts is a Web-based MVC solution that is meant to be implemented with HTML, JSP files, and servlets.

J2EE application support
Struts requires a servlet container that supports the JSP 1.1 and Servlet 2.2 specifications. This alone will not solve all your install issues, unless you are using Tomcat 3.2. I have had a great deal of problems installing the library with Netscape iPlanet 6.0, which is supposedly the first J2EE-compliant application server. I recommend visiting the Struts User Mailing List archive (see Resources) when you run into problems.

Complexity
Separating the problem into parts introduces complexity. There is no question that some education will have to go on to understand Struts. With the constant changes occurring, this can be frustrating at times. Welcome to the Web.

Where is...
I could point out other issues; for instance, where are the client-side validations, adaptable workflow, and dynamic strategy pattern for the controller?
However, at this point, it is too easy to be a critic, and some of the issues are insignificant, or are reasonable for a 1.0 release. The way the Struts team goes at it, Struts might have these features by the time you read this article, or soon after.

Future of Struts

Things change rapidly in this new age of software development. In less than 5 years, I have seen things go from cgi/Perl, to ISAPI/NSAPI, to ASP with VB, and now Java and J2EE. Sun is working hard to adapt changes to the JSP/servlet architecture, just as they have in the past with the Java language and API. You can obtain drafts of the new JSP 1.2 and Servlet 2.3 specifications from the Sun Web site. Additionally, a standard tag library for JSP files is appearing.

[Chinese translation] Struts — an Open-Source MVC Implementation. This article introduces Struts, a Model-View-Controller implementation that uses servlet and JavaServer Pages technology.
Anti-Aircraft Fire Control and the Development of Integrated Systems at Sperry

The dawn of the electrical age brought new types of control systems. Able to transmit data between distributed components and effect action at a distance, these systems employed feedback devices as well as human beings to close control loops at every level. By the time theories of feedback and stability began to become practical for engineers in the 1930s, a tradition of remote and automatic control engineering had developed that built distributed control systems with centralized information processors. These two strands of technology, control theory and control systems, came together to produce the large-scale integrated systems typical of World War II and after.

Elmer Ambrose Sperry (1860-1930) and the company he founded, the Sperry Gyroscope Company, led the engineering of control systems between 1910 and 1940. Sperry and his engineers built distributed data transmission systems that laid the foundations of today's command and control systems. Sperry's fire control systems included more than governors or stabilizers; they consisted of distributed sensors, data transmitters, central processors, and outputs that drove machinery. This article tells the story of Sperry's involvement in anti-aircraft fire control between the world wars and shows how an industrial firm conceived of control systems before the common use of control theory. In the 1930s the task of fire control became progressively more automated, as Sperry engineers gradually replaced human operators with automatic devices. Feedback, human interface, and system integration posed challenging problems for fire control engineers during this period.
By the end of the decade these problems would become critical as the country struggled to build up its technology to meet the demands of an impending war.

Anti-Aircraft Artillery Fire Control
Before World War I, developments in ship design, guns, and armor drove the need for improved fire control on Navy ships. By 1920, similar forces were at work in the air: wartime experiences and postwar developments in aerial bombing created the need for sophisticated fire control for anti-aircraft artillery. Shooting an airplane out of the sky is essentially a problem of "leading" the target. As aircraft developed rapidly in the twenties, their increased speed and altitude rapidly pushed the task of computing the lead out of the range of human reaction and calculation. Fire control equipment for anti-aircraft guns was a means of technologically aiding human operators to accomplish a task beyond their natural capabilities.

During the first world war, anti-aircraft fire control had undergone some preliminary development. Elmer Sperry, as chairman of the Aviation Committee of the Naval Consulting Board, developed two instruments for this problem: a goniometer, a range-finder, and a pretelemeter, a fire director or calculator. Neither, however, was widely used in the field. When the war ended in 1918 the Army undertook virtually no new development in anti-aircraft fire control for five to seven years. In the mid-1920s, however, the Army began to develop individual components for anti-aircraft equipment including stereoscopic height-finders, searchlights, and sound location equipment. The Sperry Company was involved in the latter two efforts. About this time Maj. Thomas Wilson, at the Frankford Arsenal in Philadelphia, began developing a central computer for fire control data, loosely based on the system of "director firing" that had developed in naval gunnery.
Wilson's device resembled earlier fire control calculators, accepting data as input from sensing components, performing calculations to predict the future location of the target, and producing direction information to the guns.

Integration and Data Transmission
Still, the components of an anti-aircraft battery remained independent, tied together only by telephone. As Preston R. Bassett, chief engineer and later president of the Sperry Company, recalled, "no sooner, however, did the components get to the point of functioning satisfactorily within themselves, than the problem of properly transmitting the information from one to the other came to be of prime importance." Tactical and terrain considerations often required that different fire control elements be separated by up to several hundred feet. Observers telephoned their data to an officer, who manually entered it into the central computer, read off the results, and telephoned them to the gun installations. This communication system introduced both a time delay and the opportunity for error. The components needed tighter integration, and such a system required automatic data communications.

In the 1920s the Sperry Gyroscope Company led the field in data communications. Its experience came from Elmer Sperry's most successful invention, a true-north-seeking gyro for ships. A significant feature of the Sperry Gyrocompass was its ability to transmit heading data from a single central gyro to repeaters located at a number of locations around the ship. The repeaters, essentially follow-up servos, connected to another follow-up, which tracked the motion of the gyro without interference. These data transmitters had attracted the interest of the Navy, which needed a stable heading reference and a system of data communication for its own fire control problems.
In 1916, Sperry built a fire control system for the Navy which, although it placed minimal emphasis on automatic computing, was a sophisticated distributed data system. By 1920 Sperry had installed these systems on a number of U.S. battleships. Because of the Sperry Company's experience with fire control in the Navy, as well as Elmer Sperry's earlier work with the goniometer and the pretelemeter, the Army approached the company for help with data transmission for anti-aircraft fire control. To Elmer Sperry, it looked like an easy problem: the calculations resembled those in a naval application, but the physical platform, unlike a ship at sea, was anchored to the ground. Sperry engineers visited Wilson at the Frankford Arsenal in 1925, and Elmer Sperry followed up with a letter expressing his interest in working on the problem. He stressed his company's experience with naval problems, as well as its recent developments in bombsights, "work from the other end of the proposition." Bombsights had to incorporate numerous parameters of wind, groundspeed, airspeed, and ballistics, so an anti-aircraft gun director was in some ways a reciprocal bombsight. In fact, part of the reason anti-aircraft fire control equipment worked at all was that it assumed attacking bombers had to fly straight and level to line up their bombsights. Elmer Sperry's interests were warmly received, and in 1925 and 1926 the Sperry Company built two data transmission systems for the Army's gun directors.

The original director built at Frankford was designated T-1, or the "Wilson Director." The Army had purchased a Vickers director manufactured in England, but encouraged Wilson to design one that could be manufactured in this country. Sperry's two data transmission projects were to add automatic communications between the elements of both the Wilson and the Vickers systems (Vickers would eventually incorporate the Sperry system into its product).
Wilson died in 1927, and the Sperry Company took over the entire director development from the Frankford Arsenal with a contract to build and deliver a director incorporating the best features of both the Wilson and Vickers systems. From 1927 to 1935, Sperry undertook a small but intensive development program in anti-aircraft systems. The company financed its engineering internally, selling directors in small quantities to the Army, mostly for evaluation, for only the actual cost of production. Of the nearly 10 models Sperry developed during this period, it never sold more than 12 of any model; the average order was five. The Sperry Company offset some development costs by sales to foreign governments, especially Russia, with the Army's approval.

The T-6 Director
Sperry's modified version of Wilson's director was designated T-4 in development. This model incorporated corrections for air density, super-elevation, and wind. Assembled and tested at Frankford in the fall of 1928, it had problems with backlash and reliability in its predicting mechanisms. Still, the Army found the T-4 promising and after testing returned it to Sperry for modification. The company changed the design for simpler manufacture, eliminated two operators, and improved reliability. In 1930 Sperry returned with the T-6, which tested successfully. By the end of 1931, the Army had ordered 12 of the units. The T-6 was standardized by the Army as the M-2 director. Since the T-6 was the first anti-aircraft director to be put into production, as well as the first one the Army formally procured, it is instructive to examine its operation in detail. A technical memorandum dated 1930 explained the theory behind the T-6 calculations and how the equations were solved by the system. Although this publication lists no author, it probably was written by Earl W. Chafee, Sperry's director of fire control engineering.
The director was a complex mechanical analog computer that connected four three-inch anti-aircraft guns and an altitude finder into an integrated system (see Fig. 1). Just as with Sperry's naval fire control system, the primary means of connection were "data transmitters," similar to those that connected gyrocompasses to repeaters aboard ship. The director takes three primary inputs. Target altitude comes from a stereoscopic range finder. This device has two telescopes separated by a baseline of 12 feet; a single operator adjusts the angle between them to bring the two images into coincidence. Slant range, or the raw target distance, is then corrected to derive its altitude component. Two additional operators, each with a separate telescope, track the target, one for azimuth and one for elevation. Each sighting device has a data transmitter that measures angle or range and sends it to the computer. The computer receives these data and incorporates manual adjustments for wind velocity, wind direction, muzzle velocity, air density, and other factors. The computer calculates three variables: azimuth, elevation, and a setting for the fuze. The latter, manually set before loading, determines the time after firing at which the shell will explode. Shells are not intended to hit the target plane directly but rather to explode near it, scattering fragments to destroy it.

The director performs two major calculations. First, prediction models the motion of the target and extrapolates its position to some time in the future. Prediction corresponds to "leading" the target. Second, the ballistic calculation figures how to make the shell arrive at the desired point in space at the future time and explode, solving for the azimuth and elevation of the gun and the setting on the fuze. This calculation corresponds to the traditional artilleryman's task of looking up data in a precalculated "firing table" and setting gun parameters accordingly.
Ballistic calculation is simpler than prediction, so we will examine it first. The T-6 director solves the ballistic problem by directly mechanizing the traditional method, employing a "mechanical firing table." Traditional firing tables printed on paper show solutions for a given angular height of the target, for a given horizontal range, and a number of other variables. The T-6 replaces the firing table with a Sperry "ballistic cam." A three-dimensionally machined cone-shaped device, the ballistic cam or "pin follower" solves a predetermined function. Two independent variables are input by the angular rotation of the cam and the longitudinal position of a pin that rests on top of the cam. As the pin moves up and down the length of the cam, and as the cam rotates, the height of the pin traces a function of two variables: the solution to the ballistics problem (or part of it). The T-6 director incorporates eight ballistic cams, each solving for a different component of the computation, including superelevation, time of flight, wind correction, muzzle velocity, and air density correction. Ballistic cams represented, in essence, the stored data of the mechanical computer. Later directors could be adapted to different guns simply by replacing the ballistic cams with a new set, machined according to different firing tables. The ballistic cams comprised a central component of Sperry's mechanical computing technology. The difficulty of their manufacture would prove a major limitation on the usefulness of Sperry directors.

The T-6 director performed its other computational function, prediction, in an innovative way as well. Though the target came into the system in polar coordinates (azimuth, elevation, and range), targets usually flew a constant trajectory (it was assumed) in rectangular coordinates, i.e., straight and level. Thus, it was simpler to extrapolate to the future in rectangular coordinates than in the polar system.
So the Sperry director projected the movement of the target onto a horizontal plane, derived the velocity from changes in position, added a fixed time multiplied by the velocity to determine a future position, and then converted the solution back into polar coordinates. This method became known as the "plan prediction method" because of the representation of the data on a flat "plan" as viewed from above; it was commonly used through World War II. In the plan prediction method, "the actual movement of the target is mechanically reproduced on a small scale within the Computer and the desired angles or speeds can be measured directly from the movements of these elements."

Together, the ballistic and prediction calculations form a feedback loop. Operators enter an estimated "time of flight" for the shell when they first begin tracking. The predictor uses this estimate to perform its initial calculation, which feeds into the ballistic stage. The output of the ballistics calculation then feeds back an updated time-of-flight estimate, which the predictor uses to refine the initial estimate. Thus "a cumulative cycle of correction brings the predicted future position of the target up to the point indicated by the actual future time of flight."

A square box about four feet on each side (see Fig. 2), the T-6 director was mounted on a pedestal on which it could rotate. Three crew would sit on seats and one or two would stand on a step mounted to the machine. The remainder of the crew stood on a fixed platform; they would have had to shuffle around as the unit rotated. This was probably not a problem, as the rotation angles were small. The director's pedestal mounted on a trailer, on which data transmission cables and the range finder could be packed for transportation. We have seen that the T-6 computer took only three inputs, elevation, azimuth, and altitude (range), and yet it required nine operators.
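The plan prediction method and the cumulative cycle of correction described above can be sketched numerically. This is an illustrative reconstruction under stated assumptions (straight-and-level target, a constant-shell-speed stand-in for the ballistic cams, invented function names), not the T-6's actual mechanism:

```python
import math

def plan_predict(az1, rng1, az2, rng2, dt, lead_time):
    """Plan prediction: project two polar observations (azimuth in
    radians, horizontal range) onto a flat 'plan', derive the target
    velocity from the change in position, extrapolate linearly under
    the straight-and-level assumption, and convert back to polar."""
    # Project observations onto the horizontal plane (rectangular coords)
    x1, y1 = rng1 * math.cos(az1), rng1 * math.sin(az1)
    x2, y2 = rng2 * math.cos(az2), rng2 * math.sin(az2)
    # Velocity from change in position over the observation interval
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt
    # Extrapolate a fixed time ahead, then convert back to polar
    xf, yf = x2 + vx * lead_time, y2 + vy * lead_time
    return math.atan2(yf, xf), math.hypot(xf, yf)

def refine_time_of_flight(az1, rng1, az2, rng2, dt, tof_guess,
                          ballistic_tof, cycles=5):
    """The feedback loop between predictor and ballistic stage: the
    predictor uses an estimated time of flight, the ballistic stage
    returns an updated estimate from the predicted range, and the
    cycle of correction repeats.  `ballistic_tof(range) -> seconds`
    is a hypothetical stand-in for the ballistic cams."""
    tof = tof_guess
    for _ in range(cycles):
        _, future_rng = plan_predict(az1, rng1, az2, rng2, dt, tof)
        tof = ballistic_tof(future_rng)
    return tof
```

For a target flying straight out along one bearing, the predicted range is simply the extrapolated distance, and the time-of-flight iteration converges toward the fixed point where predicted position and shell flight time agree.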
These nine did not include the operation of the range finder, which was considered a separate instrument, but only those operating the director itself. What did these nine men do?

Human Servomechanisms
To the designers of the director, the operators functioned as "manual servomechanisms." One specification for the machine required "minimum dependence on 'human element.'" The Sperry Company explained, "All operations must be made as mechanical and foolproof as possible; training requirements must visualize the conditions existent under rapid mobilization." The lessons of World War I ring in this statement; even at the height of isolationism, with the country sliding into depression, design engineers understood the difficulty of raising large numbers of trained personnel in a national emergency. The designers not only thought the system should account for minimal training and high personnel turnover, they also considered the ability of operators to perform their duties under the stress of battle. Thus, nearly all the work for the crew was in a "follow-the-pointer" mode: each man concentrated on an instrument with two indicating dials, one the actual and one the desired value for a particular parameter. With a hand crank, he adjusted the parameter to match the two dials.

Still, it seems curious that the T-6 director required so many men to perform this follow-the-pointer input. When the external range finder transmitted its data to the computer, it appeared on a dial, and an operator had to follow the pointer to actually input the data into the computing mechanism. The machine did not explicitly calculate velocities. Rather, two operators (one for X and one for Y) adjusted variable-speed drives until their rate dials matched that of a constant-speed motor. When the prediction computation was complete, an operator had to feed the result into the ballistic calculation mechanism.
Finally, when the entire calculation cycle was completed, another operator had to follow the pointer to transmit azimuth to the gun crew, who in turn had to match the train and elevation of the gun to the pointer indications. Human operators were the means of connecting "individual elements" into an integrated system. In one sense the men were impedance amplifiers, and hence quite similar to servomechanisms in other mechanical calculators of the time, especially Vannevar Bush's differential analyzer. The term "manual servomechanism" itself is an oxymoron: by the conventional definition, all servomechanisms are automatic. The very use of the term acknowledges the existence of an automatic technology that will eventually replace the manual method. With the T-6, this process was already underway. Though the director required nine operators, it had already eliminated two from the previous generation T-4. Servos replaced the operator who fed back superelevation data and the one who transmitted the fuze setting. Furthermore, in this early machine one man corresponded to one variable, and the machine's requirement for operators corresponded directly to the data flow of its computation. Thus the crew that operated the T-6 director was an exact reflection of the algorithm inside it.

Why, then, were only two of the variables automated? This partial, almost hesitating automation indicates there was more to the human servo-motors than Sperry wanted to acknowledge. As much as the company touted "their duties are purely mechanical and little skill or judgment is required on the part of the operators," men were still required to exercise some judgment, even if unconsciously. The data were noisy, and even an unskilled human eye could eliminate complications due to erroneous or corrupted data. The mechanisms themselves were rather delicate, and erroneous input data, especially if it indicated conditions that were not physically possible, could lock up or damage the mechanisms.
The operators performed as integrators in both senses of the term: they integrated different elements into a system.

Later Sperry Directors
When Elmer Sperry died in 1930, his engineers were at work on a newer generation director, the T-8. This machine was intended to be lighter and more portable than earlier models, as well as less expensive and "procurable in quantities in case of emergency." The company still emphasized the need for unskilled men to operate the system in wartime, and their role as system integrators. The operators were "mechanical links in the apparatus, thereby making it possible to avoid mechanical complication which would be involved by the use of electrical or mechanical servo motors." Still, army field experience with the T-6 had shown that servo-motors were a viable way to reduce the number of operators and improve reliability, so the requirements for the T-8 specified that wherever possible "electrical shall be used to reduce the number of operators to a minimum." Thus the T-8 continued the process of automating fire control, and reduced the number of operators to four. Two men followed the target with telescopes, and only two were required for follow-the-pointer functions. The other follow-the-pointers had been replaced by follow-up servos fitted with magnetic brakes to eliminate hunting. Several experimental versions of the T-8 were built, and it was standardized by the Army as the M3 in 1934.

Throughout the remainder of the '30s Sperry and the army fine-tuned the director system in the M3. Succeeding M3 models automated further, replacing the follow-the-pointers for target velocity with a velocity follow-up which employed a ball-and-disc integrator. The M4 series, standardized in 1939, was similar to the M3 but abandoned the constant-altitude assumption and added an altitude predictor for gliding targets.
The M7, standardized in 1941, was essentially similar to the M4 but added full power control to the guns for automatic pointing in elevation and azimuth. These later systems had eliminated errors. Automatic setters and loaders did not improve the situation because of reliability problems. At the start of World War II, the M7 was the primary anti-aircraft director available to the army.

The M7 was a highly developed and integrated system, optimized for reliability and ease of operation and maintenance. As a mechanical computer, it was an elegant, if intricate, device, weighing 850 pounds and including about 11,000 parts. The design of the M7 capitalized on the strength of the Sperry Company: manufacturing of precision mechanisms, especially ballistic cams. By the time the U.S. entered the second world war, however, these capabilities were a scarce resource, especially for high volumes. Production of the M7 by Sperry, with Ford Motor Company as subcontractor, was a "real choke" and could not keep up with production of the 90mm guns, well into 1942. The army had also adopted an English system, known as the "Kerrison Director" or M5, which was less accurate than the M7 but easier to manufacture. Sperry redesigned the M5 for high-volume production in 1940, but passed in 1941.

Conclusion: Human Beings as System Integrators
The Sperry directors we have examined here were transitional, experimental systems. Exactly for that reason, however, they allow us to peer inside the process of automation, to examine the displacement of human operators by servomechanisms while the process was still underway. Skilled as the Sperry Company was at data transmission, it only gradually became comfortable with the automatic communication of data between subsystems. Sperry could brag about the low skill levels required of the operators of the machine, but in 1930 it was unwilling to remove them completely from the process.
Men were the glue that held integrated systems together. As products, the Sperry Company's anti-aircraft gun directors were only partially successful. Still, we should judge a technological development program not only by the machines it produces but also by the knowledge it creates, and by how that knowledge contributes to future advances. Sperry's anti-aircraft directors of the 1930s were early examples of distributed control systems, technology that would assume critical importance in the following decades with the development of radar and digital computers. When building the more complex systems of later years, engineers at Bell Labs, MIT, and elsewhere would incorporate and build on the Sperry Company's experience, grappling with the engineering difficulties of feedback, control, and the augmentation of human capabilities by technological systems.

Chinese translation follows: Anti-Aircraft Fire Control and the Development of Integrated Systems at Sperry. The dawn of the electrical age brought new types of control systems.
A Rapidly Deployable Manipulator System
Christiaan J. J. Paredis, H. Benjamin Brown, Pradeep K. Khosla

Abstract: A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly create a manipulator which is custom-tailored for a given task. This article describes two main aspects of such a system, namely, the Reconfigurable Modular Manipulator System (RMMS) hardware and the corresponding control software.
1 Introduction
Robot manipulators can be easily reprogrammed to perform different tasks, yet the range of tasks that can be performed by a manipulator is limited by its mechanical structure. For example, a manipulator well-suited for precise movement across the top of a table would probably not be capable of lifting heavy objects in the vertical direction. Therefore, to perform a given task, one needs to choose a manipulator with an appropriate mechanical structure. We propose the concept of a rapidly deployable manipulator system to address the above-mentioned shortcomings of fixed-configuration manipulators.
The History of the Internet

The Beginning - ARPAnet
The Internet started as a project by the US government. The object of the project was to create a means of communications between long-distance points in the event of a nationwide emergency or, more specifically, nuclear war. The project was called ARPAnet, and it is what the Internet started as. Funded specifically for military communication, the engineers responsible for ARPAnet built what would eventually grow into the "Internet." By definition, an 'Internet' is four or more computers connected by a network.

ARPAnet achieved its network by using a protocol called TCP/IP. The basics around this protocol were that if information sent over a network failed to get through on one route, it would find another route to work with, as well as establishing a means for one computer to "talk" to another computer, regardless of whether it was a PC or a Macintosh.

By the 80's, ARPAnet, just years away from becoming the more well-known Internet, kept expanding its network. By the year 1984, it had over 1,000 computers on its network. In 1986 ARPAnet (supposedly) shut down, but only the organization shut down; the existing networks still existed between the more than 1,000 computers. The organization shut down after a failed link-up with NSF, which wanted to connect its 5 countrywide supercomputers into ARPAnet. With the funding of NSF, a new network was in place by 1988. By that time, there were 28,174 computers on the (by then decided) Internet. In 1989 there were 80,000 computers on it. Another network was built to support the incredible number of people joining; it was constructed in 1992.

Today - The Internet
Today, people go 'on the line' to experience the wealth of information of the Internet. Millions of people now use the Internet, and it's predicted that by the year 2003 every single person on the planet will be connected. The Internet is the defining technology of our time and era, and is evolving quickly. The Internet is not a 'thing' itself, and the Internet cannot just "crash."
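The alternate-route behaviour attributed to TCP/IP above can be illustrated with a toy breadth-first route search. This is a simplification invented for the sketch (the `find_route` name, the link table, and the `down` parameter are all made up); real routing protocols such as RIP, OSPF, and BGP are far more involved:

```python
from collections import deque

def find_route(links, src, dst, down=()):
    """Find a path from src to dst through a network of links,
    avoiding any link listed as failed in `down`.  If one route is
    unavailable, breadth-first search naturally finds another.
    `links` maps each node to the nodes it can forward to."""
    bad = {frozenset(pair) for pair in down}   # failed links, direction-agnostic
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                        # reconstruct the path found
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in links.get(node, []):
            if nxt not in prev and frozenset((node, nxt)) not in bad:
                prev[nxt] = node
                queue.append(nxt)
    return None                                # no surviving route
```

With a small made-up network, knocking out one link simply shifts traffic onto the surviving route, which is the resilience property the paragraph describes.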
It functions the same way as the telephone system, only there is no single Internet company that runs the Internet. The Internet is a collection of millions of computers that are all connected to each other, much like a home or office network, only on a worldwide scale. So how does a computer in Houston reach a computer in Tokyo to view a webpage?

Internet communication, that is, communication among computers connected to the Internet, is based on a language. This language is called TCP/IP. TCP/IP establishes a language for a computer to access and transmit data over the Internet system. But TCP/IP assumes that there is a physical connection between one computer and another. This is not usually the case. The physical connection that is required is established by way of modems, phone lines, and other cable connections (like cable modems or DSL). Modems on computers read and transmit data over established lines, which could be phone lines or data lines.

To explain this better, let's look at how a webpage request travels:
1. The user connects to an Internet Service Provider (ISP). The ISP might in turn be connected to another ISP, or have a straight connection into the Internet backbone.
2. The user launches a web browser like Netscape or Internet Explorer and types in an Internet location to go to.
3. Here's where the tricky part comes in. First, the computer sends data about its request to a router. A router is a very fast, specialized computer that directs traffic; the collection of routers in the world makes up what is called a "backbone," on which all the data on the Internet is transferred. The backbone presently operates at a speed of several gigabytes per second. Such a speed compared to a normal modem is like comparing an iceberg to an ice cube. Routers handle packets somewhat like envelopes. So, when the request for the webpage goes through, it uses TCP/IP protocols to tell the router what to do with the data, where it's going, and overall where the user wants to go.
4. The router sends these packets to other routers, eventually leading to the target computer. It's like whisper down the lane (only the information remains intact).
When the information reaches the target web server, the web server then begins to send the web page back. A web server is the computer where the web page is stored, running a program that handles requests for the page. The page is broken into packets, sent through routers, and arrives at the user's computer, where the user can view the web page once it is assembled. The packets which contain the data also contain special information that lets routers and other computers know how to put them back together in the right order.

With millions of web pages and millions of users, using the Internet is not always easy for a beginning user, especially for someone who is not entirely comfortable with computers. Below you can find tips, tricks, and services of the Internet. Before you can access webpages, you must have a web browser; most Internet Service Providers include one in the software they usually give to customers. The fact that you are viewing this page means that you already have one, and that you can follow a basic set of instructions.

Sometimes websites have errors, and an error on a website is not the user's fault, of course. A 404 error means that the page you tried to go to does not exist. This could be because the site is still being constructed and the page has not been created yet, or because the site author made a typo in the page. There's nothing much to do about a 404 error except e-mail the site administrator (of the page you wanted to go to) and tell them about it. Other errors can come from the JavaScript code of a website. Not all websites utilize JavaScript, but many do. JavaScript is different from Java, and most browsers now support JavaScript. If you are using an old version of a web browser (Netscape 3.0, for example), you might get JavaScript errors because sites utilize JavaScript versions that your browser does not support. So, you can try getting a newer version of your web browser. E-mail stands for Electronic Mail, and that's what it is.
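The sequence information carried in each packet, mentioned above, is what lets the receiving computer restore the original order. A minimal sketch, assuming a made-up `(sequence_number, payload)` packet format (real TCP segments carry full headers with much more information):

```python
def reassemble(packets):
    """Packets may arrive out of order; each carries a sequence
    number so the payload can be put back together correctly.
    `packets` is a list of (sequence_number, payload_bytes) pairs."""
    # Sorting by sequence number restores the original byte stream
    return b"".join(payload for _, payload in sorted(packets))
```

Even if the middle of the message arrives last, sorting by sequence number recovers the page exactly as the server sent it.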
E-mail enables people to send letters, and even files and pictures, to each other. To use e-mail, you must have an e-mail client, which is just like a personal post office, since it retrieves and stores e-mail. Secondly, you must have an e-mail account. Most Internet Service Providers provide free e-mail account(s), and some services offer free e-mail on their own, like Hotmail and Geocities. After configuring your e-mail client with your POP3 and SMTP server address (your e-mail provider will give you that information), you are ready to receive mail. An attachment is a file sent with a letter. If someone sends you an attachment and you don't know who it is from, don't run the file, ever. It could be a virus or some other kind of nasty program. You can't get a virus just by reading e-mail. In an e-mail signature, you'll put a text graphic, your business information, anything you want.

Imagine that a computer on the Internet is an island in the sea, and the sea is filled with millions of islands. This is the Internet. An island communicates with other islands by sending and receiving ships. The islands, tied together by these shipping routes, form the Internet, a network of networks. This method of communication is similar to the island-and-ocean symbolism above.

Telnet refers to accessing ports on a server directly with a text connection. Almost every kind of Internet function, like accessing web pages, "chatting," and e-mailing, is done over a Telnet-style connection. Telnetting requires a Telnet client. A telnet program comes with the Windows system, so Windows users can access telnet by typing in "telnet" (without the quotes) in the run dialog; Linux users have a telnet client as well. Many services (e.g., a chat daemon) can be accessed via telnet, although they are not usually meant to be accessed in such a manner. For instance, it is possible to connect directly to a mail server and check your mail by interfacing with the e-mail server software, but it's easier to use an e-mail client (of course). There are millions of webpages that come from all over the world, yet no single complete database of websites exists; instead, each search engine builds its own database of websites.
For instance, if you wanted to find a website on dogs, you'd search for "dog" or "dogs" or "dog information." A few well-known search engines are Altavista, Excite, and Lycos (web spider & indexed) and Metasearch (multiple search). Some of these use a web spider, a program that follows any link it can possibly find. This means that a search engine can literally map out as much of the Internet as its own time and speed allow for.

An indexed collection organizes sites into categories; Yahoo's site is an example. You can click on Computers & the Internet, then on Hardware, then on Modems, etc., and along the way through the sections, there are sites available which relate to the section you're in.

Metasearch searches many search engines at the same time, finding the top choices from about 10 search engines, making searching a lot more effective. Once you are able to use search engines, you can effectively find the pages you want.

With the arrival of networking and multi-user systems, security has been on the minds of system developers and system operators since the dawn of AT&T and its phone network. Why should you be careful while making purchases via a website? Let's look at what happens when you submit information to a web page. Looks safe, right? Not necessarily. As the user submits the information, it is being streamed through a series of computers that make up the Internet backbone. The information travels in little chunks, in packages called packets. Here's the problem: while the information is being transferred through this big backbone, what is preventing someone along the way from reading it? There are methods of enforcing security, like password protection, and most importantly, encryption.

Encryption means scrambling data into a code that can only be unscrambled on the "other end." Browsers like Netscape Communicator and Internet Explorer feature encryption support for making on-line transfers. Some encryptions work better than others.
The most advanced encryption system is called DES (Data Encryption Standard), and it was adopted by the US Defense Department because it was deemed so difficult to 'crack' that they considered it a security risk if it were to fall into another country's hands. DES uses a single key to lock and unlock an entire document. The strength lies in the key space: there are 2^56 (about 72 quadrillion) possible keys, so it is a very hard code to crack by guessing.

A web spider is a program, used by search engines, that follows any links it can find, going from one web page to another.
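The "scramble with a shared key, unscramble on the other end" idea can be illustrated with a toy XOR cipher. This is purely illustrative: it is not DES or any real standard, and repeating-key XOR is trivially breakable.

```python
# Toy illustration of symmetric encryption: XOR with a repeating key.
# Real systems use vetted ciphers (DES historically, AES today); this
# sketch only shows that the same key both scrambles and unscrambles.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"credit card 1234"
key = b"secret"
scrambled = xor_cipher(message, key)      # unreadable without the key
restored = xor_cipher(scrambled, key)     # XOR twice with the same key
assert restored == message
```

Because XOR is its own inverse, the identical function encrypts and decrypts; real ciphers achieve the same round trip with far stronger scrambling.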
Graduation Design: Foreign Literature Translation

Title: The Impact of Artificial Intelligence on the Job Market

Abstract: With the rapid development of artificial intelligence (AI), concerns arise about its impact on the job market. This paper explores the potential effects of AI on various industries, including healthcare, manufacturing, and transportation, and the implications for employment. The findings suggest that while AI has the potential to automate repetitive tasks and increase productivity, it may also lead to job displacement and a shift in job requirements. The paper concludes with a discussion on the importance of upskilling and retraining for workers to adapt to the changing job market.

1. Introduction

Artificial intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence. AI has made significant advancements in recent years, with applications in various industries, such as healthcare, manufacturing, and transportation. As AI technology continues to evolve, concerns arise about its impact on the job market. This paper aims to explore the potential effects of AI on employment and discuss the implications for workers.

2. Potential Effects of AI on the Job Market

2.1 Automation of Repetitive Tasks

One of the major impacts of AI on the job market is the automation of repetitive tasks. AI systems can perform tasks faster and more accurately than humans, particularly in industries that involve routine and predictable tasks, such as manufacturing and data entry. This automation has the potential to increase productivity and efficiency, but also poses a risk to jobs that can be easily replicated by AI.

2.2 Job Displacement

Another potential effect of AI on the job market is job displacement. As AI systems become more sophisticated and capable of performing complex tasks, there is a possibility that workers may be replaced by machines.
This is particularly evident in industries such as transportation, where autonomous vehicles may replace human drivers, and customer service, where chatbots can handle customer inquiries. While job displacement may lead to short-term unemployment, it also creates opportunities for new jobs in industries related to AI.

2.3 Shifting Job Requirements

With the introduction of AI, job requirements are expected to shift. While AI may automate certain tasks, it also creates a demand for workers with the knowledge and skills to develop and maintain AI systems. This shift in job requirements may require workers to adapt and learn new skills to remain competitive in the job market.

3. Implications for Employment

The impact of AI on employment is complex and multifaceted. On one hand, AI has the potential to increase productivity, create new jobs, and improve overall economic growth. On the other hand, it may lead to job displacement and a shift in job requirements. To mitigate the negative effects of AI on employment, it is essential for workers to upskill and retrain themselves to meet the changing demands of the job market.

4. Conclusion

In conclusion, the rapid development of AI has significant implications for the job market. While AI has the potential to automate repetitive tasks and increase productivity, it may also lead to job displacement and a shift in job requirements. To adapt to the changing job market, workers should focus on upskilling and continuous learning to remain competitive. Overall, the impact of AI on employment will depend on how it is integrated into various industries and how workers and policymakers respond to these changes.
Graduation Design (Thesis): Foreign Materials and Translation (Template)

大连东软信息学院
Graduation Design (Thesis) Foreign Materials and Translation
Department:
Major:
Class:
Name:
Student ID:
大连东软信息学院
Dalian Neusoft University of Information
Format Requirements for Foreign Materials and Translations

1. Binding requirements

(1) The original foreign material (photocopied or printed) comes first, followed by the translation, with the advisor's grade assessment last.
(2) The translation must be typed on a computer and printed.
(3) Print on A4 paper, bound on the left side.

2. Writing requirements

(1) The foreign literature must be related to the selected topic.
(2) Undergraduate translations must contain at least 4,000 Chinese characters; higher vocational translations at least 2,000 Chinese characters.

3. Format requirements

(1) Translation font: Chinese in small-four (12 pt) SimSun, English in small-four (12 pt) Times New Roman, uniform throughout; first line indented two Chinese characters; 1.5 line spacing.
(2) Page numbers: numbered consecutively with Arabic numerals, Times New Roman, small-five (9 pt), centered at the bottom of the page.
(3) Page header: single rule, with the header note in size-five (10.5 pt) SimSun, centered: "大连东软信息学院本科毕业设计(论文)译文" (Dalian Neusoft University of Information Undergraduate Graduation Design (Thesis) Translation).
Dalian Neusoft University of Information Graduation Design (Thesis) Translation
Graduation Design: Foreign Original Text and Translation

Basic Concepts Primer, Topic P.1: Bridge Mechanics

Basic Equations of Bridge Mechanics:

S = F/A    f_a = P/A    f_b = Mc/I    f_v = V/A_w    ε = ΔL/L    E = S/ε

where: A = area; cross-sectional area; A_w = area of web; c = distance from neutral axis to extreme fiber (or surface) of beam; E = modulus of elasticity; F = force; axial force; f_a = axial stress; f_b = bending stress; f_v = shear stress; I = moment of inertia; L = original length; M = applied moment; S = stress; V = vertical shear force due to external loads; ΔL = change in length; ε = strain

P.1.1 Introduction

Mechanics is the branch of physical science that deals with energy and forces and their relation to the equilibrium, deformation, or motion of bodies. The bridge inspector will primarily be concerned with statics, or the branch of mechanics dealing with solid bodies at rest and with forces in equilibrium. The two most important reasons for a bridge inspector to study bridge mechanics are:

- To understand how bridge members function
- To recognize the impact a defect may have on the load-carrying capacity of a bridge component or element

While this section presents the basic principles of bridge mechanics, the references listed in the bibliography should be referred to for a more complete presentation of this subject.

P.1.2 Bridge Design Loadings

Bridge design loadings are loads that a bridge is designed to carry or resist and which determine the size and configuration of its members. Bridge members are designed to withstand the loads acting on them in a safe and economical manner. Loads may be concentrated or distributed depending on the way in which they are applied to the structure. A concentrated load, or point load, is applied at a single location or over a very small area. Vehicle loads are considered concentrated loads. A distributed load is applied to all or part of the member, and the amount of load per unit of length is generally constant. The weight of superstructures, bridge decks, wearing surfaces, and bridge parapets produce distributed loads.
Secondary loads, such as wind, stream flow, earth cover, and ice, are also usually distributed loads. Highway bridge design loads are established by the American Association of State Highway and Transportation Officials (AASHTO). For many decades, the primary bridge design code in the United States was the AASHTO Standard Specifications for Highway Bridges (Specifications), as supplemented by agency criteria as applicable. During the 1990's AASHTO developed and approved a new bridge design code, entitled AASHTO LRFD Bridge Design Specifications. It is based upon the principles of Load and Resistance Factor Design (LRFD), as described in Topic P.1.7.

Bridge design loadings can be divided into three principal categories:

- Dead loads
- Primary live loads
- Secondary loads

Dead Loads

Dead loads do not change as a function of time and are considered full-time, permanent loads acting on the structure. They consist of the weight of the materials used to build the bridge (see Figure P.1.1). Dead load includes both the self-weight of structural members and other permanent external loads. They can be broken down into two groups, initial and superimposed. Initial dead loads are loads which are applied before the concrete deck is hardened, including the beam itself and the concrete deck. Initial dead loads must be resisted by the non-composite action of the beam alone. Superimposed dead loads are loads which are applied after the concrete deck has hardened (on a composite bridge), including parapets and any anticipated future deck pavement. Superimposed dead loads are resisted by the beam and the concrete deck acting compositely. Non-composite and composite action are described in Topic P.1.10.

Example of self-weight: A 6.1 m (20-foot) long beam weighs 0.73 kN per m (50 pounds per linear foot).
The total weight of the beam is 4.45 kN (1,000 pounds). This weight is called the self-weight of the beam.

Example of an external dead load: If a utility such as a water line is permanently attached to the beam in the previous example, then the weight of the water line is an external dead load. The weight of the water line plus the self-weight of the beam comprises the total dead load. Total dead load on a structure may change during the life of the bridge due to additions such as deck overlays, parapets, utility lines, and inspection catwalks. (Figure P.1.1: Dead Load on a Bridge)

Primary Live Loads

Live loads are considered part-time or temporary loads, mostly of short-term duration, acting on the structure. In bridge applications, the primary live loads are moving vehicular loads (see Figure P.1.2). To account for the effects of speed, vibration, and momentum, highway live loads are typically increased for impact. Impact is expressed as a fraction of the live load, and its value is a function of the span length. Standard vehicle live loads have been established by AASHTO for use in bridge design and rating. It is important to note that these standard vehicles do not represent actual vehicles. Rather, they were developed to allow a relatively simple method of analysis based on an approximation of the actual live load. (Figure P.1.2: Vehicle Live Load on a Bridge)

AASHTO Truck Loadings

There are two basic types of standard truck loadings described in the current AASHTO Specifications. The first type is a single unit vehicle with two axles spaced at 14 feet (4.3 m) and designated as a highway truck or "H" truck (see Figure P.1.3). The weight of the front axle is 20% of the gross vehicle weight, while the weight of the rear axle is 80% of the gross vehicle weight.
The "H" designation is followed by the gross tonnage of the particular design vehicle. Example of an H truck loading: H20-35 indicates a 20-ton vehicle with a front axle weighing 4 tons, a rear axle weighing 16 tons, and the two axles spaced 14 feet apart. This standard truck loading was first published in 1935.

The second type of standard truck loading is a two-unit, three-axle vehicle comprised of a highway tractor with a semi-trailer. It is designated as a highway semi-trailer truck or "HS" truck (see Figure P.1.4). The tractor weight and wheel spacing is identical to the H truck loading. The semi-trailer axle weight is equal to the weight of the rear tractor axle, and its spacing from the rear tractor axle can vary from 4.3 to 9.1 m (14 to 30 feet). The "HS" designation is followed by a number indicating the gross weight in tons of the tractor only.

(Figure P.1.3: AASHTO H20 Truck, with axles 14'-0" (4.3 m) apart, 8,000 lbs (35 kN) front and 32,000 lbs (145 kN) rear; 10'-0" (3.0 m) clearance and load lane width; 6'-0" (1.8 m) wheel spacing and 2'-0" (0.6 m) edge distance. Figure P.1.4: AASHTO HS20 Truck, as the H20 with an additional 32,000 lb (145 kN) semi-trailer axle.)

Example of an HS truck loading: HS20-44 indicates a vehicle with a front tractor axle weighing 4 tons, a rear tractor axle weighing 16 tons, and a semi-trailer axle weighing 16 tons. The tractor portion alone weighs 20 tons, but the gross vehicle weight is 36 tons. This standard truck loading was first published in 1944. In specifications prior to 1944, a standard loading of H15 was used. In 1944, the policy of affixing the publication year of design loadings was adopted.
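The 20%/80% axle split described above can be sketched as a small helper function (a hypothetical utility for illustration, not part of the manual):

```python
# Sketch of the H-truck axle-weight rule from the text: the front axle
# carries 20% of the gross vehicle weight, the rear axle 80%.
def h_truck_axles(gross_tons: float):
    front = 0.20 * gross_tons
    rear = 0.80 * gross_tons
    return front, rear

# H20 example from the text: a 20-ton design vehicle gives a
# 4-ton front axle and a 16-ton rear axle, spaced 14 feet apart.
front, rear = h_truck_axles(20)
```

The same arithmetic extends to the HS truck, whose semi-trailer axle weight equals the rear tractor axle weight.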
In specifications prior to 1965, the HS20-44 loading was designated as H20-S16-44, with the S16 identifying the gross axle weight of the semi-trailer in tons. The H and HS vehicles do not represent actual vehicles, but can be considered as "umbrella" loads. The wheel spacings, weight distributions, and clearances of the standard design vehicles were developed to give a simpler method of analysis, based on a good approximation of actual live loads. The H and HS vehicle loads are the most common loadings for design, analysis, and rating, but other loading types are used in special cases.

AASHTO Lane Loadings

In addition to the standard truck loadings, a system of equivalent lane loadings was developed in order to provide a simple method of calculating bridge response to a series, or "train," of trucks. Lane loading consists of a uniform load per linear foot of traffic lane combined with a concentrated load located on the span to produce the most critical situation (see Figure P.1.5). For design and load capacity rating analysis, an investigation of both a truck loading and a lane loading must be made to determine which produces the greatest stress for each particular member. Lane loading will generally govern over truck loading for longer spans. Both the H and HS loadings have corresponding lane loads. (*Use two concentrated loads for negative moment in continuous spans; refer to AASHTO page 23. Figure P.1.5: AASHTO Lane Loadings.)

Alternate Military Loading

The Alternate Military Loading is a single unit vehicle with two axles spaced at 1.2 m (4 feet) and weighing 110 kN (12 tons) each. It has been part of the AASHTO Specifications since 1977.
Bridges on interstate highways or other highways which are potential defense routes are designed for either an HS20 loading or an Alternate Military Loading (see Figure P.1.6). (Figure P.1.6: Alternate Military Loading, two 110 kN (24 kip) axles.)

LRFD Live Loads

The AASHTO LRFD design vehicular live load, designated HL-93, is a modified version of the HS-20 highway loadings from the AASHTO Standard Specifications. Under HS-20 loading as described earlier, the truck or lane load is applied to each loaded lane. Under HL-93 loading, the design truck or tandem, in combination with the lane load, is applied to each loaded lane. The LRFD design truck is exactly the same as the AASHTO HS-20 design truck. The LRFD design tandem, on the other hand, consists of a pair of 110 kN (25 kip) axles spaced 1.2 m (4 feet) apart. The transverse wheel spacing of all of the trucks is 6 feet.

The magnitude of the HL-93 lane load is equal to that of the HS-20 lane load. The lane load is 9 kN per meter (0.64 kips per linear foot) longitudinally, and it is distributed uniformly over a 3 m (10-foot) width in the transverse direction. The difference between the HL-93 lane load and the HS-20 lane load is that the HL-93 lane load does not include a point load. Finally, for LRFD live loading, the dynamic load allowance, or impact, is applied to the design truck or tandem but is not applied to the design lane load. It is typically 33 percent of the design vehicle.

Permit Vehicles

Permit vehicles are overweight vehicles which, in order to travel a state's highways, must apply for a permit from that state. They are usually heavy trucks (e.g., combination trucks, construction vehicles, or cranes) that have varying axle spacings depending upon the design of the individual truck.
To ensure that these vehicles can safely operate on existing highways and bridges, most states require that bridges be designed for a permit vehicle or that the bridge be checked to determine if it can carry a specific type of vehicle. For safe and legal operation, agencies issue permits upon request that identify the required gross weight, number of axles, axle spacing, and maximum axle weights for a designated route (see Figure P.1.7). (Figure P.1.7: 910 kN (204 kip) Permit Vehicle, for Pennsylvania.)

Secondary Loads

In addition to dead loads and primary live loads, bridge components are designed to resist secondary loads, which include the following:

- Earth pressure: a horizontal force acting on earth-retaining substructure units, such as abutments and retaining walls
- Buoyancy: the force created due to the tendency of an object to rise when submerged in water
- Wind load on structure: wind pressure on the exposed area of a bridge
- Wind load on live load: wind effects transferred through the live load vehicles crossing the bridge
- Longitudinal force: a force in the direction of the bridge caused by braking and accelerating of live load vehicles
- Centrifugal force: an outward force that a live load vehicle exerts on a curved bridge
- Rib shortening: a force in arches and frames created by a change in the geometrical configuration due to dead load
- Shrinkage: applied primarily to concrete structures, this is a multi-directional force due to dimensional changes resulting from the curing process
- Temperature: since materials expand as temperature increases and contract as temperature decreases, the force caused by these dimensional changes must be considered
- Earthquake: bridge structures must be built so that motion during an earthquake will not cause a collapse
- Stream flow pressure: a horizontal force acting on bridge components constructed in flowing water
- Ice pressure: a horizontal force created by static or floating ice jammed against bridge components
- Impact loading: the dynamic effect of suddenly receiving a live load; this additional force can be up to 30% of the applied primary live load force
- Sidewalk loading: sidewalk floors and their immediate supports are designed for a pedestrian live load not exceeding 4.1 kN per square meter (85 pounds per square foot)
- Curb loading: curbs are designed to resist a lateral force of not less than 7.3 kN per linear meter (500 pounds per linear foot)
- Railing loading: railings are provided along the edges of structures for protection of traffic and pedestrians; the maximum transverse load applied to any one element need not exceed 44.5 kN (10 kips)

A bridge may be subjected to several of these loads simultaneously. The AASHTO Specifications have established a table of loading groups. For each group, a set of loads is considered with a coefficient to be applied for each particular load. The coefficients used were developed based on the probability of various loads acting simultaneously.

P.1.3 Material Response to Loadings

Each member of a bridge has a unique purpose and function, which directly affects the selection of material, shape, and size for that member. Certain terms are used to describe the response of a bridge material to loads. A working knowledge of these terms is essential for the bridge inspector.

Force

A force is the action that one body exerts on another body. Force has two components: magnitude and direction (see Figure P.1.8). The basic English unit of force is the pound (abbreviated lb.). The basic metric unit of force is the Newton (N). A common unit of force used among engineers is the kip (K), which is 1,000 pounds. In the metric system, the kilonewton (kN), which is 1,000 Newtons, is used. Note: 1 kip = 4.4 kilonewtons. (Figure P.1.8: Basic Force Components)

Stress

Stress is a basic unit of measure used to denote the intensity of an internal force.
When a force is applied to a material, an internal stress is developed. Stress is defined as a force per unit of cross-sectional area:

Stress (S) = Force (F) / Area (A)

The basic English unit of stress is pounds per square inch (abbreviated psi). However, stress can also be expressed in kips per square inch (ksi) or in any other units of force per unit area. The basic metric unit of stress is the Newton per square meter, or Pascal (Pa). An allowable unit stress is generally established for a given material. Note: 1 ksi = 6.9 MPa.

Graduation design translation: Basic Concepts of Bridge Mechanics (translated from the American bridge inspection manual). Basic equations of bridge structures: S = F/A (see Sec. 1.8); f_a = P/A (see Sec. 1.14); ε = ΔL/L (see Sec. 1.9); f_b = Mc/I (see Sec. 1.16); E = S/ε (see Sec. 1.11); f_v = V/A_w (see Sec. 1.18). Bridge load rating = (allowable load - dead load) × gross vehicle weight / vehicle live load with impact. Where: A = area; cross-sectional area; A_w = web area; c = distance from the neutral axis to the extreme fiber (or surface) of the beam; E = modulus of elasticity; F = force; axial force; f_a = axial stress; f_b = bending stress; f_v = shear stress; I = moment of inertia; L = original length; M = applied moment; S = stress; V = vertical shear due to external loads; ΔL = change in length; ε = strain.

1 Basic Concepts Primer, Chapter 1: Bridge Mechanics

1.1 Introduction

Mechanics is the branch of physical science that studies the energy and forces of bodies and their relation to equilibrium, deformation, and motion.
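The basic equations above (S = F/A, ε = ΔL/L, E = S/ε) can be checked numerically. The values below are illustrative only, not taken from the manual:

```python
# Numeric sketch of the basic bridge-mechanics equations listed above.
def stress(force, area):           # S = F / A
    return force / area

def strain(delta_length, length):  # epsilon = delta_L / L
    return delta_length / length

def modulus(s, eps):               # E = S / epsilon
    return s / eps

S = stress(50_000.0, 2.5)          # 50 kN over 2.5 m^2 -> 20,000 Pa
eps = strain(0.001, 2.0)           # 1 mm stretch of a 2 m member -> 0.0005
E = modulus(S, eps)                # 40,000,000 Pa = 40 MPa
```

The same pattern applies to the bending and shear formulas (f_b = Mc/I, f_v = V/A_w): each is a ratio of an internal force effect to a geometric property of the cross section.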
Graduation Design Report: Chinese Translation of English Literature 1

Two-Dimensional DOA Estimation of Coherent Signals Using an Acoustic Vector-Sensor Array

Abstract: In this paper, we propose two new methods for estimating the two-dimensional direction of arrival (DOA) of narrowband coherent (or highly correlated) signals with an L-shaped acoustic vector-sensor array. Our methods remove the coherency of the signals and reconstruct the signal subspace using a cross-correlation matrix; ESPRIT and propagator-method techniques are then applied to estimate the azimuth and elevation angles. The ESPRIT technique is based on the shift invariance of the array geometry, and the propagator method is based on partitioning the cross-correlation matrix. The propagator method is computationally efficient and requires only linear operations. Moreover, the ESPRIT method does not require any eigendecomposition or singular value decomposition. Both techniques are direct methods that estimate azimuth and elevation without any iterative two-dimensional search. Simulation results are presented to demonstrate the performance of the proposed methods. © 2011 Elsevier B.V. All rights reserved.

Keywords: direction-of-arrival estimation; cross-correlation; coherent signals; acoustic vector-sensor array

1 Introduction

In recent years, acoustic vector-sensor array signal processing has attracted increasing attention in the field of underwater signal processing. An acoustic vector sensor measures both the pressure and the acoustic particle velocity at a point in space, whereas a conventional pressure sensor extracts only the pressure information. The main advantage of these vector sensors over conventional scalar sensors is that they make better use of the available acoustic information; they should therefore be more accurate than scalar (pressure) sensor arrays. This should allow vector sensors to use smaller array apertures while maintaining performance. The acoustic vector-sensor model was first introduced into the signal processing field in [1]. Since then, many advanced pressure-sensor array techniques have been adapted to acoustic vector-sensor arrays [2-4]. Acoustic vector sensors of various designs are in commercial use today [5]. Vector-sensor technology has been used in underwater environments for decades and has drawn attention to the problem of locating underwater sound sources.

Most high-resolution DOA estimation methods, such as MUSIC [6,7] and ESPRIT [8,9], have been shown to be effective when the signals are uncorrelated. When the sources are coherent or highly correlated, for example under multipath propagation or in military scenarios involving smart jamming systems, the performance of these techniques degrades substantially. In such cases, the rank of the covariance matrix is generally smaller than the number of sources. To overcome this drawback, decorrelation techniques have been developed, such as the spatial smoothing (SS) technique [10-12] developed by Kozick and Kassam [13], eigenvector smoothing (ES) [14,15], and the computationally efficient subspace method without eigendecomposition (SUMWE) [16]; however, these techniques suit only certain array configurations, e.g., uniformly spaced linear arrays.
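The forward spatial smoothing mentioned above can be sketched in plain Python. This is a toy, assuming an ideal rank-1 covariance built from two fully coherent sources; a real implementation would estimate the covariance from snapshots and use numpy:

```python
# Sketch of forward spatial smoothing (SS), one of the decorrelation
# techniques cited above. Two coherent sources make the ideal array
# covariance rank-1; averaging the covariances of overlapping subarrays
# of a uniform linear array restores the rank so subspace methods
# (MUSIC, ESPRIT) can work again.

def spatial_smooth(R, m):
    """Average the m x m covariances of the n-m+1 overlapping subarrays."""
    n = len(R)
    k = n - m + 1
    return [[sum(R[s + i][s + j] for s in range(k)) / k for j in range(m)]
            for i in range(m)]

def minor2(R):
    """Top-left 2x2 determinant: zero for every rank-1 matrix."""
    return R[0][0] * R[1][1] - R[0][1] * R[1][0]

# Coherent sum of two steering vectors on a 4-sensor uniform line array
# (one source at broadside, one with a 90-degree phase step per sensor).
a = [2 + 0j, 1 + 1j, 0 + 0j, 1 - 1j]
R = [[a[i] * a[j].conjugate() for j in range(4)] for i in range(4)]

print(abs(minor2(R)))                     # 0.0 -> rank deficient
print(abs(minor2(spatial_smooth(R, 3))))  # 1.0 -> rank restored
```

The nonzero 2x2 minor after smoothing is exactly the decorrelation effect the text describes: the smoothed matrix regains the rank needed to separate the two sources.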
Graduation Design (Thesis): Original Foreign Literature and Translation

Chapter 11. Cipher Techniques

11.1 Problems

The use of a cipher without consideration of the environment in which it is to be used may not provide the security that the user expects. Three examples will make this point clear.

11.1.1 Precomputing the Possible Messages

Simmons discusses the use of a "forward search" to decipher messages enciphered for confidentiality using a public key cryptosystem [923]. His approach is to focus on the entropy (uncertainty) in the message. To use an example from Section 10.1 (page 246), Cathy knows that Alice will send one of two messages, BUY or SELL, to Bob. The uncertainty is which one Alice will send. So Cathy enciphers both messages with Bob's public key. When Alice sends the message, Cathy intercepts it and compares the ciphertext with the two she computed. From this, she knows which message Alice sent.

Simmons' point is that if the plaintext corresponding to intercepted ciphertext is drawn from a (relatively) small set of possible plaintexts, the cryptanalyst can encipher the set of possible plaintexts and simply search that set for the intercepted ciphertext. Simmons demonstrates that the size of the set of possible plaintexts may not be obvious. As an example, he uses digitized sound. The initial calculations suggest that the number of possible plaintexts for each block is 2^32. Using forward search on such a set is clearly impractical, but after some analysis of the redundancy in human speech, Simmons reduces the number of potential plaintexts to about 100,000. This number is small enough that forward searches become a threat.

This attack is similar to attacks to derive the cryptographic key of symmetric ciphers based on chosen plaintext (see, for example, Hellman's time-memory tradeoff attack [465]). However, Simmons' attack is for public key cryptosystems and does not reveal the private key.
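Simmons' forward search can be sketched with a deterministic stand-in for public-key encryption (SHA-256 here, clearly not a real public-key cipher, used only because anyone can compute it; randomized padding in real systems defeats exactly this attack):

```python
import hashlib

# Sketch of Simmons' forward search. Deterministic "encryption" is
# modeled by SHA-256: like a public key, it needs no secret to apply,
# so the attacker can encipher every plausible plaintext herself.
def public_encrypt(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

possible = [b"BUY", b"SELL"]                  # small, guessable message space
table = {public_encrypt(m): m for m in possible}

intercepted = public_encrypt(b"SELL")         # Alice's ciphertext on the wire
recovered = table[intercepted]                # -> b"SELL", no key needed
```

The private key is never touched: the attack works purely because the message space is small and encipherment is deterministic.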
It only reveals the plaintext message.

11.1.2 Misordered Blocks

Denning [269] points out that in certain cases, parts of a ciphertext message can be deleted, replayed, or reordered.

11.1.3 Statistical Regularities

The independence of parts of ciphertext can give information relating to the structure of the enciphered message, even if the message itself is unintelligible. The regularity arises because each part is enciphered separately, so the same plaintext always produces the same ciphertext. This type of encipherment is called code book mode, because each part is effectively looked up in a list of plaintext-ciphertext pairs.

11.1.4 Summary

Despite the use of sophisticated cryptosystems and random keys, cipher systems may provide inadequate security if not used carefully. The protocols directing how these cipher systems are used, and the ancillary information that the protocols add to messages and sessions, overcome these problems. This emphasizes that ciphers and codes are not enough. The methods, or protocols, for their use also affect the security of systems.

11.2 Stream and Block Ciphers

Some ciphers divide a message into a sequence of parts, or blocks, and encipher each block with the same key.

Definition 11-1. Let E be an encipherment algorithm, and let E_k(b) be the encipherment of message b with key k. Let a message m = b1 b2 ..., where each b_i is of a fixed length. Then a block cipher is a cipher for which E_k(m) = E_k(b1) E_k(b2) ....

Other ciphers use a nonrepeating stream of key elements to encipher characters of a message.

Definition 11-2. Let E be an encipherment algorithm, and let E_k(b) be the encipherment of message b with key k. Let a message m = b1 b2 ..., where each b_i is of a fixed length, and let k = k1 k2 .... Then a stream cipher is a cipher for which E_k(m) = E_k1(b1) E_k2(b2) ....

If the key stream k of a stream cipher repeats itself, it is a periodic cipher.

11.2.1 Stream Ciphers

The one-time pad is a cipher that can be proven secure (see Section 9.2.2.2, "One-Time Pad").
Bit-oriented ciphers implement the one-time pad by exclusive-oring each bit of the key with one bit of the message. For example, if the message is 00101 and the key is 10010, the ciphertext is 0⊕1 || 0⊕0 || 1⊕0 || 0⊕1 || 1⊕0, or 10111. But how can one generate a random, infinitely long key?

11.2.1.1 Synchronous Stream Ciphers

To simulate a random, infinitely long key, synchronous stream ciphers generate bits from a source other than the message itself. The simplest such cipher extracts bits from a register to use as the key. The contents of the register change on the basis of the current contents of the register.

Definition 11-3. An n-stage linear feedback shift register (LFSR) consists of an n-bit register r = r0 ... r(n-1) and an n-bit tap sequence t = t0 ... t(n-1). To obtain a key bit, r(n-1) is used, the register is shifted one bit to the right, and the new bit r0 t0 ⊕ ... ⊕ r(n-1) t(n-1) is inserted.

The LFSR method is an attempt to simulate a one-time pad by generating a long key sequence from a little information. As with any such attempt, if the key is shorter than the message, breaking part of the ciphertext gives the cryptanalyst information about other parts of the ciphertext. For an LFSR, a known plaintext attack can reveal parts of the key sequence. If the known plaintext is of length 2n, the tap sequence for an n-stage LFSR can be determined completely.

Nonlinear feedback shift registers do not use tap sequences; instead, the new bit is any function of the current register bits.

Definition 11-4. An n-stage nonlinear feedback shift register (NLFSR) consists of an n-bit register r = r0 ... r(n-1). Whenever a key bit is required, r(n-1) is used, the register is shifted one bit to the right, and the new bit is set to f(r0 ... r(n-1)), where f is any function of n inputs.

NLFSRs are not common because there is no body of theory about how to build NLFSRs with long periods.
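The n-stage LFSR of Definition 11-3 can be sketched as follows (the 3-stage register and tap sequence are illustrative choices, not from the text):

```python
# Sketch of the n-stage LFSR defined above: output r(n-1), shift the
# register one bit to the right, and insert the new bit
# r0*t0 XOR ... XOR r(n-1)*t(n-1) on the left.
def lfsr_stream(register, taps, nbits):
    r = list(register)                       # r[0] .. r[n-1]
    out = []
    for _ in range(nbits):
        out.append(r[-1])                    # key bit is r[n-1]
        new = 0
        for ri, ti in zip(r, taps):
            new ^= ri & ti                   # linear feedback
        r = [new] + r[:-1]                   # shift right, insert on left
    return out

# 3-stage example with taps (1, 0, 1): this seed cycles through all
# 2^3 - 1 = 7 nonzero states, so the keystream has period 7.
bits = lfsr_stream([1, 0, 0], [1, 0, 1], 14)
```

Running the example shows the keystream repeating after 7 bits, the maximal period for a 3-stage register.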
By contrast, it is known how to design n-stage LFSRs with a period of 2^n - 1, and that period is maximal.

A second technique for eliminating linearity is called output feedback mode. Let E be an encipherment function. Define k as a cryptographic key, and define r as a register. To obtain a bit for the key, compute E_k(r) and put that value into the register. The rightmost bit of the result is exclusive-or'ed with one bit of the message. The process is repeated until the message is enciphered. The key k and the initial value in r are the keys for this method. This method differs from the NLFSR in that the register is never shifted. It is repeatedly enciphered.

A variant of output feedback mode is called the counter method. Instead of using a register r, simply use a counter that is incremented for every encipherment. The initial value of the counter replaces r as part of the key. This method enables one to generate the i-th bit of the key without generating the bits 0 ... i - 1. If the initial counter value is i0, set the register to i + i0. In output feedback mode, one must generate all the preceding key bits.

11.2.1.2 Self-Synchronous Stream Ciphers

Self-synchronous ciphers obtain the key from the message itself. The simplest self-synchronous cipher is called an autokey cipher and uses the message itself for the key.

The problem with this cipher is the selection of the key. Unlike a one-time pad, any statistical regularities in the plaintext show up in the key. For example, the last two letters of the ciphertext associated with the plaintext word THE are always AL, because H is enciphered with the key letter T and E is enciphered with the key letter H. Furthermore, if the analyst can guess any letter of the plaintext, she can determine all successive plaintext letters.

An alternative is to use the ciphertext as the key stream.
A good cipher will produce pseudorandom ciphertext, which approximates a random one-time pad better than a message with nonrandom characteristics (such as a meaningful English sentence). This type of autokey cipher is weak, because plaintext can be deduced from the ciphertext. For example, consider the first two characters of the ciphertext, QX. The X is the ciphertext resulting from enciphering some letter with the key Q. Deciphering, the unknown letter is H. Continuing in this fashion, the analyst can reconstruct all of the plaintext except for the first letter.

A variant of the autokey method, cipher feedback mode, uses a shift register. Let E be an encipherment function. Define k as a cryptographic key and r as a register. To obtain a bit for the key, compute E_k(r). The rightmost bit of the result is exclusive-or'ed with one bit of the message, and the other bits of the result are discarded. The resulting ciphertext is fed back into the leftmost bit of the register, which is right shifted one bit. (See Figure 11-1.)

Figure 11-1. Diagram of cipher feedback mode. The register r is enciphered with key k and algorithm E. The rightmost bit of the result is exclusive-or'ed with one bit of the plaintext m_i to produce the ciphertext bit c_i. The register r is right shifted one bit, and c_i is fed back into the leftmost bit of r.

Cipher feedback mode has a self-healing property. If a bit is corrupted in transmission of the ciphertext, the next n bits will be deciphered incorrectly. But after n uncorrupted bits have been received, the shift register will be reinitialized to the value used for encipherment and the ciphertext will decipher properly from that point on.

As in the counter method, one can decipher parts of messages enciphered in cipher feedback mode without deciphering the entire message. Let the shift register contain n bits. The analyst obtains the previous n bits of ciphertext.
This is the value in the shift register before the bit under consideration was enciphered. The decipherment can then continue from that bit on.

11.2.2 Block Ciphers

Block ciphers encipher and decipher multiple bits at once, rather than one bit at a time. For this reason, software implementations of block ciphers run faster than software implementations of stream ciphers. Errors in transmitting one block generally do not affect other blocks, but because each block is enciphered independently, using the same key, identical plaintext blocks produce identical ciphertext blocks. This allows the analyst to search for data by determining what the encipherment of a specific plaintext block is. For example, if the word INCOME is enciphered as one block, all occurrences of the word produce the same ciphertext.

To prevent this type of attack, some information related to the block's position is inserted into the plaintext block before it is enciphered. The information can be bits from the preceding ciphertext block [343] or a sequence number [561]. The disadvantage is that the effective block size is reduced, because fewer message bits are present in a block.

Cipher block chaining does not require the extra information to occupy bit spaces, so every bit in the block is part of the message. Before a plaintext block is enciphered, that block is exclusive-or'ed with the preceding ciphertext block. In addition to the key, this technique requires an initialization vector with which to exclusive-or the initial plaintext block. Taking E_k to be the encipherment algorithm with key k, and I to be the initialization vector, the cipher block chaining technique is

c_0 = E_k(m_0 ⊕ I)
c_i = E_k(m_i ⊕ c_{i–1}) for i > 0

11.2.2.1 Multiple Encryption

Other approaches involve multiple encryption. Using two keys k and k' to encipher a message as c = E_k'(E_k(m)) looks attractive because it has an effective key length of 2n, whereas the keys to E are of length n.
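The cipher block chaining equations above can be sketched with a toy invertible block "cipher" (XOR with a key-derived pad, an assumption purely for illustration, chosen because E_k and its inverse must both be computable):

```python
import hashlib

BLOCK = 8  # toy block size in bytes

def toy_E(key: bytes, block: bytes) -> bytes:
    # Toy invertible block "cipher": XOR with a key-derived pad.
    # It is its own inverse (D_k == E_k); illustration only.
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(a ^ b for a, b in zip(block, pad))

def xor_blocks(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def cbc_encrypt(key, iv, blocks):
    # c_0 = E_k(m_0 xor I);  c_i = E_k(m_i xor c_{i-1}) for i > 0
    out, prev = [], iv
    for m in blocks:
        c = toy_E(key, xor_blocks(m, prev))
        out.append(c)
        prev = c
    return out

def cbc_decrypt(key, iv, blocks):
    # m_i = D_k(c_i) xor c_{i-1}, with c_{-1} = I
    out, prev = [], iv
    for c in blocks:
        out.append(xor_blocks(toy_E(key, c), prev))
        prev = c
    return out
```

The chaining makes identical plaintext blocks encipher to different ciphertext blocks, defeating the INCOME-style search described above, without sacrificing any message bits in the block.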
However, Merkle and Hellman [700] have shown that this encryption technique can be broken using 2^(n+1) encryptions, rather than the expected 2^(2n) (see Exercise 3).

Using three encipherments improves the strength of the cipher. There are several ways to do this. Tuchman [1006] suggested using two keys k and k':

c = E_k(D_k'(E_k(m)))

This mode, called Encrypt-Decrypt-Encrypt (EDE) mode, collapses to a single encipherment when k = k'. The DES in EDE mode is widely used in the financial community and is a standard (ANSI X9.17 and ISO 8732). It is not vulnerable to the attack outlined earlier. However, it is vulnerable to a chosen plaintext and a known plaintext attack. If b is the block size in bits, and n is the key length, the chosen plaintext attack takes O(2^n) time, uses O(2^n) space, and requires 2^n chosen plaintexts. The known plaintext attack requires p known plaintexts, and takes O(2^(n+b)/p) time and O(p) memory.

A second version of triple encipherment is the triple encryption mode [700]. In this mode, three keys are used in a chain of encipherments:

c = E_k(E_k'(E_k''(m)))

The best attack against this scheme is similar to the attack on double encipherment, but requires O(2^(2n)) time and O(2^n) memory. If the key length is 56 bits, this attack is computationally infeasible.

11.3 Networks and Cryptography

Before we discuss Internet protocols, a review of the relevant properties of networks is in order. The ISO/OSI model [990] provides an abstract representation of networks suitable for our purposes. Recall that the ISO/OSI model is composed of a series of layers (see Figure 11-2). Each host, conceptually, has a principal at each layer that communicates with a peer on other hosts. These principals communicate with principals at the same layer on other hosts. Layer 1, 2, and 3 principals interact only with similar principals at neighboring (directly connected) hosts. Principals at layers 4, 5, 6, and 7 interact only with similar principals at the other end of the communication.
(For convenience, "host" refers to the appropriate principal in the following discussion.)

Figure 11-2. The ISO/OSI model. The dashed arrows indicate peer-to-peer communication. For example, the transport layers are communicating with each other. The solid arrows indicate the actual flow of bits. For example, the transport layer invokes network layer routines on the local host, which invoke data link layer routines, which put the bits onto the network. The physical layer passes the bits to the next "hop," or host, on the path. When the message reaches the destination, it is passed up to the appropriate level.

Each host in the network is connected to some set of other hosts, and it exchanges messages with those hosts. If host nob wants to send a message to host windsor, nob determines which of its immediate neighbors is closest to windsor (using an appropriate routing protocol) and forwards the message to it. That host, baton, determines which of its neighbors is closest to windsor and forwards the message to it. This process continues until a host, sunapee, receives the message and determines that windsor is an immediate neighbor. The message is forwarded to windsor, its endpoint.

Definition 11–5. Let hosts C_0, …, C_n be such that C_i and C_{i+1} are directly connected, for 0 ≤ i < n. A communications protocol that has C_0 and C_n as its endpoints is called an end-to-end protocol. A communications protocol that has C_j and C_{j+1} as its endpoints is called a link protocol.

The difference between an end-to-end protocol and a link protocol is that the intermediate hosts play no part in an end-to-end protocol other than forwarding messages. On the other hand, a link protocol describes how each pair of intermediate hosts processes each message.

The protocols involved can be cryptographic protocols. If the cryptographic processing is done only at the source and at the destination, the protocol is an end-to-end protocol.
If cryptographic processing occurs at each host along the path from source to destination, the protocol is a link protocol. When encryption is used with either protocol, we use the terms end-to-end encryption and link encryption, respectively.

In link encryption, each host shares a cryptographic key with its neighbor. (If public key cryptography is used, each host has its neighbor's public key. Link encryption based on public keys is rare.) The keys may be set on a per-host basis or a per-host-pair basis. Consider a network with four hosts called windsor, stripe, facer, and seaview. Each host is directly connected to the other three. With keys distributed on a per-host basis, each host has its own key, making four keys in all. Each host has the keys for the other three neighbors, as well as its own. All hosts use the same key to communicate with windsor. With keys distributed on a per-host-pair basis, each host has one key per possible connection, making six keys in all. Unlike the per-host situation, in the per-host-pair case, each host uses a different key to communicate with windsor. The message is deciphered at each intermediate host, reenciphered for the next hop, and forwarded. Attackers monitoring the network medium will not be able to read the messages, but attackers at the intermediate hosts will be able to do so.

In end-to-end encryption, each host shares a cryptographic key with each destination. (Again, if the encryption is based on public key cryptography, each host has, or can obtain, the public key of each destination.) As with link encryption, the keys may be selected on a per-host or per-host-pair basis. The sending host enciphers the message and forwards it to the first intermediate host. The intermediate host forwards it to the next host, and the process continues until the message reaches its destination. The destination host then deciphers it. The message is enciphered throughout its journey.
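The four-host key-count arithmetic above can be checked with a short sketch (the key labels are hypothetical, purely for illustration):

```python
from itertools import combinations

def per_host_keys(hosts):
    # Per-host distribution: one key per host; every neighbor talking
    # to host h uses h's single key.
    return {h: "key-" + h for h in hosts}

def per_host_pair_keys(hosts):
    # Per-host-pair distribution: one key per unordered pair of hosts,
    # so n*(n-1)/2 keys in all.
    return {frozenset(pair): "key-" + "-".join(sorted(pair))
            for pair in combinations(hosts, 2)}
```

For the example network this yields four keys in the per-host case and six in the per-host-pair case, matching the text.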
Neither attackers monitoring the network nor attackers on the intermediate hosts can read the message. However, attackers can read the routing information used to forward the message.

These differences affect a form of cryptanalysis known as traffic analysis. A cryptanalyst can sometimes deduce information not from the content of the message but from the sender and recipient. For example, during the Allied invasion of Normandy in World War II, the Germans deduced which vessels were the command ships by observing which ships were sending and receiving the most signals. The content of the signals was not relevant; their source and destination were. Similar deductions can reveal information in the electronic world.

Chapter 11: Cipher Techniques

11.1 Problems

If the environment in which a cipher is to run is not considered, the use of encryption may not provide the security the user expects.
Graduation Design Report: English Literature and Chinese Translation
Student name:        Student ID: 0906064109
School: School of Electronics and Computer Science and Technology
Major: Network Engineering
Supervisor: Liu Shuangying
June 2018

An Overview of Servlet and JSP Technology

Gildas Avoine and Philippe Oechslin
EPFL, Lausanne, Switzerland

1.1 A Servlet's Job

Servlets are Java programs that run on Web or application servers, acting as a middle layer between requests coming from Web browsers or other HTTP clients and databases or applications on the HTTP server. Their job is to perform the following tasks, as illustrated in Figure 1-1.

[Figure 1-1: a client (end user) sends requests to a Web server running servlets and JSP, which in turn communicates with a database, legacy application, Java application, or Web service.]

1. Read the explicit data sent by the client.

The end user normally enters this data in an HTML form on a Web page. However, the data could also come from an applet or a custom HTTP client program.

2. Read the implicit HTTP request data sent by the browser.

Figure 1-1 shows a single arrow going from the client to the Web server (the layer where servlets and JSP execute), but there are really two varieties of data: the explicit data that the end user enters in a form and the behind-the-scenes HTTP information. Both varieties are critical. The HTTP information includes cookies, information about media types and compression schemes the browser understands, and so on.

3. Generate the results.

This process may require talking to a database, executing an RMI or EJB call, invoking a Web service, or computing the response directly. Your real data may be in a relational database. Fine. But your database probably doesn't speak HTTP or return results in HTML, so the Web browser can't talk directly to the database. Even if it could, for security reasons, you probably would not want it to. The same argument applies to most other applications.
You need the Web middle layer to extract the incoming data from the HTTP stream, talk to the application, and embed the results inside a document.

4. Send the explicit data (i.e., the document) to the client.

This document can be sent in a variety of formats, including text (HTML or XML), binary (GIF images), or even a compressed format like gzip that is layered on top of some other underlying format. But HTML is by far the most common format, so an important servlet/JSP task is to wrap the results inside of HTML.

5. Send the implicit HTTP response data.

Figure 1-1 shows a single arrow going from the Web middle layer (the servlet or JSP page) to the client. But there are really two varieties of data sent: the document itself and the behind-the-scenes HTTP information. Again, both varieties are critical to effective development. Sending HTTP response data involves telling the browser or other client what type of document is being returned (e.g., HTML), setting cookies and caching parameters, and other such tasks.

1.2 Why Build Web Pages Dynamically?

Many client requests can be satisfied by prebuilt documents, and the server would handle these requests without invoking servlets. In many cases, however, a static result is not sufficient, and a page needs to be generated for each request. There are a number of reasons why Web pages need to be built on-the-fly:

1. The Web page is based on data sent by the client.

For instance, the results page from search engines and order-confirmation pages at online stores are specific to particular user requests. You don't know what to display until you read the data that the user submits. Just remember that the user submits two kinds of data: explicit (i.e., HTML form data) and implicit (i.e., HTTP request headers). Either kind of input can be used to build the output page.
In particular, it is quite common to build a user-specific page based on a cookie value.

2. The Web page is derived from data that changes frequently.
If the page changes for every request, then you certainly need to build the response at request time. If it changes only periodically, however, you could do it two ways: you could periodically build a new Web page on the server (independently of client requests), or you could wait and only build the page when the user requests it. The right approach depends on the situation, but sometimes it is more convenient to do the latter: wait for the user request. For example, a weather report or news headlines site might build the pages dynamically, perhaps returning a previously built page if that page is still up to date.

3. The Web page uses information from corporate databases or other server-side sources.

If the information is in a database, you need server-side processing even if the client is using dynamic Web content such as an applet. Imagine using an applet by itself for a search engine site: "Downloading 50 terabyte applet, please wait!" Obviously, that is silly.
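The request/response cycle that Section 1.1 describes can be sketched language-neutrally with a small Python handler (the function name and fields are hypothetical; a real servlet would do this in Java through the HttpServletRequest and HttpServletResponse objects):

```python
from urllib.parse import parse_qs

def handle_request(query_string: str, headers: dict) -> str:
    # Task 1: read the explicit data the client sent (form/query data).
    form = parse_qs(query_string)
    user = form.get("user", ["anonymous"])[0]
    # Task 2: read the implicit HTTP request data (headers, cookies, ...).
    lang = headers.get("Accept-Language", "en")
    # Tasks 3-4: generate the result and wrap it inside an HTML document.
    return ("<html><body><h1>Hello, {}!</h1>"
            "<p>Preferred language: {}</p></body></html>").format(user, lang)
```

The same page comes out differently for each request because it depends on the submitted form data and request headers, which is precisely why it cannot be served as a prebuilt static document.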