Graduation Design Foreign-Literature Translation (English Version)
A Design and Implementation of Active Network Socket Programming

K.L. Eddie Law, Roy Leung
The Edward S. Rogers Sr. Department of Electrical and Computer Engineering
University of Toronto, Toronto, Canada
eddie@, roy.leung@utoronto.ca

Abstract—The concept of programmable nodes and active networks introduces programmability into communication networks. Code and data can be sent and modified on their way to their destinations. Recently, various research groups have designed and implemented their own design platforms. Each design has its own benefits and drawbacks, and there exists an interoperability problem among the platforms. As a result, we introduce a concept that is similar to network socket programming: we intentionally establish a set of simple interfaces for programming active applications. This set of interfaces, known as Active Network Socket Programming (ANSP), is designed to work on top of all other execution environments, so ANSP offers a concept similar to "write once, run everywhere." It is an open programming model in which active applications can work on all execution environments. It resolves the heterogeneity within active networks, which is especially useful when active applications need to access all regions within a heterogeneous network to deploy special services at critical points or to monitor the performance of the entire network. Instead of introducing a new platform, our approach provides a thin, transparent layer on top of existing environments that can be easily installed for all active applications.

Keywords—active networks; application programming interface; active network socket programming

I. INTRODUCTION

In 1990, Clark and Tennenhouse [1] proposed a design framework for introducing new network protocols for the Internet. Since the publication of that position paper, the active network design framework [2, 3, 10] slowly took shape in the late 1990s. The active network paradigm allows program code and data to be delivered simultaneously on the Internet; moreover, they may be executed and modified on their way to their destinations. At the moment, there is a global active network backbone, the ABone, for experiments on active networks. Apart from the immaturity of the execution platforms, the primary hindrance to the deployment of active networks on the Internet is commercial. For example, a vendor may hesitate to allow network routers to run unknown programs that may affect their expected routing performance. As a result, alternatives have been proposed to allow the active network concept to operate on the Internet, such as the application-layer active networking (ALAN) project [4] from the European research community. In the ALAN project, active server systems are located at different places in the network, and active applications are allowed to run on these servers at the application layer. Another potential approach, from the network service provider's side, is to offer active network service as a premium service class in the network. This service class should provide the best quality of service (QoS) and allow access to the computing facilities in routers. With this approach, network service providers can create a new source of income.

The research in active networks has been progressing steadily. Since active networks introduce programmability on the Internet, appropriate platforms on which active applications execute have to be established.
These operating platforms are known as execution environments (EEs), and a few of them have been created, e.g., the Active Signaling Protocol (ASP) [12] and the Active Network Transport System (ANTS) [11]. Hence, different active applications can be implemented to test the active networking concept.

With these EEs, some experiments have been carried out to examine the active network concept, for example, in mobile networks [5], web proxies [6], and multicast routers [7]. Active networks introduce a great deal of programming flexibility and extensibility into networks. Several research groups have proposed various designs of execution environments to offer network computation within routers, and their performance and potential benefits to the existing infrastructure are being evaluated [8, 9]. Unfortunately, these designs seldom address the interoperability problems that arise when an active network consists of multiple execution environments. For example, there are three EEs in the ABone. Active applications written for one particular EE cannot operate on the other platforms. This introduces a further problem: resources must be partitioned for the different EEs to operate. Moreover, there are always critical network applications that need to run on all network routers, such as those collecting information and deploying services at critical points to monitor the network.

In this paper, a framework known as the Active Network Socket Programming (ANSP) model is proposed to work with all EEs. It has the following primary objectives.
• A single programming interface is introduced for writing active applications.
• Since ANSP offers the programming interface, the design of an EE can be made independent of ANSP. This enables transparency in developing and enhancing future execution environments.
• ANSP addresses the interoperability issues among different execution environments.
• Through the design of ANSP, insight into the pros and cons of different EEs will be gained. This may help in designing a better EE with improved performance in the future.

The primary objective of ANSP is to ensure that all active applications written with ANSP can operate in the ABone testbed. While the proposed ANSP framework is essential in unifying the network environments, we believe that the availability of different environments is beneficial to the development of a better execution environment in the future. ANSP is not intended to replace all existing environments, but to enable the study of new network services that are orthogonal to the designs of execution environments. Therefore, ANSP is designed to be a thin and transparent layer on top of all execution environments. Currently, its deployment relies on automatic code loading in the underlying environments. As a result, the deployment of ANSP at a router is optional and does not require any change to the execution environments.

II. DESIGN ISSUES ON ANSP

ANSP unifies the existing programming interfaces of all EEs. Conceptually, the design of ANSP is similar to a middleware design that offers proper translation mechanisms to different EEs. The provisioning of a unified interface is only one part of the whole ANSP platform; there are many other issues that need to be considered.
Apart from translating a set of programming interfaces into the corresponding executable calls in different EEs, other design issues should be covered, e.g.,
• a unified thread library that handles thread operations regardless of the thread libraries used in the EEs;
• a global soft-store that allows information sharing among capsules that may execute over different environments at a given router;
• a unified addressing scheme used across different environments; more importantly, a routing-information exchange mechanism should be designed across EEs to obtain a global view of the unified network;
• a programming model that is independent of any programming language used in active networks;
• and, finally, a translation mechanism that hides the heterogeneity of capsule header structures.

A. Heterogeneity in the Programming Model

Each execution environment provides various abstractions of its services and resources in the form of program calls. The model consists of a set of well-defined components, each of which has its own programming interfaces. Among these abstractions, the capsule-based programming model [10] is the most popular design in active networks. It is used in ANTS [11] and ASP [12], both of which are supported in the ABone. Although the two are developed from the same capsule model, their respective components and interfaces are different; therefore, programs written for one EE cannot run in another EE. The conceptual views of the programming models in ANTS and ASP are shown in Figure 1.

There are three distinct components in ANTS: application, capsule, and execution environment. User interfaces for the active applications exist only at the source and destination routers, where users can specify their customized actions for the network. According to the program function, an application sends one or more capsules to carry out the operations. Both applications and capsules operate on top of an execution environment that exports an interface to its internal programming resources. A capsule executes its program at each router it visits. When it arrives at its destination, the application at the destination may either reply with another capsule or present the arrival event to the user. One drawback of ANTS is that it only allows "bootstrap" applications.

Figure 1. Programming Models in ASP and ANTS.

In contrast, ASP does not limit its users to "bootstrap" applications. Its program interfaces are different from those of ANTS, but ASP also has three components: application client, environment, and AAContext. The application client can run on an active or a non-active host. It can start an active application by simply sending a request message to the EE. The client presents information to users and allows them to trigger actions at a nearby active router. AAContext is the core of a network service, and its specification is divided into two parts. One part specifies its actions at the source and destination routers; its role is similar to that of the application in ANTS, except that it does not provide a direct interface to the user. The other part defines its actions when it runs inside the active network, and it is similar in its functional behavior to a capsule in ANTS.

In order to deal with the heterogeneity of these two models, ANSP needs to introduce a new set of programming interfaces and map its interfaces and execution model onto those within the routers' EEs.
B. Unified Thread Library

Each execution environment must ensure the isolation of instance executions, so that they do not affect each other or access each other's information. There are various ways to enforce this access control. One simple way is to have one virtual machine per instance of an active application, relying on the security design of the virtual machine to isolate services; ANTS is one example that uses this method. Nevertheless, the use of multiple virtual machines requires a relatively large amount of resources and may be inefficient in some cases. Therefore, certain environments, such as ASP, allow network services to run within one virtual machine but restrict their use of services to a limited set of libraries in their packages. For instance, ASP provides its own thread library to enforce access control. Because of the differences between these types of thread mechanism, ANSP devises a new thread library to allow uniform access to the different thread mechanisms.

C. Soft-Store

A soft-store allows a capsule to insert and retrieve information at a router, thus allowing more than one capsule to exchange information within the network. A problem arises, however, when a network service can execute under different environments at one router, especially when the service inserts its soft-store information in one environment and retrieves the data at a later time in another environment at the same router. Because the execution environments are not allowed to exchange information, the network service cannot retrieve its previous data. Therefore, the ANSP framework needs to take this problem into account and provide a soft-store mechanism that allows universal access to the data at each router.

D. Global View of a Unified Network

When an active application is written with ANSP, it can execute on different environments seamlessly. The previously smaller, partitioned networks based on different EEs can now merge into one large active network. It is then necessary to advertise the network topology across these networks. However, different execution environments have different addressing schemes and proprietary routing protocols. In order to merge the partitions together, ANSP must provide a new, unified addressing scheme that is interpretable by any environment through appropriate translations within ANSP. Upon defining the new addressing scheme, a new routing protocol should be designed to operate among the environments and exchange topology information. This allows each environment in the network to obtain a complete view of its network topology.

E. Language-Independent Model

An execution environment can be programmed in any programming language. One of the most commonly used languages is Java [13], owing to its dynamic code-loading capability; in fact, both ANTS and ASP are developed in Java. Nevertheless, the active network architecture shown in Figure 2 does not restrict the use of additional environments developed in other languages. For instance, the active network daemon, anted, in the ABone provides a workspace for executing multiple execution environments within a router. PLAN, for example, is implemented in OCaml and will be deployable on the ABone in the future.
Although the current active network is designed to deploy multiple environments that can be written in any programming language, there is no tool that lets active applications run seamlessly across these environments. Hence, one of the issues ANSP needs to address is the design of a programming model that can work with different programming languages. Although our current prototype considers only ANTS and ASP in its design, PLAN will be the next target, both to address the programming-language issue and to improve the design of ANSP.

Figure 2. ANSP Framework Model.

F. Heterogeneity of Capsule Header Structure

The structures of the capsule headers are different in different EEs. They carry capsule-related information, for example, the capsule type, source, and destination. This information is important when certain decisions need to be made within the target environment. A unified model should allow its program code to be executed on different environments; however, the capsule header prevents the different environments from interpreting this information successfully. Therefore, ANSP should carry out an appropriate translation of the header information before the target environment receives the capsule.

III. ANSP PROGRAMMING MODEL

We have outlined the design issues encountered with ANSP. In the following, the design of the ANSP programming model is discussed. The proposed framework provides a set of unified programming interfaces that allows active applications to work on all execution environments. The framework is shown in Figure 2. It is composed of two layers integrated within the active network architecture, and each layer can operate independently of the other. The upper layer provides a unified programming model to active applications. The lower layer provides the appropriate translation procedure for ANSP applications when they are processed by different environments. This service is necessary because each environment has its own header definition.

The ANSP framework provides a set of programming calls that are abstractions of ANSP services and resources. A capsule-based model is used for ANSP, and it is currently extended to map onto the capsule-based models used in ANTS and ASP; mapping to other models remains future work. The mapping technique in ANSP thus allows any ANSP application to access the same programming resources in different environments through a single set of interfaces. The mapping has to be done in a consistent and transparent manner. Therefore, ANSP appears as an execution environment that provides a complete set of functionality to active applications, while in fact it is an overlay structure that makes use of the services provided by the underlying environments. In the following, high-level functional descriptions of the ANSP model are given; the implementations are then discussed. The ANSP programming model is based upon the interactions among four components: application client, application stub, capsule, and active service base.

Figure 3. Information Flow with the ANSP.

• Application Client: In a typical scenario, an active application requires some means of presenting information to its users, e.g., the state of the network. A graphical user interface (GUI) is designed to operate with the application client if ANSP runs on a non-active host.

• Application Stub: When an application starts, it activates the application client to create a new instance of an application stub at its nearby active node.
The application stub has two responsibilities: to receive users' instructions from the application client, and to receive incoming capsules from the network and perform the appropriate actions. Typically, there are two types of action: replying to or relaying capsules through the network, or notifying the user of an incoming capsule.

• Capsule: An active application may contain several capsule types, each carrying its own program code (also referred to as the forwarding routine). The application defines a protocol that specifies the interactions among capsules as well as with the application stubs. Every capsule executes its forwarding routine at each router it visits along the path between the source and destination.

• Active Service Base: The active service base is designed to export the services of a router's environments and to execute program calls from application stubs and capsules belonging to different EEs. The base is loaded automatically at each router whenever a capsule arrives.

The interactions among the components within ANSP are shown in Figure 3. The designs of the key components of ANSP are discussed in the following subsections.

A. Capsule (ANSPCapsule)

New types of capsules are created by extending the abstract class ANSPCapsule. Each extension must define its own forwarding routine as well as its serialization procedures. These methods are indicated below:

ANSPXdr decode()
ANSPXdr encode()
int length()
boolean execute()

The execution of a capsule in ANSP, listed below, is similar to the process in ANTS.
1. A capsule is in serial binary representation before it is sent to the network. When an active router receives a byte sequence, it invokes decode() to convert the sequence into a capsule.
2. The router invokes the forwarding routine of the capsule, execute().
3. When the capsule has finished its job, it forwards itself to its next hop by calling send(). This call implicitly invokes encode() to convert the capsule into a new serial byte representation; length() is used inside the call to encode() to determine the length of the resulting byte sequence.

ANSP provides an XDR library, called ANSPXdr, to ease the jobs of encoding and decoding.

B. Active Service Base (ANSPBase)

In an active node, the active service base provides a unified interface that exports the available resources of the EEs to the rest of the ANSP components. The services may include thread management, node query, and soft-store operations, as shown in Table I.

TABLE I. ACTIVE SERVICE BASE FUNCTION CALLS

boolean send(Capsule, Address): Transmits a capsule toward its destination using the routing table of the underlying environment.
ANSPAddress getLocalHost(): Returns the address of the local host as an ANSPAddress structure. This is useful when a capsule wants to check its current location.
boolean isLocal(ANSPAddress): Returns true if the argument matches the local host's address, and false otherwise.
createThread(): Creates a new thread that is a class of ANSPThreadInterface (see the unified thread library discussed in Section II.B).
putSStore(key, Object), Object getSStore(key), removeSStore(key): The soft-store operations, which put, retrieve, and remove data, respectively.
forName(PathName): Retrieves a class object corresponding to the given path name. This code retrieval may rely on the code-loading mechanism of the environment when necessary.
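As a rough sketch of how these pieces fit together, the fragment below extends ANSPCapsule. Only the class ANSPCapsule, the four methods above, the Table I calls send() and isLocal(), and deliverToApp() are named in the paper; the class name HopCountCapsule, the capsule fields, and the assumption that a capsule can invoke the Table I calls directly are ours, and the serialization bodies are omitted because the ANSPXdr API is not specified here.

// Illustrative sketch only, not the paper's actual code: a capsule
// that counts the active routers it traverses on the way to its
// destination. Field names and inherited-call availability are assumed.
public class HopCountCapsule extends ANSPCapsule {

    private ANSPAddress dest;  // assumed field: destination address
    private int hops;          // assumed field: routers visited so far

    // Forwarding routine, invoked by the router at every hop.
    public boolean execute() {
        hops++;                   // count the current router
        if (isLocal(dest)) {
            deliverToApp();       // at the destination: hand over to the stub
        } else {
            send(this, dest);     // otherwise forward toward dest (Table I)
        }
        return true;
    }

    // encode(), decode(), and length() would serialize dest and hops
    // with the ANSPXdr library; their bodies are omitted here.
}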
C. Application Client (ANSPClient)

The application client is the interface between users and the nearby active source router. It has the following responsibilities:
1. Code registration: it may be necessary to specify the location and name of the application code in some execution environments, e.g., ANTS.
2. Application initialization: this includes selecting an execution environment in which to execute the application, from among those available at the source router.

Each active application can create an application client instance by extending the abstract class ANSPClient. The extension inherits a method, start(), that automatically handles both the registration and the initialization processes. Its overloaded versions are:

boolean start(args[])
boolean start(args[], runningEEs)
boolean start(args[], startClient)
boolean start(args[], startClient, runningEE)

All overloaded versions of start() accept a list of arguments, args, that are passed to the application stub during its initialization. An optional argument called runningEEs allows an application client to select a particular set of environments, specified by a list of standardized numerical environment IDs (the ANEP IDs), in which to perform code registration. If this argument is not specified, the default setting includes only ANTS and ASP.

D. Application Stub (ANSPApplication)

Application stubs reside at the source and destination routers to initialize an ANSP application after the application client completes the initialization and registration processes. A stub is responsible for receiving and serving capsules from the network, as well as for the actions requested by the client. A new instance is created by extending the abstract class ANSPApplication. This extension includes the definition of a handling routine,

receive(ANSPCapsule)

which is invoked whenever the stub receives a new capsule.
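To make the client/stub division concrete, a skeleton of the two extensions might look as follows. Only ANSPClient, ANSPApplication, start(), and receive() are named in the paper; the class names, the void return type of receive(), and the body logic are illustrative assumptions.

// Illustrative skeleton only; class names and bodies are assumptions.
public class MonitorClient extends ANSPClient {
    public static void main(String[] argv) {
        MonitorClient client = new MonitorClient();
        client.start(argv);  // registers the code and creates the stub
                             // at the nearby active router
    }
}

public class MonitorStub extends ANSPApplication {
    // Invoked whenever a capsule arrives for this application.
    public void receive(ANSPCapsule capsule) {
        // Either notify the user of the arrival, or reply with
        // another capsule through the active service base.
        System.out.println("capsule arrived: " + capsule);
    }
}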
IV. ANSP EXAMPLE: TRACE-ROUTE

A testbed has been created to verify the design correctness of ANSP in heterogeneous environments. There are three types of router setting on this testbed:
1. routers that contain ANTS and an ANSP daemon running on behalf of ASP;
2. routers that contain ASP and an ANSP daemon running on behalf of ANTS;
3. routers that contain both ASP and ANTS.

The prototype is written in Java [13], with a trace-route testing program. The program records the execution environments of all the intermediate routers it visits between the source and destination, and it also measures the round-trip time (RTT) between them. Figure 4 shows the GUI of the application client; in this run it finds three execution environments along the path: ASP, ANTS, and ASP. The execution sequence of the trace-route program is shown in Figure 5.

Figure 4. The GUI for the TRACEROUTE Program.

The TraceCapsule program code is created by extending the ANSPCapsule abstract class. When execute() starts, it checks the Boolean flag returning to determine whether the capsule is returning from the destination; the flag is true if TraceCapsule is traveling back to the source router and false otherwise. When traveling toward the destination, TraceCapsule keeps track of the environments and addresses of the routers it has visited in two arrays, path and trace, respectively. When it arrives at a new router, it calls addHop() to append the router address and its environment to these two arrays. When it finally arrives at the destination, it sets returning to true and forwards itself back to the source by calling send(). Back at the source, it invokes deliverToApp() to deliver itself to the application stub that has been running there. TraceCapsule carries information in its data field through the network by executing encode() and decode(), which encapsulate and de-encapsulate its data using External Data Representation (XDR). The syntax of the ANSP XDR library follows that of the XDR library in ANTS. length() in TraceCapsule returns the data length, which can also be calculated from the primitive types in the XDR library.

Figure 5. Flow of the TRACEROUTE Capsules.

V. CONCLUSIONS

In this paper, we present a new unified layered architecture for active networks, known as Active Network Socket Programming (ANSP). It allows each active application to be written once and run on multiple environments in active networks. Our experiments verify the design of the ANSP architecture, which has been successfully deployed to work harmoniously with ANTS and ASP without any changes to their architectures. In fact, the unified programming interface layer is lightweight and can be dynamically deployed upon request.

ACKNOWLEDGMENT

The authors thank the Nortel Institute for Telecommunications (NIT) at the University of Toronto for access to its computing facilities.

REFERENCES

[1] D. D. Clark and D. L. Tennenhouse, "Architectural Considerations for a New Generation of Protocols," in Proc. ACM SIGCOMM '90, pp. 200-208, 1990.
[2] D. Tennenhouse, J. M. Smith, W. D. Sincoskie, D. J. Wetherall, and G. J. Minden, "A Survey of Active Network Research," IEEE Communications Magazine, pp. 80-86, Jan. 1997.
[3] D. Wetherall, U. Legedza, and J. Guttag, "Introducing New Internet Services: Why and How," IEEE Network Magazine, July/August 1998.
[4] M. Fry and A. Ghosh, "Application Layer Active Networking," Computer Networks, vol. 31, no. 7, pp. 655-667, 1999.
[5] K. W. Chin, "An Investigation into the Application of Active Networks to Mobile Computing Environments," Curtin University of Technology, March 2000.
[6] S. Bhattacharjee, K. L. Calvert, and E. W. Zegura, "Self-Organizing Wide-Area Network Caches," in Proc. IEEE INFOCOM '98, San Francisco, CA, March 29-April 2, 1998.
[7] L. H. Lehman, S. J. Garland, and D. L. Tennenhouse, "Active Reliable Multicast," in Proc. IEEE INFOCOM '98, San Francisco, CA, March 29-April 2, 1998.
[8] D. Decasper, G. Parulkar, and B. Plattner, "A Scalable, High-Performance Active Network Node," IEEE Network, January/February 1999.
[9] E. L. Nygren, S. J. Garland, and M. F. Kaashoek, "PAN: A High-Performance Active Network Node Supporting Multiple Mobile Code Systems," in Proc. 2nd IEEE Conference on Open Architectures and Network Programming (OPENARCH '99), March 1999.
[10] D. L. Tennenhouse and D. J. Wetherall, "Towards an Active Network Architecture," in Proc. Multimedia Computing and Networking, January 1996.
[11] D. J. Wetherall, J. V. Guttag, and D. L. Tennenhouse, "ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols," in Proc. IEEE OPENARCH '98, 1998, pp. 117-129.
[12] B. Braden, A. Cerpa, T. Faber, B. Lindell, G. Phillips, and J. Kann, "Introduction to the ASP Execution Environment": /active-signal/ARP/index.html.
[13] "The Java Language: A White Paper," Tech. Rep., Sun Microsystems, 1998.
Graduation Design English Translation (English)
Industrial Power Plants and Steam Systems

Steam power plants comprise the major generating and process-steam sources throughout the world today; internal-combustion-engine and hydro plants generate less electricity and steam than steam plants. For this reason, we give our initial attention in this book to steam power plants and their design application.

In the steam power field, two major types of plants serve the energy needs of customers: industrial plants, for factories and other production facilities, and central-station utility plants, for residential, commercial, and industrial demands. Of these two types, the industrial power plant probably has more design variations than the utility plant, because the demands on industrial plants tend to be more varied than the demands of the typical utility customer.

To help the power-plant designer better understand the variations in plant design, industrial power plants are considered first in this book. And to provide the widest range of design variables, a power plant serving several process operations and all utilities is considered.

In the usual industrial power plant, the steam generation and distribution system must be capable of responding to a wide range of operating conditions, and it often must be more reliable than the plant's electrical system. The system design is often the last to be settled but the first needed for equipment procurement and plant startup. Because of these complications, the power-plant design evolves slowly, changing over the life of a project.

Process steam loads

Steam is a source of power and heating, and it may be involved in process reactions. Its applications include serving as stripping, fluidizing, agitating, atomizing, ejector-motive, and direct-heating steam. Its quantities, pressure levels, and degrees of superheat are set by such process needs.

As reaction steam, it becomes a part of the process kinetics, as in hydrogen, ammonia, and coal-gasification plants. Although such plants may generate all the steam needed, steam from another source must be provided for startup and backup.

The second major process consumption of steam is for indirect heating, such as in distillation-tower reboilers, amine-system reboilers, process heaters, pipe tracing, and building heating. Because the fluids in these applications generally do not need to be above 350°F, steam is a convenient heat source.

Again, the quantities of steam required for these services are set by the process design of the facility. The process designer has many options for supplying some of these low-level heat requirements, including heat-exchange systems and circulating heat-transfer-fluid systems, as well as steam and electricity. The selection of an option is made early in the design stage and is based predominantly on economic trade-off studies.

Generating steam from process heat affords a means of increasing the overall thermal efficiency of a plant. After providing for the recovery of all the heat possible via exchangers, the process designer may be able to reduce cooling requirements by making provisions for the generation of low-pressure (50-150 psig) steam. Although generation at this level may be feasible from a process-design standpoint, its impact on the overall steam balance must be considered, because low-pressure steam is in surplus in most steam balances, and the generation of additional quantities may worsen the design.
Decisions of this type call for close coordination between the process and utility engineers.

Steam is often generated in the convection section of fired process heaters in order to improve a plant's thermal efficiency. High-pressure steam can be generated in the convection section of process heaters that have radiant heat duty only.

Adding a selective-catalytic-reduction unit for the purpose of lowering NOx emissions may require the generation of waste-heat steam to maintain the correct operating temperature at the catalytic-reduction unit.

Heat from the incineration of waste gases represents still another source of process steam. Waste-heat flue gases from the CO boilers of fluid catalytic crackers and from fluid-coking units, for example, are hot enough to supply the highest pressure level in a steam system.

Selecting pressure and temperature levels

The selection of pressure and temperature levels for a process steam system is based on: (1) moisture content in condensing-steam turbines, (2) metallurgy of the system, (3) turbine water rates, (4) process requirements, (5) water-treatment costs, and (6) type of distribution system.

Moisture content in condensing-steam turbines - The selection of pressure and temperature levels normally starts with the premise that somewhere in the system there will be a condensing turbine. Consequently, the pressure and temperature of the steam must be selected so that the moisture content in the last row of turbine blades will be less than 10-13%. In high-speed machines, a moisture content of 10% or less is desirable. This restriction is imposed in order to minimize erosion of the blades by water particles. It means, in turn, that there is a minimum superheat for a given pressure level, turbine efficiency, and condenser pressure for which the system can be designed.

System metallurgy - A second pressure-temperature concern in selecting the appropriate steam levels is the limitation imposed by metallurgy. Carbon-steel flanges, for example, are limited to a maximum temperature of 750°F because of the threat of graphite (carbides) precipitating at grain boundaries. Hence, at 600 psig and less, carbon-steel piping is acceptable in steam distribution systems. Above 600 psig, alloy piping is required. In a 900- to 1,500-psig steam system, the piping must be either a carbon-1/2 molybdenum or a 1/2 chromium-1/2 molybdenum alloy.

Turbine water rates - Steam requirements for a turbine are expressed as a water rate, i.e., lb of steam per bhp-h or per kWh. The actual water rate is a function of two factors: the theoretical water rate and the turbine efficiency. The first is directly related to the energy difference between the inlet and outlet of a turbine, based on the isentropic expansion of the steam; it is, therefore, a function of the turbine inlet and outlet pressures and temperatures. The second is a function of the size of the turbine, the steam pressure at the inlet, and the mode of turbine operation (i.e., whether the turbine condenses the steam or exhausts some of it at an intermediate pressure level). From an energy standpoint, the higher the pressure and temperature, the higher the overall cycle efficiency.

Process requirements - When steam levels are being established, consideration must be given to process requirements other than those for turbine drivers. For example, steam for process heating will have to be at a high enough pressure to prevent process fluids from leaking into the steam.
Steam for pipe tracing must be at a certain minimum pressure so that low-pressure condensate can be recovered.

Water-treatment costs - The higher the steam pressure, the costlier the boiler feedwater treatment. Above 600 psig, the feedwater almost always must be demineralized; below 600 psig, softening may be adequate. The water may also have to be of high quality if the steam is used in the process, such as in reactions over a catalyst bed (e.g., in hydrogen production).

Type of distribution system - There are two types of systems: local, as exemplified by powerhouse distribution, and complex, by which steam is distributed to many units in a process plant. For a small local system, it is not impractical from a cost standpoint for steam pressures to be in the 600-1,500-psig range. For a large system, maintaining pressures within the 150-600-psig range is desirable because of the cost of meeting the alloy requirements of a higher-pressure steam distribution system.

Because of all the foregoing factors, the steam system in a chemical process complex or oil refinery frequently ends up as a three-level arrangement. The highest level, 600 psig, serves primarily as a source of power. The intermediate level, 150 psig, is ideally suited to small emergency turbines, tracing off the plot, and process heating. The low level, normally 50 psig, can be used for heating services, tracing within the plot, and process requirements. A higher, fourth level is normally not justified, except in special cases, as when a large amount of electric power must be generated.

Whether or not an extraction turbine will be included in the process has a bearing on the intermediate-pressure level selected, because the extraction pressure should be less than 50% of the high-pressure level, to take into account the pressure drop through the throttle valve and the nozzles of the high-pressure section of the turbine.

Drivers for pumps and compressors

The choice between a steam and an electric driver for a particular pump or compressor depends on a number of things, including the operating philosophy. In the event of a power failure, it must be possible to shut down a plant in an orderly and safe manner if normal operation cannot be continued. For an orderly and safe shutdown, certain services must be available during a power failure: (1) instrument air, (2) cooling water, (3) relief and blowdown pump-out systems, (4) boiler feedwater pumps, (5) boiler fans, (6) emergency power generators, and (7) fire-water pumps.

These services are normally supplied by steam or diesel drivers, because a plant's steam or diesel emergency system is considered more reliable than an electrical tie-line.

The procedure for shutting down process units must be analyzed for each type of process plant and each specific design. In general, the following represent the minimum services for which spare pumps driven by steam must be provided: column reflux, bottoms and purge-oil circulation, and heater charging. Most important is to maintain cooling; next, to be able to pump the plant's inventory safely into tanks.

Driver selection cannot be generalized; a plan and procedure must be developed for each process unit.

The control required for a process is at times another consideration in the selection of a driver. For example, a compressor may be controlled via flow or suction pressure. The ability to vary driver speed, easily obtained with a steam turbine, may be the basis for selecting a steam driver instead of a constant-speed induction electric motor.
This is especially important when the molecular weight of the gas being compressed may vary, as in catalytic-cracking and catalytic-reforming processes.

In certain types of plants, gas flow must be maintained to prevent uncontrollable high-temperature excursions during shutdown. For example, hydrocrackers are purged of heavy hydrocarbons with recycle gas to prevent the exothermic reactions from producing high bed temperatures. Steam-driven compressors can do this during a power failure.

Each process operation must be analyzed from such a safety viewpoint when selecting drivers for critical equipment. The size of a relief and blowdown system can be reduced by installing steam drivers. In most cases, the size of such a system is based on a total power failure. If heat-removal services are powered by steam drivers, the relief system can be smaller. For example, a steam driver will maintain flow in the pump-around circuit for removing heat from a column during a power failure, reducing the relief load imposed on the flare system.

Equipment support services (such as lubrication and seal-oil systems for compressors) that could be damaged during a loss of power should also be powered by steam drivers.

Driver size can also be a factor. An induction electric motor requires large starting currents - typically six times the normal load. The drop in voltage caused by the startup of such a motor imposes a heavy transient demand on the electrical distribution system. For this reason, drivers larger than 10,000 hp are normally steam turbines, although synchronous motors as large as 25,000 hp are used.

The reliability of life-support facilities - e.g., building heat, potable water, pipe tracing, emergency lighting - during power failures is of particular concern in cold climates. In such a case, at least one boiler should be equipped with steam-driven auxiliaries to provide these services.

Lastly, steam drivers are also selected for the purpose of balancing steam systems and avoiding large amounts of letdown between steam levels. Such decisions regarding drivers are made after the steam balances have been refined and the distribution system has been fully defined. There must be sufficient flexibility to allow balancing of the steam system under all operating conditions.

Selecting steam drivers

After the number of steam drivers and their services have been established, the utility or process engineer estimates their steam consumption for making the steam balance. The standard method of doing this is to use the isentropic expansion of steam, corrected for turbine efficiency. Actual steam consumption by a turbine is determined via:

SR = (TSR)(bhp)/E

Here, SR = actual steam rate, lb/h; TSR = theoretical steam rate, lb/(bhp-h); bhp = turbine brake horsepower; and E = turbine efficiency, expressed as a fraction.

When exhaust steam can be used for process heating, the highest thermodynamic efficiency can be achieved by means of backpressure turbines. Large drivers, which are of high efficiency and require low theoretical steam rates, are normally supplied from the high-pressure header, thus minimizing steam consumption.

Small turbines that operate only in emergencies can be allowed to exhaust to atmosphere. Although their water rates are poor, the water lost in short-duration operation may not represent a significant cost. Such turbines obviously play a small role in steam-balance planning.
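As a worked illustration of this formula (the numbers are assumed for the example, not taken from any particular machine): a turbine with a theoretical steam rate of 20 lb/(bhp-h), delivering 1,000 bhp at an efficiency of 65%, consumes SR = (20)(1,000)/0.65 ≈ 30,800 lb/h. Note that a lower efficiency raises the actual steam rate, because the turbine extracts less of the theoretically available energy from each pound of steam.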
Constructing steam balances

After the process and steam-turbine demands have been established, the next step is to construct a steam balance for the chemical complex or oil refinery. A sample balance is shown in Fig. 1-4. It shows steam production and consumption, the header systems, the letdown stations, and the boiler plant, and it illustrates a normal (winter) case.

It should be emphasized that there is not one balance but a series, representing a variety of operating modes. The object of the balances is to determine the design basis for establishing boiler size, letdown-station and deaerator capacities, boiler feedwater requirements, and steam flows in the various parts of the system.

The steam balances should cover the following operating modes: normal, all units operating; winter and summer conditions; shutdown of major units; startup of major units; loss of the largest condensate source; power failure with the flare in service; loss of large process steam generators; and variations in consumption by large steam users. From 50 to 100 steam balances could be required to adequately cover all the major impacts on the steam system of a large complex.

At this point, the general basis of the steam-system design should have been developed through the completion of the following work:
1. All significant loads have been examined, with particular attention focused on those for which there is relatively little design freedom - i.e., reboilers, sparing steam for process units, and large turbines required because of electric-power limitations and for shutdown safety.
2. Loads have been listed for which the designer has some liberty in selecting drivers. These selections are based on analyses of cost competitiveness.
3. Steam pressure and temperature levels have been established.
4. The site plan has been reviewed to ascertain where it is not feasible to deliver steam or recover condensate because piping costs would be excessive.
5. Data on the process units have been collected according to the pressure level and use of steam - i.e., for the process, condensing drivers, and backpressure drivers.
6. After Step 5, the system is balanced by trial-and-error calculations or computerized techniques to determine boiler, letdown, deaerator, and boiler feedwater requirements.
7. Because the possibility of an electric power failure normally imposes one of the major steam requirements, normal operation and the eventuality of such a failure must both be investigated, as a minimum.

Checking the design basis

After the foregoing steps have been completed, the following should be checked:

Boiler capacity - Installed boiler capacity would be the maximum calculated (with an allowance of 10-20% for uncertainties in the balance), corrected for the number of boilers operating (and on standby). The balance plays a major role in establishing the normal-case boiler specifications, both number and size. Maximum firing typically is based on the emergency case. Normal firing typically establishes the number of boilers required, because each boiler will have to be shut down once a year for the code-required drum inspection; the full-firing levels of the remaining boilers are set by the normal steam demand. The number of units required (e.g., three 50% units, four 33% units, etc.) in establishing installed boiler capacity is determined from cost studies. It is generally considered double-jeopardy design to assume that a boiler will be out of service during a power failure.
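To make the sizing arithmetic concrete (an illustrative calculation only, using the 2,200,000-lb/h emergency figure assumed later in this chapter): a 10% allowance on that balance gives a design load of about 2,420,000 lb/h. Four 33% units of roughly 810,000 lb/h each would provide about 3,240,000 lb/h installed; all four together cover the emergency design load with margin, and any three (about 2,430,000 lb/h) can carry the load while one boiler is down for its annual drum inspection under normal, lower demand. Whether such an arrangement is preferable to, say, three 50% units is settled by the cost studies mentioned above.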
Minimum boiler turndown - Most fuel-fired boilers can be operated down to approximately 20% of the maximum continuous rate. The minimum load should not be expected to fall below this level.

Differences between normal and maximum loads - If the maximum load results from an emergency (such as a power failure), consideration should be given to shedding process steam loads under this condition in order to minimize installed boiler capacity. However, the consequences of shedding should be investigated by the process designer and the operating engineers to ensure the safe operation of the entire process.

Low-level steam consumption - The key to any steam balance is the disposition of low-level steam. Surplus low-level steam can be reduced only by including more condensing steam turbines in the system, or by devising more process applications for it, such as absorption refrigeration for cooling process streams and Rankine-cycle systems for generating power. In general, balancing the supply and consumption of low-level steam is a critical factor in the design of the steam system.

Quantity of steam at pressure-reducing stations - Because useful work is not recovered from steam passing through a pressure-reducing station, such flow should be kept to a minimum. In the Fig. 1-5 150/50-psig station, a flow of only 35,000 lb/h was established as normal for this steam-balance case (normal, winter). The loss of steam users on the 50-psig system should be considered, particularly of the large users, because the shutdown of one may demand that the 150/50-psig station close beyond its controllable limit. If this happened, the 50-psig header would be out of control, and an immediate pressure buildup in the header would begin, setting off the safety relief valves.

The station's full-open capacity should also be checked to ensure that it can make up any 50-psig steam that may be lost through the shutdown of a single large 50-psig source (a turbine sparing a large electric motor, for example). It would be undesirable for the station to be sized so that it opens more than 80%. In some cases, rangeability requirements may dictate two valves (one small and one large).

Intermediate pressure level - If large steam users or suppliers may come on stream or go off stream, the normal (day-to-day) operation should be checked. No such change in normal operation should result in a significant upset (e.g., relief valves set off, or system pressure control lost). If a large load is lost, the steam supply should be reduced at the letdown station. If the load suddenly increases, the 600/150-psig station must be capable of supplying the additional steam. If steam generated via the process disappears, the station must be capable of making up the load. If 150-psig steam is generated unexpectedly, the 600/150-psig station must be able to handle the cutback.

The important point here is that where the steam flow could rise to 700,000 lb/h, this flow should be reduced by a cutback at the 600/150-psig station, not by an increase in the flow to the lower-pressure level, because this steam would have nowhere to go. The normal (600/150-psig) letdown station must be capable of handling some of the negative load swings, even though, overall, this letdown needs to be kept to a minimum.

On the other hand, shortages of steam at the 150-psig level can be made up relatively easily via the 600/150-psig station.
Such shortages are routinely small in quantity or duration, or both (startup, purging, electric-drive maintenance, process-unit shutdown, etc.).

High-pressure level - Checking the high-pressure level is generally more straightforward, because rate control takes place directly at the boilers. Firing can be raised or lowered to accommodate a shortage or a surplus.

Typical steam-balance cases

The Fig. 1-4 steam balance represents steady-state conditions: winter operation, all process units operating, and no significant unusual demands for steam.

An analysis similar to the foregoing might also be required for the normal summertime case, in which a single upset must not jeopardize control but the load may be less (no tank heating, pipe tracing, etc.).

The balance representing an emergency (e.g., loss of electric power) is significant. In this case, the pertinent test point is the system's ability simply to weather the upset, not to maintain normal, stable operation. The maximum relief pressure that would develop in any of the headers represents the basis for sizing relief valves. The loss of boiler feedwater or condensate return, or both, could result in a major upset, or even a shutdown.

Header pressure control during upsets

At the steady-state conditions associated with the multiplicity of balances, boiler capacity can be adjusted to meet user demands. However, boiler load cannot be changed quickly to accommodate a sharp upset; the response rate is typically limited to 20% of capacity per minute. Therefore, other elements must be relied on to control header pressures during transient conditions.

The roles of several such elements in controlling the pressures in the three main headers during transient conditions are listed in Table 1-3. A control system having these elements will result in a steam system capable of dealing with the transient conditions experienced in moving from one balance point to another.

Tracking steam balances

Because of schedule constraints, steam balances and boiler size are normally established early in the design stage. These determinations are based on assumptions regarding turbine efficiencies, process steam generated in waste-heat furnaces, and other quantities of steam that depend on purchased equipment. Therefore, a sufficient number of steam balances should be tracked through the design period to ensure that the equipment purchased will satisfy the original design concept of the steam system.

This tracking represents an excellent application for a utility database system and a steam-system linear-programming model. During the course of the mechanical design of one large "grassroots" complex, 40 steam balances were continuously updated for changes in steam loads via such an application.

Cost tradeoffs

To design an efficient but least-expensive system, the designer ideally develops a total minimum-cost curve, incorporating all the pertinent costs related to capital expenditures, installation, fuel, utilities, operations, and maintenance, and performs a cost study of the final system. However, because the designer is under the constraint of keeping to a project schedule, major, highly expensive equipment must be ordered early in the project, when many key parts of the design puzzle are not yet available (e.g., a complete load summary, turbine water rates, equipment efficiencies, and utility costs).

A practical alternative is to rely on comparative-cost estimates, as are conventionally used in assisting with engineering decision points.
This approach is particularly useful in making early equipment selections when fine-tuning is not likely to alter the decisions, such as those regarding the number of boilers required, whether the boilers should be shop-fabricated or field-erected, and the practicality of generating steam from waste heat or via cogeneration.

The significant elements of a steam-system comparative-cost study are the costs for: equipment and installation; ancillaries (i.e., miscellaneous items required to support the equipment, such as additional stacks, upgraded combustion controls, more extensive blowdown facilities, etc.); operation (annual); maintenance (annual); and utilities.

The first two costs may be obtained from in-house data or from vendors. Operating and maintenance costs can be factored from the capital cost of the equipment, based on an assessment of the reliability of the purchased equipment.

Utility costs are generally the most difficult to establish at an early stage, because they frequently depend on the site of the plant. Some examples of such costs are: purchased fuel gas, $5.35/million Btu; raw water, $0.60/1,000 gal; electricity, $0.07/kWh; and demineralized boiler feedwater, $1.50/1,000 gal. The value of steam at the various pressure levels can be developed [5].

Let it be further assumed that the emergency balance requires 2,200,000 lb/h of steam (all boilers available). Listed in Table 1-4 are some combinations of boiler installations that meet the design conditions previously stipulated.

Table 1-4 indicates that any of several combinations of power-boiler number and size could meet both the normal and the emergency demand. Therefore, a comparative-cost analysis would be made to assist in making an early decision regarding the number and size of the power boilers. (Table 1-4 is based on field-erected, industrial-type boilers. Conventional sizing of this type of boiler might range from 100,000 lb/h through 2,000,000 lb/h each.)

An alternative would be the packaged-boiler option (although it does not seem practical at this load level). Because it is shop-fabricated, this type of boiler affords a significant saving in field-installation cost. Such boilers are available up to a nominal capacity of 100,000 lb/h, with some versions up to 250,000 lb/h.

Selecting the turbine water rate (i.e., efficiency) represents another major cost concern. Beyond the recognized payout period (e.g., 3 years), the cost of drive steam can be significant in comparison with the equipment capital cost. The typical 30% efficiency of the medium-pressure backpressure turbine can be boosted significantly.

Driver selections are frequently made with the help of cost-tradeoff studies, unless overriding considerations preclude a drive medium. Electric pump drives are typically recommended on the basis of such studies.

Steam tracing has long been the standard way of winterizing piping, not only because of its history of successful performance but also because it is an efficient way to use low-pressure steam.

Design considerations

As the steam system evolves, the designer identifies steam loads and pressure levels, locates the steam loads, checks safety aspects, and prepares cost-tradeoff studies, in order to provide low-cost energy safely, always remaining aware of the physical entity that will arise from the design.

How are design concepts translated into a design document?
And what basic guidelines will ensure that the physical plant represents what was intended conceptually?

Basic to achieving these ends is the piping and instrument diagram (familiar as the P&ID). Although it is drawn up primarily for the piping designer's benefit, it also plays a major role in communicating the process-control strategy to the instrumentation designer, as well as in conveying specialty information to the electrical, civil, structural, mechanical, and architectural engineers. It is the most important document for representing the specification of the steam system.
Graduation Design Foreign-Literature Translation Template
Undergraduate Graduation Design (Thesis) Foreign-Literature Translation

Graduation design (thesis) title: Design of the Hydraulic System and Electrical Control System for the Power Slide of a Modular Drilling Machine
Foreign-language title: Drilling Machine
Translated title: The Modular Drilling Machine
Student name: Ma Lili
Major: Mechanical Design, Manufacturing and Automation, Class 0701
Advisor: Wang Jie
Review date:
The drilling machine is a machine for making holes with removal of chips, and it is used to create or enlarge holes. There are many different types of drilling machine for different jobs, but they can basically be broken down into two categories.

The bench drill is used for drilling holes through raw materials such as wood, plastic, and metal, and it gets its name because it is bolted to a bench for stability, so that larger pieces of work can be drilled safely. The pillar drill is a larger version that stands upright on the floor. It can do exactly the same work as the bench drill, but because of its size it can be used to drill larger pieces of material and produce bigger holes.

Most modern drilling machines are digitally automated using the latest computer numerical control (CNC) technology. Because they can be programmed to produce precise results, over and over again, CNC drilling machines are particularly useful for pattern-hole drilling, small-hole drilling, and angled holes.

If you need your drilling machine to work at high volume, a multi-spindle drill head will allow you to drill many holes at the same time. These are also sometimes referred to as gang drills.

Twist drills are suitable for wood, metal, and plastics and can be used for both hand and machine drilling, with a drill set typically including sizes from 1 mm to 14 mm. A type of drilling machine known as the turret drill stores tools in a turret and positions them in the order needed for the work.

Drilling machines, which can also be referred to as bench-mounted drills or floor-standing drills, are a fixed style of drill that may be mounted on a stand or bolted to the floor or workbench. A drilling machine consists of a base, column, table, spindle, and drill head, usually driven by an induction motor. The head typically has a set of three handles radiating from a central hub which, when turned, move the spindle and chuck vertically, parallel to the axis of the column. The table can be adjusted vertically and is generally moved by a rack and pinion; some older models, however, rely on the operator to lift and re-clamp the table in position. The table may also be offset from the spindle's axis and, in some cases, rotated to a position perpendicular to the column.

The size of a drill press is typically measured in terms of swing, which is defined as twice the throat distance - the distance from the centre of the spindle to the closest edge of the pillar. (A machine with a 7.5-in. throat, for example, is a 15-in. drill press.) Speed change on these drilling machines is achieved by manually moving a belt across a stepped-pulley arrangement. Some drills add a third stepped pulley to increase the speed range. Modern drilling machines can, however, use a variable-speed motor in conjunction with the stepped-pulley system. Some machine-shop drilling machines are equipped with a continuously variable transmission, giving a wide speed range as well as the ability to change speed while the machine is running.

Machine drilling has a number of advantages over a hand-held drill. Firstly, it requires much less effort to apply the drill to the workpiece: the movement of the chuck and spindle is by a lever working on a rack and pinion, which gives the operator considerable mechanical advantage. The use of a table also allows a vice or clamp to be used to position and restrain the work, which makes the operation much more secure.
In addition to this, the angle of the spindle is fixed relative to the table, allowing holes to be drilled accurately and repetitively. Most modern drilling machines are digitally automated using the latest computer numerical control (CNC) technology; because they can be programmed to produce precise results, over and over again, CNC drilling machines are particularly useful for pattern hole drilling, small hole drilling and angled holes. Drilling machines are often used for miscellaneous workshop tasks such as sanding, honing or polishing, by mounting sanding drums, honing wheels and various other rotating accessories in the chuck.

Drilling machines are used for drilling, boring, countersinking, reaming, and tapping. Several types are used in metalworking: vertical drilling machines, horizontal drilling machines, center-drilling machines, gang drilling machines, multiple-spindle drilling machines, and special-purpose drilling machines.

Vertical drilling machines are the most widely used in metalworking. They are used to make holes in relatively small workpieces in individual and small-lot production; they are also used in maintenance shops. The tool, such as a drill, countersink, or reamer, is fastened on a vertical spindle, and the workpiece is secured on the table of the machine. The axes of the tool and the hole to be drilled are aligned by moving the workpiece. Programmed control is also used to orient the workpiece and to automate the operation. Bench-mounted machines, usually of the single-spindle type, are used to make holes up to 12 mm in diameter, for instance, in instrument-making.

Heavy and large workpieces, and workpieces with holes located along a curved edge, are worked on radial drilling machines. Here the axes of the tool and the hole to be drilled are aligned by moving the spindle relative to the stationary workpiece. Horizontal drilling machines are usually used to make deep holes, for instance, in axles, shafts, and gun barrels for firearms and artillery pieces. Center-drilling machines are used to drill centers in the ends of blanks; they are sometimes equipped with supports that can cut off the blank before centering. Gang drilling machines with more than one drill head are used to produce several holes at one time. Multiple-spindle drilling machines feature automation of the work process. Such machines can be assembled from several standardized, self-contained heads with electric motors and reduction gears that rotate the spindle and feed the head. There are one-, two-, and three-sided multiple-spindle drilling machines with vertical, horizontal, and inclined spindles for drilling and tapping. Several dozen such spindles may be mounted on a single machine. Special-purpose drilling machines, on which a limited range of operations is performed, are equipped with various automated devices. Multiple operations on workpieces are performed by various combination machines.
These include one- and two-sided jig boring machines; drilling-tapping machines (usually gang drilling machines with reversible thread-cutting spindles); milling-type drilling machines and drilling-mortising machines used mainly for woodworking; and automatic drilling machines. In woodworking, much use is made of single- and multiple-spindle vertical drilling machines; one- and two-sided horizontal drilling machines (usually with multiple spindles); and machines equipped with a swivel spindle that can be positioned vertically and horizontally. In addition to drilling holes, woodworking machines may be used to make grooves, recesses, and mortises and to remove knots.
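The swing sizing rule mentioned earlier (swing equals twice the throat distance) is easy to express numerically. Below is a minimal Python sketch; the function name and the 200 mm example value are our own illustration and do not come from the source text.

```python
# Minimal sketch of the drill-press "swing" rule described in the text:
# swing = 2 x throat distance, where the throat distance is measured
# from the spindle centre to the closest edge of the pillar.

def swing(throat_distance_mm: float) -> float:
    """Return the nominal swing for a given throat distance (both in mm)."""
    return 2.0 * throat_distance_mm

# Hypothetical example: a 200 mm throat distance gives a 400 mm swing,
# i.e. the machine can drill to the centre of a 400 mm wide workpiece.
print(swing(200.0))  # -> 400.0
```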
Graduation Design English Translation
INTRODUCTION
The Yong Jong Grand Bridge is located at the west coast of Korea, and was constructed to connect the new international airport at Yong Jong Island, Inchon, and the mainland. The bridge is 4.4 km long and is composed of three different bridge types—a suspension bridge (550 m), a truss bridge (2,250 m), and steel box bridges. The Grand Bridge (Gil and Cho 1998) has double decks; it will carry six highway lanes on the upper deck, and four highway lanes and dual tracks of a railway on the lower deck. The approach truss bridges are double-deck, Warren truss type bridges. The truss bridges are three-span continuous with a length of 125 m for each span. The width of the truss bridge is 36.1 m.
shoe and dead load of the neighboring truss bridges. The self-anchored suspension bridge typically has limited space for the main cable anchorage, which is located at the stiffening truss, so the air spinning method, which can contain more wires per strand than the parallel wire strand method, is employed to erect the main cables.
(Complete Version) Graduation Design English Translation and Format
The Restructuring of Organizations

Throughout the 1990s, mergers and acquisitions were a major source of corporate restructuring, affecting millions of workers and their families. This form of restructuring often is accompanied by downsizing. Downsizing is the process of reducing the size of a firm by laying off or retiring workers early. The primary objectives of downsizing are similar in U.S. companies and those in other countries:
● cutting cost,
● spurring decentralization and speeding up decision making,
● cutting bureaucracy and eliminating layers of management.

Organizations today have fewer layers of management than they did five years ago. One consequence of this trend is that today's managers supervise larger numbers of subordinates who report directly to them. In 1990, only about 20 percent of managers supervised twelve or more people, and 54 percent supervised six or fewer.

Because of downsizing, first-line managers now work with staff specialists in areas such as quality control, resources, and industrial engineering, who provide guidance and support. First-line managers participate in the production processes and other line activities and coordinate the efforts of the specialists as part of their jobs. At the same time, the workers that first-line managers supervise are less willing to put up with authoritarian management. Employees want their jobs to be more creative, challenging, fun, and satisfying, and want to participate in decisions affecting their work. Thus self-managed work teams that bring workers and first-line managers together to make joint decisions to improve the way they do their jobs offer a solution to both supervision and employee expectation problems.

Downsizing does not always mean layoffs, however. Sometimes entire divisions of a firm are simply spun off from the main company to operate on their own as new, autonomous companies. The firm that spun them off may then become one of their most important customers or suppliers. That is how AT&T "downsized" the old Bell Labs unit, which is now known as Lucent Technologies. Now Lucent is free to enter into contracts with companies other than AT&T. This method of downsizing is usually called outsourcing.

Outsourcing means letting other organizations perform a needed service and/or manufacture needed parts or products. Nike outsources the production of its shoes to low-cost plants in South Korea and China and imports the shoes for distribution in North America. These same plants also ship shoes to Europe and other parts of Asia for distribution. Thus today's managers face a new challenge: to plan, organize, lead, and control a company that may operate as a modular corporation. The modular corporation is most common in three industries: apparel, auto manufacturing, and electronics. The most commonly outsourced function is production. By outsourcing production, a company can switch to the supplier best suited to a customer's needs.

Decisions about what to outsource and what to keep in-house must be weighed carefully, but in many cases contracting production to another company is a sound business decision, at least for U.S. manufacturers. It appears to lower the unit cost of production by relieving the company of some overhead, and it frees the company to allocate scarce resources to activities in which the company is strongest. Examples of modular companies are Dell Computer, Nike, Liz Claiborne fashions, and chip designer Cyrix.

As organizations downsize and outsource functions, they become flatter and smaller.
Unlike the behemoths of the past, the new, smaller firms are less like autonomous fortresses and more like nodes in a network of complex relationships. This approach, called the network form of organization, involves establishing strategic alliances among several entities.

In Japan, cross-ownership and alliances among firms, called keiretsu, are common. One large automaker, for example, holds stakes in both foreign and U.S. auto parts producers. It also owns 49 percent of Hertz, the car rental company that is also a major customer. Other alliances include involvement in several research consortia. In the airline industry, a common type of alliance is between an airline and an airframe manufacturer. For example, Delta recently agreed to buy all its aircraft from Boeing, and other airlines have made similar commitments. Through these agreements, Boeing guarantees that it will be able to sell specified models of its aircraft, and the airlines can begin to adapt their operations to the models they will be flying in the future. Thus both sides expect to reap benefits from these arrangements for many years.

Network forms of organization are prevalent in industries that need access to university research and in small, creative organizations. For example, the U.S. biotechnology industry is characterized by a network of relationships between new biotechnology firms dedicated to research and new product development and established firms in industries that can use these new products, such as pharmaceuticals. In return for sharing technical information with the larger firms, the smaller firms gain access to their partners' resources for product testing, marketing, and distribution. Big pharmaceutical firms such as Merck or Eli Lilly gain from such partnerships because the smaller firms typically have a much shorter development cycle than the larger firms.

Being competitive increasingly requires establishing and managing strategic alliances with other firms. In a strategic alliance, two or more firms agree to cooperate in a venture that is expected to benefit both firms.
Graduation Design English Translation
The first dam for which there are reliable records was built on the Nile River sometime before 4000 B.C. It was used to divert the Nile and provide a site for the ancient city of Memphis. The oldest dam still in use is the Almanza Dam in Spain, which was constructed in the sixteenth century. With the passage of time, materials and construction have improved, making possible the erection of such large dams as the Nurek Dam, which is being constructed in the U.S.S.R. on the Vakhsh River near the border of Afghanistan. This dam will be 1017 ft (333 m) high, of earth and rock fill. The failure of a dam may cause serious loss of life and property; consequently, the design and maintenance of dams are commonly under government surveillance. In the United States over 30,000 dams are under the control of state authorities. The 1972 Federal Dam Safety Act (PL 92-367) requires periodic inspections of dams by qualified experts. The failure of the Teton Dam in Idaho in June 1976 added to the concern for dam safety in the United States.

1. Types of Dams

Dams are classified on the basis of the type and materials of construction as gravity, arch, buttress, and earth. The first three types are usually constructed of concrete. A gravity dam depends on its own weight for stability and is straight in plan, although sometimes slightly curved. Arch dams transmit most of the horizontal thrust of the water behind them to the abutments by arch action and have thinner cross sections than comparable gravity dams. Arch dams can be used only in narrow canyons where the walls are capable of withstanding the thrust produced by the arch action. There are many types of buttress dams. Earth dams are embankments of rock or earth with provision for controlling seepage by means of an impermeable core or upstream blanket. More than one type of dam may be included in a single structure. Curved dams may combine both gravity and arch action to achieve stability. Long dams often have a concrete river section containing spillway and sluice gates, and earth or rock-fill wing dams for the remainder of their length.

The selection of the best type of dam for a given site is a problem in both engineering feasibility and cost. Feasibility is governed by topography, geology and climate. For example, because concrete spalls when subjected to alternate freezing and thawing, arch and buttress dams with thin concrete sections are sometimes avoided in areas subject to extreme cold. The relative cost of the various types of dams depends mainly on the availability of construction materials near the site and the accessibility of transportation facilities. Dams are sometimes built in stages, with the second or later stages constructed a decade or longer after the first stage.

The height of a dam is defined as the difference in elevation between the roadway, or spillway crest, and the lowest part of the excavated foundation. However, figures quoted for the heights of dams are often determined in other ways. Frequently the height is taken as the net height above the old riverbed.
Graduation Design Translation Final Draft, English-Chinese Parallel Text (Out-of-Print Version)
A Comparison of AASHTO Bridge Load Rating Methods

Authors: Cristopher D. Moen, Ph.D., P.E., Virginia Tech, Blacksburg, VA, cmoen@; Leo Fernandez, P.E., TranSystems, New York, NY, lafernandez@

INTRODUCTION

The capacity of an existing highway bridge is traditionally quantified with a load rating factor. This factor, when multiplied by the design live load magnitude, describes the total live load a bridge can safely carry. The load rating factor, RF, is related to the capacity of the controlling structural component in the bridge, C, and the dead load D and live load L applied to that component with the equation:

$$RF = \frac{C - D}{L} \qquad (1)$$

Visual bridge inspections provide engineers with information to quantify the degradation in structural integrity of a bridge (i.e., the reduction in C). The trends in RF over time can be employed by bridge owners to make decisions regarding bridge maintenance and replacement. For example, when a bridge is first constructed, RF = 1.3 means that a bridge can safely carry 1.3 times the weight of its design live load (i.e., that C − D, the existing capacity after accounting for dead load, is 1.3 times the design live load L). If the RF decreases to 0.8 after 20 years of service, deterioration of the primary structural components has most likely occurred and rehabilitation or replacement should be considered.
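As a quick numerical illustration of Equation (1), the Python sketch below evaluates RF for hypothetical component demands; the moment values are invented, chosen only so that RF = 1.3 as in the example above, and do not describe any real bridge.

```python
# Sketch of Equation (1): RF = (C - D) / L.
# C, D, and L are hypothetical moments for the controlling component.

def load_rating_factor(C: float, D: float, L: float) -> float:
    """Load rating factor: reserve capacity (C - D) per unit design live load L."""
    return (C - D) / L

# Invented values chosen so that RF = 1.3, matching the text's example:
print(load_rating_factor(C=2600.0, D=1300.0, L=1000.0))  # -> 1.3
```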
Equation (1) is a simple idea, but C, D, and L can be highly variable and difficult to characterize depending upon the bridge location, bridge type, daily traffic flow, structural system (e.g., simple or continuous span) and choice of construction materials (e.g., steel, reinforced or prestressed concrete, composite construction). The American Association of State Highway and Transportation Officials (AASHTO) Manual for Condition Evaluation of Bridges (MCEB) provides a formal load rating procedure to assist engineers in the evaluation of existing bridges [AASHTO 1994 with interims through 2003]. The MCEB provides two load rating methods, one based on an allowable stress approach (ASR) and another based on a load factor approach (LFR). Both the ASR and LFR methods are consistent with the design loading and capacity calculations outlined in the AASHTO Standard Specification for the Design of Highway Bridges [AASHTO 2002]. Recently momentum has shifted towards a probabilistic-based bridge design approach with the publication of the AASHTO LRFD Bridge Design Specifications [AASHTO 2007]. Bridges designed with this code have a uniform probability of failure (i.e., a uniform reliability). The AASHTO Manual for Condition Evaluation and Load and Resistance Factor Rating (LRFR) of Highway Bridges [AASHTO 2003] extends this idea of uniform reliability from LRFD to the load rating of existing bridges and is currently the load rating method recommended (over the ASR and LFR methods) by the Federal Highway Administration (FHWA).

The transition from ASR and LFR to LRFR bridge load rating methodology represents a positive shift towards a more accurate and rational bridge evaluation strategy. Bridge owners are optimistic that the LRFR load rating methodology will improve bridge safety and economy, but they are also currently dealing with the tough questions related to its implementation. Why do ASR, LFR, and LRFR methods produce different load rating factors for the same bridge? Should we change the posting limit on a bridge if the LRFR rating is lower than the MCEB ratings? What are the major philosophical differences between the three methods?

It is the goal of this paper to answer some of these questions (and at the same time dispel common myths) with a succinct summary of the history of the three methods. A comparison of the LFR and LRFR methods for a typical highway bridge will also be presented, with special focus on the benefits inherent in the rational, probabilistic approach of the LRFR load rating method. This paper is also written to serve as an introduction to load rating methodologies for students and engineers new to the bridge evaluation field.

SUMMARY OF EXISTING LITERATURE

Several reports have been published which summarize the development of AASHTO design and load rating methodologies. FHWA NHI Report 07-019 is an excellent historical reference describing the evolution of AASHTO live loadings (including the HS20-44 truck) and load factor design [Kulicki 2007b]. NCHRP Report 368 describes the development of the AASHTO LRFD design approach [Nowak 1999], and is supplemented by the NCHRP Project No. 20-7/186 report [Kulicki 2007a] with additional load factor calibration research. NCHRP Report 454 documents the calibration of the AASHTO LRFR load factors [Moses 2000], and NCHRP Web Document 28 describes the implementation of the LRFR load rating method [NCHRP 2001]. The NCHRP Project 20-7/Task 122 report supplements Web Document 28 with a detailed comparison of the LRFR and LFD load rating approaches [Mertz 2005].

AASHTO ALLOWABLE STRESS RATING METHOD

The Allowable Stress Rating (ASR) method is the most traditional of the three load rating methods, primarily because the performance of a bridge is evaluated under service conditions in the load rating equation [AASHTO 1994]:

$$RF = \frac{C - A_1 D}{A_2 L (1 + I)} \qquad (2)$$

C is calculated with a "working stress" approach where the capacity of the primary structural members is limited to a proportion of the assumed failure stress (e.g., 0.55F_y for structural steel in tension and 0.3f'_c for concrete in compression). Consistent with the service-level approach, the demand dead load D and live load L are unfactored, i.e., A_1 = 1.0 and A_2 = 1.0.

The uncertainty in the strength of the bridge is accounted for in the ASR approach by limiting the applied stresses, but the variability in the demand loads is neglected. For example, dead load on a bridge has a relatively low variability because the dimensional tolerances of the primary structural members (e.g., a hot-rolled steel girder) are small [Nowak 2000]. Vehicular traffic loads on a bridge have a higher uncertainty because of varying traffic volume (annual average daily truck traffic, or ADTT) and varying types of vehicular traffic (e.g., primarily trucks on an interstate or primarily cars on a parkway). The ASR method also does not consider the redundancy of a bridge (e.g., continuous or simple spans, hammerhead piers or multiple column bents) or the amplified uncertainty in the capacity of aging structural members versus newly constructed members. The ASR method's treatment of capacity and demand results in load rating factors lacking a uniform level of reliability (i.e., a uniform probability of failure) across all types of highway bridges.
For example, with the ASR method, two bridges can have RF = 2 even though one bridge carries a high ADTT with a non-redundant superstructure (higher probability of failure) while the other bridge carries a low ADTT with a redundant superstructure (lower probability of failure).

AASHTO LOAD FACTOR RATING METHOD

In contrast to the ASR method's service-load approach to load rating, the AASHTO Load Factor Rating (LFR) method evaluates the capacity of a bridge at its ultimate limit state. The LFR load rating factor equation is:

$$RF = \frac{\phi R_n - A_1 D}{A_2 L (1 + I)} \qquad (3)$$

where the capacity C of the bridge in (2) has been replaced with φR_n, the predicted strength of the controlling structural component in the bridge. R_n is the nominal capacity of the structural component and φ is a strength reduction factor which accounts for the uncertainty associated with the material properties, workmanship, and failure mechanisms (e.g., shear, flexure, or compression). For example, φ is 0.90 for the flexural strength of a concrete beam and 0.70 for a concrete column with transverse ties [AASHTO 2002]. The lower φ for the concrete column means that there is more uncertainty inherent in the structural behavior and strength prediction for a concrete column than for a concrete beam. The dead load factor A_1 is 1.3 to account for unanticipated permanent load, and A_2 is either 1.3 or 2.17, defining a live load envelope ranging from an expected design level (Inventory) to an extreme short-term loading (Operating) [AASHTO 1994].

The LFR method is different from the ASR method because it calculates the load rating factor RF by quantifying the potential for failure of a bridge (and associated loss of life and property) instead of quantifying the behavior of a bridge in service. The LFR method is similar to the ASR method in that it does not account for the influence of redundancy on the reliability of a bridge. Also, the load factors A_1 and A_2 are defined without a formal reliability analysis (i.e., they are not derived by considering probability distributions of capacity and demand) and therefore do not produce rating factors consistent with a uniform probability of failure.

AASHTO LOAD AND RESISTANCE FACTOR RATING METHOD

The AASHTO Load and Resistance Factor Rating (LRFR) method evaluates the existing capacity of a bridge using structural reliability theory [Melchers 1999; Nowak 2000]. The LRFR rating factor equation is similar in form to (2) and (3):

$$RF = \frac{\varphi_c \varphi_s \varphi R_n - \gamma_{DC}\, DC - \gamma_{DW}\, DW}{\gamma_L\, LL\, (1 + IM)} \qquad (4)$$

where φ_c is a strength reduction factor that accounts for the increased variability in the member strength of existing bridges when compared to new bridges [Moses 1987]. The factor φ_s addresses the failure of structural systems and penalizes older non-redundant structures with lower load ratings [Ghosn 1998]. The dead load factors γ_DC and γ_DW have been separated in LRFR to account for a lower variability in dead load for primary structural components DC (e.g., columns and beams) and a higher variability for bridge deck wearing surfaces DW.
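To make the parallel structure of Equations (3) and (4) concrete, here is a hedged Python sketch of both rating equations. The default factors A_1 = 1.3 and A_2 = 1.3 (Operating; 2.17 for Inventory) and γ_DC = 1.25 follow the text; the values used for γ_DW, γ_L, and all member forces in the example calls are illustrative assumptions only, not values prescribed by this paper.

```python
# Hedged sketch of Eq. (3) (LFR) and Eq. (4) (LRFR). Default load factors A1,
# A2 and gamma_DC follow the text; gamma_DW and gamma_L are assumptions.

def rf_lfr(phiRn: float, D: float, L: float, I: float,
           A1: float = 1.3, A2: float = 1.3) -> float:
    """LFR rating, Eq. (3): (phi*Rn - A1*D) / (A2 * L * (1 + I)).
    A2 = 1.3 gives the Operating level; 2.17 gives Inventory."""
    return (phiRn - A1 * D) / (A2 * L * (1.0 + I))

def rf_lrfr(phi_c: float, phi_s: float, phiRn: float,
            DC: float, DW: float, LL: float, IM: float,
            g_DC: float = 1.25, g_DW: float = 1.50, g_L: float = 1.80) -> float:
    """LRFR rating, Eq. (4): factored capacity minus factored dead loads,
    per factored live load including the dynamic allowance IM."""
    return (phi_c * phi_s * phiRn - g_DC * DC - g_DW * DW) / (g_L * LL * (1.0 + IM))

# Invented member forces for one girder, for illustration only:
print(rf_lfr(phiRn=2600.0, D=1300.0, L=300.0, I=0.25))
print(rf_lrfr(phi_c=0.95, phi_s=1.0, phiRn=2600.0,
              DC=1200.0, DW=100.0, LL=300.0, IM=0.33))
```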
Another important difference between the LRFR method and the ASR and LFR methods is the use of the HL93 notional design live load, which is a modern update to the HS20-44 notional load first implemented in 1944 [Kulicki 2007b] (notional in this case means that the design live load is not meant to represent actual truck traffic but instead is a simplified approximation intended to conservatively simulate the influence of live load across many types of bridge spans). The HL93 loading produces live load demands which are more consistent with modern truck traffic than the HS20-44 live load. The HL93 design loading combines the HS20-44 truck with a uniform load and also considers the load case of a tandem trailer with closely spaced axles and relatively high axle loads (in combination with a uniform load) [AASHTO 2007]. The design tandem load increases the shear demand on shorter bridges and produces, in combination with the design lane load, a live load effect greater than or equal to the AASHTO legal live load Type 3, Type 3S2, and Type 3-3 vehicles [AASHTO 1994].

AASHTO LFR VS. LRFR LOAD RATING COMPARISON

A parameter study is conducted in this section to explore the differences between the AASHTO LFR and LRFR load rating methods. The ASR method is not included in the study because it evaluates the live load capacity of a bridge at service levels, which makes it difficult to compare against the ultimate-limit-state LFR and LRFR methods (also note that the ASR method employs less modern "working stress" methods for calculating member capacities than LFR and LRFR). A simple span multi-girder bridge with steel girders and a composite concrete bridge deck is considered. The flexural capacity of an interior girder is assumed to control the load rating. AASHTO legal loads are employed in the study to provide a consistent live loading between the rating methods (although the impact factor and live load distribution factor for the controlling girder will be different for the LFR and LRFR methods).

The LFR load rating equation in (3) is rewritten as:

$$RF_{LFR} = \frac{M_u - A_1 M_D}{A_2\, B_{LFD}\, I_{LFD}\, L} \qquad (5)$$

where M_u is the LFD flexural capacity of the composite girder (φ is implicit in the calculation of M_u), and B_LFD is the live load distribution factor for an interior girder [AASHTO 1994]:

$$B_{LFD} = \frac{S}{5.5} \qquad (6)$$

and the live load impact factor I_LFD is [AASHTO 1994]:

$$I_{LFD} = 1 + \frac{50}{\ell + 125} \qquad (7)$$

The span length of the bridge is denoted as ℓ. A_1 and A_2 are chosen as 1.3 in this study to compare the LFR Operating rating with the LRFR rating method (the intent of the LRFR legal load rating is to provide a single rating level consistent with the LFD Operating level [AASHTO 2003]).

The LRFR equation in (4) is rewritten to be consistent with (5):

$$RF_{LRFR} = \frac{\varphi_c \varphi_s M_u - \gamma_{DC} M_D}{2\, \gamma_L\, B_{LRFD}\, I_{LRFD}\, L} \qquad (8)$$

where B_LRFD is the live load distribution factor for moment in an interior girder [AASHTO 2007]:

$$B_{LRFD} = 0.075 + \left(\frac{S}{9.5}\right)^{0.6} \left(\frac{S}{\ell}\right)^{0.2} \left(\frac{K_g}{12\, \ell\, t_s^3}\right)^{0.1} \qquad (9)$$

and I_LRFD, the live load impact factor, is 1.33 [AASHTO 2007]. M_D is the dead load moment, assuming that the dead load effects from a wearing surface and utilities are zero (i.e., DW is zero), and γ_DC is 1.25. M_u is assumed equivalent in (5) and (8) because the LFD and LRFD prediction methods for the flexural capacity of composite girders are founded on a common structural basis [Tonias 2007]. The term K_g/(12 ℓ t_s³) in (9) is assumed equal to 1 as suggested by the LRFD specification for preliminary design [AASHTO 2007] (this approximation reduces the number of variables in the parameter study). The term LL in (4), i.e., the LRFD lane loading, is approximated by 2L in (8). This conversion from lane loading to wheel line loading allows for the cancellation of L (i.e., the live load variable) when (8) and (5) are formulated as a ratio:

$$\frac{RF_{LRFR}}{RF_{LFR}} = \frac{\left(\varphi_c \varphi_s M_u - \gamma_{DC} M_D\right) A_2\, B_{LFD}\, I_{LFD}}{2\, \gamma_L\, B_{LRFD}\, I_{LRFD} \left(M_u - A_1 M_D\right)} \qquad (10)$$

Rearranging the term M_u in (10) leads to:

$$\frac{RF_{LRFR}}{RF_{LFR}} = \frac{\left(\varphi_c \varphi_s - \gamma_{DC}\, M_D/M_u\right) A_2\, B_{LFD}\, I_{LFD}}{2\, \gamma_L\, B_{LRFD}\, I_{LRFD} \left(1 - A_1\, M_D/M_u\right)} \qquad (11)$$

The relationship between the LRFR and LFR load rating equations, as described in (11), is explored in Figure 1 to Figure 4.
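The ratio in Equation (11) can be evaluated directly. The Python sketch below codes Equations (6), (7), (9) and (11) under the study's stated assumptions (M_D/M_u = 0.30, K_g/(12 ℓ t_s³) = 1, A_1 = A_2 = 1.3, γ_DC = 1.25, I_LRFD = 1.33, span and spacing in feet); the values taken for γ_L and φ_c·φ_s are our own placeholders, since this excerpt does not list them.

```python
# Sketch of the LFR vs. LRFR comparison, Eq. (11). Assumptions as stated
# above; gamma_L = 1.8 and phi_c * phi_s = 1.0 are illustrative placeholders.

def b_lfd(S: float) -> float:
    """LFD distribution factor, Eq. (6): S / 5.5 (girder spacing S in ft)."""
    return S / 5.5

def i_lfd(span: float) -> float:
    """LFD impact multiplier, Eq. (7): 1 + 50 / (span + 125) (span in ft)."""
    return 1.0 + 50.0 / (span + 125.0)

def b_lrfd(S: float, span: float) -> float:
    """LRFD distribution factor, Eq. (9), with K_g/(12*l*t_s^3) taken as 1."""
    return 0.075 + (S / 9.5) ** 0.6 * (S / span) ** 0.2

def rating_ratio(S: float, span: float, r: float = 0.30,
                 phi_cs: float = 1.0, g_L: float = 1.8) -> float:
    """RF_LRFR / RF_LFR per Eq. (11), with r = M_D / M_u."""
    num = (phi_cs - 1.25 * r) * 1.3 * b_lfd(S) * i_lfd(span)
    den = 2.0 * g_L * b_lrfd(S, span) * 1.33 * (1.0 - 1.3 * r)
    return num / den

# e.g., a 4 ft girder spacing on a 100 ft simple span:
print(round(rating_ratio(S=4.0, span=100.0), 2))  # < 1, i.e. LRFR rates lower
```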
M_D/M_u is assumed as 0.30 for the bridge span lengths considered in this study. Equation (11) varies only slightly (a maximum of 5%) when M_D/M_u ranges between 0.10 and 0.50, because the LFR and LRFR dead load factors are similar, i.e., γ_DC = 1.25 and A_1 = 1.3. Figure 1 demonstrates that the LRFR legal load rating is less than the LFD Operating rating for both short and long single-span bridges (the span range is 20 ft. to 200 ft. in this study). This is consistent with the findings of NCHRP Web Document 28, which demonstrates that the LRFR legal load rating is lower than the LFD Operating rating but higher than the LFD Inventory rating [NCHRP 2001]. RF_LRFR increases for longer span lengths because the live load distribution factor B_LRFD in (9) decreases with increasing ℓ. RF_LRFR also increases as the girder spacing, S, increases (S ranges from 3 ft. to 7 ft. in Figure 1) because the LRFD live load distribution factor B_LRFD decreases relative to the LFD live load distribution factor B_LFD for larger girder spacings.

FIGURE 1 - COMPARISON OF LRFR AND LFR (OPERATING) LEGAL LOAD RATING FACTORS FOR FLEXURE IN AN INTERIOR GIRDER OF A SIMPLE SPAN MULTI-GIRDER COMPOSITE BRIDGE

The volume of traffic is directly accounted for in the LRFR load rating method by considering the Average Daily Truck Traffic (ADTT) (this is an improvement over the LFR method, which does not account for frequency of bridge usage when calculating RF). Figure 2 highlights the variability of the LRFR legal load rating with ADTT. RF_LRFR is approximately 30% greater for a lightly traveled bridge (ADTT ≤ 100) when compared to a heavily traveled bridge (ADTT ≥ 5000), and the LRFR load rating trends toward the LFD Operating load rating for lightly traveled bridges.

FIGURE 2 - INFLUENCE OF ANNUAL DAILY TRUCK TRAFFIC ON THE LRFR LEGAL LOAD RATING FACTOR (S=4 FT.)

The factors φ_s and φ_c account respectively for system redundancy and for the increased uncertainty from bridge deterioration in the LRFR load rating method (this is an important update to the LFR rating method, which assumes one level of uncertainty for all bridge types and bridge conditions). Figure 3 demonstrates that RF_LRFR decreases by approximately 30% as the bridge condition deteriorates from good to poor. Bridges with a small number of girders (e.g., 3 or 4 girders) are considered to be more susceptible to catastrophic collapse, which is reflected in the lower RF_LRFR load rating factors in Figure 4.

FIGURE 3 - INFLUENCE OF CONDITION FACTOR φc ON THE LRFR LOAD RATING FACTOR (S=4 FT.)

FIGURE 4 - INFLUENCE OF SYSTEM FACTOR φs ON LRFR LOAD RATING FACTOR (S=4 FT.)

DISCUSSION

The LRFR load rating method represents an important step in the evolution of bridge evaluation strategies. The method is calibrated to produce a uniform level of reliability across all existing highway bridges (i.e., a uniform probability of failure) and is an improvement over the ASR and LFR methods because it allows bridge owners to account for traffic volume, system redundancy, and the increased uncertainty in the predicted strength of deteriorating bridge components.
The LRFR load rating method can be used as a foundation for the development of more accurate performance-based bridge evaluation strategies in the future, where bridge owners directly calculate the existing capacity (or reliability) with in-service data from a structural health monitoring network and make maintenance decisions based on relationships between corrosion, structural capacity, and repair or replacement costs. Reliability-based cost models have been proposed, for example [Nowak 2000]:

$$C_T = C_I + C_F P_F \qquad (12)$$

where C_T is the total cost of the bridge over its lifetime, C_I is the initial cost, C_F is the failure cost of the bridge (which could include rehabilitation costs), and P_F is the failure probability of the bridge. As P_F increases (i.e., as the bridge deteriorates over time), the total cost C_T increases, which ties the reliability of the bridge to economy and provides a metric from which to optimize maintenance decisions and minimize rehabilitation costs in a highway system (a short numerical sketch of this cost model is given after the references). The continued evolution of bridge evaluation strategies depends on improved methods for evaluating the structural capacity of bridges and defining the correlation between corrosion in bridges, strength loss, and failure rates [ASCE 2009].

The AASHTO LRFR load rating method is a step forward in bridge evaluation strategy when compared to the ASR and LFR methods because it is calibrated to produce a uniform reliability across all existing highway bridges. The LRFR method provides factors which account for the volume of traffic on the bridge, the redundancy of the superstructure, and the increased uncertainty in structural capacity associated with a deteriorating structure. The flexural LRFR load rating factor for an interior steel composite girder in a multi-girder bridge is up to 40% lower than the LFR Operating load rating over a span range of 20 ft. to 200 ft. and for girder spacings between 3 ft. and 7 ft. The LRFR flexural load rating factor increases for longer span lengths and larger girder spacings, influenced primarily by the LRFD live load distribution factor.

ACKNOWLEDGEMENTS

The authors are grateful for the guidance provided by Bala Sivakumar in the organization of this paper. The authors also wish to thank Kelley Rehm and Bob Cullen at AASHTO for their help identifying historical references pertaining to AASHTO live load vehicles and design procedures.

REFERENCES

AASHTO, Manual for Condition Evaluation of Bridges, Second Edition, with 1995, 1996, 1998, 2000, 2001, and 2003 Revisions, AASHTO, Washington, D.C., 1994.

AASHTO, Standard Specifications for Highway Bridges, 17th Edition, AASHTO, Washington, D.C., 2002.

AASHTO, Manual for Condition Evaluation and Load and Resistance Factor Rating (LRFR) of Highway Bridges, AASHTO, Washington, D.C., 2003.

AASHTO, LRFD Bridge Design Specifications, 4th Edition, AASHTO, Washington, D.C., 2007.

ASCE, "ASCE/SEI-AASHTO Ad-Hoc Group on Bridge Inspection, Rehabilitation, and Replacement White Paper on Bridge Inspection and Rating", ASCE Journal of Bridge Engineering, 14(1), 2009, 1-5.

Ghosn, M., Moses, F., NCHRP Report 406: Redundancy in Highway Bridge Superstructures, TRB, National Research Council, Washington, D.C., 1998.

Kulicki, J.M., Prucz, Zolan, Clancy, Chad M., Mertz, Dennis R., Updating the Calibration Report for AASHTO LRFD Code (Project No.
NCHRP 20-7/186), AASHTO, Washington, D.C., 2007a.

Kulicki, J.M., Stuffle, Timothy J., Development of AASHTO Vehicular Loads (FHWA NHI 07-019), Federal Highway Administration, National Highway Institute, 2007b.

Melchers, R.E., Structural Reliability Analysis and Prediction, John Wiley and Sons, New York, 1999.

Mertz, D.R., Load Rating by Load and Resistance Factor Evaluation Method (NCHRP Project 20-07/Task 122), TRB, National Research Council, Washington, D.C., 2005.

Moses, F., NCHRP Report 454: Calibration of Load Factors for LRFR Bridge Evaluation, TRB, National Research Council, Washington, D.C., 2000.

Moses, F., Verma, D., NCHRP Report 301: Load Capacity Evaluation of Existing Bridges, TRB, National Research Council, Washington, D.C., 1987.

NCHRP, Manual for Condition Evaluation and Load Rating of Highway Bridges Using Load and Resistance Factor Philosophy (NCHRP Web Document 28), TRB, National Research Council, Washington, D.C., 2001.

Nowak, A.S., NCHRP Report 368: Calibration of LRFD Bridge Design Code, TRB, National Research Council, Washington, D.C., 1999.

Nowak, A.S., Collins, Kevin R., Reliability of Structures, McGraw-Hill, New York, 2000.

Tonias, D.E., Zhao, J.J., Bridge Engineering: Design, Rehabilitation, and Maintenance of Modern Highway Bridges, McGraw-Hill, New York, 2007.
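Returning to the cost model of Equation (12): the sketch below evaluates C_T for two hypothetical failure probabilities. All monetary figures are invented purely for illustration and do not come from the paper.

```python
# Minimal sketch of the reliability-based cost model, Eq. (12):
# C_T = C_I + C_F * P_F. All figures below are invented for illustration.

def total_expected_cost(C_I: float, C_F: float, P_F: float) -> float:
    """Lifetime cost = initial cost + failure cost weighted by failure probability."""
    return C_I + C_F * P_F

print(total_expected_cost(C_I=10e6, C_F=100e6, P_F=0.001))  # newly built bridge
print(total_expected_cost(C_I=10e6, C_F=100e6, P_F=0.050))  # deteriorated bridge
```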
Graduation Design English Translation
Graduation Design

Abstract: This graduation design aims to propose a solution for improving the efficiency of waste management in urban areas. The current waste management system suffers from various problems such as inadequate collection practices, lack of recycling infrastructure, and improper disposal of hazardous waste. To address these issues, this design proposes the implementation of a comprehensive waste management system, which includes improved collection practices, the establishment of recycling centers, and the introduction of stricter waste disposal regulations. The design also includes the development of a mobile application to facilitate communication between residents and waste management authorities. This design offers a feasible and sustainable solution to improve waste management in urban areas.

Introduction: Waste management has become an increasingly important issue for urban areas due to the rapid growth of population and industrial activities. The current waste management system in many cities is unable to cope with the increasing volume of waste being generated. This has led to numerous problems such as environmental pollution, health hazards, and the inefficient utilization of resources. Therefore, there is a need for an improved waste management system that can effectively address these issues.

Methodology: This design proposes several key strategies to improve waste management in urban areas. Firstly, there needs to be an improvement in waste collection practices. This can be achieved by implementing a more efficient collection schedule and ensuring that waste collection trucks cover all areas of the city. Additionally, separate collection bins for recyclable and non-recyclable waste should be provided to encourage recycling.

Secondly, it is essential to establish recycling centers in strategic locations throughout the city. These centers will act as collection points for recyclable materials and will also provide education and awareness programs to promote recycling among residents. Furthermore, partnerships with recycling companies should be established to ensure the proper processing and utilization of recyclable materials.

Lastly, stricter waste disposal regulations should be introduced to discourage improper disposal of hazardous waste. This can be achieved by imposing fines and penalties on individuals and businesses that do not comply with waste disposal regulations. Additionally, educational campaigns should be conducted to raise awareness about the dangers of improper waste disposal and to promote responsible waste management practices.

Results and Discussion: The proposed waste management system, if implemented, will have several benefits. Firstly, it will contribute to a cleaner and healthier environment by reducing pollution and preventing the spread of diseases. Secondly, the establishment of recycling centers will promote resource conservation and reduce the dependence on raw materials. Finally, the introduction of stricter waste disposal regulations will ensure proper handling of hazardous waste, thus protecting human health and the environment.

Conclusion: In conclusion, the implementation of a comprehensive waste management system is necessary to address the shortcomings of the current waste management system in urban areas. This design proposes several key strategies such as improved collection practices, the establishment of recycling centers, and the introduction of stricter waste disposal regulations.
The design also includes the development of a mobile application to enhance communication between residents and waste management authorities. By implementing these strategies, this design offers a feasible and sustainable solution to improve waste management in urban areas.
Graduation Design Chinese-English Translation
1 Introduction and scope

1.1 Aims of the Manual

This Manual provides guidance on the design of reinforced and prestressed concrete building structures. Structures designed in accordance with this Manual will normally comply with DD ENV 1992-1-1: 1992 (hereinafter referred to as EC2).

1.2 Eurocode system

The structural Eurocodes were initiated by the European Commission but are now produced by the Comité Européen de Normalisation (CEN), which is the European standards organization, its members being the national standards bodies of the EU and EFTA countries, e.g. BSI.

CEN will eventually publish these design standards as full European Standards EN (Euronorms), but initially they are being issued as Prestandards ENV. Normally an ENV has a life of about 3 years to permit familiarization and trial use of the standard by member states. After formal voting by the member bodies, ENVs are converted into ENs, taking into account the national comments on the ENV document. At present the following Eurocode parts have been published as ENVs, but as yet none has been converted to an EN:

DD ENV 1991-1-1: Basis of design and actions on structures (EC1)
DD ENV 1992-1-1: Design of concrete structures (EC2)
DD ENV 1993-1-1: Design of steel structures (EC3)
DD ENV 1994-1-1: Design of composite steel and concrete structures (EC4)
DD ENV 1995-1-1: Design of timber structures (EC5)
DD ENV 1996-1-1: Design of masonry structures (EC6)
DD ENV 1997-1-1: Geotechnical design (EC7)
DD ENV 1998-1-1: Earthquake resistant design of structures (EC8)
DD ENV 1999-1-1: Design of aluminium alloy structures (EC9)

Each Eurocode is published in a number of parts, usually with 'General rules' and 'Rules for buildings' in Part 1. The various parts of EC2 are:

Part 1.1 General rules and rules for buildings;
Part 1.2 Supplementary rules for structural fire design;
Part 1.3 Supplementary rules for precast concrete elements and structures;
Part 1.4 Supplementary rules for the use of lightweight aggregate concrete;
Part 1.5 Supplementary rules for the use of unbonded and external prestressing tendons;
Part 1.6 Supplementary rules for plain or lightly reinforced concrete structures;
Part 2.0 Reinforced and prestressed concrete bridges;
Part 3.0 Concrete foundations;
Part 4.0 Liquid retaining and containment structures.

All Eurocodes follow a common editorial style. The codes contain 'Principles' and 'Application rules'. Principles are general statements, definitions, requirements and sometimes analytical models. All designs must comply with the Principles, and no alternative is permitted. Application rules are rules commonly adopted in design. They follow the Principles and satisfy their requirements. Alternative rules may be used provided that compliance with the Principles can be demonstrated.

Some parameters in Eurocodes are designated by | _ |, commonly referred to as boxed values. The boxed values in the Codes are indicative guidance values. Each member state is required to fix the boxed value applicable within its jurisdiction. Such information would be found in the National Application Document (NAD), which is published as part of each ENV.

There are also other purposes for NADs. The NAD is meant to provide operational information to enable the ENV to be used. For certain aspects of the design, the ENV may refer to national standards, or to CEN standards in preparation, or to ISO standards. The NAD is meant to provide appropriate guidance, including modifications required to maintain compatibility between the documents.
Very occasionally the NAD might rewrite particular clauses of the code in the interest of safety or economy. This is however rare.

1.3 Scope of the Manual

The range of structures and structural elements covered by the Manual is limited to building structures that do not rely on bending in columns for their resistance to horizontal forces and are also non-sway. This will be found to cover the vast majority of all reinforced and prestressed concrete building structures. In using the Manual the following should be noted:

• The Manual has been drafted to comply with ENV 1992-1-1 together with the UK NAD.
• Although British Standards have been referenced as loading codes in Sections 3 and 6, to comply with the UK NAD, the Manual can be used in conjunction with other loading codes.
• The structures are braced and non-sway.
• The concrete is of normal weight.
• The structure is predominantly in situ.
• Prestressed concrete members have bonded or unbonded internal tendons.
• The Manual can be used in conjunction with all commonly used materials in construction; however the data given are limited to the following:
– concrete up to characteristic cylinder strength of 50 N/mm² (cube strength 60 N/mm²)
– high-tensile reinforcement with characteristic strength of 460 N/mm²
– mild-steel reinforcement with characteristic strength of 250 N/mm²
– prestressing tendons with 7-wire low-relaxation (Class 2) strands.
• High ductility (Class H) has been assumed for:
– all ribbed bars and grade 250 bars, and
– ribbed wire welded fabric in wire sizes of 6 mm or over.
• Normal ductility (Class N) has been assumed for plain or indented wire welded fabric.

For structures or elements outside this scope EC2 should be used.

1.4 Contents of the Manual

The Manual covers the following design stages:
• general principles that govern the design of the layout of the structure
• initial sizing of members
• estimating of quantities of reinforcement and prestressing tendons
• final design of members.

2 General principles

This section outlines the general principles that apply to both initial and final design of both reinforced and prestressed concrete building structures, and states the design parameters that govern all design stages.

2.1 General

One engineer should be responsible for the overall design, including stability, and should ensure the compatibility of the design and details of parts and components, even where some or all of the design and details of those parts and components are not made by the same engineer.

The structure should be so arranged that it can transmit dead, wind and imposed loads in a direct manner to the foundations. The general arrangement should ensure a robust and stable structure that will not collapse progressively under the effects of misuse or accidental damage to any one element.

The engineer should consider site constraints, buildability, maintainability and decommissioning.

The engineer should take account of his responsibilities as a 'Designer' under the Construction (Design & Management) Regulations.

2.2 Stability

Lateral stability in two orthogonal directions should be provided by a system of strongpoints within the structure so as to produce a braced non-sway structure, in which the columns will not be subject to significant sway moments. Strongpoints can generally be provided by the core walls enclosing the stairs, lifts and service ducts. Additional stiffness can be provided by shear walls formed from a gable end or from some other external or internal subdividing wall.
The core and shear walls should preferably be distributed throughout the structure and so arranged that their combined shear centre is located approximately on the line of the resultant in plan of the applied overturning forces. Where this is not possible, the resulting twisting moments must be considered when calculating the load carried by each strongpoint. These walls should generally be of reinforced concrete not less than 180 mm thick to facilitate concreting, but they may be of 215 mm brickwork or 190 mm solid blockwork properly tied and pinned up to the framing for low- to medium-rise buildings.

Strongpoints should be effective throughout the full height of the building. If it is essential for strongpoints to be discontinuous at one level, provision must be made to transfer the forces to other vertical components. It is essential that floors be designed to act as horizontal diaphragms, particularly if precast units are used.

Where a structure is divided by expansion joints, each part should be structurally independent and designed to be stable and robust without relying on the stability of adjacent sections.

2.3 Robustness

All members of the structure should be effectively tied together in the longitudinal, transverse and vertical directions. A well-designed and well-detailed cast-in-situ structure will normally satisfy the detailed tying requirements set out in subsection 5.11. Elements whose failure would cause collapse of more than a limited part of the structure adjacent to them should be avoided. Where this is not possible, alternative load paths should be identified or the element in question strengthened.

2.4 Movement joints

Movement joints may need to be provided to minimize the effects of movements caused by, for example, shrinkage, temperature variations, creep and settlement. The effectiveness of movement joints depends on their location. Movement joints should divide the structure into a number of individual sections, and should pass through the whole structure above ground level in one plane. The structure should be framed on both sides of the joint. Some examples of positioning movement joints in plan are given in Fig. 2.1.

Movement joints may also be required where there is a significant change in the type of foundation or the height of the structure. For reinforced concrete frame structures in UK conditions, movement joints at least 25 mm wide should normally be provided at approximately 50 m centres both longitudinally and transversely. In the top storey and for open buildings and exposed slabs, additional joints should normally be provided to give approximately 25 m spacing (a short joint-count sketch is given below). Joint spacing in exposed parapets should be approximately 12 m. Joints should be incorporated in the finishes and in the cladding at the movement joint locations.

2.5 Fire resistance and durability

For the required period of fire resistance (prescribed in the Building Regulations), the structure should:
• have adequate loadbearing capacity
• limit the temperature rise on the far face by sufficient insulation, and
• have sufficient integrity to prevent the formation of cracks that will allow the passage of fire and gases.

Fig. 2.1 Location of movement joints

The design should take into account the likely deterioration of the structure and its components in their environment, having due regard to the anticipated level of maintenance.
The following inter-related factors should be considered:
• the required performance criteria
• the expected environmental conditions
• the composition, properties and performance of materials
• the shape of members and detailing
• the quality of workmanship
• any protective measure
• the likely maintenance during the intended life.

Concrete of appropriate quality with adequate cover to the reinforcement should be specified. The above requirements for durability and fire resistance may dictate sizes for members greater than those required for structural strength alone.
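As a small numerical aid to the movement-joint guidance in subsection 2.4, the Python sketch below counts the intermediate joints implied by a given spacing rule. The function and the 180 m example length are our own illustration and are not part of the Manual.

```python
# Sketch of the movement-joint spacing guidance in subsection 2.4:
# joints at approximately 50 m centres (25 m for exposed slabs and open
# buildings, 12 m for exposed parapets), each at least 25 mm wide.

import math

def joints_required(length_m: float, spacing_m: float = 50.0) -> int:
    """Number of intermediate joints needed so no bay exceeds spacing_m."""
    return max(0, math.ceil(length_m / spacing_m) - 1)

print(joints_required(180.0))                  # frame at 50 m centres -> 3
print(joints_required(180.0, spacing_m=25.0))  # exposed slab -> 7
```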
Graduation Design English Translation (English and Translated Versions Included)
Wear 225–229 (1999) 354–367

Wear of TiC-coated carbide tools in dry turning

C.Y.H. Lim, S.C. Lim, K.S. Lee
Department of Mechanical and Production Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore
Corresponding author. Tel.: +65-874-8082; fax: +65-779-1459; e-mail: mpelimc@.sg
0043-1648/99/$ - see front matter © 1999 Elsevier Science S.A. All rights reserved. PII: S0043-1648(98)00366-4

Abstract

This paper examines the flank and crater wear characteristics of titanium carbide (TiC)-coated cemented carbide tool inserts during dry turning of steel workpieces. A brief review of tool wear mechanisms is presented together with new evidence showing that wear of the TiC layer on both flank and rake faces is dominated by discrete plastic deformation, which causes the coating to be worn through to the underlying carbide substrate when machining at high cutting speeds and feed rates. Wear also occurs as a result of abrasion, as well as cracking and attrition, with the latter leading to the wearing through of the coating on the rake face under low speed conditions. When moderate speeds and feeds are used, the coating remains intact throughout the duration of testing. Wear mechanism maps linking the observed wear mechanisms to machining conditions are presented for the first time. These maps demonstrate clearly that transitions from one dominant wear mechanism to another may be related to variations in measured tool wear rates. Comparisons of the present wear maps with similar maps for uncoated carbide tools show that TiC coatings dramatically expand the range of machining conditions under which acceptable rates of tool wear might be experienced. However, the extent of improvement brought about by the coatings depends strongly on the cutting conditions, with the greatest benefits being seen at higher cutting speeds and feed rates. © 1999 Elsevier Science S.A. All rights reserved.

Keywords: Wear mechanisms; Wear maps; TiC coatings; Carbide tools

1. Introduction

In the three decades since the commercial debut of coated cutting tools, these tools have gained such popularity that today's metal cutting industry has come to rely almost exclusively on them. This success stems from the spectacular improvements in tool performance and cutting economies that the coatings are able to bring to traditional high-speed-steel and cemented carbide tools [1]. At present, the most common group of coated tools consists of various combinations of titanium nitride (TiN), titanium carbide (TiC), titanium carbonitride (TiCN) and aluminium oxide (Al2O3), deposited in a multilayer manner onto cemented carbide substrates. The wear behaviour of coated tools has understandably been the subject of much research, but in most instances, the focus appears to be limited to relatively narrow ranges of machining conditions (see for example, Refs. [2–4]). There seems to be a lack of overviews on the wear characteristics of different coated tools throughout the entire range of their recommended cutting conditions. Such information would not only contribute to a more informed selection of the appropriate tool and machining conditions for a particular application, but also assist engineers and scientists in their development of new tool and coating materials.

This work investigates the wear of coated cemented carbide tool inserts during dry turning under a wide range of machining conditions. Although the current trend is towards the use of multilayer coatings, an understanding of the wear characteristics of the individual constituent materials would benefit the development of such multilayers. TiC is chosen in this study, since it often forms the important base layer in multilayer coatings due to its low thermal mismatch with cemented carbide substrates [5]. The flank and crater wear characteristics of TiC-coated tools will be examined, and the methodology of wear maps will be applied to explore the ways in which these tools may be used more effectively.

2. Experimental details

A series of experiments was carried out in accordance with the International Standard ISO 3685-1977 (E) test for single-point turning tools [6]. Commercially available TiC-coated tool inserts of geometry ISO SNMN 120408 from Sumitomo's AC720 coated grade were used in these tests. The cemented carbide substrates belonged to the ISO application group P20–P30 and these had been coated with TiC to an average thickness of 8.5 μm. Knoop microhardness indentation testing on the TiC coating with a load of 50 gf indicated a mean hardness of 2678 kg/mm². The workpiece material, a hot-rolled medium carbon steel (AISI 1045 equivalent) with an average hardness of 89 HRB, was used in its as-received condition. A toolholder of designation ISO CSBNR 2525M12 was employed to achieve the specified cutting geometry. Details of this tool geometry and the test configuration adopted during the turning tests may be found respectively in Table 1 and Fig. 23. The chipbreaker, which formed part of the clamping mechanism of the toolholder, was fully wound back during the tests to prevent it from supporting the chip and shortening the contact length.

Table 1. Tool geometry used in the turning tests
Back rake angle: −6°
Side rake angle: −6°
End clearance angle: 6°
Side clearance angle: 6°
End cutting edge angle: 15°
Side cutting edge angle: 15°
Nose radius: 0.8 mm

A total of 13 sets of various combinations of cutting speed and feed rate were selected for the tests, with the aim of adequately covering the recommended range of machining conditions for coated carbide tools [7–9]. The choice of these 13 conditions was also influenced in part by the need to explore the wear behaviour under certain machining conditions for which no wear data were available from the open literature. This was to ensure the proper construction of the wear maps later. These conditions are listed in Table 2. The depth of cut was kept constant since it has been shown that this parameter has little effect on tool wear [10]. A value of 2 mm was chosen, based on the average depth of cut used in the machining tests of other researchers whose data were extracted for the wear maps. No cutting fluid was used in these experiments, as stipulated in ISO 3685-1977 (E) [6]. Each insert was tested for a total of 20 min or until catastrophic failure, whichever occurred first. The period of 20 min was chosen to limit the amount of work material consumed, while at the same time corresponding to the average tool life of between 10 to 20 min seen in industrial practice.

Flank and crater wear were monitored at regular intervals throughout the machining experiments. The locations of these wear regions on the tool are shown in Fig. 24. According to ISO 3685-1977 (E) [6], flank and crater wear were measured by the width of the flank wear land, VB, and the depth of the crater, KT, respectively. These measurements are illustrated in Fig. 25.
It has been shown previously w11x that the rates of flank and crater wear may be more meaningfully portrayed by the dimensionless pa- rameters of VB and KT per unit cutting distance. These quantities are more conveniently represented by log wŽVB or KT.rŽcutting distance.x, and the experimental wear rates from the present tests are given in Table 2.3. Tool wear mechanismsSeveral studies on the mechanisms of flank and crater wear in TiC-coated carbide tools may be found in the open literature Žsee for example, Refs. w4,12,13x., but in each case, the tools were tested under a relatively narrow range of machining conditions. In this work, the worn tools were examined using scanning electron microscopy ŽSEM., after removing adherent work material by immersion in concen-construction of the wear maps later. These conditions are trated hydrochloricacid ŽHCl.. A number of metallo-listed in Table 2. The depth of cut was kept constant since it has been shown that this parameter has little effect ongraphic sections through the centre of thecrater and normal to the cutting edge were also made. TheobservedTable 2Machining conditions and experimental tool wear ratesSet Speed Žm r min.Feed Žmm r rev.Flank wear rate Crater wear rateŽVB.rŽDistance.log 10 ŽŽVB.rŽKT.rŽDistance.log 10 ŽŽKT.rŽDistance..ŽDistance..1 32.3 0.06 9.60 =10y82 207.5 0.06 3.04 =10y83 404.0 0.06 1.57 =10y74 103.9 0.2 3.03 =10y85 186.5 0.2 2.04 =10y86 302.9 0.2 6.09 =10y87 98.3 0.3 3.81 =10y88 193.0 0.3 2.39 =10y89 316.9 0.3 1.12 =10y710 31.7 0.4 9.78 =10y811 349.2 0.4 2.37 =10y712 150.1 0.5 2.33 =10y813 241.0 0.5 4.05 =10y8y7.0 1.32 =10y8y7.5 1.08 =10y9y6.8 5.65 =10y9y7.5 7.22 = 10y 10y7.7 6.79 = 10y 10y7.2 4.73 =10y9y7.4 1.27 =10y9y7.6 1.50 =10y9y7.0 1.17 =10y7y7.0 1.18 =10y8y6.6 5.72 =10y7y7.6 1.88 =10y9y7.4 5.93 =10y9y7.9y9.0y8.2y9.1y9.2y8.3y8.9y8.8y6.9y7.9y6.2y8.7y8.2C.Y.H. Lim et al.r W ear 225–229 (1999) 354–367 356appear plastically deformed in the direction of workpiecerotation ŽFig. 1b.. As cutting continues to the end of the20-min test, a ‘ridge-and-furrow’ topography isformedŽFig. 1c.. This ridge-and-furrow appearance has previouslybeen reported w4,13,14x, and the mechanismresponsibleFig. 1. Topography of flank face Ža.when new, Žb.after 3 mins, and Žc.after 20 mins of cutting at 103.9 m r min and 0.2 mm r rev,showing original dimple-like surface features being worn bydiscrete plastic deformation to give a ridge-and-furrow appearance.wear mechanisms are discussed below, with an attempt tocorrelate the current findings with published reportsin order to present a better picture of tool wear across a widerrange of cutting conditions.3.1. Flank wear mechanismsThe unworn TiC coating on the flank face of the newtool shown in Fig. 1a exhibits ‘dimple-like’ surface fea-tures. After 3 min of machining, however, these featuresC.Y.H. Lim et al.r W ear 225–229 (1999) 354–367357was termed ‘discrete plastic deformation’ since the depth ofthe gradual thinning of the coating worn by discrete plastic deforma- tion. deformation is limited to 1 mm or less of the coating surfacew4x, as seen in Fig. 1c.It has been demonstrated that during machining, highcompressive stresses and intimate contact between theatomically clean surfaces of the newly-machined work- pieceand the flank face of TiC-coated carbide tools results inseizure over much of the tool–work contact area w15x. 
(It should be pointed out that the term 'seizure' used in the machining context differs somewhat from the usual tribological understanding, in which the real and nominal contact areas are equal.) However, cutting is able to continue as the work material moves by shear in the layers of the work adjacent to this interface [16]. Such conditions generate high shear stresses on the tool surface that plastically deform and 'smear' the original dimple-like features of the coating in the direction of workpiece rotation. With time, these deformed dimples flow and merge into the ridge-and-furrow topography shown in Fig. 1c. It has been proposed that this process culminates in the ductile fracture of tiny fragments of the coating, which are then swept away by the passing work [4].

Discrete plastic deformation gradually reduces the thickness of the TiC coating during machining. The cross-sectional view of the flank wear land in Fig. 2 shows a 'depression' in the coating worn by discrete plastic deformation. As wear progresses, localized areas of the underlying carbide substrate become exposed (Fig. 3a), and eventually merge into a continuous band of exposed substrate (Fig. 3b), an observation shared by several other workers [4,17–20].

Fig. 2. Section through flank wear land after 20 min of cutting at 150.1 m/min and 0.5 mm/rev, showing a 'depression' in the TiC coating due to the gradual thinning of the coating worn by discrete plastic deformation.

Fig. 3. Flank wear land showing coating removal by discrete plastic deformation, (a) beginning with the appearance of holes and voids, and (b) followed by the merging of voids to form a continuous band of exposed substrate.
Ridge-and-furrow wear surfaces are seen under all the conditions used in the present tests, but they appear more pronounced at higher cutting speeds and feed rates. It has been shown that increases in speed and feed lead to a rise in temperatures at the tool flank [21]. This causes the TiC layer to soften [22], rendering it more susceptible to plastic deformation. In tools tested at high speeds and feeds, the coating is worn away rapidly by discrete plastic deformation in a matter of minutes, exposing the carbide substrate beneath.

Other features observed on the worn tool flanks are fine cracks parallel to the tool edge and perpendicular to the direction of workpiece rotation (Fig. 4). These are found on all tools except those tested at very low cutting speeds (less than 40 m/min). Careful examination of new tools shows that such cracks are not present prior to testing. Furthermore, the cracks are confined to the flank wear land, the only region that is in contact with the workpiece during machining. These findings suggest that the cutting process is responsible for the formation of these cracks. It is believed that the high shear stresses that cause discrete plastic deformation also lead to cracking within the TiC coating. In addition, the high compressive loads imposed on the tool edge during cutting are known to cause bulk deformation around the tool edge [22]. This may contribute to cracking due to the inability of the brittle TiC layer to conform to this deformation. Although cracking of the coating does not directly result in flank wear, the cracks compromise the integrity of the coating and may become preferential sites for coating removal. Fig. 4 shows an example of how tiny fragments of the coating have been 'plucked out' in the vicinity of cracks. This could accelerate coating wear and hasten the exposure of the substrate.

The severity of cracking and attrition appears to depend on the cutting speed and feed rate. At speeds below 40 m/min, no signs of cracking or attrition are seen. At moderate speeds and feeds, a few fine cracks are visible, but attrition is not evident in most cases. Cracks become more abundant at high speeds and feeds, accompanied by slight attrition of the coating. These observations lend support to the earlier suggestion that cracking is related to the compressive deformation of the tool edge. Higher cutting speeds increase tool temperatures, thus causing greater softening of the tool nose, while a higher feed imposes higher compressive stresses on the tool edge. Both factors result in greater deformation of the tool nose, which increases the extent of cracking. However, under the conditions of the present tests, cracking and attrition of the TiC coating on the flank do not appear to play a dominant role in tool wear. There is also no evidence to support the suggestion that hard coating fragments removed via the attrition process contribute significantly to wear by abrasion [20].

Flank wear of TiC-coated carbide tools has frequently been attributed to the dominance of abrasive wear (see for example, Refs. [12,14,20,23]). It seems unlikely, though, that abrasion would be a major mechanism of coating wear, since the TiC coating has a hardness equal to that of the hard inclusions in the workpiece [22]. There is also little evidence of deep abrasion grooves on the coating, which might indicate the dominance of abrasive wear. At low magnifications, the ridge-and-furrow topography of discrete plastic deformation could perhaps be mistaken for evidence of abrasion, and it is usually only upon closer examination that the difference between the two mechanisms may be discerned.

Fig. 4. Flank wear land showing fine cracks in the TiC coating, and the attrition of coating particles in the vicinity of the cracks when cutting at 316.9 m/min and 0.3 mm/rev.

On some tools, however, there appear to be faint grooves scratched on the surface of the coating. The tool shown in Fig. 5 has been tested at a very low cutting speed and feed rate. The worn surface appears smooth, with numerous shallow but sharp grooves. These grooves do not resemble the coarser ridge-and-furrow features of discrete plastic deformation. Even on surfaces that do exhibit ridge-and-furrow formation, such as the one in Fig. 6, faint sharp lines may also be seen on top of the ridges. It is possible that these grooves are the result of abrasion by favourably-oriented inclusions in the workpiece that are able to plough into the TiC coating [24]. Abrasion is probably significant only at lower speeds and feeds, where discrete plastic deformation involves very small strains and the wear rate is low [4]. At higher speeds and feeds, the effect that such abrasion has on the overall wear of the coating on the flank face is likely to be small, since the grooves are very shallow, especially when compared with the depth of the furrows formed by discrete plastic deformation.

Fig. 5. Flank wear land after 20 min of cutting at 32.3 m/min and 0.06 mm/rev, showing sharp, shallow grooves due to abrasion.

Fig. 6. Flank wear land after 20 min of cutting at 207.5 m/min and 0.06 mm/rev, showing sharp, shallow abrasion grooves on top of the ridge-and-furrow surface.
TiC has been found to be the most resistant to abrasion among the common coating materials (see for example, Refs. [13,14,20]), so it is hardly surprising that abrasion is not a major wear mechanism in the present investigation.

For most of the tools, the TiC coatings remained intact throughout the entire duration of the experiment. Only a few tools tested under high speed or high feed conditions suffered coating loss through discrete plastic deformation and attrition. Once an area of substrate has been exposed, the passing work 'impinges' on the lower border of the coating adjacent to these exposed areas of substrate, and slowly chips away at the coating, causing the flank wear land to grow downwards. This process leaves a very uneven border at the bottom of the flank wear land, as seen in Fig. 7. The growth in the exposed areas of substrate through such a chipping mechanism has also been reported elsewhere [12,25].

Examination of the tools in their as-machined condition shows that the regions where the substrate has become exposed are completely covered by work material, which may be removed only by dissolving in acid. This suggests that there has been intimate contact between the work material and the exposed carbide substrate during machining. Removal of the adherent work material reveals a smooth topography with ridges and grooves on some of the carbide grains (Fig. 8). This feature has been commonly associated with diffusion wear in cemented carbide tools [22,26]. However, it is also likely that the discrete plastic deformation mechanism observed on the TiC coating continues to wear away the substrate as well. Such a process could also contribute to the smoothly worn appearance.

Fig. 7. Flank wear land after 20 min of cutting at 207.5 m/min and 0.06 mm/rev, showing the uneven border between the TiC coating and the exposed carbide substrate as a result of chipping of the coating.

Fig. 8. Flank wear land after 2 min of cutting at 404 m/min and 0.06 mm/rev, showing a smooth topography, as well as carbide grains with randomly-oriented ridges and grooves.

3.2. Crater wear mechanisms

The ridge-and-furrow topography associated with discrete plastic deformation on the flank face is also seen on the rake face. In the comparison of a new and a worn tool in Fig. 9, it is apparent that the original dimple-like features on the coating have been deformed in the chip flow direction. These observations suggest that discrete plastic deformation is also a dominant mechanism in coating wear on the rake face. Similar to the case of flank wear earlier, discrete plastic deformation on the rake face becomes more pronounced as cutting speed and feed rate are raised. These findings agree with a previous suggestion that the severity of this wear process is governed by temperature and shear stress, which rise with increasing speed and feed [4]. The coating on the rake face is gradually worn to expose the carbide substrate in a manner similar to that seen on the flank face.

Fig. 9. Topography of rake face (a) when new, and (b) after 3 min of cutting at 302.9 m/min and 0.2 mm/rev, showing original dimple-like surface features being worn by discrete plastic deformation to give a ridge-and-furrow appearance.
The micrograph in Fig. 10 shows extensive cracking on the rake face, parallel to the cutting edge and perpendicular to the chip flow direction. In the vicinity of these cracks, tiny pieces of the TiC coating have been plucked out, leaving jagged uneven edges. The cracks here resemble the cracks seen on the flank face. It is believed that in moving the chip over the tool under a condition of seizure [15], the high shear stresses generated on the tool surface cause the coating to fracture. Unlike the case of flank wear, in which cracking is more severe at higher speeds and feeds, this phenomenon is observed on the rake face only at low cutting speeds. This may be explained by considering the different temperatures experienced on the rake face at low speeds and at high speeds. At low speeds, the temperature during cutting is lower and the hardness of the TiC coating remains relatively high; the coating is therefore likely to be brittle and more susceptible to cracking. With rising speed, the higher temperatures cause the hardness of the TiC coating to drop [22], and the coating becomes more ductile. Consequently, the shear stresses imposed on the coating surface result in discrete plastic deformation of the coating rather than cracking.

Fig. 10. Rake face after cutting for 20 min at 32.3 m/min and 0.06 mm/rev, showing cracks parallel to the cutting edge and perpendicular to the chip flow direction.

The cracking of the TiC coating on the rake face facilitates removal of the coating by attrition. The view of the rake face at a low magnification in Fig. 11 shows large areas where the coating has been removed by such an attrition process. The extensive attrition at low speeds and feeds could also be caused by a less laminar and more intermittent flow of the chip over the rake face. Sudden and local tensile stresses imposed by the unevenly flowing work material would tear away fragments of the coating. Uneven flow of the chip is associated with the occurrence of a built-up edge (BUE). Although no adherent BUE was found on any tool after cutting, this does not rule out its occurrence during machining. An earlier report described the formation of a BUE when machining at speeds between 15 m/min and 40 m/min [27]. The same study also noted that the adhesion between the BUE and the coated tool edge is very weak, and the BUE is readily loosened during metallographic preparations. Here, a comparison of the surface finish of the workpiece after machining at two different speeds (32.3 m/min and 103.9 m/min) reveals that the workpiece is a lot rougher at the lower speed: 6.35 μm Ra as compared to 2.46 μm Ra. A poor surface finish has been found to be a good indication of the formation of BUE during machining [22].

Fig. 11. Rake face after cutting for 20 min at 32.3 m/min and 0.06 mm/rev, showing large areas where the TiC coating has been removed via cracking and attrition.

The TiC coating is worn away fairly rapidly at high speeds and feeds via discrete plastic deformation. When this happens, the crater quickly fills with work material. Dissolving the adherent work or examining the tool in cross-section reveals a deep crater. Careful study of the crater in cross-section shows the occasional protruding carbide at the tool–work interface (Fig. 12).
This is a characteristic often associated with diffusion wear of carbide tools, in which it is believed that carbides that are less soluble in the work material are left protruding while the surrounding tool material is worn away at a faster rate [26]. However, the discrete plastic deformation mechanism could also account for such features, since the softer matrix would be deformed to a greater extent and worn away more quickly than the harder carbides. Removing the work material in the crater with acid reveals a smooth topography with carbide grains that have randomly-oriented grooves on their surfaces (Fig. 13). This appearance is very similar to that of the flank wear land shown in Fig. 8.

Fig. 12. Section through crater after cutting for 30 s at 307.8 m/min and 0.6 mm/rev, showing protruding carbide grains at the tool–work interface.

Fig. 13. Crater bottom after 30 s of cutting at 241 m/min and 0.5 mm/rev, showing a smooth topography, as well as carbide grains with randomly-oriented ridges and grooves.

4. Wear maps for TiC-coated carbide tools

Wear maps are useful tools for presenting the overall behaviour of wearing systems in a more meaningful and complete fashion [28–30]. Research on metals [31–34], ceramics (see for example, Ref. [35]), and some cutting tools [36–38] has shown that such maps facilitate the study and understanding of the relationships between measured wear rates and the dominant wear mechanisms over a wide range of operating conditions. The wear-map approach is adopted in this work to examine the wear characteristics of the TiC-coated carbide tools.

The construction of a wear map first requires the extensive gathering of wear data from the technical literature for the particular wear system of interest. In this case, information relating to flank and crater wear of TiC-coated carbide tools during dry turning of steel workpieces was collected. The axes of the map are then decided: usually two (sometimes three) operating parameters of the system are selected to form a plane (or space) within which the empirical wear data are presented. Here, the same axes as those employed previously [11,36–38], namely, cutting speed (in m/min) and feed rate (in mm/rev), are used.

4.1. Wear-rate maps

Maps showing the flank and crater wear rates of TiC-coated carbide tools are given in Figs. 14 and 15. These maps, which have been introduced earlier [38], are based on the results of the present cutting tests, together with similar data from 35 other sources. The boundaries on the maps define different regions within which wear rates of similar ranges of values are contained.

Fig. 14. Map for flank wear of TiC-coated carbide inserts during dry turning. The regions where the different ranges of wear rates are observed are shaded accordingly. The safety zone is the region where flank wear rates are the lowest; the considerably larger least-wear regime is also indicated.
Three major wear regions are demarcated on the flank wear map; these are: the safety zone (dimensionless wear rates < −7.5), the moderate-wear region (−7.0 to −7.4), and the high-wear zone (> −6.9). On the crater wear map, the wear rates span a wider range than in the case of flank wear, and these are divided into five regimes: the safety zone (< −8.5), a least-wear zone (−8.0 to −8.4), and three other higher-wear regions (> −7.9).

The boundaries on the wear maps reflect the influence of cutting speed and feed rate on the wear of the inserts. Under high speed–high feed conditions, the protective TiC coating is worn away quickly to expose the substrate, which has a much lower wear resistance, and thus gives rise to the higher wear rates seen. The increased wear rate at low speeds is probably due to the presence of a BUE occurring at those speeds. Information on the wear of TiC-coated tools in the speed range of BUE formation is scanty, since such low speeds are not within the normal machining range of these tools. It is believed, however, that the BUE that forms on TiC-coated inserts during low-speed machining is very unstable [27]. The unstable nature of the BUE causes it to break off and reform over and over again, …
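As a rough illustration, the regime boundaries quoted above can be encoded as a simple classifier. This is a minimal sketch assuming the thresholds given in the text; the function names are illustrative, and the three higher-wear crater regions are not subdivided because their internal boundaries are not quoted here.

```python
def flank_wear_regime(log_rate: float) -> str:
    """Classify a dimensionless flank wear rate, log10(VB/distance),
    into the regimes quoted for the flank wear map (Fig. 14)."""
    if log_rate < -7.5:
        return "safety zone"
    elif log_rate <= -7.0:      # text quotes -7.0 to -7.4 (one-decimal rounding)
        return "moderate-wear region"
    else:                       # text quotes > -6.9
        return "high-wear zone"

def crater_wear_regime(log_rate: float) -> str:
    """Classify log10(KT/distance) into the crater-map regimes."""
    if log_rate < -8.5:
        return "safety zone"
    elif log_rate <= -8.0:      # text quotes -8.0 to -8.4
        return "least-wear zone"
    else:                       # text quotes > -7.9
        return "higher-wear regions"

print(flank_wear_regime(-7.0))   # set 1 in Table 2 -> moderate-wear region
print(crater_wear_regime(-7.9))  # set 1 in Table 2 -> higher-wear regions
```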
Graduation Project Translation: Original Text
Heat transfer simulation in drag–pick cutting of rocks

John P. Loui a, U.M. Rao Karanam b
a Central Mining Research Institute, Barwa Road, Dhanbad, Jharkhand, India
b Department of Mining Engineering, Indian Institute of Technology, Kharagpur, India

Abstract

A two-dimensional transient heat transfer model is developed using the finite element method for the study of temperature rise during continuous drag-cutting. The simulation results, such as the temperature build-up with time and the maximum stabilized pick–rock interface temperature, are compared with experimental results for various input parameters. The effect of frictional force and cutting speed on the temperature developed at the pick–rock interface is also studied and compared with the experimental observations.

Keywords: FEM; Rock cutting; Heat transfer; Wear

1. Introduction

All rock cutting operations involve rock fracturing and the subsequent removal of the broken rock chips from the tool–rock interface, and drag-picks are one of the many types of cutting tools used for cutting rocks in rock excavation engineering. They are versatile cutting tools and have proved to be more efficient and desirable for cutting soft rock formations. However, there is a continuous effort to extend their application to all types of rock formations.

The forces responsible for rock fracture under the action of a drag-cutter can be resolved into two mutually perpendicular directions, viz., the thrust (normal) force Fn and the cutting (tangential) force Fc. It is the cutting force which decides the specific energy requirement for any cutting operation. A major part of the total energy spent during drag-cutting is lost as frictional heat. The temperature rise at the pick–rock interface due to this frictional heat has a significant effect on the wear rate of the cutting tool. Gray et al. (1962), De Vries (1966), Roxborough (1969), Barbish and Gardner (1969), Estes (1978), Detournay and Defourny (1992), Cools (1993), and Loui and Rao (1997) found that the higher temperatures encountered in tool–rock interaction ultimately result in a drastic reduction in drag-bit performance. They may also cause significant thermal stresses in the rock as well as the tool. The experimental investigations conducted earlier (De Vries, 1966; Estes, 1978; Karfakis and Heins, 1966; Loui and Rao, 1997; Martin and Fowell, 1997) could only measure pick–rock interface temperatures at 2–3 locations on the cutting tool. Most of the temperature measurements during laboratory experiments were made with thermocouples placed within the tool. Conducting such experiments is not only time consuming and costly, but also provides inadequate information if the objective is to study the temperature distribution in the pick–rock system. Analytical modelling for predicting the temperature during rock cutting requires major simplification of the problem, and this may not be able to provide accurate results for the complicated real-life situation of drag-cutting. Therefore, a numerical modelling technique, viz., the finite element method, is used in the current study to develop a two-dimensional transient heat transfer model to solve for the temperature profile in the pick–rock system. The present paper discusses the development of this transient heat transfer model and its experimental validation.

2. Theoretical heat transfer analysis in drag-cutting

Prior to the finite element solution of the problem, theoretical analysis has been done to evaluate the input parameters for the finite element program.
These parameters include the velocity field in the pick–rock system, the forces acting on the rake and flank faces of the drag-cutter, and the heat generated due to the interfacial friction while cutting.

2.1. Velocity fields

For simplicity, the drag-cutting process is simulated as the pick remaining stationary against the rock moving past the cutter at a cutting velocity Vc. The resulting velocity fields in the uncut rock and fully formed chip are evaluated theoretically as input parameters for the numerical model. Though researchers in the past have postulated linear (Nishmatsu, 1972) and curvilinear (Loui, 1998) paths of rock failure during the process of chip formation, for simplicity it is assumed to be linear for the evaluation of velocity fields in the chip and the uncut rock. Fig. 1 illustrates this process of chipping under the action of a drag-cutter. The failure path is linear and at an angle φ with respect to the cutting velocity, as shown in Fig. 1. The inter-relationships between the cutting velocity Vc, the shear velocity along the shear plane Vs, and the chip velocity along the rake face Vr are represented in Fig. 2. These velocity fields in the rock are evaluated relative to the pick, and thus the pick domain is assumed to be stationary against a moving rock domain. It has been found from chip-formation simulation studies (Loui, 1998) that the fracture plane (shear plane) exists at an angle of 30–35° with respect to the cutting velocity.

From the velocity diagram (Fig. 2), the velocity components u and v, in the x and y directions respectively, for the uncut rock and the fully formed chip are given by Eqs. (1) and (2), respectively.

Uncut rock:

u = V_c, \quad v = 0 \qquad (1)

Fully formed chip:

u = V_r \sin\gamma, \quad v = V_r \cos\gamma \qquad (2)

2.2. Forces

The forces acting on an orthogonal drag-cutter are represented diagrammatically in Fig. 1. The cutting force Fc and the thrust force Fn were measured experimentally and are related to the normal and frictional forces at the rake face and flank face (N and F, and N' and F', respectively) as shown below:

F_c = N \cos\gamma + F \sin\gamma + F' \qquad (3)

F_n = F \cos\gamma - N \sin\gamma + N' \qquad (4)

If μ is the tool–rock interface friction coefficient,

\mu = F/N = F'/N' \qquad (5)

Solving for N and N', we get

N = \frac{F_c - \mu F_n}{(1 - \mu^2)\cos\gamma + 2\mu \sin\gamma} \qquad (6)

N' = \frac{F_n(\cos\gamma + \mu \sin\gamma) - F_c(\mu \cos\gamma - \sin\gamma)}{(1 - \mu^2)\cos\gamma + 2\mu \sin\gamma} \qquad (7)
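A minimal numerical sketch of the force resolution in Eqs. (3)–(7) is given below. The equations were reconstructed here from a garbled source so as to be mutually consistent, and the input values are hypothetical, chosen only to show how N and N' would be recovered from measured Fc and Fn.

```python
import math

def rake_flank_forces(Fc: float, Fn: float, mu: float, gamma_deg: float):
    """Recover the rake-face normal force N and flank-face normal force N'
    from measured cutting/thrust forces, using Eqs. (3)-(7)."""
    g = math.radians(gamma_deg)
    c, s = math.cos(g), math.sin(g)
    denom = (1.0 - mu**2) * c + 2.0 * mu * s
    N = (Fc - mu * Fn) / denom                                   # Eq. (6)
    N_flank = (Fn * (c + mu * s) - Fc * (mu * c - s)) / denom    # Eq. (7)
    return N, N_flank

# Hypothetical values: Fc = 400 N, Fn = 300 N, mu = 0.3, rake angle 10 deg
N, N_f = rake_flank_forces(400.0, 300.0, 0.3, 10.0)
print(N, N_f)  # back-substitution into Eqs. (3) and (4) reproduces Fc and Fn
```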
2.3. Heat generation

Heat generation during drag-cutting is mainly caused by friction at the interface between the pick and the rock (at the flank and rake faces) as the cutter is dragged against the rock surface at a certain cutting velocity. Large or repeated plastic deformations would be required for heat generation of the kind encountered in metal cutting. Though elasto-plastic deformations take place in certain rock types before their failure and the formation of chips, such deformations are not large enough in rocks to result in the generation of heat. Therefore, for the purpose of estimating the heat generation during drag-cutting, the rock chipping can be assumed to be caused by brittle failure and the heat generation limited to frictional heating:

Q_{tot} = Q_r + Q_f = \mu(N V_r + N' V_c) \qquad (8)

where Q_r and Q_f are the frictional heat generated per second (watts) at the rake and flank faces, respectively, and V_r and V_c are the interfacial chip velocities at the rake face and flank face, respectively.

The velocity at which rock slides along the rake face of the tool (V_r) after rock chipping is difficult to assess. A fully formed chip does not offer a force against the rake face of the tool, since it is completely detached from the rock mass and is thrown away during the process of cutting. It has been observed by researchers in the past that drag tools undergo severe flank wear (wear land) and insignificant wear of the cutting face (Plis et al., 1988). Hence, for all practical purposes, the heat generated due to tool–rock friction at the rake face can be ignored, and Eq. (8) reduces to

Q_{tot} = Q_f = \mu N' V_c \qquad (9)

3. Discretization of the pick–rock system

Since a simple orthogonal cutting tool is considered, heat transfer in the pick–rock system is treated as a two-dimensional problem by ignoring the end effects. The whole domain has been discretized and analyzed in a two-dimensional Cartesian coordinate system. In the finite element solution of the problem, the domain is discretized into four-noded isoparametric elements, as shown in Fig. 3. In the cutting simulations the pick is assumed to be stationary; thus, the spatial discretization of the pick does not change with time. However, since the rock is assumed to move past the pick at a constant velocity Vc, the discretized domain in the rock changes with time as per the velocity fields evaluated above.

4. Finite element formulation

Galerkin's approach has been used for converting the thermal energy equation (Eq. (10)) into a set of equivalent integral equations,

k\left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2}\right) - \rho C_p\left(u \frac{\partial T}{\partial x} + v \frac{\partial T}{\partial y}\right) + Q = \rho C_p \frac{\partial T}{\partial t} \qquad (10)

where k is the coefficient of thermal conductivity, ρ is the density and C_p is the specific heat capacity at constant pressure. Let \tilde{T} be the approximate solution temperature and R_{fem} the finite element residue. Then

k\left(\frac{\partial^2 \tilde{T}}{\partial x^2} + \frac{\partial^2 \tilde{T}}{\partial y^2}\right) - \rho C_p\left(u \frac{\partial \tilde{T}}{\partial x} + v \frac{\partial \tilde{T}}{\partial y}\right) + Q - \rho C_p \frac{\partial \tilde{T}}{\partial t} = R_{fem} \qquad (11)

The approximate temperature solution \tilde{T} can be represented over the solution domain by

\tilde{T} = [N]\{T_n\} \qquad (12)

where [N] is the overall shape function vector and \{T_n\} is the nodal temperature vector. With the use of Eq. (12), Eq. (11) can be discretized (Shih, 1984), yielding a set of ordinary differential equations for the nodal temperatures.
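The paper solves Eq. (10) with a Galerkin finite element scheme; as a compact stand-in, the sketch below advances the same convection–diffusion equation with an explicit finite-difference update on a rectangular patch of the rock domain. All material constants, grid sizes and the heat flux value are illustrative assumptions, not parameters from the paper, and the pick/rock distinction is ignored for brevity.

```python
import numpy as np

# Explicit finite-difference sketch of Eq. (10); illustrative constants only
k, rho, cp = 50.0, 7800.0, 450.0        # W/(m K), kg/m^3, J/(kg K)
alpha = k / (rho * cp)                  # thermal diffusivity, m^2/s
Vc = 0.25                               # cutting speed, m/s (u = Vc, v = 0 in the rock)
nx, ny, h = 60, 40, 1.0e-3              # grid points and spacing (m)
dt = 5.0e-4                             # s; satisfies explicit diffusion and CFL limits here

T = np.zeros((ny, nx))                  # temperature rise above ambient
q_flux = 2.0e6                          # frictional flux ~ mu*N'*Vc over wear-land area, W/m^2
wear_land = slice(25, 35)               # top-row columns in pick-rock contact (hypothetical)

for step in range(12000):               # ~6 s of simulated cutting
    Tn = T.copy()
    # 5-point Laplacian on interior nodes (diffusion term of Eq. (10))
    lap = (Tn[1:-1, 2:] + Tn[1:-1, :-2] + Tn[2:, 1:-1] + Tn[:-2, 1:-1]
           - 4.0 * Tn[1:-1, 1:-1]) / h**2
    # upwind advection: the rock domain moves past the pick in +x at Vc
    adv = Vc * (Tn[1:-1, 1:-1] - Tn[1:-1, :-2]) / h
    T[1:-1, 1:-1] = Tn[1:-1, 1:-1] + dt * (alpha * lap - adv)
    T[0, :] = T[1, :]                                   # insulated top surface...
    T[0, wear_land] = T[1, wear_land] + q_flux * h / k  # ...except the heated wear land

print(T[0, wear_land].mean())           # rough interface temperature estimate
```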
5. Laboratory micro-pick experiments

The cutting action was simulated using laboratory-scale micro-picks, and the rotary drag-cutting was carried out against an applied vertical thrust force. The applied thrust levels (Fn) were in the range of 230–750 N and the cutting speeds (Vc) were 0.01, 0.16 and 0.25 m/s, which are within the practical drag-cutting ranges. The experiments were conducted on a vertical drill machine. A schematic diagram of the complete experimental setup is shown in Fig. 4. The laboratory-scale micro-picks used for rotary cutting had tungsten carbide inserts as the cutting edge. The inserts were 12 mm in length, 10 mm in width and 3.5 mm in thickness, and were designed to have a wedge angle of 80° and a rake angle of 10°. For the measurement of the temperature developed during cutting, a copper–constantan thermocouple was introduced into a 1-mm diameter hole drilled at a distance of 2 mm from the cutting edge within the tungsten carbide insert, and brazed with silver to secure a good holding. The micro-picks along with the thermocouple are shown in Fig. 5.

A pre-calibrated millivoltmeter of the range 0.1–1000 mV was used to record the voltage difference across the thermocouple. The torque generated at the pick–rock interface is measured using a spoked wheel dynamometer (Rao and Misra, 1994) in line with a recorder. In all these experiments, the drag–pick cutter was held stationary between the plates of the dynamometer and the rock core samples were held in a holder. The rock sample holder is designed to hold samples at one end, while the other end is provided with a taper, which fits into the drill shank. With this arrangement, the rock core sample rotates against the stationary drag–pick during the cutting process. The pick-holder and rock-holder are shown in Fig. 6. The experimental results have been discussed in detail in Loui and Rao (1997). However, only a few of the experimental results are used in this paper for validation of the numerical model.

6. Results and discussion

The numerical model developed in the current study has the ability to predict the pick–rock interfacial temperature and the temperature profiles in the pick–rock system. The main input parameters which influence the temperature development at the pick–rock interface are the cutting speed and the interfacial friction at the flank face of the pick. Eq. (9) shows they are linearly related to the quantity of heat generated at the pick–rock interface. The results obtained from the numerical model are compared with the experimental observations.

6.1. Temperature build-up

All the simulation runs indicated that after 6 min of pick–rock contact time, the temperature throughout the domain stabilizes. The pick–rock interface temperature is defined as the average interface temperature observed along the flank face of the tool and is evaluated using Eq. (23). The temperature rise with time at the pick–rock interface for the rock type sandstone, at a cutting speed of 0.25 m/s, a thrust force of 230 N, and a depth of cut of 1 mm, is shown in Fig. 7. A comparison is also made with the experimental observation of the pick–rock interface temperature for the same input parameters used for the numerical model. The trend by which the temperature builds up and then stabilizes has been found to be in good agreement with the experimental observations. This trend is due to the fact that the amount of heat generated is much higher during the onset of the cutting process compared to the dissipation of heat. As the cutting proceeds, the temperature builds up in the pick–rock system. When the temperature attains higher regimes, the heat dissipation due to convection and conduction also increases and eventually equals the heat generation due to friction. As the rate of heat generation remains constant, provided the machine operating parameters are unaltered, the temperature in the pick–rock system tends to stabilize after a few minutes of cutting.

6.2. Stabilized interface temperature

The stabilized pick–rock interface temperature at the end of 6 min of continuous cutting is termed the stabilized interface temperature for that particular simulation or experiment. The variation of the stabilized pick–rock interface temperature has been studied against some of the input parameters which directly influence the temperature, such as the cutting speed and the frictional force. Other parameters, like depth of cut, rake angle, etc.,
influence the frictional force at the interface and therefore have only an indirect effect on the temperature developed.

Fig. 9 shows the variation of the stabilized interfacial temperature with the cutting speed and its comparison with the experimental observation. The input parameters for the numerical model were taken corresponding to the operating parameters used for the experiments. The values predicted by the FEM analysis show a linear variation (Fig. 9), since the cutting speed is directly proportional to the quantity of heat generated (Eq. (9)). The other parameter which directly influences the heat generation at the pick–rock interface is the frictional force at the flank face of the pick; it has therefore been plotted against the pick–rock interface temperature for the numerical and experimental results (Fig. 10). As observed from Fig. 10, both results show a linear trend.

In general, from all the temperature prediction runs, the numerical results showed higher temperature values (up to approximately 25%) compared to their experimental counterparts. In the numerical model it has been assumed that all of the frictional energy generated at the flank face of the tool is converted into heat, which may be the reason for an overestimation. However, the errors incurred in the experimental drag-cutting, and also during observation using a thermocouple type of temperature measurement system, cannot be totally ignored. Martin and Fowell (1997) measured the pick–rock interface temperature using thermocouples as well as an infra-red gun, and found that the latter recorded higher temperature values. The error incurred may also be partly due to the two-dimensional approximation of drag-cutting.

6.3. Temperature variation along the rake face and flank face

Fig. 11 shows the temperature variation in its stabilized state (after 6 min of continuous cutting) along the rake face and flank face of the tool. Both curves (flank and rake faces) are plotted simultaneously starting from the tip of the cutter, which is the intersection point of both faces. As frictional heat is generated at the wear land of the flank face of the tool, the temperature rises along the wear land, reaches a maximum approximately at the mid-point of the wear land (hf), and drops rapidly towards the flank side of the cutter, as shown in Fig. 11. Since no heat is generated at the rake face of the tool during cutting, the temperature falls along the rake face from the tool edge. This indicates that the temperature concentration takes place at the worn-out portion of the flank face of the cutting tool (the wear land), which comes in direct contact with the rock.
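The linear trends reported in Section 6.2 (Figs. 9 and 10) can be checked with an ordinary least-squares fit. The temperatures below are invented for illustration and merely mimic a linear rise with cutting speed; they are not the paper's data.

```python
import numpy as np

# Hypothetical stabilized interface temperatures (deg C) at three cutting speeds (m/s)
speeds = np.array([0.01, 0.16, 0.25])
temps = np.array([32.0, 118.0, 170.0])

# First-degree polynomial fit: T ~ slope * Vc + intercept
slope, intercept = np.polyfit(speeds, temps, 1)
print(f"T = {slope:.1f} * Vc + {intercept:.1f}")
```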
7. Conclusions

A general-purpose finite element program has been developed to study the temperature attained during pick–rock interaction. The model has been used for the prediction of pick–rock interface temperatures as well as the temperature profile of the whole pick–rock system.

The transient heat transfer modelling showed that the temperature builds up steeply during the onset of cutting and stabilizes within a few minutes of continuous pick–rock contact. This trend has been validated against experimental observations. The results obtained from the numerical model prove the direct effect of the rock cutting parameters, viz., frictional force and cutting velocity, on the temperature rise at the pick–rock interface. This has been validated by the linearly increasing trends observed between the stabilized interface temperature and the rock cutting parameters.

The current study has dealt with continuous drag-cutting, both numerically and experimentally. However, the transient finite element program developed can be modified to predict the temperature rise in the pick during intermittent cutting, which mostly occurs in real-life cutter picks used in road headers and shearers. With prior knowledge of the frictional forces acting in the pick–rock system during intermittent cutting, this modification can be done by suppressing the heat generation terms and adding convective heat transfer terms at the pick–rock interface nodes when the pick leaves contact with the rock, and by initialization of the rock domain temperatures and re-introduction of the heat generation terms during re-contact. Since the experimental setup used in the current study was not designed for intermittent cutting, experimental data could not be obtained for validation, and therefore intermittent cutting was not dealt with in this paper. Further, a more detailed three-dimensional modelling may be required to reduce the errors and to get results closer to the realistic temperature values.

References

Barbish, A.B., Gardner, G.H.F., 1969. The effect of heat on some mechanical properties of igneous rocks. ASME J. Soc. Petr. Eng. 9, 395–402.
Cools, P.C.B.M., 1993. Temperature measurements upon the chisel surface during rock cutting. Int. J. Rock Mech. Min. Sci. Geomech. 30, 25–35.
De Vries, M.F., 1966. Investigation of drill temperature as a drilling performance criterion. Ph.D. thesis, University of Wisconsin, USA.
Detournay, E., Defourny, P., 1992. A phenomenological model for the drilling action of drag bits. Int. J. Rock Mech. Min. Sci. 29, 13–23.
Estes, J.C., 1978. Techniques of pilot scale drilling research. ASME J. Pressure Vessels Technol. 100, 188–193.
Gray, K.E., Armstrong, F., Gatlin, C., 1962. Two-dimensional study of rock breakage in drag-bit drilling at atmospheric pressure. J. Petrol. Technol., 93–98.
Karfakis, M.G., Heins, R.W., 1966. Laboratory investigation of bit bearing temperatures in rotary drilling. ASME J. Energy Resourc. Tech. 108, 221–227.
Loui, J.P., Rao, K.U.M., 1997. Experimental investigations of pick–rock interface temperature in drag–pick cutting. Indian J. Eng. Mater. Sci. 4, 63–66.
Loui, J.P., 1998. Finite element simulation and experimental investigation of drag-cutting in rocks. Ph.D. thesis, Indian Institute of Technology, Kharagpur, India.
Martin, J.A., Fowell, R.J., 1997. Factors governing the onset of severe drag tool wear in rock cutting. Int. J. Rock Mech. Min. Sci. 34, 59–69.
Nishmatsu, Y., 1972. The mechanics of rock cutting. Int. J. Rock Mech. Min. Sci. 9, 261–272.
Plis, M.N., Wingquist, C.F., Roepke, W.W., 1988. Preliminary evaluation of the relationship of bit wear to cutting distance, forces and dust using selected commercial and experimental coal and rock cutting tools. USBM, RI-9193, p. 63.
Rao, K.U.M., Misra, B., 1994. Design of a spoked wheel dynamometer. Int. J. Surf. Mining Recl. 8, 146–147.
Roxborough, F.F., 1969. Rock cutting research. Tunnels Tunnelling 1, 125–128.
Shih, T.M., 1984. Numerical Heat Transfer. Hemisphere/Springer, Washington/New York, p. 563.
Graduation Project English Translation (English Version)
Machine Tools

Objectives

Machine tools are the main engines of the manufacturing industry. This chapter covers a few of the details that are common to all classes of machine tools discussed in this book. After completing the chapter, the reader will be able to
> understand the classification of the various machine tools used in manufacturing industries.
> identify the differences between generating and forming of surfaces.
> identify various methods used to generate different types of surfaces.
> distinguish between the different accuracies and surface finishes that are achievable with different machine tools.
> understand the different components of the machine tools and their functions.
> learn about the different support structures used in the machine tools.
> understand the various actuation systems that are useful to generate the required surfaces.
> learn the different types of guideways used in the machine tools.
> understand the work holding requirements.

3.1 INTRODUCTION

The earliest known machine tools are the Egyptian foot-operated lathes. These machine tools were developed essentially to allow for the introduction of accuracy in manufacturing. A machine tool is defined as one which, while holding the cutting tools, is able to remove metal from a workpiece in order to generate the requisite job of given size, configuration, and finish. It is different from a machine, which is essentially a means of converting a source of power from one form to the other. Machine tools are the mother machines, since without them no components can be produced in their finished form. They are very old, and the industrial revolution owes its success to them.

A machine tool is required to provide support to the workpiece and cutting tools, as well as provide motion to one or both of them, in order to generate the required shape on the workpiece. The form generated depends upon the type of machine tool used. In the last two centuries, machine tools have been developed substantially. Machine tool versatility has grown to cater to the varied needs of the new inventors coming up with major developments.
For example, James Watt's steam engine could be proven only after a satisfactory method was found to bore the engine cylinder with a boring bar, by Wilkinson around 1775. A machine tool is designed to perform certain primary functions, but the extent to which it can be exploited to perform secondary functions is a measure of its flexibility. Generally, the flexibility of a machine tool is increased by the use of secondary functional attachments, such as a radius or spherical turning attachment for a centre lathe. Alternatively, to improve productivity, special attachments are added, which also reduce the flexibility.

3.2 CLASSIFICATION OF MACHINE TOOLS

There are many ways in which the machine tools can be classified. One such classification, based on production capability and application, is shown below:

1. General purpose machine tools (GPM) are those designed to perform a variety of machining operations on a wide range of components. By the very nature of generalisation, the general purpose machine tools, though capable of carrying out a variety of tasks, would not be suitable for large production, since the setting time for any given operation is large. Thus, the idle time on the general purpose machine tools is more and the machine utilisation is poor. The machine utilisation may be termed as the percentage of actual machining or chip generating time to the actual time available. This is much lower for the general purpose machine tools. They may also be termed the basic machine tools. Further, skilled operators would be required to run the general purpose machine tools. Hence, their utility is in job shops, such as catering to small batch and large variety job production, where the requirement is versatility rather than production capability. Examples are the lathe, shaper, and milling machine.

2. Production machine tools are those where a number of functions of the machine tool are automated, such that the operator skill required to produce the component is reduced. Also, this helps in reducing the idle time of the machine tool, thus improving the machine utilisation. It is also possible that a general purpose machine tool may be converted into a production machine tool by the utilisation of jigs and fixtures for holding the workpiece. These have been developed from the basic machine tools. Some examples are capstan lathes, turret lathes, automats, and multiple spindle drilling machines. The setting time for a given job is more. Also, tooling design for a given job is more time consuming and expensive. Hence, the production machine tools can only be used for large volume production.

3. Special purpose machine tools (SPM) are those machine tools in which the setting operation for the job and tools is practically eliminated and complete automation is achieved. This greatly reduces the actual manufacturing time of a component and helps in the reduction of costs. These tools are used for mass manufacturing. These machine tools are expensive compared to the general purpose machines, since they are specifically designed for the given application and are restrictive in their application capabilities. Examples are the cam shaft grinding machine, connecting rod twin boring machine, and piston turning lathe.

4. Single purpose machine tools are those which are designed specifically for doing a single operation on a class of jobs or on a single job. These tools have the highest amount of automation and are used for really high rates of production. They are used specifically for one product only, and thus have the least flexibility. However, they do not
require any manual intervention and are most cost effective. Examples are transfer lines composed of unit heads for completely machining any given product. The application of the above four types is shown graphically in Fig. 3.1.

Fig. 3.1 Application of machine tools based on the capability.

3.3 GENERATING AND FORMING

Generally, the component shape is produced in machine tools by two different techniques, generating and forming. Generating is the technique in which the required profile is obtained by manipulating the relative motions of the workpiece and the cutting tool edge. Thus, the obtained contour would not be identical to the shape of the cutting tool edge. This is generally used for a majority of the general profiles required. The type of surface generated depends on the primary motion of the workpiece as well as the secondary or feed motion of the cutting tool. For example, when the workpiece is rotated and a single point tool is moved along a straight line parallel to the axis of rotation of the workpiece, a helical surface is generated, as shown in Fig. 3.2(a). If the pitch of the helix, or feed rate, is extremely small, the surface generated may be approximated to a cylinder. This is carried out in lathes and is called turning or cylindrical turning (a kinematic sketch of this helical tool path is given after Fig. 3.4 below).

Fig. 3.2 Generating and forming of surfaces by machine tools.

An alternative method of obtaining the given profile is called forming, in which the shape of the cutting tool is impressed upon the workpiece, as shown in Fig. 3.2(b). Thus, the accuracy of the obtained shape depends upon the accuracy of the form of the tool used. However, many of the machine tool operations are actually combinations of the above two. For example, when a dovetail is cut, the actual profile is obtained by sweeping the angular cutter along a straight line. Thus, it involves forming (angular cutter profile) and generating (sweeping along a line), as shown in Fig. 3.3.

Fig. 3.3 Generation of surface.

3.4 METHODS OF GENERATING SURFACES

A large number of surfaces can be generated or formed with the help of the motions given to the tool and the workpiece. The shape of the tool also makes a very important contribution to the final surface obtained. Basically, there are two types of motions given in a machine tool. The primary motion given to the workpiece or cutting tool constitutes the cutting speed, which causes a relative motion between the tool and workpiece such that the face of the cutting tool approaches the material to be machined. Usually, the primary motion consumes most of the cutting power. The secondary motion is one which feeds the tool relatively past the workpiece. The combination of the primary and secondary motions is responsible for the generation of specific surfaces. Sometimes, there would be a tertiary movement in between the cuts for specific surfaces. A classification of machine tools based on these motions is shown in Fig. 3.4 for single point tools, and in Fig. 3.5 for multi-point tools.

Fig. 3.4 Classification of machine tools using single point cutting tools.
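The turning kinematics described in Section 3.3 can be made concrete with a few lines of code. The sketch below traces the helical path left by a single-point tool fed parallel to the rotation axis, as in Fig. 3.2(a); the function name and the numerical values are illustrative assumptions, not part of the chapter.

```python
import numpy as np

def turning_tool_path(radius_mm, feed_mm_rev, revolutions, pts_per_rev=100):
    """Points traced on the workpiece by a single-point tool fed parallel to
    the rotation axis: a helix whose pitch equals the feed per revolution."""
    theta = np.linspace(0.0, 2.0 * np.pi * revolutions, int(revolutions * pts_per_rev))
    x = radius_mm * np.cos(theta)
    y = radius_mm * np.sin(theta)
    z = feed_mm_rev * theta / (2.0 * np.pi)   # axial advance per revolution = feed
    return np.column_stack((x, y, z))

path = turning_tool_path(radius_mm=25.0, feed_mm_rev=0.2, revolutions=5)
print(path.shape)  # (500, 3); with a very small feed the helix approximates a cylinder
```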
Fig. 3.5 Classification of machine tools using multi-point cutting tools.

In the case of job rotation, cylindrical surfaces are generated, as shown in Fig. 3.6, when a tool is fed in a direction parallel to the axis of rotation. When the feeding direction is not parallel to the axis of rotation, complex surfaces such as cones (Fig. 3.7) or contours (Fig. 3.8) can be generated. The tools used in the above cases are single point. If the tool motion is perpendicular to the axis of rotation, a plane surface is generated, as shown in Fig. 3.9. However, if a cutting tool of a given form is fed in a direction perpendicular to the axis of rotation, also called plunge cutting, a contour surface of revolution is obtained, as shown in Fig. 3.10.

Plane surfaces can also be generated when the job or tool reciprocates for the primary motion, without any rotation, as in shaping (Fig. 3.11). With multi-point tools, generally plane surfaces are generated, as shown in Fig. 3.12. However, in this situation a combination of forming and generating is used to get a variety of complex surfaces, which are otherwise impossible to obtain through single-point tool operations. Some typical examples are spur gear hobbing and spiral milling of formed cavities.

3.5 ACCURACY AND FINISH ACHIEVABLE

It is necessary to select a given machine tool or machining operation for a job such that it is the lowest cost option. There are various operations possible for a given type of surface, and each one has its own characteristics in terms of possible accuracy, surface finish, and cost. This selection is made at the time of process planning. The obtainable accuracy for various types of machine tools is shown in Table 3.1. The surface finish expected from the various processes is shown in Fig. 3.13. The values presented in Table 3.1 and Fig. 3.13 are only a rough guide. The actual values vary greatly depending on the condition of the machine tool, the cutting tool used, and the various cutting process parameters.

3.6 BASIC ELEMENTS OF MACHINE TOOLS

The various components that are present in all machine tools may be identified as follows:
• Work holding device, to hold the workpiece in the correct orientation to achieve the required accuracy in manufacturing; for example, a chuck.
• Tool holding device, to hold the cutting tool in the correct position with respect to the workpiece, and provide enough holding force to counteract the cutting forces acting on the tool; for example, a tool post.
• Work motion mechanism, to provide the necessary speed to the workpiece for generating the surface; for example, the headstock.
• Tool motion mechanism, to provide the various motions needed for the tool, in conjunction with the workpiece motion, in order to generate the required surface profiles; for example, the carriage.
• Support structure, to support all the mechanisms shown above, maintain their relative position with respect to each other, and allow for relative movement between the various parts to obtain the requisite part profile and accuracy; for example, the bed.

The type of device or mechanism used varies depending on the type of machine tool and the function it is expected to serve. In this chapter, some of the more common elements are discussed. Further details may be found in the chapters where the actual machine tools are discussed.

The various motions that need to be provided in the machine tool are the cutting speed and the feed. The range of speeds and feed rates to be provided in a given machine tool depends on the capability of the machine tool and the range of work materials that are expected to be processed. Basically, the actual speed and feed chosen depend upon the
• work material,
• required production rate,
• required surface finish, and
• expected accuracy.

The drive units in a machine tool are expected to provide the required speed and convert the rotational speed into linear motion. Details of these may be found in books dealing with machine tool design.
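As a small worked example of the speed selection just mentioned, the standard turning relation V = πDN/1000 (V in m/min, D in mm, N in rpm) can be inverted to pick a spindle speed. This relation is general machining practice rather than something derived in this chapter, and the values below are illustrative.

```python
import math

def spindle_speed_rpm(cutting_speed_m_min: float, diameter_mm: float) -> float:
    """Spindle speed for a target cutting speed in turning:
    V = pi * D * N / 1000, solved for N (rpm)."""
    return 1000.0 * cutting_speed_m_min / (math.pi * diameter_mm)

# e.g., a target of 200 m/min on a 50 mm diameter bar
print(round(spindle_speed_rpm(200.0, 50.0)))  # ~1273 rpm
```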
3.7 SUPPORT STRUCTURES

The broad categories of support structures found in various machine tools are shown in Fig. 3.14. They may be classified as beds (horizontal structures) or columns (vertical structures). The main requirements of the support structure are:
• Rigidity
• Accuracy of guideways
• Impact resistance
• Wear resistance

The bed provides a support for all the elements present in a machine tool. It also provides the true relative positions of all units in the machine tool. Some of these units may be sliding on the bed or fixed. For the purpose of sliding, accurate guideways are provided. The bed weight is approximately half the total weight of the machine tool. The basic construction of a bed is like a box, to provide the highest possible rigidity with low weight. To increase the rigidity, the basic box structure is reinforced with various types of ribs, as shown in Fig. 3.15. The addition of ribs complicates the manufacturing process for the beds.

Beds are generally constructed using cast iron or alloy cast iron containing alloying elements such as nickel, chromium, and molybdenum. With cast iron, because of the intricate designs of the beds, the casting defects may not be fully eliminated. Alloy steel structures are also used for making beds. The predominant manufacturing method used is welding. The following advantages can be claimed for steel construction:
(a) With steels, the wall thickness can be reduced. Thus, greater strength and stiffness for the same weight would be possible with alloy steel bed construction.
(b) Walls of different thicknesses can be conveniently welded, whereas in casting this would create problems.
(c) Repair of welded structures is easier.
(d) Large machining allowances would have to be provided for castings to remove the defects and the hard outer skin.

Concrete has also been tried as a bed material, chosen mainly because of its large damping capacity. For precision machine tools and measuring machines, granite is also used as a bed material. The major types of bed styles used in machine tools are shown in Fig. 3.16.
Graduation Project English Translation (English) (Construction Cost Engineering)
The impacts of change management practices on project change cost performance

Introduction

Many researchers have conducted statistical analyses to reveal the correlations between project management best practices and project performance, and they have provided valuable recommendations to the industry about how to improve project performance. Among the many project management best practices, change management practice is one of the most important (Lee et al., 2004; Zou and Lee, 2006). Further, project change cost performance is one of the most essential metrics used as a measure of project success (Williams, 2000; Eden et al., 2005). However, previous studies concentrated on the overall change management practice implementation level, and none of them looked into individual change management practice elements. In addition, project budget was generally adopted as the basis for comparison when measuring project change cost magnitude, which entails a problem of accuracy, as will be elucidated later in this paper. This paper resolves these problems, and its intention is to show construction managers how each individual element of change management practices can improve project change cost performance. A secondary aim is to explore the correlation between overall change management practice implementation and project change cost performance.

Background

The Benchmarking and Metrics (BM&M) programme of the Construction Industry Institute (CII) has been committed to providing the construction industry 'quantitative data essential for the support of cost/benefit analyses' (Construction Industry Institute, 2007). CII commenced the BM&M data collection in 1996, and the database currently represents over 1200 projects for which CII best practices and project performance indices have been or are being recorded. In 1990, the CII Cost/Schedule Controls Task Force published a research report about the impact of project changes on construction cost and schedule at the operational level. Sanders (2000) also explored the impacts of change management practices on certain projects.

CII started to include change management practice in its database in 1997, with 14 elements. This study is aimed at investigating the relationship between the usage of these 14 change management practice elements and project change cost performance. The BM&M database is the source of the data analysis, and the data analysed in this research can be classified into three categories of metrics: project characteristics, project performance, and change management practice use. Table 1 illustrates the demographic composition of the BM&M database based on respondent type, project nature and industrial group. As can be seen in Table 1, most of the projects are in the heavy industry sector. Such an extremely uneven sample population distribution poses difficulty in the subsequent data analyses, and it is therefore one of the primary considerations when the analysis techniques are selected. Appendix II presents the 14 questions used in this research.

Research scope

Cost and schedule are frequently the biggest concerns in a project, and they are also the project performance facets most sensitive to project changes. However, the impact of project changes on project schedule is far less significant than the effect on cost; the reason is that cost is additive, while schedule is not. A project's duration is determined by its critical paths, and all non-critical paths contain floats (slacks) of one size or another.
If a change consumes some of the float on non-critical paths, but not so much that it changes the critical path, the total duration of the project will not be changed, and thus project schedule performance is not negatively affected by the change. In addition, change management practices have been found to be the most influential element affecting cost savings in the majority of projects (Construction Industry Institute, 2003). With all of these considerations, this research focuses only on the impacts of change management practice on project change cost performance.

Research objectives

It is worth noting that this research is explanatory rather than confirmative or predictive. In other words, the purpose of this research is to reveal potential correlations among project characteristics, change management practice and project change cost performance. The two main objectives of this research are (1) to investigate the effectiveness of individual change management practice elements in terms of improving project change cost performance (e.g., for a particular change management practice element, does a construction project using it have a high probability of achieving better and more predictable change cost performance than otherwise); and (2) to explore the correlation between overall change management practice implementation and project change cost performance while controlling for project characteristics variables. Answers to the first question could highlight which change management practice elements can singly influence project change cost performance significantly and thus deserve particular attention. By exploring the second relationship, the effectiveness of change management practice while controlling for project characteristics can be validated.

A few more words about change

First, it is necessary to clarify the definition of 'change'. In this research, 'change' refers to project changes that have been mutually agreed upon by both the owner and the contractor. There are two types of project changes: project-development changes (PDCs) and scope changes (SCs). PDCs 'include those changes required to execute the original scope of work or to obtain the original process basis'; in contrast, SCs 'include changes in the base scope of work or the process basis' (Construction Industry Institute, 2007).

The absolute monetary value of project change is less meaningful to this research than its ratio to a baseline, because of different project scales. There should be some baseline against which change cost can be compared, so that project change cost performance can be assessed. Such a denominator can be either the project budget (initial predicted project cost) or the project actual cost (the amount a project has spent at completion). Some researchers have used the former (Hsieh et al., 2004), but here the second metric is employed, because the accuracy of the initial predicted project budget cannot be guaranteed and there is no information in the database to evaluate the accuracy of the project initial estimate.

Research limitations

As with any study employing statistical data analysis techniques and tools, the reliability of the raw data is crucial. Considerable opinion-type data in this research are collected based on the Likert scale. Therefore, the data are influenced by respondents' biases. Some preparation of the data has been done prior to the data analysis process (e.g.,
Some preparation of the data was done prior to the data analysis process (e.g. transforming the original 0–10 project complexity measure in the BM&M database into only three levels: low, medium and high).2 In this way, some of the original data are truncated and become coarser, which means that some bias can be eliminated to alleviate the subjectivity of the data. It is necessary to point out that, because this research is carried out essentially by statistical means, the research processes and results are inherently vulnerable to the statistical limitations of the selected data analysis techniques and of the available data. This paper relies on statistical analysis, and thus further research, including qualitative analysis of the implied relations, is suggested.

Research method
The two objectives of this research can be interpreted as finding two types of statistical correlations: (1) the relationship between each of a group of binary indicator variables3 (a change management practice element question with answers Yes or No) and a single interval variable (project change cost factor); and (2) the relationship between two interval variables (change management practice index and project change cost factor) within each category of categorical variables (project characteristics). ANOVA is intrinsically suited to investigating the first type of relation, so multiple one-way ANOVA tests are conducted. For the second correlation, linear regression is conducted because there is no theoretical or empirical support for a non-linear (curvilinear) correlation, and because the primary purpose of this research is to evaluate the effect rather than to predict.

Data analyses
Measuring project change cost performance: change cost factor
As discussed earlier, project change cost performance cannot be measured by the absolute value of changes; it is measured instead by the ratio relative to the project actual total cost. Therefore, the change cost factor is used in this research to measure project change cost performance. This metric measures the total cost of changes as a fraction of the actual total project cost (Construction Industry Institute, 1998). For industrial sector owner projects, actual total project cost includes total installation cost at turnover but excludes land costs; for building sector owner projects, it is the total cost of design and construction to prepare the facility for occupancy; and for contractor projects, it is the total cost of the final scope of work.

The impact of individual change management practice element implementation on project change cost performance
There is no theoretical support for hypothesizing the existence of interaction effects of change management practice elements on the change cost factor, nor are there any sound practical or empirical indications. From the perspective of the practitioner, whether owner or contractor, any of these change management practice elements can be implemented independently, although it may be convenient to use some of them in combination with others. For this reason, multiple one-way ANOVA rather than k-way ANOVA is conducted to examine the effect of each single change management practice element on the change cost factor. According to Stevens (1996), influential points rather than outliers should be of greatest concern because their involvement impacts significantly on the statistical result. A Cook's distance greater than 1 usually flags an extremely influential point,4 and so cases with a Cook's distance over 1 are precluded from further analysis here.
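To make the screening step concrete, the following is a minimal Python sketch on entirely hypothetical data (the variable names, effect sizes and sample size are illustrative assumptions, not values from the BM&M database): cases with a Cook's distance over 1 are dropped, and the robust group tests described in the next paragraphs are then applied.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 200
uses_element = rng.integers(0, 2, n)               # 1 = "Yes", 0 = "No" (hypothetical)
change_cost_factor = 0.08 - 0.03 * uses_element + rng.normal(0.0, 0.02, n)

# Fit change cost factor on the Yes/No indicator and compute Cook's distance.
X = sm.add_constant(uses_element.astype(float))
fit = sm.OLS(change_cost_factor, X).fit()
cooks_d = fit.get_influence().cooks_distance[0]    # one D value per case

keep = cooks_d <= 1.0                              # screen out D > 1
y, g = change_cost_factor[keep], uses_element[keep]
yes, no = y[g == 1], y[g == 0]

# Equality of variances in the Yes/No groups (Levene's test):
print(stats.levene(yes, no, center='mean'))
# Equality of means: for two groups, the Brown-Forsythe test of means
# coincides with Welch's unequal-variance t-test, which scipy provides.
print(stats.ttest_ind(yes, no, equal_var=False))
```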
The remaining cases are subjected to a Kolmogorov–Smirnov test to check data normality, and the hypothesis of normality is rejected by the result. Meanwhile, basic descriptive statistics demonstrate that the cell sizes of projects using or not using a certain change management practice element are extremely uneven. For some change management practice elements, such as 1 and 2, fewer than 10% of all cases answered 'No'. In view of these violations of the assumptions of ordinary ANOVA, Brown and Forsythe's F-test of equality of means is performed as a substitute for the ordinary F-test, because it is robust against non-normality, unequal group sizes and heterogeneity of variances (Garson, 2006). In addition to the ANOVA test for the equality of means, Levene's test is used to test the equality of the variances (in the Yes and No groups). The incentive for performing this test is that change management practice elements may be able not only to reduce the average level of change cost but also to control the variation range of change cost, thus making the project's change cost performance more predictable. Table 2 shows the significant results of the two tests on the change management practice elements.

For change management practice elements 4, 5, 6 and 10, the Brown–Forsythe F-test statistic is significant. In addition, the observed mean change cost factor values of the No group are higher than those of the Yes group. This indicates that the projects using each of these change management practice elements achieved significantly better change cost performance than the projects that did not use them. Although it cannot be certain that these elements are the sole reason for better change cost performance, the result statistically ascertains that there is an undeniable correlation between using these change management practice elements and better change cost performance. Further, because the Levene statistic is significant and the observed standard deviation of the No group is higher than that of the Yes group, the probability of suffering an outrageously high change cost is significantly lower for projects following change management practice elements 4, 6 and 10 than for others.

The impact of overall change management practice implementation on project change cost performance
Project characteristic variable selection
The efficacy of overall change management practice in different types of projects can vary widely depending on project nature, industrial type, project complexity, project size, contract methods and the level of experience of project participants. In order to separate out the effects of project characteristics and focus on the partial effect of change management practice on the project change cost factor, it is necessary to categorize and group projects based on different characteristic factors and then investigate the correlation between overall change management practice and change cost factor within each group. Theoretical and empirical information is employed to select which project characteristic variables should be considered. Although the database includes a number of project characteristic metrics, only the following five variables are selected: respondent type, project nature, industrial type, transformed complexity and cost category.
Table 3 shows detailed categories of the five variables.

Measuring overall change management practice implementation: change management practice index
The overall implementation of the change management practice elements is measured by the change management practice index, a continuous variable scored on a 0 to 10 scale, with 0 meaning no use and 10 meaning extensive use of all of the elements. The data analysis is conducted with two continuous variables—change management practice index and change cost factor—while controlling for the five project characteristic variables. As an independent variable, the change management practice index is not spread evenly from 0 to 10, because the index is calculated from the implementation of the individual change management practice elements, which are measured as categorical variables. This makes many data points cluster on several change management practice index levels. However, since there are more than 15 change management practice index levels, it can be deemed a quasi-interval variable. Therefore, linear regression is used instead of ANOVA, and in order to retain the pristine quantitative information in project change costs, non-parametric or ranking regression methods are not used.

The correlation between change management practice index and change cost factor in different project categories
Before fitting the model, influential points are detected (Cook's distance D > 1) and deleted. Normality of the error terms is not a serious assumption for a bivariate linear regression (Wesolowsky, 1976). According to Wesolowsky, formal tests of normality are not necessary with large sample sizes (in this case, n > 50 for each project characteristic category) because 'in large samples lack of normality has no important consequences' (Wesolowsky, 1976). Meanwhile, linear regression is robust in the face of small or medium violations of the homoscedasticity assumption.

Table 4 shows the significant findings of the bivariate linear regressions in each project characteristic category. In the table, for both contractor and owner projects, all of the significant beta coefficients are negative for add-on projects, heavy industrial projects, medium- and high-complexity projects, and projects with budgets between US$15m and $50m. This indicates that higher change management practice index values are associated with lower change cost factor values in the corresponding project characteristic categories. These results are acceptable for such an exploratory non-physical-science study, because even the smallest sample size of these regressions is over 50 and the power of the test5 is almost always over 0.50, which is acceptable (Stevens, 1996).

Conclusions
Although there has been a consensus in both academia and industry that project change management practices can improve project change cost performance, it is shown here that individual change management practice elements are not equally effective. Generally, for those projects in which a contingency plan for change-susceptible areas has been prepared in the early phases, all changes are required to go through a formal change justification procedure, and the contract specifies how to manage changes, the possibility of incurring an extremely high project change cost relative to actual project cost is significantly lower than for other projects.
In addition, those projects in which changes are evaluated against project business drivers and success criteria perform better on average than other projects in terms of project change cost performance. Therefore, these four change management practice elements are highly recommended for construction projects. The impact of project ownership on change management practice implementation is not noticeable, because for both contractor and owner projects a high overall change management practice implementation score is associated with better project change cost performance. However, this relationship is stronger in some specific project types. The analysis shows that add-on projects have stronger correlations between overall change management practice implementation and change cost performance than do grassroots and modernization projects. The results also indicate that heavy industrial, highly complex, and US$15–50m projects have stronger correlations than the other categories.
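As an illustrative recap of the category-wise regressions summarized in Table 4, the following Python sketch fits a bivariate regression of change cost factor on the change management practice index within each project category. The category names, sample sizes and coefficients are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
categories = {"add-on": 60, "grassroots": 55, "modernization": 52}  # n per group (assumed)
for name, n in categories.items():
    cmp_index = rng.uniform(0, 10, n)                    # quasi-interval index, 0-10
    ccf = 0.10 - 0.004 * cmp_index + rng.normal(0, 0.02, n)  # hypothetical change cost factor
    res = stats.linregress(cmp_index, ccf)               # bivariate linear regression
    flag = "significant" if res.pvalue < 0.05 else "n.s."
    print(f"{name:14s} beta={res.slope:+.4f}  p={res.pvalue:.3f}  ({flag})")
```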
Graduation Project Foreign Literature Translation (English)
Bid Compensation Decision Model for Projects with Costly Bid Preparation
S. Ping Ho, A.M.ASCE1
1Assistant Professor, Dept. of Civil Engineering, National Taiwan Univ., Taipei 10617, Taiwan. E-mail: spingho@.tw

Abstract: For projects with high bid preparation cost, it is often suggested that the owner should consider paying bid compensation to the most highly ranked unsuccessful bidders to stimulate extra effort or inputs in bid preparation. Whereas the underlying idea of using bid compensation is intuitively sound, there is no theoretical basis or empirical evidence for such a suggestion. Because costly bid preparation often implies a larger project scale, the issue of bid compensation strategy is important to practitioners and an interest of study. This paper aims to study the impacts of bid compensation and to develop appropriate bid compensation strategies. Game theory is applied to analyze the behavioral dynamics between competing bidders and project owners. A bid compensation model based on game theoretic analysis is developed in this study. The model provides equilibrium solutions under bid compensation, quantitative formulas, and qualitative implications for the formation of bid compensation strategies.
DOI: 10.1061/(ASCE)0733-9364(2005)131:2(151)
CE Database subject headings: Bids; Project management; Contracts; Decision making; Design/build; Build/Operate/Transfer; Construction industry.

Introduction
An often seen suggestion in practice for projects with high bid preparation cost is that the owner should consider paying bid compensation, also called a stipend or honorarium, to the unsuccessful bidders. For example, according to the Design–Build Manual of Practice Document Number 201 by the Design–Build Institute of America (DBIA) (1996a), it is suggested that "the owner should consider paying a stipend or honorarium to the unsuccessful proposers" because "excessive submittal requirements without some compensation is abusive to the design–build industry and discourages quality teams from participating." In another publication by DBIA (1995), it is also stated that "it is strongly recommended that honorariums be offered to the unsuccessful proposers" and that "the provision of reasonable compensation will encourage the more sought-after design–build teams to apply and, if short listed, to make an extra effort in the preparation of their proposal." Whereas bid preparation costs depend on project scale, delivery method, and other factors, the cost of preparing a proposal is often relatively high in some particular project delivery schemes, such as design–build or build–operate–transfer (BOT) contracting. Moreover, since costly bid preparation often implies a large project scale, the issue of bid compensation strategy should be important to practitioners and of great interest for study.

Existing research on the procurement process in construction has addressed the selection of projects that are appropriate for certain project delivery methods (Molenaar and Songer 1998; Molenaar and Gransberg 2001), the design–build project procurement process (Songer et al. 1994; Gransberg and Senadheera 1999; Palaneeswaran and Kumaraswamy 2000), and the BOT project procurement process (United Nations Industrial Development Organization 1996). However, the bid compensation strategy for projects with a relatively high bid preparation cost has not been studied. Among the issues concerning the bidder's response to the owner's procurement or bid compensation strategy, it is in the owner's interest to understand how the owner can stimulate high-quality inputs or extra effort from the bidder during bid preparation.
Whereas the argument for using bid compensation is intuitively sound, there is no theoretical basis or empirical evidence for such an argument. Therefore, it is crucial to study under what conditions bid compensation is effective, and how much compensation is adequate with respect to different bidding situations. This paper focuses on theoretically studying the impacts of bid compensation and tries to develop appropriate compensation strategies for projects with costly bid preparation. Game theory will be applied to analyze the behavioral dynamics between competing bidders. Based on the game theoretic analysis and numeric trials, a bid compensation model is developed. The model provides a quantitative framework, as well as qualitative implications, on bid compensation strategies.

Research Methodology: Game Theory
Game theory can be defined as "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers" (Myerson 1991). Among economic theories, game theory has been successfully applied to many important issues such as negotiations, finance, and imperfect markets. Game theory has also been applied to construction management in two areas.
Ho (2001) applied game theory to analyze the information asymmetry problem during the procurement of a BOT project and its implication in project financing and government policy. Ho and Liu (2004) developed a game theoretic model for analyzing the behavioral dynamics of builders and owners in construction claims. In competitive bidding, strategic interactions among competing bidders, and between bidders and owners, are common, and thus game theory is a natural tool for analyzing the problem of concern.

A well-known example of a game is the "prisoner's dilemma" shown in Fig. 1. Two suspects are arrested and held in separate cells. If both of them confess, then they will be sentenced to jail for 6 years. If neither confesses, each will be sentenced for only 1 year. However, if one of them confesses and the other does not, then the honest one will be rewarded by being released (in jail for 0 years) and the other will be punished with 9 years in jail. Note that in each cell, the first number represents player No. 1's payoff and the second one represents player No. 2's. The prisoner's dilemma is called a "static game," in which the players act simultaneously; i.e., each player does not know the other player's decision before making his own decision. If the payoff matrix shown in Fig. 1 is known to all players, then the payoff matrix is "common knowledge" to all players and this game is called a game of "complete information." Note that the players of a game are assumed to be rational, i.e., to maximize their payoffs.

To answer how each prisoner will play in this game, we introduce the concept of "Nash equilibrium," one of the most important concepts in game theory. A Nash equilibrium is a set of actions that will be chosen by each player. In a Nash equilibrium, each player's strategy should be the best response to the other player's strategy, and no player wants to deviate from the equilibrium solution. Thus, the equilibrium or solution is "strategically stable" or "self-enforcing" (Gibbons 1992). Conversely, a nonequilibrium solution is not stable, since at least one of the players can be better off by deviating from it. In the prisoner's dilemma, only the (confess, confess) solution, where both players choose to confess, satisfies the stability requirement of Nash equilibrium. Note that although the (not confess, not confess) solution seems better off for both players compared with the Nash equilibrium, this solution is unstable, since either player can obtain extra benefit by deviating from it. Interested readers can refer to Gibbons (1992), Fudenberg and Tirole (1992), and Myerson (1991).

Fig. 1. Prisoner's dilemma

Bid Compensation Model
In this section, the bid compensation model is developed on the basis of game theoretic analysis. The model can help the owner form bid compensation strategies under various competition situations and project characteristics. Illustrative examples with numerical results are given when necessary to show how the model can be used in various scenarios.

Assumptions and Model Setup
To perform a game theoretic study, it is critical to make necessary simplifications so that one can focus on the issues of concern and obtain insightful results. The setup of the model then follows. The assumptions made in this model are summarized as follows. Note that these assumptions can be relaxed in future studies for more general purposes.
1. Average bidders: The bidders are equally good in terms of their technical and managerial capabilities. Since design–build and BOT focus on quality issues, the prequalification process imposed during procurement reduces the variation in the quality of bidders. As a result, it is not unreasonable to make the "average bidders" assumption.
2. Complete information: If all players consider each other to be an average bidder, as suggested in the first assumption, it is natural to assume that the payoffs of each player in each potential solution are known to all players.
3. Bid compensation for the second best bidder: Since DBIA's (1996b) manual, Document Number 103, suggests that "the stipend is paid only to the most highly ranked unsuccessful offerors to prevent proposals being submitted simply to obtain a stipend," we shall assume that the bid compensation will be offered to the second best bidder.
4. Two levels of effort: It is assumed that there are two levels of effort in preparing a proposal, high and average, denoted by H and A, respectively. The effort A is defined as the level of effort that does not incur extra cost to improve quality. Conversely, the effort H is defined as the level of effort that will incur extra cost, denoted as E, to improve the quality of a proposal, where the improvement is detectable by an effective proposal evaluation system. Typically, the standard of quality would be transformed into the evaluation criteria and their respective weights specified in the Request for Proposal.
5. Fixed amount of bid compensation, S: The fixed amount can be expressed as a certain percentage of the average profit, denoted as P, assumed during the procurement by an average bidder.
6. Absorption of extra cost, E: For convenience, it is assumed that E will not be included in the bid price, so that the high-effort bidder will win the contract under price–quality competition, such as the best-value approach. This assumption simplifies the tradeoff between quality improvement and bid price increase.

Two-Bidder Game
In this game, there are only two qualified bidders. The possible payoffs for each bidder in the game are shown in normal form in Fig. 2. If both bidders choose "H", denoted by (H, H), both bidders will have a 50% probability of winning the contract, and at the same time, a 50% probability of losing the contract but being rewarded with the bid compensation, S. As a result, the expected payoffs for the bidders in the (H, H) solution are (S/2 + P/2 - E, S/2 + P/2 - E). Note that the computation of the expected payoff is based on the assumption of the average bidder.
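The payoff structure just described can be written down directly. The sketch below (Python; the numeric values in the example calls are arbitrary illustrations, not from the paper) builds the two-bidder payoff table from P, E and S and checks which pure-strategy profiles survive unilateral deviation, i.e., are Nash equilibria, anticipating conditions (1) and (2) derived next.

```python
# Expected payoffs of the two-bidder game as functions of P, E and S.
def payoffs(P, E, S):
    return {
        ("H", "H"): (S/2 + P/2 - E, S/2 + P/2 - E),
        ("A", "A"): (S/2 + P/2,     S/2 + P/2),
        ("H", "A"): (P - E, S),
        ("A", "H"): (S, P - E),
    }

def nash_equilibria(P, E, S):
    table = payoffs(P, E, S)
    eq = []
    for (a1, a2), (u1, u2) in table.items():
        dev1 = table[("A" if a1 == "H" else "H", a2)][0]  # player 1 deviates
        dev2 = table[(a1, "A" if a2 == "H" else "H")][1]  # player 2 deviates
        if u1 >= dev1 and u2 >= dev2:                     # no profitable deviation
            eq.append((a1, a2))
    return eq

# E relatively small (P > 2E): (H, H) is the equilibrium even with S = 0.
print(nash_equilibria(P=0.10, E=0.02, S=0.0))    # -> [('H', 'H')]
# A large S (S > P - 2E) flips the game to (A, A), as condition (2) predicts.
print(nash_equilibria(P=0.10, E=0.02, S=0.08))   # -> [('A', 'A')]
```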
Similarly, if the bidders choose (A, A), the expected payoffs will be (S/2 + P/2, S/2 + P/2). If the bidders choose (H, A), bidder No. 1 will have a 100% probability of winning the contract, and thus the expected payoffs are (P - E, S). Similarly, if the bidders choose (A, H), the expected payoffs will be (S, P - E). Payoffs of an n-bidder game can be obtained by the same reasoning.

Fig. 2. Two-bidder game

Nash Equilibrium
Since the payoffs in each equilibrium are expressed as functions of S, P, and E, instead of particular numbers, the model will focus on the conditions for each possible Nash equilibrium of the game. Here, the approach to solving for Nash equilibrium is to find conditions that ensure the stability or self-enforcing requirement of Nash equilibrium. This technique will be applied throughout this paper. First, check the payoffs of the (H, H) solution. For bidder No. 1 or 2 not to deviate from this solution, we must have

S/2 + P/2 - E > S  →  S < P - 2E    (1)

Therefore, condition (1) guarantees (H, H) to be a Nash equilibrium. Second, check the payoffs of the (A, A) solution. For bidder No. 1 or 2 not to deviate from (A, A), condition (2) must be satisfied:

S/2 + P/2 > P - E  →  S > P - 2E    (2)

Thus, condition (2) guarantees (A, A) to be a Nash equilibrium. Note that the condition "S = P - 2E" will be ignored, since this condition can become (1) or (2) by adding or subtracting an infinitely small positive number. Thus, since S must satisfy either condition (1) or condition (2), either (H, H) or (A, A) must be a unique Nash equilibrium. Third, check the payoffs of the (H, A) solution. For bidder No. 1 not to deviate from H to A, we must have P - E > S/2 + P/2, i.e., S < P - 2E. For bidder No. 2 not to deviate from A to H, we must have S > S/2 + P/2 - E, i.e., S > P - 2E. Since S cannot be greater than and less than P - 2E at the same time, the (H, A) solution cannot exist. Similarly, the (A, H) solution cannot exist either. This also confirms the previous conclusion that either (H, H) or (A, A) must be a unique Nash equilibrium.

Impacts of Bid Compensation
Bid compensation is designed to serve as an incentive to induce bidders to make high effort. Therefore, the concerns of bid compensation strategy should focus on whether S can induce high effort and how effective it is. According to the equilibrium solutions, the bid compensation decision should depend on the magnitude of P - 2E, or the relative magnitude of E compared to P. If E is relatively small such that P > 2E, then P - 2E will be positive and condition (1) will be satisfied even when S = 0. This means that bid compensation is not an incentive for high effort when the extra cost of high effort is relatively low. Moreover, surprisingly, S can be damaging when S is high enough that S > P - 2E. On the other hand, if E is relatively large so that P - 2E is negative, then condition (2) will always be satisfied, since S cannot be negative. In this case, (A, A) will be a unique Nash equilibrium. In other words, when E is relatively large, it is not in the bidder's interest to incur extra cost to improve the quality of the proposal, and therefore S cannot provide any incentive for high effort. To summarize, when E is relatively low, it is in the bidder's interest to make high effort even if there is no bid compensation. When E is relatively high, the bidder will be better off making average effort. In other words, bid compensation cannot promote extra effort in a two-bidder game, and ironically, bid compensation may discourage high effort if the compensation is too much. Thus, in a two-bidder procurement, the owner should not use bid compensation as an incentive to induce high effort.
Three-Bidder Game

Nash Equilibrium
Fig. 3 shows all the combinations of actions and their respective payoffs in a three-bidder game. Similar to the two-bidder game, the Nash equilibria here can be solved by ensuring the stability of the solution.

Fig. 3. Three-bidder game

For the equilibrium (H, H, H), condition (3) must be satisfied for the stability requirement:

S/3 + P/3 - E > 0  →  S > 3E - P    (3)

For the equilibrium (A, A, A), condition (4) must be satisfied so that no one has any incentive to choose H:

S/3 + P/3 > P - E  →  S > 2P - 3E    (4)

In a three-bidder game, it is possible that S will satisfy conditions (3) and (4) at the same time. This is different from the two-bidder game, where S can satisfy only either condition (1) or (2). Thus, there will be two pure strategy Nash equilibria when S satisfies conditions (3) and (4). However, since the payoff of (A, A, A), S/3 + P/3, is greater than the payoff of (H, H, H), S/3 + P/3 - E, for all bidders, the bidders will choose (A, A, A) eventually, provided that a consensus between bidders on making effort A can be reached. The process of reaching such a consensus is called "cheap talk," where the agreement is beneficial to all players and no player will want to deviate from it. In design–build or BOT procurement, it is reasonable to believe that cheap talk can occur. Therefore, as long as condition (4) is satisfied, (A, A, A) will be a unique Nash equilibrium. An important implication is that the cheap talk condition must not be satisfied for any equilibrium solution other than (A, A, A). In other words, condition (5) must be satisfied for all equilibrium solutions except (A, A, A):

S < 2P - 3E    (5)

Following this result, for (H, H, H) to be unique, conditions (3) and (5) must be satisfied; i.e., we must have

3E - P < S < 2P - 3E    (6)

Note that by definition S is a non-negative number; thus, if one cannot find a non-negative number to satisfy the equilibrium condition, then the respective equilibrium does not exist, and the equilibrium condition will be marked as "N/A" in the illustrative figures and tables. Next, check the solution where two bidders make high effort and one bidder makes average effort, e.g., (H, H, A). The expected payoffs for (H, H, A) are (S/2 + P/2 - E, S/2 + P/2 - E, 0). For (H, H, A) to be a Nash equilibrium, S/3 + P/3 - E < 0 must be satisfied so that the bidder with average effort will not deviate from A to H; S/2 + P/2 - E > S/2 must be satisfied so that the bidders with high effort will not deviate from H to A; and condition (5) must be satisfied, as argued previously. The three conditions can be rewritten as

S < min[3E - P, 2P - 3E]  and  P - 2E > 0    (7)

Note that because of the average bidder assumption, if (H, H, A) is a Nash equilibrium, then (H, A, H) and (A, H, H) will also be Nash equilibria. The three Nash equilibria will constitute a so-called mixed strategy Nash equilibrium, denoted by 2H + 1A, where each bidder randomizes actions between H and A with certain probabilities. The concept of mixed strategy Nash equilibrium will be explained in more detail in the next section. Similarly, we can obtain the requirements for the solution 1H + 2A: condition (5) and S/2 + P/2 - E < S/2 must be satisfied. The requirements can be reorganized as

S < 2P - 3E  and  P - 2E < 0    (8)
Note that the conflicting relationship between "P - 2E > 0" in condition (7) and "P - 2E < 0" in condition (8) seems to show that the two types of Nash equilibria are exclusive. Nevertheless, the only difference between 2H + 1A and 1H + 2A is that a bidder in the 2H + 1A equilibrium has a higher probability of playing H, whereas a bidder in 1H + 2A also mixes actions H and A but with a lower probability of playing H. From this perspective, the difference between 2H + 1A and 1H + 2A is not very distinctive. In other words, one should not consider, for example, 2H + 1A to be two bidders playing H and one bidder playing A; instead, one should consider each bidder to be playing H with higher probability. Similarly, 1H + 2A means that each bidder has a lower probability of playing H, compared to 2H + 1A.

Illustrative Example: Effectiveness of Bid Compensation
The equilibrium conditions for a three-bidder game are numerically illustrated in Table 1, where P is arbitrarily assumed to be 10% for numerical computation purposes and E varies to represent different costs of higher effort. The "*" in Table 1 indicates that zero compensation is the best strategy, i.e., that bid compensation is ineffective in terms of stimulating extra effort. According to the numerical results, Table 1 shows that bid compensation can promote higher effort only when E is within the range P/3 < E < P/2, where zero compensation is not necessarily the best strategy. The question is whether it is beneficial to the owner to incur the cost of bid compensation when P/3 < E < P/2. The answer to this question lies in the concept and definition of the mixed strategy Nash equilibrium, 2H + 1A, as explained previously. Since 2H + 1A indicates that each bidder will play H with significantly higher probability, 2H + 1A may already be good enough, knowing that we only need one bidder out of three to actually play H. We shall elaborate on this concept later in a more general setting. As a result, if the 2H + 1A equilibrium is good enough, the use of bid compensation in a three-bidder game will not be recommended.

Table 1. Compensation Impacts on a Three-Bidder Game (P = 10%)
  E                                     3H            2H + 1A     1H + 2A     3A
  E < P/3 (e.g., E = 2%)                S < 14%*      N/A         N/A         S > 14%
  P/3 < E < P/2 (e.g., E = 4%)          2% < S < 8%   S < 2%      N/A         S > 8%
  P/2 < E < (2/3)P (e.g., E = 5.5%)     N/A           N/A         S < 3.5%*   S > 3.5%
  (2/3)P < E (e.g., E = 7%)             N/A           N/A         N/A         Always*
  Note: * denotes that zero compensation is the best strategy; N/A = the respective equilibrium does not exist.

Four-Bidder Game and n-Bidder Game

Nash Equilibrium of Four-Bidder Game
The equilibria of the four-bidder procurement can also be obtained. As the number of bidders increases, the number of potential equilibria increases as well. Due to the length limitation, we shall only show the major equilibria and their conditions, which are derived following the same technique applied previously. The condition for the pure strategy equilibrium 4H is

4E - P < S < 3P - 4E    (9)

The condition for the other pure strategy equilibrium, 4A, is

S > 3P - 4E    (10)

Other potential equilibria are mainly mixed strategies, such as 3H + 1A, 2H + 2A, and 1H + 3A, where the numeral associated with H or A represents the number of bidders with effort H or A in the equilibrium. The condition for the 3H + 1A equilibrium is

3E - P < S < min[4E - P, 3P - 4E]    (11)

For the 2H + 2A equilibrium the condition is

6E - 3P < S < min[3E - P, 3P - 4E]    (12)

The condition for the 1H + 3A equilibrium is

S < min[6E - 3P, 3P - 4E]    (13)

Illustrative Example of Four-Bidder Game
Table 2 numerically illustrates the impacts of bid compensation on the four-bidder procurement under different relative magnitudes of E. When E is very small, bid compensation is not needed to promote effort H. However, as E grows gradually, bid compensation becomes more effective. As E grows to a larger magnitude, greater than P/2, the 4H equilibrium becomes impossible, no matter how large S is. In fact, if S is too large, bidders will be encouraged to make effort A. When E is extremely large, e.g., E > 0.6P, the best strategy is to set S = 0.
The "*" in Table 2 also indicates the cases where bid compensation is ineffective.

Table 2. Compensation Impacts on a Four-Bidder Game (P = 10%)
  E                                     4H             3H + 1A         2H + 2A         1H + 3A    4A
  E < P/4 (e.g., E = 2%)                S < 22%*       N/A             N/A             N/A        S > 22%
  P/4 < E < P/3 (e.g., E = 3%)          2% < S < 18%   S < 2%          N/A             N/A        S > 18%
  P/3 < E < P/2 (e.g., E = 4%)          6% < S < 14%   2% < S < 6%     S < 2%          N/A        S > 14%
  P/2 < E < (3/5)P (e.g., E = 5.5%)     N/A            6.5% < S < 8%   3% < S < 6.5%   S < 3%     S > 8%
  (3/5)P < E < (3/4)P (e.g., E = 6.5%)  N/A            N/A             N/A             S < 4%*    S > 4%
  (3/4)P < E (e.g., E = 8%)             N/A            N/A             N/A             N/A        Always*
  Note: * denotes that zero compensation is the best strategy; N/A = the respective equilibrium does not exist.

To conclude, in a four-bidder procurement, bid compensation is not effective when E is relatively small or large. Again, similar to the three-bidder game, even when bid compensation becomes more effective, it does not mean that offering bid compensation is the best strategy, since more variables need to be considered. Further analysis will be performed later.

Nash Equilibrium of n-Bidder Game
It is desirable to generalize our model to the n-bidder game, although only a very limited number of qualified bidders will be involved in most design–build or BOT procurements, since for other project delivery methods it is possible to have many bidders. Interested readers can follow the numerical illustrations for the three- and four-bidder games to obtain the numerical solutions of the n-bidder game. Here, only analytical equilibrium solutions will be solved. For "nA" to be the Nash equilibrium, we must have P - E < S/n + P/n for bidder A not to deviate. In other words, condition (14) must be satisfied:

S > (n - 1)P - nE    (14)

Note that condition (14) can be rewritten as S > n(P - E) - P, which implies that it is not likely for nA to be the Nash equilibrium when there are many bidders, unless E is very close to or larger than P. Similar to the previous analysis, for "nH" to be the equilibrium, we must have S/n + P/n - E > 0 for the stability requirement, and condition (15) for excluding the possibility of cheap talk or the nA equilibrium:

S < (n - 1)P - nE    (15)

The condition for the nH equilibrium can be reorganized as condition (16):

nE - P < S < (n - 1)P - nE    (16)

Note that if E < P/n, condition (16) will always be satisfied and nH will be a unique equilibrium even when S = 0. In other words, nH will not be the Nash equilibrium when there are many bidders, unless E is extremely small, i.e., E < P/n. For "aH + (n - a)A, where 2 < a < n" to be the equilibrium solution, we must have S/a + P/a - E > 0 for bidder H not to deviate, S/(a + 1) + P/(a + 1) - E < 0 for bidder A not to deviate, and condition (15). These requirements can be rewritten as

aE - P < S < min[(a + 1)E - P, (n - 1)P - nE]    (17)

Similarly, for "2H + (n - 2)A", the stability requirements for bidders H and A are S/(n - 1) < S/2 + P/2 - E and S/3 + P/3 - E < 0, respectively, and thus the equilibrium condition can be written as

[(n - 1)/(n - 3)](2E - P) < S < min[3E - P, (n - 1)P - nE]    (18)

For the "1H + (n - 1)A" equilibrium, we must have

S < min{[(n - 1)/(n - 3)](2E - P), (n - 1)P - nE}    (19)

An interesting question is: what conditions would warrant that the only possible equilibrium of the game is either "1H + (n - 1)A" or nA, no matter how large S is? A logical response is: when the equilibria "aH + (n - a)A, where a > 2" and the equilibrium 2H + (n - 2)A are not possible solutions. Thus, a sufficient condition here is that for any S > [(n - 1)/(n - 3)](2E - P), the condition "S < (n - 1)P - nE" is not satisfied. This can be guaranteed if we have

(n - 1)P - nE < [(n - 1)/(n - 3)](2E - P)  →  E > [(n - 1)/(n + 1)]P    (20)
Conditions (19) and (20) show that when E is greater than [(n - 1)/(n + 1)]P, the only possible equilibrium of the game is either 1H + (n - 1)A or nA, no matter how large S is. Two important practical implications can be drawn from this finding. First, when n is small in a design–build contract, it is not unusual for E to be greater than [(n - 1)/(n + 1)]P, and in that case bid compensation cannot help to promote higher effort. For example, in a three-bidder procurement, bid compensation will not be effective when E is greater than (2/4)P. Second, when the number of bidders increases, bid compensation will become more effective, since it is more unlikely that E is greater than [(n - 1)/(n + 1)]P. These two implications confirm the previous analyses of the two-, three-, and four-bidder games. After the game equilibria and the effective range of bid compensation have been solved, the next important task is to develop the bid compensation strategy with respect to various procurement situations.
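The equilibrium conditions above are straightforward to evaluate numerically. As an illustration, the following Python sketch is a direct transcription of the four-bidder conditions (9)-(13), with S constrained to be non-negative; running it reproduces the S-ranges of Table 2 for P = 10%.

```python
# S-interval in which each four-bidder equilibrium holds, per conditions (9)-(13).
def four_bidder_ranges(P, E):
    raw = {
        "4H":    (4*E - P,       3*P - 4*E),                   # condition (9)
        "3H+1A": (3*E - P,       min(4*E - P, 3*P - 4*E)),     # condition (11)
        "2H+2A": (6*E - 3*P,     min(3*E - P, 3*P - 4*E)),     # condition (12)
        "1H+3A": (float("-inf"), min(6*E - 3*P, 3*P - 4*E)),   # condition (13)
        "4A":    (3*P - 4*E,     float("inf")),                # condition (10)
    }
    ranges = {}
    for eq, (lo, hi) in raw.items():
        lo = max(lo, 0.0)                       # compensation cannot be negative
        ranges[eq] = (lo, hi) if lo < hi else None   # None corresponds to "N/A"
    return ranges

for E in (0.02, 0.03, 0.04, 0.055, 0.065, 0.08):    # the E column of Table 2
    print(f"E = {E:.3f}:", four_bidder_ranges(P=0.10, E=E))
```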
Graduation Project English Translation (Chinese-English Version)
Feasibility assessment of a leading-edge-flutter wind power generator
Luca Caracoglia
Department of Civil and Environmental Engineering, Northeastern University, 400 Snell Engineering Center, 360 Huntington Avenue, Boston, MA 02115, USA

This study addresses the preliminary technical feasibility assessment of a mechanical apparatus for the conversion of wind energy.
The proposed device, designated as the "leading-edge-flutter wind power generator", employs the aeroelastic dynamic instability of a blade airfoil rotating torsionally about its leading edge.
Although the exploitation of aeroelastic phenomena for energy harvesting has already been proposed by the research community, this apparatus is compact, simple and only marginally susceptible to turbulence and wake effects.
Graduation Project English Translation
9 Continuous-Time Dynamic Neural Networks
9.1 Dynamic Neural Network Structures: An Introduction
9.2 Hopfield Dynamic Neural Network (DNN) and Its Implementation
9.3 Hopfield Dynamic Neural Networks (DNNs) as Gradient-like Systems
9.4 Modifications of Hopfield Dynamic Neural Networks
9.5 Other DNN Models
9.6 Conditions for Equilibrium Points in DNN
9.7 Concluding Remarks
Problems

As seen in the previous chapters, a neural network consists of many interconnected simple processing units, called neurons, which form layered configurations. An individual neuron aggregates its weighted inputs and yields an output through a nonlinear activation function with a threshold. In artificial neural networks there are three types of connections: intralayer, interlayer, and recurrent connections. Intralayer connections, also called lateral connections or cross-layer connections, are links between neurons in the same layer of the network. Interlayer connections are links between neurons in different layers. Recurrent connections provide self-feedback links to the neurons. In interlayer connections, the signals are transferred in one of two ways: either feedforward or feedback.
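As a minimal illustration of the neuron model described above (a sketch, not code from this chapter), the following Python computes the output of a single neuron that aggregates weighted inputs and applies a sigmoid activation with a threshold; all numeric values are arbitrary examples.

```python
import numpy as np

def neuron_output(x, w, theta):
    """y = f(sum_i w_i * x_i - theta), with a sigmoid activation f."""
    net = np.dot(w, x) - theta          # weighted aggregation minus threshold
    return 1.0 / (1.0 + np.exp(-net))   # nonlinear activation function

x = np.array([0.5, -1.0, 2.0])   # inputs (hypothetical)
w = np.array([0.8,  0.2, 0.5])   # connection weights (hypothetical)
print(neuron_output(x, w, theta=0.3))   # -> about 0.71
```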
Graduation Project English Translation (Original Text)
No.:
Graduation Project (Thesis) Foreign Literature Translation (Original Text)
School (Department): Guilin University of Electronic Technology
Major: Electronic Information Engineering
Student name: xx    Student ID: xxxxxxxxxxxxx
Supervisor's affiliation: Guilin University of Electronic Technology
Name: xxxx    Title: xx
Date: month x, day xx, 2014

Timing on and off power supply

Uses
Switching power supply products are widely used in industrial automation and control, military equipment, scientific equipment, LED lighting, industrial equipment, communications equipment, electrical equipment, instrumentation, medical equipment, semiconductor cooling and heating, air purifiers, electronic refrigerators, LCD monitors, audio-visual products, security systems, computer chassis, digital products and other fields.

Introduction
With the rapid development of power electronics technology, power electronic equipment has become ever more closely related to people's work and lives, and electronic equipment cannot function without reliable power supplies. In the 1980s, computer power supplies fully switched over to switching power supplies, the first field to complete the change of power supply generation; in the 1990s, switching power supplies entered a wide variety of electronic and electrical devices, and program-controlled switchboards, communications equipment, electronic testing equipment and power control equipment all came to use them widely, which in turn drove the rapid development of switching power supply technology.

A switching power supply uses modern power electronics technology to control the on/off ratio of a switching transistor so as to maintain a stable output voltage; it generally consists of a pulse-width modulation (PWM) control IC and switching devices (MOSFETs or BJTs). Compared with a linear power supply, the costs of both grow with increasing output power, but at different growth rates: beyond a certain power point, a linear power supply costs more than a switching power supply. With the development and innovation of power electronics technology, switching power supply technology continues to advance, this cost crossover point moves ever further toward the low-output-power side, and the switching power supply thereby gains broad room for development.

The direction of its development is high frequency: high-frequency operation miniaturizes the switching power supply and lets it enter a wider range of application areas, especially high-tech fields, promoting the miniaturization and weight reduction of high-tech products. In addition, the development and application of switching power supplies are of great significance for energy conservation, resource conservation and environmental protection.

Classification
Modern switching power supplies come in two kinds: DC switching power supplies and AC switching power supplies. Only the DC switching power supply is introduced here; its function is to convert raw power of poor quality, such as mains power or battery power, into high-quality DC voltage that meets equipment requirements. The core of the DC switching power supply is the DC/DC converter, and the classification of DC switching power supplies therefore depends on the classification of DC/DC converters.
In other words, the classification of DC switching power supplies and the classification of DC/DC converters are essentially the same: the classification of DC/DC converters is basically the classification of DC switching power supplies.

DC/DC converters can be divided into two categories according to whether the input and output are electrically isolated: isolated DC/DC converters and non-isolated DC/DC converters.

Isolated DC/DC converters can be further classified by the number of active power devices. Single-transistor isolated converters come in two types, forward and flyback. Double-transistor converters come in four types: the double-transistor forward converter, the double-transistor flyback converter, the push-pull converter and the half-bridge converter. The four-transistor converter is the full-bridge DC/DC converter.

Non-isolated DC/DC converters can likewise be divided into single-transistor, double-transistor and four-transistor categories by the number of active power devices. There are six types of single-transistor DC/DC converters: the step-down (buck) converter, the step-up (boost) converter, the buck-boost converter, the Cuk converter, the Zeta converter and the SEPIC converter. Among these, the buck and boost converters are the basic types, and the buck-boost, Cuk, Zeta and SEPIC converters are derived from them. The double-transistor type is the cascade buck-boost DC/DC converter, and the four-transistor type is the full-bridge DC/DC converter.

In isolated DC/DC converters, the electrical isolation between input and output is usually achieved by a transformer. Because the transformer has a voltage transformation function, it helps extend the converter's output voltage range of application and makes it easy to obtain outputs at different voltages, or multiple outputs at the same voltage.

For given voltage and current ratings of the power switches, the converter's output power is usually proportional to the number of switches: the more switches, the greater the output power of the DC/DC converter. A four-transistor converter has twice the output power of a double-transistor one, and a single-transistor converter has only a quarter of the output power of a four-transistor one.

Non-isolated converters and isolated converters can be combined to give a single converter characteristics that neither has on its own. In terms of energy transmission, there are unidirectional and bidirectional DC/DC converters. A DC/DC converter with bidirectional transmission capability can transmit power either from the source side to the load side or from the load side back to the source side.

DC/DC converters can also be divided into self-excited and separately controlled types. A converter that uses a positive feedback signal to sustain its own periodic switching is called a self-excited converter; for example, the Royer converter is a typical push-pull self-oscillating converter.
A separately controlled DC/DC converter, by contrast, has its switching device control signal generated by a specialized external control circuit.

In the field of switching power supply technology, people have been developing power electronic devices on one side and switching frequency conversion technology on the other; the two promote each other and have pushed switching power supplies toward lightness, smallness, thinness, low noise, high reliability and strong anti-interference, with an annual growth rate of more than two digits. Switching power supplies can be divided into two categories, AC/DC and DC/DC (there are also AC/AC and DC/AC converters, such as inverters). DC/DC converter modular design technology and production processes are mature and standardized both at home and abroad and have been recognized by users, but the modularization of AC/DC converters encounters more complex technology and manufacturing processes because of their particular characteristics. The structure and characteristics of these two types of switching power supply are explained below.

Self-excited: the converter oscillates by itself without an external signal source; the whole circuit can be viewed as a feedback oscillation circuit built around a transformer.

Separately excited: the oscillation depends entirely on an externally sustained signal; this excitation mode is widely used in practical applications. Classified by the structure of the excitation signal, there are pulse-width-modulated and pulse-amplitude-modulated types: pulse-width modulation controls the width of the signal at a fixed frequency, while pulse-amplitude modulation controls the signal amplitude. Both have the same effect of keeping the oscillation frequency within a certain range to achieve voltage stabilization.

The windings of the transformer can generally be divided into three groups: a primary winding that takes part in the oscillation, a feedback winding that sustains the oscillation, and a load winding. For example, in the switching power supplies produced for household appliances in Shanghai, the 220 V AC mains is bridge-rectified and filtered into roughly 300 V DC, which is applied through the transformer to the collector of the switching transistor for high-frequency oscillation; the feedback winding feeds back to the base to keep the circuit oscillating, and the signal induced in the load winding is rectified, filtered and regulated to supply DC power to the load. While supplying power, the load winding also takes on the task of voltage stabilization: a voltage sampling device connected to the output circuit monitors changes in the output voltage and feeds them back in time to the oscillation circuit, which adjusts the oscillation frequency to keep the voltage stable. To avoid interference with the circuit, the feedback voltage is returned to the oscillation circuit through optocoupler isolation.
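The regulation idea just described, sampling the output, feeding the error back, and adjusting the switching drive until the output settles, can be illustrated with a deliberately simplified discrete-time loop. Everything in this sketch is an assumption for illustration (an ideal buck-type relation v_out = D * V_IN, the loop gain, the voltage values); a real supply adjusts the pulse width or frequency through the optocoupler path described above.

```python
# Simplified closed-loop regulation sketch: each switching cycle, the sampled
# output is compared with a reference and the duty cycle is nudged accordingly.
V_IN, V_REF = 300.0, 12.0      # input rail and target output (hypothetical)
duty, gain = 0.02, 0.0005      # initial duty cycle and loop gain (assumed)

for cycle in range(200):
    v_out = duty * V_IN                    # ideal average output of the chopper
    error = V_REF - v_out                  # fed back, e.g., via the optocoupler
    duty = min(max(duty + gain * error, 0.0), 1.0)   # clamp duty to [0, 1]

print(f"duty = {duty:.4f}, v_out = {duty * V_IN:.2f} V")   # settles near 12 V
```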
Technology developments
High frequency is the development direction of the switching power supply: high-frequency operation miniaturizes the switching power supply, lets it enter broader application fields, especially high-tech ones, and has driven the development and advancement of switching power supplies toward lightness, smallness, thinness, low noise, high reliability and strong anti-interference, with an annual growth rate of more than two digits.

The power electronic devices applied in switching power supplies are diodes, IGBTs and MOSFETs. SCRs see a small number of applications in the input rectifier and soft-start circuits of switching power supplies; GTRs are difficult to drive and have a low switching frequency, and are gradually being replaced by IGBTs and MOSFETs.

The development directions of the switching power supply are high frequency, high reliability, low power consumption, low noise, anti-interference and modularization. Because lightness, smallness and thinness depend on high frequency, the major foreign switching power supply manufacturers are all committed to developing new intelligent components in step: in particular, reducing the losses of secondary-side rectification, making technological innovations in power ferrite materials to improve their magnetic properties at high frequency and large magnetic flux density (Bs), and miniaturizing capacitors. SMT technology has allowed considerable progress in switching power supplies, with components arranged on both sides of the circuit board to ensure that the switching power supply is light, small and thin. High-frequency operation inevitably requires innovation beyond traditional PWM switching techniques, and soft-switching techniques realizing ZVS and ZCS have become the mainstream technology of the switching power supply, raising its efficiency substantially. For high-reliability targets, switching power supply manufacturers in the United States reduce device stress by lowering operating currents and junction temperatures, among other measures, greatly improving product reliability.

Modularization is the overall trend of the switching power supply: modular power supplies can compose distributed power systems and can be designed into N+1 redundant power systems with parallel capacity expansion. As for the shortcoming of operating noise, simply pursuing high frequency will increase the noise, whereas adopting partial resonant converter circuit technology can, in theory, achieve high frequency while reducing noise; however, some technical problems remain in the practical application of resonant converter technology, so much work is still needed in this field to make the technology practical.

With continuous innovation in power electronics technology, the switching power supply industry has broad prospects for development. To accelerate the development of the switching power supply industry in China, the industry must take the road of technological innovation and follow a development path of production-research cooperation with Chinese characteristics, contributing to the rapid development of China's national economy.

Developments and trends of the switching power supply
In 1955, Royer of the United States invented the self-oscillating push-pull single-transformer transistor DC-DC converter, which was the beginning of high-frequency conversion control circuits; in 1957, Jensen invented the self-oscillating push-pull two-transformer converter; and in 1964, U.S. scientists proposed the idea of a switching power supply without a line-frequency transformer, which reduced the size and weight of the power supply in a fundamental way. In 1969, thanks to component improvements such as the higher voltage ratings of high-power silicon transistors and the shortened reverse recovery time of diodes, a 25 kHz switching power supply was finally produced.

At present, switching power supplies, with their small size, light weight and high efficiency, are widely used in computer-oriented terminal equipment, communications equipment and almost all other electronic equipment, and they are an indispensable power supply mode for the rapid development of today's electronic information industry. Switching power supplies of 100 kHz made with bipolar transistors and of 500 kHz made with power MOSFETs are already practical and available on the market, but their frequencies need to be raised further. To raise the switching frequency, switching losses must be reduced, and reducing switching losses requires high-speed switching components. However, the switching speed is limited by the charge stored in distributed inductance and capacitance, and diode circuits produce surges and noise. These not only affect surrounding electronic equipment but also greatly reduce the reliability of the power supply itself. To prevent voltage surges at switch turn-on and turn-off, RC or LC buffers can be used, and current surges caused by diode stored charge can be suppressed by magnetic buffers made of amorphous and other core materials. However, at frequencies above 1 MHz, resonant circuits are used to make the voltage across the switch, or the current through it, sinusoidal, which reduces switching losses and also controls the occurrence of surges. Such a switch is called a resonant switch. Switching power supplies of this kind are promising because, in theory, they can greatly increase the switching speed while reducing the switching losses to zero, with low noise; this is expected to become one of the main ways of raising the operating frequency of switching power supplies. At present, many countries in the world are committed to putting several-megahertz converters to practical use.

Principle introduction
The working process of the switching power supply is quite easy to understand. In a linear power supply, the power transistor operates in linear mode; the PWM switching power supply, by contrast, makes the power transistor work in the on and off states. In both of these states, the volt-ampere product on the power transistor is very small (when on: low voltage, large current; when off: high voltage, nearly zero current), and this volt-ampere product is the loss produced on the power semiconductor device.

Compared with a linear power supply, the higher efficiency of the PWM switching power supply is achieved by "chopping", that is, cutting the input DC voltage into a pulse voltage whose amplitude equals the input voltage amplitude. The duty cycle of the pulses is adjusted by the switching power supply controller. Once the input voltage has been chopped into an AC square wave, its amplitude can be raised or lowered through a transformer, and the number of output voltage groups can be increased by increasing the number of secondary windings of the transformer. Finally, rectifying and filtering the AC waveform yields the DC output voltage.
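A back-of-envelope comparison makes the efficiency argument concrete. In the sketch below, all component values are illustrative assumptions, not measurements: the linear pass transistor dissipates the full (V_IN - V_OUT) * I_LOAD, while an idealized switch dissipates only a small conduction term, because its volt-ampere product is small in both states (switching losses are ignored in this model).

```python
V_IN, V_OUT, I_LOAD = 12.0, 5.0, 2.0        # volts, volts, amps (hypothetical)

# Linear regulator: the pass transistor drops (V_IN - V_OUT) at full load current.
p_linear = (V_IN - V_OUT) * I_LOAD          # = 14 W dissipated as heat
eff_linear = (V_OUT * I_LOAD) / (V_IN * I_LOAD)

# Idealized switch: on-state voltage is tiny, off-state current is tiny, so the
# V*I product is small in both states; only a small conduction loss remains.
V_ON = 0.1                                  # assumed on-state drop
duty = V_OUT / V_IN                         # ideal buck duty cycle
p_switch = V_ON * I_LOAD * duty             # about 0.08 W in this model

print(f"linear: {p_linear:.1f} W lost, efficiency ~{eff_linear:.0%}")
print(f"switch: {p_switch:.2f} W lost (idealized)")
```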
Finally, the AC waveform is rectified and filtered to produce the DC output voltage.

The main purpose of the controller is to keep the output voltage stable, and it works much like the controller in a linear supply: its functional blocks, the voltage reference and the error amplifier, can be designed the same as in a linear regulator. The difference is that the error amplifier output (the error voltage) passes through a voltage/pulse-width conversion unit before driving the power transistor.

Switching power supplies have two main modes of operation: forward conversion and boost conversion. Although the two differ only a little in circuit layout, their operating processes differ greatly, and each has advantages in specific applications.

Circuit Schematic

A switching power supply, as its name implies, works like a gate: power flows when the gate opens and stops when it closes. The "gate" is an SCR or a switching transistor; the two behave similarly, both relying on a pulse signal applied to the control terminal (the base of the transistor, or the gate of the SCR) to turn on and off. During the positive half of the pulse, the control-terminal voltage rises, the transistor or SCR conducts, and the 300 V output obtained by rectifying and filtering the 220 V mains is applied to the primary of the switching transformer, whose secondary steps the voltage up or down to supply each circuit. During the negative half of the oscillation pulse, the control voltage at the base of the regulator transistor (or the SCR gate) falls below the set value, the regulator cuts off, the 300 V supply is disconnected, and the transformer secondary carries no voltage; the operating voltage each circuit needs is then maintained by the discharge of the rectifier filter capacitors on that secondary branch. The process repeats when the positive half of the next pulse cycle arrives. This transformer is called a high-frequency transformer because its operating frequency is far above the 50 Hz line frequency. The pulses that drive the switching transistor or SCR come from an oscillator circuit. A transistor has the characteristic that a base-emitter voltage of about 0.65-0.7 V puts it in the amplifying region, above about 0.7 V it saturates and conducts, and at roughly -0.1 V to -0.3 V it is in the cutoff state of the oscillation; once the operating point has been set, deep negative feedback is used to generate a negative bias that makes the tube oscillate. The oscillation frequency is determined by how long the capacitor at the base of the oscillator tube takes to charge and discharge, and the width of the output pulses, larger or smaller, determines the magnitude of the output voltage of the power regulator.
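The chopping relation described above can be made concrete with a short numerical sketch. This is an illustrative calculation, not part of the original text; it assumes the ideal buck-type relation, output = duty cycle x input (times the transformer turns ratio for an isolated converter), and ignores all component losses.

```python
# Minimal sketch of the PWM "chopping" relation described above.
# Assumes an ideal chopper: no switching or conduction losses.

def pwm_average_output(v_in: float, duty: float, turns_ratio: float = 1.0) -> float:
    """Average DC output of an ideal chopped-and-filtered waveform.

    v_in        -- input DC voltage (V), e.g. the 300 V rectified bus
    duty        -- pulse duty cycle, 0..1, set by the controller
    turns_ratio -- secondary/primary turns ratio of the transformer
    """
    if not 0.0 <= duty <= 1.0:
        raise ValueError("duty cycle must lie between 0 and 1")
    return v_in * duty * turns_ratio

# Example: 300 V bus, 30% duty, 10:1 step-down transformer -> 9 V output.
print(pwm_average_output(300.0, 0.30, 0.1))  # 9.0
```

Raising the duty cycle raises the average output, which is exactly the knob the error amplifier and voltage/pulse-width converter turn to hold the output steady.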
The transformer secondary output voltage is regulated as follows. The switching transformer usually carries a separate sampling winding; its voltage, after rectification and filtering, serves as a reference. This reference voltage is fed back through an optocoupler to the base of the oscillator tube to adjust the oscillation. If the transformer secondary voltage rises, the sampling winding output rises, the feedback voltage obtained through the optocoupler also rises, and this voltage applied to the oscillator base reduces the oscillation drive, thereby stabilizing the secondary output voltage; the case of a voltage that is too low is analogous and need not be described in detail. Note that power crosses from the mains side to the secondary side only through the switching transformer, and the sampled voltage returns from the secondary side only through the optocoupler, so the mains side and the secondary side are electrically isolated. The secondary side is therefore called the "cold board" and is safe; because the transformer isolates the primary power, such a supply is called a switching power supply.

DC/DC Conversion

A DC/DC converter transforms a fixed DC voltage into a variable DC voltage; it is also known as a DC chopper. A chopper has two ways of working: pulse-width modulation, in which the period Ts is constant and the on-time ton is varied (the general method), and frequency modulation, in which ton is constant and Ts is varied (which tends to produce interference). The circuits fall into the following categories:

Buck circuit: a step-down chopper; the average output voltage U0 is less than the input voltage Ui, with the same polarity.

Boost circuit: a step-up chopper; the average output voltage U0 is greater than the input voltage Ui, with the same polarity.

Buck-Boost circuit: a step-down or step-up chopper; the average output voltage U0 can be greater or less than the input voltage Ui, with opposite polarity; energy is transferred through the inductor.

Cuk circuit: a step-down or step-up chopper; the average output voltage U0 can be greater or less than the input voltage Ui, with opposite polarity; energy is transferred through the capacitor.

The circuits above are non-isolated; the isolated circuits include the forward, flyback, half-bridge, full-bridge, and push-pull circuits. Today's soft-switching technology has brought a qualitative leap to DC/DC conversion: the U.S. company VICOR designs and manufactures a variety of ECI soft-switching DC/DC converters with maximum output powers of 300 W, 600 W, 800 W, and so on, corresponding power densities of 6.2, 10, and 17 W/cm3, and efficiencies of 80-90%. The latest RM series of high-frequency switching supply modules from Nemic-Lambda of Japan uses soft-switching technology, with switching frequencies of 200-300 kHz and a power density of 27 W/cm3; using synchronous rectification (MOSFETs in place of Schottky diodes) raises the overall circuit efficiency to 90%.

AC/DC Conversion

AC/DC conversion transforms AC into DC. The power flow can be bidirectional: flow from the source to the load is called "rectification," and flow returned from the load to the source is called "active inversion." Because the 50/60 Hz AC at the input of an AC/DC converter must be rectified and filtered, a relatively bulky filter capacitor is essential; at the same time, to meet safety standards (such as UL and CCEE)
and EMC directive restrictions (such as IEC, FCC, and CSA), an EMC filter and components meeting the safety standards must be added on the AC input side, which limits the miniaturization of AC/DC supplies. In addition, the internal high-frequency, high-voltage, high-current switching makes EMC problems difficult to solve and places high demands on high-density mounting and circuit design; for the same reason, the high-voltage, high-current switching increases the supply's losses and limits the modularization of AC/DC converters, so power-system optimal design methods must be used to bring the working efficiency to a satisfactory level.

AC/DC conversion circuits can be divided by wiring into half-wave and full-wave circuits; by supply phase into single-phase, three-phase, and multiphase; and by working quadrant into one-, two-, three-, and four-quadrant circuits.

Selection of a Switching Power Supply

On input interference immunity, the switching supply, because of its circuit structure (multiple stages in series), resists input disturbances such as surge voltages, which have difficulty propagating to the output; and in the stability of the output voltage it holds a clear advantage over linear supplies, with output voltage stability reaching 0.5%. A switching power supply module, as an integrated power electronic device, should be selected accordingly.
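The four non-isolated chopper topologies listed in the DC/DC section above differ only in their ideal transfer function. The sketch below is illustrative and not part of the original text; it assumes ideal components and continuous conduction mode, in which the standard relations are U0 = D·Ui (buck), U0 = Ui/(1-D) (boost), and |U0| = Ui·D/(1-D) with inverted polarity (buck-boost and Cuk).

```python
# Ideal continuous-conduction-mode transfer functions of the four
# non-isolated choppers. D is the duty cycle ton/Ts; losses are ignored.

def chopper_output(topology: str, u_in: float, d: float) -> float:
    """Average output voltage U0 for a given topology and duty cycle D."""
    if not 0.0 <= d < 1.0:
        raise ValueError("duty cycle must satisfy 0 <= D < 1")
    if topology == "buck":
        return u_in * d                    # U0 < Ui, same polarity
    if topology == "boost":
        return u_in / (1.0 - d)            # U0 > Ui, same polarity
    if topology in ("buck-boost", "cuk"):
        return -u_in * d / (1.0 - d)       # opposite polarity
    raise ValueError(f"unknown topology: {topology}")

for topo in ("buck", "boost", "buck-boost", "cuk"):
    print(topo, round(chopper_output(topo, 100.0, 0.4), 1))
# buck 40.0, boost 166.7, buck-boost -66.7, cuk -66.7
```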
Graduation Project: Foreign-Language Original with Translation
Basic Concepts Primer
TOPIC P.1: Bridge Mechanics

Basic Equations of Bridge Mechanics:
S = F/A;  f_a = P/A;  ε = ΔL/L;  f_b = Mc/I;  E = S/ε;  f_v = V/A_w

where: A = area; cross-sectional area; A_w = area of web; c = distance from neutral axis to extreme fiber (or surface) of beam; E = modulus of elasticity; F = force; axial force; f_a = axial stress; f_b = bending stress; f_v = shear stress; I = moment of inertia; L = original length; M = applied moment; S = stress; V = vertical shear force due to external loads; ΔL = change in length; ε = strain

P.1.1 Introduction

Mechanics is the branch of physical science that deals with energy and forces and their relation to the equilibrium, deformation, or motion of bodies. The bridge inspector will primarily be concerned with statics, the branch of mechanics dealing with solid bodies at rest and with forces in equilibrium. The two most important reasons for a bridge inspector to study bridge mechanics are:

• To understand how bridge members function
• To recognize the impact a defect may have on the load-carrying capacity of a bridge component or element

While this section presents the basic principles of bridge mechanics, the references listed in the bibliography should be consulted for a more complete presentation of this subject.

P.1.2 Bridge Design Loadings

Bridge design loadings are the loads that a bridge is designed to carry or resist and which determine the size and configuration of its members. Bridge members are designed to withstand the loads acting on them in a safe and economical manner. Loads may be concentrated or distributed depending on the way in which they are applied to the structure.

A concentrated load, or point load, is applied at a single location or over a very small area. Vehicle loads are considered concentrated loads.

A distributed load is applied to all or part of the member, and the amount of load per unit of length is generally constant. The weight of superstructures, bridge decks, wearing surfaces, and bridge parapets produces distributed loads. Secondary loads, such as wind, stream flow, earth cover, and ice, are also usually distributed loads.

Highway bridge design loads are established by the American Association of State Highway and Transportation Officials (AASHTO). For many decades, the primary bridge design code in the United States was the AASHTO Standard Specifications for Highway Bridges (Specifications), as supplemented by agency criteria as applicable. During the 1990s, AASHTO developed and approved a new bridge design code, entitled AASHTO LRFD Bridge Design Specifications. It is based upon the principles of Load and Resistance Factor Design (LRFD), as described in Topic P.1.7.

Bridge design loadings can be divided into three principal categories:

• Dead loads
• Primary live loads
• Secondary loads

Dead Loads

Dead loads do not change as a function of time and are considered full-time, permanent loads acting on the structure. They consist of the weight of the materials used to build the bridge (see Figure P.1.1). Dead load includes both the self-weight of structural members and other permanent external loads, and can be broken down into two groups, initial and superimposed.

Initial dead loads are loads which are applied before the concrete deck has hardened, including the beam itself and the concrete deck. Initial dead loads must be resisted by the non-composite action of the beam alone.
Superimposed dead loads are loads which are applied after the concrete deck has hardened (on a composite bridge), including parapets and any anticipated future deck pavement. Superimposed dead loads are resisted by the beam and the concrete deck acting compositely. Non-composite and composite action are described in Topic P.1.10.

Example of self-weight: a 6.1 m (20-foot) long beam weighs 0.73 kN per meter (50 pounds per linear foot). The total weight of the beam is 4.45 kN (1000 pounds). This weight is called the self-weight of the beam.

Example of an external dead load: if a utility such as a water line is permanently attached to the beam in the previous example, then the weight of the water line is an external dead load. The weight of the water line plus the self-weight of the beam comprises the total dead load.

The total dead load on a structure may change during the life of the bridge due to additions such as deck overlays, parapets, utility lines, and inspection catwalks.

Figure P.1.1 Dead Load on a Bridge

Primary Live Loads

Live loads are considered part-time or temporary loads, mostly of short-term duration, acting on the structure. In bridge applications, the primary live loads are moving vehicular loads (see Figure P.1.2). To account for the effects of speed, vibration, and momentum, highway live loads are typically increased for impact. Impact is expressed as a fraction of the live load, and its value is a function of the span length.

Standard vehicle live loads have been established by AASHTO for use in bridge design and rating. It is important to note that these standard vehicles do not represent actual vehicles; rather, they were developed to allow a relatively simple method of analysis based on an approximation of the actual live load.

Figure P.1.2 Vehicle Live Load on a Bridge

AASHTO Truck Loadings

There are two basic types of standard truck loadings described in the current AASHTO Specifications. The first type is a single-unit vehicle with two axles spaced at 14 feet (4.3 m), designated as a highway truck or "H" truck (see Figure P.1.3). The weight of the front axle is 20% of the gross vehicle weight, while the weight of the rear axle is 80% of the gross vehicle weight. The "H" designation is followed by the gross tonnage of the particular design vehicle.

Example of an H truck loading: H20-35 indicates a 20-ton vehicle with a front axle weighing 4 tons, a rear axle weighing 16 tons, and the two axles spaced 14 feet apart. This standard truck loading was first published in 1935.

The second type of standard truck loading is a two-unit, three-axle vehicle comprised of a highway tractor with a semi-trailer, designated as a highway semi-trailer truck or "HS" truck (see Figure P.1.4). The tractor weight and wheel spacing are identical to the H truck loading. The semi-trailer axle weight is equal to the weight of the rear tractor axle, and its spacing from the rear tractor axle can vary from 4.3 to 9.1 m (14 to 30 feet).
The "HS" designation is followed by a number indicating the gross weight in tons of the tractor only.

Figure P.1.3 AASHTO H20 Truck (axles: 8,000 lbs (35 kN) front and 32,000 lbs (145 kN) rear, spaced 14'-0" (4.3 m); clearance and load lane width 10'-0" (3.0 m); wheel gauge 6'-0" (1.8 m), 2'-0" (0.6 m) to curb)

Figure P.1.4 AASHTO HS20 Truck (as in Figure P.1.3, plus a 32,000 lbs (145 kN) semi-trailer axle)

Example of an HS truck loading: HS20-44 indicates a vehicle with a front tractor axle weighing 4 tons, a rear tractor axle weighing 16 tons, and a semi-trailer axle weighing 16 tons. The tractor portion alone weighs 20 tons, but the gross vehicle weight is 36 tons. This standard truck loading was first published in 1944.

In specifications prior to 1944, a standard loading of H15 was used. In 1944, the policy of affixing the publication year to design loadings was adopted. In specifications prior to 1965, the HS20-44 loading was designated as H20-S16-44, with the S16 identifying the gross axle weight of the semi-trailer in tons.

The H and HS vehicles do not represent actual vehicles, but can be considered "umbrella" loads. The wheel spacings, weight distributions, and clearances of the standard design vehicles were developed to give a simpler method of analysis, based on a good approximation of actual live loads. The H and HS vehicle loads are the most common loadings for design, analysis, and rating, but other loading types are used in special cases.

AASHTO Lane Loadings

In addition to the standard truck loadings, a system of equivalent lane loadings was developed in order to provide a simple method of calculating bridge response to a series, or "train," of trucks. Lane loading consists of a uniform load per linear foot of traffic lane combined with a concentrated load located on the span so as to produce the most critical situation (see Figure P.1.5). For design and load capacity rating analysis, both a truck loading and a lane loading must be investigated to determine which produces the greatest stress for each particular member. Lane loading will generally govern over truck loading for longer spans. Both the H and HS loadings have corresponding lane loads.

* Use two concentrated loads for negative moment in continuous spans (refer to AASHTO page 23).

Figure P.1.5 AASHTO Lane Loadings

Alternate Military Loading

The Alternate Military Loading is a single-unit vehicle with two axles spaced at 1.2 m (4 feet) and weighing 110 kN (12 tons) each. It has been part of the AASHTO Specifications since 1977. Bridges on interstate highways or other highways which are potential defense routes are designed for either an HS20 loading or the Alternate Military Loading (see Figure P.1.6).

Figure P.1.6 Alternate Military Loading (two 110 kN (24 kip) axles)

LRFD Live Loads

The AASHTO LRFD design vehicular live load, designated HL-93, is a modified version of the HS-20 highway loadings from the AASHTO Standard Specifications. Under HS-20 loading as described earlier, the truck or lane load is applied to each loaded lane. Under HL-93 loading, the design truck or tandem, in combination with the lane load, is applied to each loaded lane.

The LRFD design truck is exactly the same as the AASHTO HS-20 design truck.
The LRFD design tandem, on the other hand, consists of a pair of 110 kN (25 kip) axles spaced 1.2 m (4 feet) apart. The transverse wheel spacing of all of the trucks is 6 feet.

The magnitude of the HL-93 lane load is equal to that of the HS-20 lane load. The lane load is 9 kN per meter (0.64 kips per linear foot) longitudinally, and it is distributed uniformly over a 3 m (10-foot) width in the transverse direction. The difference between the HL-93 lane load and the HS-20 lane load is that the HL-93 lane load does not include a point load.

Finally, for LRFD live loading, the dynamic load allowance, or impact, is applied to the design truck or tandem but not to the design lane load. It is typically 33 percent of the design vehicle load.

Permit Vehicles

Permit vehicles are overweight vehicles which, in order to travel a state's highways, must apply for a permit from that state. They are usually heavy trucks (e.g., combination trucks, construction vehicles, or cranes) that have varying axle spacings depending upon the design of the individual truck. To ensure that these vehicles can safely operate on existing highways and bridges, most states require that bridges be designed for a permit vehicle or that the bridge be checked to determine whether it can carry a specific type of vehicle. For safe and legal operation, agencies issue permits upon request that identify the required gross weight, number of axles, axle spacing, and maximum axle weights for a designated route (see Figure P.1.7).

Figure P.1.7 910 kN (204 kip) Permit Vehicle (for Pennsylvania)

Secondary Loads

In addition to dead loads and primary live loads, bridge components are designed to resist secondary loads, which include the following:

• Earth pressure - a horizontal force acting on earth-retaining substructure units, such as abutments and retaining walls
• Buoyancy - the force created by the tendency of an object to rise when submerged in water
• Wind load on structure - wind pressure on the exposed area of a bridge
• Wind load on live load - wind effects transferred through the live load vehicles crossing the bridge
• Longitudinal force - a force in the direction of the bridge caused by braking and accelerating of live load vehicles
• Centrifugal force - an outward force that a live load vehicle exerts on a curved bridge
• Rib shortening - a force in arches and frames created by a change in the geometrical configuration due to dead load
• Shrinkage - applied primarily to concrete structures; a multi-directional force due to dimensional changes resulting from the curing process
• Temperature - since materials expand as temperature increases and contract as temperature decreases, the force caused by these dimensional changes must be considered
• Earthquake - bridge structures must be built so that motion during an earthquake will not cause a collapse
• Stream flow pressure - a horizontal force acting on bridge components constructed in flowing water
• Ice pressure - a horizontal force created by static or floating ice jammed against bridge components
• Impact loading - the dynamic effect of suddenly receiving a live load; this additional force can be up to 30% of the applied primary live load force
• Sidewalk loading - sidewalk floors and their immediate supports are designed for a pedestrian live load not exceeding 4.1 kN per square meter (85 pounds per square foot)
• Curb loading - curbs are designed to resist a lateral force of not less than 7.3 kN per linear meter (500 pounds per linear foot)
• Railing loading - railings are provided along the edges of structures for the protection of traffic and pedestrians; the maximum transverse load applied to any one element need not exceed 44.5 kN (10 kips)

A bridge may be subjected to several of these loads simultaneously. The AASHTO Specifications have established a table of loading groups. For each group, a set of loads is considered together, with a coefficient applied to each particular load. The coefficients were developed based on the probability of the various loads acting simultaneously.

P.1.3 Material Response to Loadings

Each member of a bridge has a unique purpose and function, which directly affects the selection of material, shape, and size for that member. Certain terms are used to describe the response of a bridge material to loads, and a working knowledge of these terms is essential for the bridge inspector.

Force

A force is the action that one body exerts on another body. Force has two components: magnitude and direction (see Figure P.1.8). The basic English unit of force is the pound (abbreviated lb.). The basic metric unit of force is the Newton (N). A common unit of force used among engineers is the kip (K), which is 1000 pounds. In the metric system, the kilonewton (kN), which is 1000 Newtons, is used. Note: 1 kip = 4.45 kilonewtons.

Figure P.1.8 Basic Force Components

Stress

Stress is a basic unit of measure used to denote the intensity of an internal force. When a force is applied to a material, an internal stress is developed. Stress is defined as a force per unit of cross-sectional area:

Stress (S) = Force (F) / Area (A)

The basic English unit of stress is pounds per square inch (abbreviated psi). However, stress can also be expressed in kips per square inch (ksi) or in any other units of force per unit area. The basic metric unit of stress is Newtons per square meter, or Pascals (Pa). An allowable unit stress is generally established for a given material. Note: 1 ksi ≈ 6.9 MPa.

Graduation Project Translated Text: Basic Concepts of Bridge Mechanics (translation of the U.S. bridge inspection primer):

Basic equations of bridge structures: S = F/A (see Section 1.8); f_a = P/A (see Section 1.14); ε = ΔL/L (see Section 1.9); f_b = Mc/I (see Section 1.16); E = S/ε (see Section 1.11); f_v = V/A_w (see Section 1.18); bridge load rating = (allowable load - dead load) × gross vehicle weight / vehicle live load with impact, where: A = area; cross-sectional area; A_w = web area; c = distance from the neutral axis to the extreme fiber or outer surface of the beam; E = modulus of elasticity; F = axial force; f_a = axial stress; f_b = bending stress; f_v = shear stress; I = moment of inertia; L = original length; M = applied moment; S = stress; V = vertical shear force due to external loads; ΔL = change in length; ε = strain.

Part 1: Basic Concepts of Bridges. Chapter 1: Bridge Mechanics. 1.1 Introduction. Structural mechanics is the branch of physical science that studies the energy and forces of bodies, their relations of equilibrium, their deformation, and their motion.
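The basic relations tabulated above lend themselves to a short worked example. The sketch below is illustrative only and not part of the primer; it uses the primer's equations S = F/A and ε = ΔL/L together with the H-truck axle split (20% front, 80% rear) described earlier.

```python
# Worked example of the primer's basic relations, in SI units.

def stress(force_n: float, area_m2: float) -> float:
    """S = F / A, in Pascals."""
    return force_n / area_m2

def strain(delta_l: float, original_l: float) -> float:
    """epsilon = dL / L (dimensionless)."""
    return delta_l / original_l

def h_truck_axles(gross_tons: float) -> tuple[float, float]:
    """Front/rear axle weights of an AASHTO H truck: 20% / 80% of gross."""
    return 0.2 * gross_tons, 0.8 * gross_tons

# The 4.45 kN beam self-weight from the example, carried on an assumed
# 0.01 m^2 bearing area:
print(stress(4450.0, 0.01))   # 445000.0 Pa = 0.445 MPa
# A 6.1 m beam that shortens by 1 mm:
print(strain(0.001, 6.1))     # ~0.000164
# An H20 truck: 4-ton front axle, 16-ton rear axle.
print(h_truck_axles(20.0))    # (4.0, 16.0)
```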
Graduation Project Foreign-Language Translation: English Original Manuscript
Harmonic Source Identification and Current Separation in Distribution Systems

Yong Zhao a,b, Jianhua Li a, Daozhi Xia a,*
a Department of Electrical Engineering, Xi'an Jiaotong University, 28 West Xianning Road, Xi'an, Shaanxi 710049, China
b Fujian Electric Power Dispatch and Telecommunication Center, 264 Wusi Road, Fuzhou, Fujian 350003, China

Abstract

To effectively diminish harmonic distortions, the locations of harmonic sources have to be identified, and their currents have to be separated from those absorbed by conventional linear loads connected at the same common coupling point (CCP). In this paper, based on the intrinsic difference between linear and nonlinear loads in their V-I characteristics, and by utilizing a new simplified harmonic source model, a new principle for harmonic source identification and harmonic current separation is proposed. By using this method, not only can the existence of a harmonic source be determined, but the contributions of the harmonic source and the linear loads to harmonic voltage distortion can also be distinguished. A detailed procedure based on least squares approximation is given. The effectiveness of the approach is illustrated by test results on a composite load.

© 2004 Elsevier Ltd. All rights reserved.

Keywords: Distribution system; Harmonic source identification; Harmonic current separation; Least squares approximation

1. Introduction

Harmonic distortion has experienced a continuous increase in distribution systems owing to the growing use of nonlinear loads. Many studies have shown that harmonics may cause serious effects on power systems, communication systems, and various apparatus [1-3]. The harmonic voltages at each point of a distribution network are determined not only by the harmonic currents produced by harmonic sources (nonlinear loads), but also by all the linear loads (harmonic current sinks), as well as by the structure and parameters of the network. To effectively evaluate and diminish the harmonic distortion in power systems, the locations of harmonic sources have to be identified, and the responsibility for the distortion caused by the related individual customers has to be separated.

As to harmonic source identification, negative harmonic power is most commonly taken as essential evidence of an existing harmonic source [4-7]. Several approaches aiming at evaluating the contribution of an individual customer can also be found in the literature. Schemes based on power factor measurement to penalize a customer's harmonic currents are discussed in Ref. [8]. However, it would be unfair to apply economic penalties if we could not distinguish whether the measured harmonic current comes from a nonlinear load or from a linear load.

In fact, the intrinsic difference between linear and nonlinear loads lies in their V-I characteristics. The harmonic currents of a linear load are in linear proportion to its supply harmonic voltages of the same order, whereas the harmonic currents of a nonlinear load are complex nonlinear functions of its supply fundamental and harmonic voltage components of all orders.
To successfully identify and isolate a harmonic source in an individual customer, or in several customers connected at the same point in the network, the V-I characteristics should be involved, and measurements of voltages and currents under several different supply conditions should be carried out. As the existing approaches based on measurements of the voltage and current spectrum or the harmonic power at a certain instant cannot reflect the V-I characteristics, they may not provide reliable information about the existence and contribution of harmonic sources, which has been substantiated by theoretical analysis and experimental research [9,10].

In this paper, to approximate the nonlinear characteristics and to facilitate the work of harmonic source identification and harmonic current separation, a new simplified harmonic source model is proposed. Then, based on the difference between linear and nonlinear loads in their V-I characteristics, and by utilizing the harmonic source model, a new principle for harmonic source identification and harmonic current separation is presented. By using the method, not only can the existence of a harmonic source be determined, but the contributions of the harmonic sources and the linear loads can also be separated. A detailed procedure for harmonic source identification and harmonic current separation based on least squares approximation is presented. Finally, test results on a composite load containing linear and nonlinear loads are given to illustrate the effectiveness of the approach.

2. New principle for harmonic source identification and current separation

Consider a composite load to be studied in a distribution system, which may represent an individual consumer or a group of customers supplied by a common feeder in the system. To identify whether it contains any harmonic source, and to separate the harmonic currents generated by the harmonic sources from those absorbed by conventional linear loads in the measured total harmonic currents of the composite load, the following assumptions are made.

(a) The supply voltage and the load currents are both periodic waveforms with period T, so that they can be expressed by the Fourier series

$$v(t)=\sum_{h=1}^{\infty}V_h\sin(2\pi ht/T+\theta_h),\qquad i(t)=\sum_{h=1}^{\infty}I_h\sin(2\pi ht/T+\phi_h) \tag{1}$$

The fundamental-frequency and harmonic components can further be represented by the corresponding phasors

$$\dot V_h=V_{hr}+jV_{hi}=V_h\angle\theta_h,\qquad \dot I_h=I_{hr}+jI_{hi}=I_h\angle\phi_h,\qquad h=1,2,3,\dots,n \tag{2}$$

(b) During the period of identification, the composite load is stationary; i.e., both its composition and the circuit parameters of all individual loads remain unchanged.

Under the above assumptions, the relationship between the total harmonic currents of the harmonic sources (denoted by the subscript N) in the composite load and the supply voltage, i.e. the V-I characteristic, can be described by the nonlinear equation

$$i_N(t)=f\big(v(t)\big) \tag{3}$$

and can also be represented in terms of phasors as

$$\dot I_{Nh}=\begin{bmatrix}I_{Nhr}(V_1,V_{2r},V_{2i},\dots,V_{nr},V_{ni})\\ I_{Nhi}(V_1,V_{2r},V_{2i},\dots,V_{nr},V_{ni})\end{bmatrix},\qquad h=2,3,\dots,n \tag{4}$$

Note that in Eq. (4) the initial (reference) time of the voltage waveform has been selected such that the phase angle θ1 becomes 0, so that V1i = 0 and V1r = V1 in Eq. (2), for simplicity.

The V-I characteristic of the linear part (denoted by the subscript L) of the composite load can be represented by its equivalent harmonic admittance Y_Lh = G_Lh + jB_Lh, and the total harmonic currents absorbed by the linear part can be described as

$$\dot I_{Lh}=\begin{bmatrix}I_{Lhr}\\ I_{Lhi}\end{bmatrix}=\begin{bmatrix}G_{Lh} & -B_{Lh}\\ B_{Lh} & G_{Lh}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix},\qquad h=2,3,\dots,n \tag{5}$$
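Eq. (5) is simply a complex multiplication written in real matrix form. As a quick sanity check (an illustrative sketch, not part of the paper; the admittance and voltage values are assumed), the code below verifies that the matrix form reproduces I = (G + jB)·V.

```python
import numpy as np

# Eq. (5): harmonic current of the linear part as a real 2x2 matrix product,
# equivalent to the complex relation I_Lh = (G_Lh + j*B_Lh) * V_h.
G_Lh, B_Lh = 0.8, -0.3          # assumed equivalent admittance (p.u.)
V_hr, V_hi = 0.02, -0.01        # assumed h-th harmonic voltage (p.u.)

Y = np.array([[G_Lh, -B_Lh],
              [B_Lh,  G_Lh]])
I_matrix = Y @ np.array([V_hr, V_hi])
I_complex = (G_Lh + 1j * B_Lh) * (V_hr + 1j * V_hi)

print(I_matrix)                          # real and imaginary parts
print(I_complex.real, I_complex.imag)    # identical values
```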
From Eqs. (4) and (5), the total harmonic current absorbed by the composite load can be expressed as

$$\dot I_h=\begin{bmatrix}I_{hr}\\ I_{hi}\end{bmatrix}=\begin{bmatrix}I_{Lhr}\\ I_{Lhi}\end{bmatrix}-\begin{bmatrix}I_{Nhr}(V_1,V_{2r},V_{2i},\dots,V_{nr},V_{ni})\\ I_{Nhi}(V_1,V_{2r},V_{2i},\dots,V_{nr},V_{ni})\end{bmatrix},\qquad h=2,3,\dots,n \tag{6}$$

As the V-I characteristics of a harmonic source are nonlinear, Eq. (6) can be used directly neither for harmonic source identification nor for harmonic current separation. To facilitate the work in practice, simplified methods must be introduced. The common practice in harmonic studies is to represent nonlinear loads by current harmonic sources or by equivalent Norton models [11,12]. However, these models are not of sufficient precision, and a new simplified model is needed.

From the engineering point of view, the variations of V_hr and V_hi ordinarily fall within a ±3% bound of the rated bus voltage, while the change of V1 is usually less than ±5%. Within such a range of supply voltages, the following simplified linear relation is used in this paper to approximate the harmonic source characteristic, Eq. (4):

$$\dot I_{Nh}=\begin{bmatrix}a_{h0}+a_{h1}V_1+a_{h2r}V_{2r}+a_{h2i}V_{2i}+\dots+a_{hnr}V_{nr}+a_{hni}V_{ni}\\ b_{h0}+b_{h1}V_1+b_{h2r}V_{2r}+b_{h2i}V_{2i}+\dots+b_{hnr}V_{nr}+b_{hni}V_{ni}\end{bmatrix},\qquad h=2,3,\dots,n \tag{7}$$

[Translator's note: one coefficient in the original print reads b_{h3r}V_{2r}; since every other subscript in that position is 2, it is rendered here as b_{h2r}V_{2r}.]

The precision and superiority of this simplified model will be illustrated in Section 4 by test results on several kinds of typical harmonic sources.

The total harmonic current, Eq. (6), then becomes

$$\dot I_h=\begin{bmatrix}G_{Lh} & -B_{Lh}\\ B_{Lh} & G_{Lh}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}-\begin{bmatrix}a_{h0}+a_{h1}V_1+a_{h2r}V_{2r}+a_{h2i}V_{2i}+\dots+a_{hnr}V_{nr}+a_{hni}V_{ni}\\ b_{h0}+b_{h1}V_1+b_{h2r}V_{2r}+b_{h2i}V_{2i}+\dots+b_{hnr}V_{nr}+b_{hni}V_{ni}\end{bmatrix},\qquad h=2,3,\dots,n \tag{8}$$

It can be seen from the above equations that the harmonic currents of the harmonic sources (nonlinear loads) and of the linear loads differ intrinsically in their V-I characteristics. The harmonic current component drawn by the linear loads is uniquely determined by the harmonic voltage component of the same order in the supply voltage. The harmonic current component of the nonlinear loads, on the other hand, contains not only a term caused by the same-order harmonic voltage, but also a constant term and terms caused by the fundamental and by the harmonic voltages of all other orders. This property will be used to identify the existence of harmonic sources in a composite load.

As the test results in Section 4 demonstrate that, in the harmonic current of nonlinear loads, the sum of the constant term and the component related to the fundamental-frequency voltage is dominant, whereas the other components are negligible, a further approximation of Eq. (7) can be made as follows. Let

$$I'_{Nh}=\begin{bmatrix}a_{h0}+a_{h1}V_1+\sum_{k=2,k\neq h}^{n}\left(a_{hkr}V_{kr}+a_{hki}V_{ki}\right)\\ b_{h0}+b_{h1}V_1+\sum_{k=2,k\neq h}^{n}\left(b_{hkr}V_{kr}+b_{hki}V_{ki}\right)\end{bmatrix},\qquad I''_{Nh}=\begin{bmatrix}a_{hhr} & a_{hhi}\\ b_{hhr} & b_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}$$

$$I''_{Lh}=I_{Lh}-I''_{Nh}=\begin{bmatrix}a''_{hhr} & a''_{hhi}\\ b''_{hhr} & b''_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix},\qquad \begin{bmatrix}a''_{hhr} & a''_{hhi}\\ b''_{hhr} & b''_{hhi}\end{bmatrix}=\begin{bmatrix}G_{Lh} & -B_{Lh}\\ B_{Lh} & G_{Lh}\end{bmatrix}-\begin{bmatrix}a_{hhr} & a_{hhi}\\ b_{hhr} & b_{hhi}\end{bmatrix},\qquad h=2,3,\dots,n$$

The total harmonic current of the composite load becomes

$$\dot I_h=\begin{bmatrix}I_{hr}\\ I_{hi}\end{bmatrix}=I''_{Lh}-I'_{Nh}=\begin{bmatrix}a''_{hhr} & a''_{hhi}\\ b''_{hhr} & b''_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}-\begin{bmatrix}a_{h0}+a_{h1}V_1+\sum_{k=2,k\neq h}^{n}\left(a_{hkr}V_{kr}+a_{hki}V_{ki}\right)\\ b_{h0}+b_{h1}V_1+\sum_{k=2,k\neq h}^{n}\left(b_{hkr}V_{kr}+b_{hki}V_{ki}\right)\end{bmatrix},\qquad h=2,3,\dots,n \tag{9}$$

By neglecting I''_{Nh} in the harmonic current of the nonlinear load and adding it to the harmonic current of the linear load, I'_{Nh} can be deemed the harmonic current of the nonlinear load, while I''_{Lh} can be taken as the harmonic current of the linear load.
I'_{Nh} = 0 then means that the composite load contains no harmonic sources, while I'_{Nh} ≠ 0 signifies that harmonic sources may exist in the composite load. As the neglected term I''_{Nh} is not dominant, this simplification clearly does not introduce a significant error into the total harmonic current of the nonlinear load; but it is what makes harmonic source identification and current separation possible.

3. Identification procedure

In order to identify the existence of harmonic sources in a composite load, the parameters in Eq. (9) must first be determined, i.e.

$$C_{hr}=\begin{bmatrix}a_{h0} & a_{h1} & a_{h2r} & a_{h2i} & \cdots & a''_{hhr} & a''_{hhi} & \cdots & a_{hnr} & a_{hni}\end{bmatrix}$$
$$C_{hi}=\begin{bmatrix}b_{h0} & b_{h1} & b_{h2r} & b_{h2i} & \cdots & b''_{hhr} & b''_{hhi} & \cdots & b_{hnr} & b_{hni}\end{bmatrix}$$

For this purpose, measurements of different supply voltages and the corresponding harmonic currents of the composite load should be performed repeatedly several times within a short period, while the composite load remains stationary. The change of supply voltage can be obtained, for example, by switching shunt capacitors in or out, disconnecting a parallel transformer, or changing the tap position of transformers with OLTC. The least squares approach can then be used to estimate the parameters from the measured voltages and currents. The identification procedure is as follows.

(1) Perform the test m (m ≥ 2n) times to obtain the measured fundamental-frequency and harmonic voltage and current phasors $V_h^{(k)}\angle\theta_h^{(k)}$ and $I_h^{(k)}\angle\phi_h^{(k)}$, k = 1, 2, …, m; h = 1, 2, …, n.

(2) For k = 1, 2, …, m, refer the phasors to a zero fundamental-voltage phase angle ($\theta_1^{(k)}=0$) and resolve them into orthogonal components, i.e.

$$V_{1r}^{(k)}=V_1^{(k)},\qquad V_{1i}^{(k)}=0$$
$$V_{hr}^{(k)}=V_h^{(k)}\cos\big(\theta_h^{(k)}-h\theta_1^{(k)}\big),\qquad V_{hi}^{(k)}=V_h^{(k)}\sin\big(\theta_h^{(k)}-h\theta_1^{(k)}\big)$$
$$I_{hr}^{(k)}=I_h^{(k)}\cos\big(\phi_h^{(k)}-h\theta_1^{(k)}\big),\qquad I_{hi}^{(k)}=I_h^{(k)}\sin\big(\phi_h^{(k)}-h\theta_1^{(k)}\big),\qquad h=2,3,\dots,n$$

(3) Let

$$V^{(k)}=\begin{bmatrix}1 & V_1^{(k)} & V_{2r}^{(k)} & V_{2i}^{(k)} & \cdots & V_{hr}^{(k)} & V_{hi}^{(k)} & \cdots & V_{nr}^{(k)} & V_{ni}^{(k)}\end{bmatrix}^{T},\qquad k=1,2,\dots,m$$
$$X=\begin{bmatrix}V^{(1)} & V^{(2)} & \cdots & V^{(m)}\end{bmatrix}^{T},\qquad W_{hr}=\begin{bmatrix}I_{hr}^{(1)} & I_{hr}^{(2)} & \cdots & I_{hr}^{(m)}\end{bmatrix}^{T},\qquad W_{hi}=\begin{bmatrix}I_{hi}^{(1)} & I_{hi}^{(2)} & \cdots & I_{hi}^{(m)}\end{bmatrix}^{T}$$

Minimize $\sum_{k=1}^{m}\big(I_{hr}^{(k)}-C_{hr}V^{(k)}\big)^2$ and $\sum_{k=1}^{m}\big(I_{hi}^{(k)}-C_{hi}V^{(k)}\big)^2$, and determine the parameters C_hr and C_hi by the least squares approach as [13]

$$C_{hr}^{T}=\big(X^{T}X\big)^{-1}X^{T}W_{hr},\qquad C_{hi}^{T}=\big(X^{T}X\big)^{-1}X^{T}W_{hi} \tag{10}$$

(4) Using Eq. (9), calculate I''_{Lh} and I'_{Nh} from the obtained C_hr and C_hi; the existence of a harmonic source is thereby identified, and the harmonic current is separated.

It can be seen that, in the course of model construction, harmonic source identification, and harmonic current separation, the supply system operating condition must be changed, and the harmonic voltages and currents measured, m times; the more accurate the model, the more manipulations are necessary. To compromise between the number of switching operations needed and the accuracy of the results, the proposed models for the nonlinear load (Eq. (7)) and the composite load (Eq. (9)) can be further simplified by considering only the dominant terms in Eq. (7), i.e.

$$\dot I_{Nh}=\begin{bmatrix}I_{Nhr}\\ I_{Nhi}\end{bmatrix}=\begin{bmatrix}a_{h0}+a_{h1}V_1\\ b_{h0}+b_{h1}V_1\end{bmatrix}+\begin{bmatrix}a_{hhr} & a_{hhi}\\ b_{hhr} & b_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix},\qquad h=2,3,\dots,n \tag{11}$$

$$I'_{Nh}=\begin{bmatrix}a_{h0}+a_{h1}V_1\\ b_{h0}+b_{h1}V_1\end{bmatrix}$$

$$\dot I_h=\begin{bmatrix}I_{hr}\\ I_{hi}\end{bmatrix}=I''_{Lh}-I'_{Nh}=\begin{bmatrix}a''_{hhr} & a''_{hhi}\\ b''_{hhr} & b''_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}-\begin{bmatrix}a_{h0}+a_{h1}V_1\\ b_{h0}+b_{h1}V_1\end{bmatrix},\qquad h=2,3,\dots,n \tag{12}$$

In this case, the corresponding equations in the previous procedure change as follows:

$$C_{hr}=\begin{bmatrix}a_{h0} & a_{h1} & a''_{hhr} & a''_{hhi}\end{bmatrix},\qquad C_{hi}=\begin{bmatrix}b_{h0} & b_{h1} & b''_{hhr} & b''_{hhi}\end{bmatrix},\qquad V^{(k)}=\begin{bmatrix}1 & V_1^{(k)} & V_{hr}^{(k)} & V_{hi}^{(k)}\end{bmatrix}^{T}$$

Similarly, I'_{Nh} and I''_{Lh} can still be taken as the harmonic currents caused by the nonlinear load and by the linear load, respectively.
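The least squares step in Eq. (10) is an ordinary linear regression. The sketch below is illustrative and not from the paper: it uses the simplified four-parameter model of Eq. (12) for a single harmonic order, the synthetic "true" parameters and measurement values are assumptions, and numpy.linalg.lstsq stands in for the explicit (X^T X)^{-1} X^T product for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed "true" parameters of the simplified model, Eq. (12):
# I_hr = a_h0 + a_h1*V1 + a_hhr*V_hr + a_hhi*V_hi   (one row, C_hr)
C_hr_true = np.array([-0.0074, 0.3939, 0.2676, -0.1100])

m = 12                                   # number of tests, m >= 2n
V1  = rng.uniform(0.95, 1.05, m)         # fundamental voltage (p.u.)
Vhr = rng.uniform(-0.03, 0.03, m)        # h-th harmonic voltage, real part
Vhi = rng.uniform(-0.03, 0.03, m)        # h-th harmonic voltage, imag part

X = np.column_stack([np.ones(m), V1, Vhr, Vhi])   # regressor matrix
W_hr = X @ C_hr_true + rng.normal(0, 1e-4, m)     # "measured" currents + noise

# Eq. (10): C_hr = (X^T X)^(-1) X^T W_hr, solved via least squares.
C_hr_est, *_ = np.linalg.lstsq(X, W_hr, rcond=None)
print(C_hr_est)   # close to C_hr_true
```

With the estimated coefficients, the constant-plus-fundamental part gives I'_{Nh} (the nonlinear load's contribution) and the same-order part gives I''_{Lh} (the linear load's contribution), exactly as in step (4).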
4. Experimental validation

4.1. Model accuracy

To demonstrate the validity of the proposed harmonic source models, simulations were performed on the following three kinds of typical nonlinear loads: a three-phase six-pulse rectifier, a single-phase capacitor-filtered rectifier, and an ac arc furnace under stationary operating conditions. Diagrams of the three-phase six-pulse rectifier and the single-phase capacitor-filtered rectifier are shown in Figs. 1 and 2 [14,15], respectively; the V-I characteristic of the arc furnace is simplified as shown in Fig. 3 [16].

The harmonic currents used in the simulation tests are calculated precisely from the loads' mathematical models. As to the supply voltage, V_1^(k) is assumed to be uniformly distributed between 0.95 and 1.05, and V_hr^(k) and V_hi^(k) (k = 1, 2, …, m) are uniformly distributed between -0.03 and 0.03, with base voltage 10 kV and base power 1 MVA.

Fig. 1. Diagram of three-phase six-pulse rectifier.
Fig. 2. Diagram of single-phase capacitor-filtered rectifier.
Fig. 3. Approximate V-I characteristics of arc furnace.

Three different models, namely the harmonic current source (constant current) model, the Norton model, and the proposed simplified model, are fitted by the least squares approach for comparison. For the three-phase six-pulse rectifier with fundamental current I_1 = 1.7621, the parameters of the simplified model for the fifth and seventh harmonic currents are listed in Table 1.

To compare the accuracy of the three models, the means and standard deviations of the errors in I_hr, I_hi, and I_h between the estimated values and the simulated actual values are calculated for each model. The error comparison of the three models on the three-phase six-pulse rectifier is shown in Table 2, where μ_hr, μ_hi, and μ_ha denote the means and σ_hr, σ_hi, and σ_ha the standard deviations. Note that I_1 and I_h in Table 2 are the current values caused by the rated, purely sinusoidal supply voltage. Error comparisons for the single-phase capacitor-filtered rectifier and the arc furnace load are listed in Tables 3 and 4, respectively.

It can be seen from the above test results that the accuracy of the proposed model differs for different nonlinear loads, and that for a given load the accuracy decreases as the harmonic order increases. However, the proposed model is always more accurate than the other two models.

It can also be seen from Table 1 that the components a_50 + a_51·V_1 and b_50 + b_51·V_1 are around -0.0074 + 0.3939 = 0.3865 and 0.0263 + 0.0623 = 0.0886, while the components a_55·V_5r and b_55·V_5i will not exceed 0.2676 × 0.03 ≈ 0.008 and 0.9675 × 0.03 ≈ 0.029, respectively. These results show that the fifth harmonic current caused by the sum of the constant term and the fundamental voltage is about ten times that caused by the harmonic voltage of the same order, so the former dominates the harmonic current of the three-phase six-pulse rectifier. The same situation holds for other harmonic orders and other nonlinear loads.
4.2. Effectiveness of harmonic source identification and current separation

To show the effectiveness of the proposed harmonic source identification method, simulations were performed on a composite load consisting of a linear load (30%) and nonlinear loads: a three-phase six-pulse rectifier (30%), a single-phase capacitor-filtered rectifier (20%), and an ac arc furnace load (20%).

For simplicity, only the errors of the third-order harmonic currents of the linear and nonlinear loads are listed in Table 5, where I_N3 denotes the third-order harmonic current corresponding to the rated, purely sinusoidal supply voltage; μ_N3r, μ_N3i, μ_N3a and μ_L3r, μ_L3i, μ_L3a are the error means of I_N3r, I_N3i, I_N3 and I_L3r, I_L3i, I_L3 between the simulated actual values and the estimated values; and σ_N3r, σ_N3i, σ_N3a and σ_L3r, σ_L3i, σ_L3a are the corresponding standard deviations.

Table 2. Error comparison of the three models on the three-phase six-pulse rectifier. [Table content not reproduced.]
Table 3. Error comparison on the single-phase capacitor-filtered rectifier. [Table content not reproduced.]

It can be seen from Table 5 that the current errors of the linear load are smaller than those of the nonlinear loads. This is because the errors of the nonlinear load currents are due both to the model error and to neglecting the components related to harmonic voltages of the same order, whereas only the latter components introduce errors into the linear load currents. Moreover, it can be found that the more precise the composite load model is, the smaller the error introduced. However, even with the very simple model of Eq. (12), the existence of harmonic sources can be correctly identified, and the harmonic currents of the linear and nonlinear loads can be effectively separated.

Table 4. Error comparison on the arc furnace. [Table content not reproduced.]
Table 5. Third-order harmonic current errors of the linear and nonlinear loads. [Table content not reproduced.]

5. Conclusions

In this paper, from an engineering point of view, a new linear model is first presented for representing harmonic sources. On the basis of the intrinsic difference between linear and nonlinear loads in their V-I characteristics, and by using the proposed harmonic source model, a new, concise principle for identifying harmonic sources and separating harmonic source currents from those of linear loads is proposed. A detailed modeling and identification procedure is also developed based on the least squares approximation approach. Test results on several kinds of typical harmonic sources reveal that the simplified model is of sufficient precision and is superior to other existing models. The effectiveness of the harmonic source identification approach is illustrated using a composite nonlinear load.

Acknowledgements

The authors wish to acknowledge the financial support of the National Natural Science Foundation of China for this project, under Research Program Grant No. 59737140.

References

[1] IEEE Working Group on Power System Harmonics. The effects of power system harmonics on power system equipment and loads. IEEE Trans Power Apparatus Syst 1985;9:2555-63.
[2] IEEE Working Group on Power System Harmonics. Power line harmonic effects on communication line interference. IEEE Trans Power Apparatus Syst 1985;104(9):2578-87.
[3] IEEE Task Force on the Effects of Harmonics. Effects of harmonics on equipment. IEEE Trans Power Deliv 1993;8(2):681-8.
[4] Heydt GT. Identification of harmonic sources by a state estimation technique. IEEE Trans Power Deliv 1989;4(1):569-75.
[5] Ferach JE, Grady WM, Arapostathis A. An optimal procedure for placing sensors and estimating the locations of harmonic sources in power systems. IEEE Trans Power Deliv 1993;8(3):1303-10.
[6] Ma H, Girgis AA. Identification and tracking of harmonic sources in a power system using Kalman filter. IEEE Trans Power Deliv 1996;11(3):1659-65.
[7] Hong YY, Chen YC.
Application of algorithms and artificial intelligence approach for locating multiple harmonics in distribution systems. IEE Proc.-Gener. Transm. Distrib. 1999;146(3):325-9.
[8] McEachern A, Grady WM, Moncrief WA, Heydt GT, McGranaghan M. Revenue and harmonics: an evaluation of some proposed rate structures. IEEE Trans Power Deliv 1995;10(1):474-82.
[9] Xu W. Power direction method cannot be used for harmonic source detection. Power Engineering Society Summer Meeting, IEEE; 2000. p. 873-6.
[10] Sasdelli R, Peretto L. A VI-based measurement system for sharing the customer and supply responsibility for harmonic distortion. IEEE Trans Instrum Meas 1998;47(5):1335-40.
[11] Arrillaga J, Bradley DA, Bodger PS. Power system harmonics. New York: Wiley; 1985.
[12] Thunberg E, Soder L. A Norton approach to distribution network modeling for harmonic studies. IEEE Trans Power Deliv 1999;14(1):272-7.
[13] Giordano AA, Hsu FM. Least squares estimation with applications to digital signal processing. New York: Wiley; 1985.
[14] Xia D, Heydt GT. Harmonic power flow studies. Part I. Formulation and solution. IEEE Trans Power Apparatus Syst 1982;101(6):1257-65.
[15] Mansoor A, Grady WM, Thallam RS, Doyle MT, Krein SD, Samotyj MJ. Effect of supply voltage harmonics on the input current of single-phase diode bridge rectifier loads. IEEE Trans Power Deliv 1995;10(3):1416-22.
[16] Varadan S, Makram EB, Girgis AA. A new time domain voltage source model for an arc furnace using EMTP. IEEE Trans Power Deliv 1996;11(3):1416-22.
Machine Tools

Objectives

Machine tools are the main engines of the manufacturing industry. This chapter covers a few of the details that are common to all classes of machine tools discussed in this book. After completing the chapter, the reader will be able to

> understand the classification of the various machine tools used in manufacturing industries.
> identify the differences between generating and forming of surfaces.
> identify various methods used to generate different types of surfaces.
> distinguish between the different accuracies and surface finishes that are achievable with different machine tools.
> understand the different components of the machine tools and their functions.
> learn about the different support structures used in the machine tools.
> understand the various actuation systems that are useful to generate the required surfaces.
> learn the different types of guideways used in the machine tools.
> understand the work holding requirements.

3.1 INTRODUCTION

The earliest known machine tools are the Egyptian foot-operated lathes. These machine tools were developed essentially to allow for the introduction of accuracy in manufacturing.

A machine tool is defined as one which, while holding the cutting tools, is able to remove metal from a workpiece in order to generate the requisite job of given size, configuration, and finish. It is different from a machine, which is essentially a means of converting a source of power from one form to another. Machine tools are the mother machines, since without them no components can be produced in finished form. They are very old, and the industrial revolution owes its success to them.

A machine tool is required to support the workpiece and the cutting tools, and to provide motion to one or both of them in order to generate the required shape on the workpiece. The form generated depends upon the type of machine tool used.

In the last two centuries, machine tools have developed substantially. Machine tool versatility has grown to cater to the varied needs of new inventors coming up with major developments.
For example, James Watt's steam engine could be proven only after a satisfactory method was found, by Wilkinson around 1775, to bore the engine cylinder with a boring bar.

A machine tool is designed to perform certain primary functions, but the extent to which it can be exploited to perform secondary functions is a measure of its flexibility. Generally, the flexibility of a machine tool is increased by the use of secondary functional attachments, such as a radius or spherical turning attachment for a centre lathe. Alternatively, to improve productivity, special attachments are added, which also reduce the flexibility.

3.2 CLASSIFICATION OF MACHINE TOOLS

There are many ways in which machine tools can be classified. One such classification, based on production capability and application, is shown below.

1. General purpose machine tools (GPM) are those designed to perform a variety of machining operations on a wide range of components. By the very nature of this generalisation, general purpose machine tools, though capable of carrying out a variety of tasks, are not suitable for large production, since the setting time for any given operation is large. Thus the idle time on general purpose machine tools is high and the machine utilisation is poor. Machine utilisation may be defined as the percentage of actual machining or chip-generating time out of the total time available; this is much lower for general purpose machine tools. They may also be termed basic machine tools. Further, skilled operators are required to run general purpose machine tools. Hence their utility is in job shops, catering to small-batch, large-variety job production, where the requirement is versatility rather than production capability. Examples are the lathe, shaper, and milling machine.

2. Production machine tools are those in which a number of machine tool functions are automated, such that the operator skill required to produce a component is reduced. This also helps reduce the idle time of the machine tool, thus improving machine utilisation. A general purpose machine tool may also be converted into a production machine tool by using jigs and fixtures for holding the workpiece. These have been developed from the basic machine tools. Some examples are capstan lathes, turret lathes, automats, and multiple-spindle drilling machines.
The setting time for a given job is longer, and tooling design for a given job is more time-consuming and expensive; hence production machine tools can only be used for large-volume production.

3. Special purpose machine tools (SPM) are those machine tools in which the setting operation for the job and tools is practically eliminated and complete automation is achieved. This greatly reduces the actual manufacturing time of a component and helps in the reduction of costs. These tools are used for mass manufacturing. They are expensive compared to general purpose machines, since they are specifically designed for the given application, and are restrictive in their application capabilities. Examples are the camshaft grinding machine, the connecting rod twin boring machine, and the piston turning lathe.

4. Single purpose machine tools are those designed specifically for carrying out a single operation on a class of jobs or on a single job. These tools have the highest degree of automation and are used for very high production rates. They are used for one product only, and thus have the least flexibility; however, they require no manual intervention and are the most cost-effective. Examples are transfer lines composed of unit heads for completely machining any given product.

The applications of the above four types are shown graphically in Fig. 3.1.

Fig. 3.1 Application of machine tools based on production capability.

3.3 GENERATING AND FORMING

Generally, the component shape is produced in machine tools by two different techniques, generating and forming.

Generating is the technique in which the required profile is obtained by manipulating the relative motions of the workpiece and the cutting tool edge. Thus the obtained contour is not identical to the shape of the cutting tool edge. This is used for the majority of general profiles required. The type of surface generated depends on the primary motion of the workpiece as well as the secondary, or feed, motion of the cutting tool. For example, when the workpiece is rotated and a single-point tool is moved along a straight line parallel to the axis of rotation of the workpiece, a helical surface is generated, as shown in Fig. 3.2(a). If the pitch of the helix, i.e. the feed rate, is extremely small, the surface generated may be approximated to a cylinder. This is carried out in lathes and is called turning or cylindrical turning.

An alternative method of obtaining a given profile is called forming, in which the shape of the cutting tool is impressed upon the workpiece, as shown in Fig. 3.2(b). Thus the accuracy of the obtained shape depends upon the accuracy of the form of the tool used.

Fig. 3.2 Generating and forming of surfaces by machine tools.

However, many machine tool operations are actually combinations of the above two. For example, when a dovetail is cut, the actual profile is obtained by sweeping the angular cutter along a straight line. Thus it involves forming (the angular cutter profile) and generating (sweeping along a line), as shown in Fig. 3.3.

Fig. 3.3 Generation of surface.

3.4 METHODS OF GENERATING SURFACES
Fig. 3.4 Classification of machine tools using single-point cutting tools.

A large number of surfaces can be generated or formed with the help of the motions given to the tool and the workpiece. The shape of the tool also makes a very important contribution to the final surface obtained. Basically, two types of motion are provided in a machine tool. The primary motion, given to the workpiece or cutting tool, constitutes the cutting speed, which causes a relative motion between the tool and workpiece such that the face of the cutting tool approaches the material to be machined. Usually, the primary motion consumes most of the cutting power. The secondary motion is one which feeds the tool relative to the workpiece. The combination of the primary and secondary motions is responsible for the generation of specific surfaces. Sometimes there is also a tertiary movement between cuts for specific surfaces.

A classification of machine tools based on these motions is shown in Fig. 3.4 for single-point tools, and in Fig. 3.5 for multi-point tools. In the case of job rotation, cylindrical surfaces are generated, as shown in Fig. 3.6, when the tool is fed in a direction parallel to the axis of rotation. When the feed direction is not parallel to the axis of rotation, complex surfaces such as cones (Fig. 3.7) or contours (Fig. 3.8) can be generated. The tools used in the above cases are single-point tools. If the tool motion is perpendicular to the axis of rotation, a plane surface is generated, as shown in Fig. 3.9. However, if a cutting tool of a given form is fed in a direction perpendicular to the axis of rotation, also called plunge cutting, a contoured surface of revolution is obtained, as shown in Fig. 3.10.

Fig. 3.5 Classification of machine tools using multi-point cutting tools.

Plane surfaces can be generated when the job or tool reciprocates for the primary motion, as in shaping, shown in Fig. 3.11, without any rotation. With multi-point tools, plane surfaces are generally generated, as shown in Fig. 3.12. However, in this situation a combination of forming and generating is used to obtain a variety of complex surfaces which are otherwise impossible to obtain through single-point tool operations. Some typical examples are spur gear hobbing and the spiral milling of formed cavities.

3.5 ACCURACY AND FINISH ACHIEVABLE

It is necessary to select a given machine tool or machining operation for a job such that it is the lowest-cost option. Various operations are possible for a given type of surface, and each has its own characteristics in terms of possible accuracy, surface finish, and cost. This selection is made at the time of process planning. The accuracy obtainable with various types of machine tools is shown in Table 3.1. The surface finish expected from the various processes is shown in Fig. 3.13. The values presented in Table 3.1 and Fig. 3.13 are
only a rough guide; the actual values vary greatly depending on the condition of the machine tool, the cutting tool used, and the various cutting process parameters.

3.6 BASIC ELEMENTS OF MACHINE TOOLS

The various components that are present in all machine tools may be identified as follows:

• A work holding device to hold the workpiece in the correct orientation to achieve the required accuracy in manufacturing, for example a chuck.
• A tool holding device to hold the cutting tool in the correct position with respect to the workpiece, and to provide enough holding force to counteract the cutting forces acting on the tool, for example a tool post.
• A work motion mechanism to provide the necessary speed to the workpiece for generating the surface, for example the headstock.
• A tool motion mechanism to provide the various motions needed for the tool, in conjunction with the workpiece motion, in order to generate the required surface profiles, for example the carriage.
• A support structure to support all the mechanisms listed above, maintain their relative positions with respect to each other, and allow for relative movement between the various parts to obtain the requisite part profile and accuracy, for example the bed.

The type of device or mechanism used varies depending on the type of machine tool and the function it is expected to serve. In this chapter, some of the more common elements are discussed; further details may be found in the chapters where the individual machine tools are discussed.

The various motions that need to be provided in a machine tool are the cutting speed and the feed. The range of speeds and feed rates to be provided in a given machine tool depends on the capability of the machine tool and the range of work materials expected to be processed. Basically, the actual speed and feed chosen depend upon the

• work material,
• required production rate,
• required surface finish, and
• expected accuracy.

The drive units in a machine tool are expected to provide the required speed and to convert rotational speed into linear motion. Details of these may be found in books dealing with machine tool design.

3.7 SUPPORT STRUCTURES

The broad categories of support structures found in various machine tools are shown in Fig. 3.14. They may be classified as beds (horizontal structures) or columns (vertical structures). The main requirements of the support structure are

• rigidity,
• accuracy of guideways,
• impact resistance, and
• wear resistance.

The bed provides support for all the elements present in a machine tool. It also establishes the true relative positions of all the units in the machine tool. Some of these units may slide on the bed while others are fixed; for the purpose of sliding, accurate guideways are provided. The bed weight is approximately half the total weight of the machine tool. The basic construction of a bed is like a box, to provide the highest possible rigidity with low weight. To increase the rigidity, the basic box structure is augmented with various types of ribs, as shown in Fig. 3.15. The addition of ribs complicates the manufacturing process for the beds.

Beds are generally constructed from cast iron or alloy cast iron containing alloying elements such as nickel, chromium, and molybdenum. With cast iron, because of the intricate designs of the beds, casting defects may not be fully eliminated. Alloy steel structures are also used for making beds.
The predominant manufacturing method used for steel beds is welding. The following advantages can be claimed for steel construction:

(a) With steels, the wall thickness can be reduced; thus greater strength and stiffness for the same weight are possible with alloy steel bed construction.
(b) Walls of different thicknesses can be conveniently welded, whereas in casting this would create problems.
(c) Repair of welded structures is easier.
(d) For castings, large machining allowances have to be provided to remove the defects and the hard skin.

Concrete has also been tried as a bed material, chosen mainly for its large damping capacity. For precision machine tools and measuring machines, granite is also used as the bed material. The major types of bed styles used in machine tools are shown in Fig. 3.16.