Optimum blank design of an automobile sub-frame

Jong-Yop Kim a, Naksoo Kim a,*, Man-Sung Huh b

a Department of Mechanical Engineering, Sogang University, Shinsu-dong 1, Mapo-ku, Seoul 121-742, South Korea
b Hwa-shin Corporation, Young-chun, Kyung-buk 770-140, South Korea

Received 17 July 1998

Abstract

A roll-back method is proposed to predict the optimum initial blank shape in the sheet metal forming process. The method takes the difference between the final deformed shape and the target contour shape into account. Based on the method, a computer program composed of a blank design module, an FE-analysis program and a mesh generation module is developed. The roll-back method is applied to the drawing of a square cup with a flange of uniform size around its periphery, to confirm its validity. Good agreement is recognized between the numerical results and the published results for initial blank shape and thickness strain distribution. The optimum blank shapes for two parts of an automobile sub-frame are designed. Both the thickness distribution and the level of punch load are improved with the designed blank. Also, the method is applied to design the weld line in a tailor-welded blank. It is concluded that the roll-back method is an effective and convenient method for optimum blank shape design. © 2000 Elsevier Science S.A. All rights reserved.

Keywords: Blank design; Sheet metal forming; Finite element method; Roll-back method

1. Introduction

It is important to determine the optimum blank shape of a sheet metal part. However, because its deformation during the forming process is very complicated, it is not easy to design the optimum blank shape, even for skilled labor with many years of experience. Recently, computational analysis of complex automobile parts has become practicable due to improved computer performance and numerical analysis techniques. In the analysis process, all of the variables that affect the deformation should be considered. The optimum blank shape leads to
the prevention of tearing, uniform thickness distribution, and the reduction of the press load during drawing. If the blank shape is designed optimally, the formability will be increased and the final product will require the least amount of trimming at the end of the process. Therefore, it is desirable to design the blank shape with a uniform flange at its periphery after deep drawing.

Several numerical solutions for the deep drawing process of non-circular components have been reported. Hasek and Lange [1] gave an analytical solution to this problem using the slip-line field method with the assumption of plane-strain flange deformation. Also, Jimma [2] and Karima [3] used the same method. Vogel and Lee [4] and Chen and Sowerby [5] developed ideal blank shapes by the method of plane-stress characteristics. Sowerby et al. [6] developed a geometric mapping method providing a transformation between a flat sheet and the final surface. Majlessi and Lee [7,8] developed a multi-stage sheet metal forming analysis method. Chung and Richmond [9-12] determined ideal configurations for both the initial and the intermediate stages that are required to form a specified final shape using the ideal forming theory. Lee and Huh [13] introduced a three-dimensional multi-step inverse method for the optimum design of blank shapes. Toh and Kobayashi [14] developed a rigid-plastic finite-element method for the drawing of general shapes based on membrane theory and finite-strain formulations. Zhaotao [15] used the boundary element method for a 2D potential problem to design optimum blank shapes.

This paper presents an optimum design method of blank shapes for the square cup drawing process considering process variables. An optimum blank shape for square cup drawing was obtained using the proposed method. Also, the method was applied to the deep drawing of an automobile sub-frame, and an optimum blank shape with a uniform flange at its periphery was determined.

Journal of Materials Processing Technology 101 (2000) 31-43

* Corresponding
author. Tel.: +82-2-705-8635; fax: +82-2-712-0799. E-mail address: nskim@ccs.sogang.ac.kr (Naksoo Kim).

0924-0136/00/$ - see front matter © 2000 Elsevier Science S.A. All rights reserved. PII: S0924-0136(99)00436-7

2. Design of optimum blank shape

The definition of the optimum blank shape is the minimization of the difference between the outer contour of the deformed blank and the target contour, which indicates the residual flange of uniform size around the periphery of the product. The target contour is generated from the outer contour of the product, and an optimum blank shape is determined using the results of finite-element simulation with the roll-back method. In the process of blank design, the simulation is performed using the explicit finite-element software PAM-STAMP, and an interface program is developed to connect the blank design module, the remeshing module, the post-processor module and the FE-analysis package.

2.1. Roll-back method

The roll-back method starts by defining the target contour. After determining the length of the flange that remains around the periphery of the product, the profile of the target contour is created by offsetting an equal distance from the outer contour of the product, and its mesh system is generated with beam elements. The process of blank design is illustrated in Fig. 1. The mesh system of the prepared square blank for the initial analysis is shown in Fig. 1(a). After an analysis, the mesh system of the deformed blank and the target contour are shown in Fig. 1(b). At the flange of the deformed blank, a distinction is made between the interior flange within the target contour and the exterior flange outside the target contour. The flange outside the target contour is the part that will be trimmed, and the flange within the target contour is the part that does not keep the required shape, due to the incompleteness of the blank shape. Thus the modified blank shape should be designed to take the shape of the outer contour of the product completely. The contour of the modified blank shape using the
roll-back method and the initial blank shape is shown in Fig. 1(c). The mesh system of the modified blank shape for FE-analysis is shown in Fig. 1(d).

The blank design method will now be introduced in detail. The quarter of the deformed blank and the target contour are shown in Fig. 2(a). According to the previous explanation, the remaining flange can be divided into the interior and the exterior flange. The design process of region A is shown in Fig. 2(b). In the mesh of the deformed blank, a square grid IJKL on the target contour is considered, and then the internal dividing point Q_in is calculated at the ratio of m to n between the nodes J and K.

Fig. 1. Illustrating the process of finding the optimum blank: (a) initial blank shape; (b) deformed blank and target contour; (c) roll-back blank and contour; (d) modified blank shape.

Fig. 2. The roll-back process of a mesh located on the surface of the flange: (a) a mesh located on the surface of the flange; (b) region A: residual drawing part outside the target contour; (c) region B: residual drawing part inside the target contour.

This point is mapped back into the mesh system of the initial blank. The internal dividing point Q'_in is calculated at the ratio of m to n between the same nodes J' and K'. This process is performed on each element of the deformed blank on the target contour, and the points describing the outer contour of the modified blank shape can be calculated. If the coordinates of the nodes J and K are J(x1, y1), K(x2, y2) and the coordinates of the nodes J' and K' are J'(x'1, y'1), K'(x'2, y'2), the ratio of m to n is

m : n = |JQ|/|JK| : |QK|/|JK|    (1)

The coordinates of the internal dividing point Q'_in can be expressed as

Q'_in = ((m x'2 + n x'1)/(m + n), (m y'2 + n y'1)/(m + n))    (2)

The design process of region B is shown in Fig. 2(c). In the mesh of the deformed blank, a square grid MNOP of which the outward edge crosses the target contour is considered, and then the external dividing point Q_out can be calculated at
the ratio of m to n between the nodes O and P. This point is mapped back into the mesh system of the initial blank. The external dividing point Q'_out can be calculated at the ratio of m to n between the same nodes O' and P'. If the coordinates of the nodes O and P are O(x1, y1), P(x2, y2) and the coordinates of O' and P' are O'(x'1, y'1), P'(x'2, y'2), the ratio of m to n is

m : n = |OQ|/|OP| : |QP|/|OP|    (3)

The coordinates of the external dividing point Q'_out can be expressed as

Q'_out = ((m x'2 - n x'1)/(m - n), (m y'2 - n y'1)/(m - n))    (4)

The above process is performed on all the elements of the deformed blank related to the target contour, and the points describing the outer contour of the modified blank shape can be calculated. When all the points of the two cases are connected by a spline, the outer contour of the modified blank can be described. This process is shown in Fig. 3.

2.2. The development of the optimum blank design program

To optimize the initial blank shape, a design program was developed following the prescribed method and procedures. This program consists of the blank shape design module, the mesh generation module and the post-processor module.
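The division mappings of Eqs. (1)-(4) are straightforward to implement. The sketch below is our own illustration in Python; the function names and the flat point representation are assumptions, not part of the paper's program.

```python
import math

def divide_internal(p1, p2, m, n):
    """Internal dividing point of segment p1-p2 at ratio m:n, as in Eq. (2)."""
    (x1, y1), (x2, y2) = p1, p2
    return ((m * x2 + n * x1) / (m + n), (m * y2 + n * y1) / (m + n))

def divide_external(p1, p2, m, n):
    """External dividing point of segment p1-p2 at ratio m:n, as in Eq. (4)."""
    (x1, y1), (x2, y2) = p1, p2
    return ((m * x2 - n * x1) / (m - n), (m * y2 - n * y1) / (m - n))

def roll_back_point(q, j, k, j_init, k_init, external=False):
    """Map a dividing point Q on the deformed edge J-K back onto the
    corresponding edge J'-K' of the initial blank mesh.

    The ratio m:n is measured on the deformed mesh and reused, unchanged,
    on the initial mesh, which is the essence of the roll-back step.
    """
    m = math.dist(j, q)  # |JQ| (or |OQ| for region B)
    n = math.dist(q, k)  # |QK| (or |QP| for region B)
    if external:
        return divide_external(j_init, k_init, m, n)
    return divide_internal(j_init, k_init, m, n)
```

Connecting the rolled-back points of both regions with a spline then gives the outer contour of the modified blank, as described above.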
The whole procedure is illustrated in Fig. 4. To perform the design process of a blank shape, an interface module is needed. This module is developed to read the output file of the finite-element analysis, design the optimum blank shape, and generate the input file.

3. Designs of blank shape and application

3.1. Blank design of a square cup

To verify the validity of the roll-back method, it is applied to the process of square cup deep drawing. Several numerical solutions of the deep drawing process for non-circular components have been reported recently. The blank shapes published by Lee and coworkers [16-18] are compared with the results obtained using the roll-back method.

Fig. 3. Flowchart of the blank design module.

Fig. 4. Flow chart of the main program.

The dimensions of the die and punch set for the analysis are shown in Fig. 5. The material of the sheet metal is cold-rolled steel for an automobile part. The following are the material properties and process variables. Stress-strain relation: σ̄ = 58.78 × (0.00003 + ε̄)^0.274 kgf/mm²; Lankford value: R̄ = 1.679; initial blank size: 160 mm × 160 mm square blank; initial thickness: t = 0.69 mm; friction coefficient: μ = 0.123; blank-holding force: 4000 kgf (1 kgf = 9.81 N).

The deformed shapes of the square cup obtained from the initial blank and the optimum blank are shown in Fig. 6. In the present work, the optimum blank shape for a square cup of 40 mm height and 5 mm flange width will be determined. Each modified blank shape after the application of the roll-back method is illustrated in Fig. 7. When a 160 mm × 160 mm square blank is used as the initial blank, the outer contour of the deformed blank is as shown in Fig. 7(a). A first modified blank shape can be calculated from the result of the initial square blank; the analysis result is shown in Fig. 7(b). The difference between the deformed shape and the target contour is significant. If the blank design process is repeated several times, the difference decreases and converges
to zero. Hence a square cup with a uniform flange at its periphery can be made. The comparison between the final result and a published result is shown in Fig. 8. In the transverse direction, the optimum blank shape obtained using the roll-back method is larger than the published result. The load-displacement curves in the square cup drawing process with various initial blank shapes are shown in Fig. 9. As the modification is repeated, the gap between the load-displacement curves before and after an iteration decreases. Thus, after the third modification, the maximum value of the load becomes the mean value between those of the first and second modifications. After three modifications the optimum blank shape is determined, and the result with the optimum blank shape is compared with results in the literature. The thickness strain distribution in the diagonal direction is shown in Fig. 10(a), whilst the thickness strain distribution in the transverse direction is shown in Fig. 10(b). In the thickness strain distribution, the result using the roll-back method is slightly different from the published result, but the overall strain distributions are quite similar. It is thus verified that the roll-back method is a useful approach in the design of optimum blank shapes.

3.2. Blank design of the left member of a front sub-frame

An analysis of the members of a box-type front sub-frame is performed. The left member is selected as one of the subjects for analysis because its shape is shallow but complex. Fig. 11 shows the manufacturing set-up as modeled for the numerical simulation. The left member requires a uniform flange for the spot welding between the upper and the lower parts, besides the improvement of formability. The recommended length of the uniform flange is 30 mm. The target contour is defined at the position which is 30 mm from the outer contour of the product, and is shown in Fig. 12. Its mesh system is generated with beam elements. The material of the sheet metal is SAPH38P, a hot-rolled steel for automobile parts. The following are
the material properties and process variables. Stress-strain relationship: σ̄ = 629 × ε̄^0.274 MPa; Lankford value: R̄ = 1.030; initial thickness: t = 2.3 mm; friction coefficient: μ = 0.1; blank holding pressure: 1 MPa.

Fig. 5. Geometrical description of the tooling for the deep drawing of a square cup (dimensions: mm).

Fig. 6. The deformed shape of square cups with FE-mesh geometry where the cup height is 40 mm: (a) deformed shape of the square cup obtained from the initial blank; (b) deformed shape of the square cup obtained from the optimum blank.

A hexagonal blank is used as the initial blank. After three modifications the optimum blank shape is determined. For this case, the load-displacement curves with various blank shapes are shown in Fig. 13. The comparison of the initial flange and the deformed flange with various blank shapes is shown in Fig. 14. As the modification is repeated, the maximum punch load is reduced and, at the same time, the outer contour is drawn to the target contour. The thickness distribution is improved step by step, the thickness distribution with various blank shapes being shown in Fig. 15. The comparison between the optimum blank shape designed by the roll-back method and the blank shape used for mass production is illustrated in Fig. 16. The optimum blank shape shows curvature because the outer contour of the product and the flow rate of the sheet metal are considered, whereas the blank shape for mass production is simple and straight because the convenience of cutting is considered. To verify the result, an initial blank cut by a laser-cutting machine was prepared. The final shape drawn with this initial blank in the press shop is shown

Fig. 7. Comparison of the initial flange shapes and the deformed flange shapes: (a) initial square blank; (b) first modified blank; (c) second modified blank; (d) third modified blank.

Fig. 8. Comparison of the initial blank contour between the roll-back method and Huh's method.
in Fig. 17. It had a flange of uniform size around its periphery. The thickness distribution at the position of four sections in the longitudinal direction of the left member was measured. Fig. 18 shows a comparison of the thickness between the computed results and the experimental results in each section. In section A, the thickness distribution has some error at the end of the flange, whilst in sections B and C, the computed results are compatible with the experimental results. In section D, the computed results predicted that a split might happen, but the experimental cup did not split.

Fig. 9. Load-displacement curves in the square cup drawing process with various initial blank shapes.

Fig. 10. Thickness strain distribution in a square cup: (a) diagonal direction; (b) transverse direction.

Fig. 11. FE-model for a sub-frame left member.

If the initial blank shape, the final shape and the thickness distribution are considered, the results predicted by the roll-back method have a good agreement with the experimental values. Therefore, as well as being applicable to a simple shape, the roll-back method can be applied to a complex and large shape.

3.3. Blank design of the No. 2 member of the front sub-frame

An analysis of the No. 2 member, with its deep and complex shape, is performed, and its optimum blank shape is designed using the roll-back method. Fig. 19 shows the manufacturing set-up as modeled for the numerical simulation. Because its drawing depth is very deep, eccentricity may occur due to the initial position or shape of the blank. Thus the target contour is defined at the position that is 40 mm from the outer contour of the product, and is shown in Fig. 20. A square blank is used as the initial blank. After three modifications the optimum blank shape is determined.

Fig. 12. Target contour for the left member.

Fig. 13. Load-displacement curves in the left member drawing process with various blank shapes.

Fig. 14. Comparison of the initial flange shapes and the deformed flange shapes: (a) initial blank; (b) first modified blank; (c) second modified blank; (d) third
modified blank.

Fig. 15. Thickness distribution with various blank shapes (unit: mm): (a) initial blank; (b) first modified blank; (c) second modified blank; (d) third modified blank.

Fig. 16. Comparison of the initial blank shapes predicted by the roll-back method and those designed by skilled labor.

For this case, the load-displacement curves for various blank shapes are shown in Fig. 21, whilst a comparison of the initial flange and the deformed flange with various blank shapes is shown in Fig. 22. The thickness distribution with the initial shape is shown in Fig. 23, whilst the thickness distribution with the optimum blank shape is shown in Fig. 24. The thickness distribution of the side-wall and of the fillet connecting the side-wall to the top is improved.

Fig. 17. Left member drawn in the press shop with the initial blank predicted by the roll-back method.

Fig. 18. (a) Sections for measuring the thickness distribution. (b-e) Thickness distributions at sections A-D, respectively.

3.4. Design of the welding line with TWB analysis of the No. 2 member

After designing the optimum blank shape of the No. 2 member, a tailor-welded blank is applied to this member. To reduce the weight of the sub-frame, structural analysis is performed. On the areas where the stress intensity level is low, it is proposed to reduce the thickness locally. Therefore, it is required to design a tailor-welded blank that produces a specified shape after deformation. When two sheet metals of different thickness are welded together, their metal flow is different from that of sheet metal of uniform thickness; thus it is difficult to design the location of the weld line. In this simulation the weld line is designed by the use of the roll-back method, the welding line being required to lie at a specified position after deformation: the specified position is 120 mm on both sides of the centerline. Thus the target line is defined and meshed with beam elements. The outer contour of the TWB and the welding line are shown in Fig. 25, and the results are shown in Figs. 26 and 27. The welding lines can reach
the target line but, on the top of the blank that has the lower thickness, fracture may occur. This is the same as the result that, in the deep drawing of a tailor-welded blank of different thicknesses, failure occurred at the flat bottom of the punch parallel to the weld line. This is due to the deformation not being distributed uniformly, most of the stretching being concentrated on the side of the blank with the lower strength. The process condition without fracture should be determined for the combination of the drawing depth and the two different thicknesses, as shown in Fig. 28.

Fig. 19. FE-model for the sub-frame No. 2 member.

Fig. 20. Target contour for the No. 2 member.

Fig. 21. Load-displacement curves in the No. 2 member drawing process with various blank shapes.

Fig. 22. Comparison of the initial flange shapes and the deformed flange shapes: (a) initial blank; (b) first modified blank; (c) second modified blank; (d) third modified blank.

Fig. 23. Thickness distribution with the initial blank shape (unit: mm): (a) front view; (b) rear view.

Fig. 24. Thickness distribution with the optimum blank shape (unit: mm): (a) front view; (b) rear view.

Fig. 25. Comparison of the weld line between the initial blank shape and the deformed blank shape.

Fig. 26. Deformed shape of the No. 2 member with the tailor-welded blank.

Fig. 27. Deformed shape of the No. 2 member with the tailor-welded blank: (a) front view; (b) rear view.

Fig. 28. Thickness distribution with the tailor-welded blank (unit: mm): (a) front view; (b) rear view.

4. Conclusions

In this paper, the roll-back method, which designs an optimum blank shape, is proposed. Based on the method, a computer program composed of a blank design module, an FE-analysis program and a mesh generation module is developed and applied to the deep drawing of a front sub-frame. The results of the present paper are summarized as follows:

1. To verify the validity of the proposed method, it is applied to the deep drawing of a square cup. The outer contour may be drawn to the target contour.
2. The roll-back method is applied to the optimum blank design of the left member of an automobile sub-frame. The thickness distribution and the load level are improved. When the initial blank shape, the final shape and the thickness distribution are compared, the results predicted by the roll-back method have a good agreement with the experimental results. It is concluded that this method can be applied to the deep drawing of complex automobile parts.
3. The analysis of the No. 2 member with a tailor-welded blank is performed. The position of the welding lines on the initial blank is designed. The roll-back method can be applied to the design of the welding line position.
4. In most cases, the edge of the blank takes the shape of the target contour within a few iterations, which shows that the roll-back method is an effective and convenient method for optimum blank shape design.

References

[1] V.V. Hasek, K. Lange, Use of the slip line field method in deep drawing of large irregular shaped components, Proceedings of the Seventh NAMRC, Ann Arbor, MI, 1979, pp. 65-71.
[2] T. Jimma, Deep drawing of convex polygon shells: researches on the deep drawing of sheet metal by the slip line theory. First report, Jpn. Soc. Tech. Plasticity 11 (116) (1970) 653-670.
[3] M. Karima, Blank development and tooling design for drawn parts using a modified slip line field based approach, ASME Trans. 11 (1989) 345-350.
[4] J.H. Vogel, D. Lee, An analysis method for deep drawing process design, Int. J. Mech. Sci. 32 (1990) 891.
[5] X. Chen, R. Sowerby, The development of ideal blank shapes by the method of plane stress characteristics, Int. J. Mech. Sci. 34 (2) (1992) 159-166.
[6] R. Sowerby, J.L. Duncan, E. Chu, The modelling of sheet metal stamping, Int. J. Mech. Sci. 28 (7) (1986) 415-430.
[7] S.A. Majlessi, D. Lee, Further development of sheet metal forming analysis method, ASME Trans. 109 (1987) 330-337.
[8] S.A. Majlessi, D. Lee, Development of multistage sheet metal forming analysis method, J. Mater. Shap. Technol. 6 (1) (1988) 41-54.
[9] K. Chung, O. Richmond, Ideal forming-I. Homogeneous deformation with minimum plastic work, Int. J. Mech. Sci. 34 (7) (1992) 575-591.
[10] K. Chung, O. Richmond, Ideal forming-II. Sheet forming with optimum deformation, Int. J. Mech. Sci. 34 (8) (1992) 617-633.
[11] K. Chung, O. Richmond, Sheet forming process design based on ideal forming theory, Proceedings of the Fourth International Conference on NUMIFORM, 1992, pp. 455-460.
[12] K. Chung, O. Richmond, The mechanics of ideal forming, ASME Trans. 61 (1994) 176-181.
[13] C.H. Lee, H. Huh, Blank design and strain prediction of automobile stamping parts by an inverse finite element approach, J. Mater. Process. Technol. 63 (1997) 645-650.
[14] C.H. Toh, S. Kobayashi, Deformation analysis and blank design in square cup drawing, Int. J. Mach. Tool Des. Res. 25 (1) (1985) 15-32.
[15] Z. Zhaotao, L. Bingwen, Determination of blank shapes for drawing irregular cups using an electrical analogue method, Int. J. Mech. Sci. 28 (8) (1986) 499-503.
[16] H. Huh, S.S. Han, Analysis of square cup deep drawing from two types of blanks with a modified membrane finite element method, Trans. KSME 18 (10) (1994) 2653-2663.
[17] C.H. Lee, H. Huh, Blank design and strain prediction in sheet metal forming process, Trans. KSME A 20 (6) (1996) 1810-1818.
[18] C.H. Lee, H. Huh, Three-dimensional multi-step inverse analysis for optimum design of the initial blank in sheet metal forming, Trans. KSME A 21 (12) (1997) 2055-2067.

A Design and Implementation of Active Network Socket Programming

K.L. Eddie Law, Roy Leung
The Edward S. Rogers Sr. Department of Electrical and Computer Engineering
University of Toronto
Toronto, Canada
eddie@, roy.leung@utoronto.ca

Abstract—The concept of programmable nodes and active networks introduces programmability into communication networks. Code and data can be sent and modified on their way to their destinations. Recently, various research groups have designed and implemented their own design platforms. Each design has its own benefits and drawbacks, and there exists an interoperability problem among platforms. As a result, we introduce a concept that is similar to network socket programming. We intentionally establish a set of simple interfaces for programming active applications. This set of interfaces, known as Active Network Socket Programming (ANSP), will work on top of all other execution environments in future. Therefore, ANSP offers a concept that is similar to "write once, run everywhere." It is an open programming model in which active applications can work on all execution environments. It solves the heterogeneity within active networks. This is especially useful when active applications need to access all regions within a heterogeneous network to deploy special services at critical points or to monitor the performance of the entire network. Instead of introducing a new platform, our approach provides a thin, transparent layer on top of existing environments that can be easily installed for all active applications.

Keywords—active networks; application programming interface; active network socket programming

I. INTRODUCTION

In 1990, Clark and Tennenhouse [1] proposed a design framework for introducing new network protocols for the Internet. Since the publication of that position paper, the active network design framework [2, 3, 10] has slowly taken shape in the late 1990s.
The active network paradigm allows program code and data to be delivered simultaneously on the Internet. Moreover, they may be executed and modified on their way to their destinations. At the moment, there is a global active network backbone, the ABone, for experiments on active networks. Apart from the immaturity of the executing platforms, the primary hindrance to the deployment of active networks on the Internet lies in commercially related issues. For example, a vendor may hesitate to allow network routers to run unknown programs that may affect their expected routing performance. As a result, alternatives were proposed to allow the active network concept to operate on the Internet, such as the application layer active networking (ALAN) project [4] from the European research community. In the ALAN project, there are active server systems located at different places in the network, and active applications are allowed to run in these servers at the application layer. Another potential approach for a network service provider is to offer active network service as a premium service class in the network. This service class should provide the best Quality of Service (QoS) and allow access to the computing facilities in routers. With this approach, network service providers can create a new source of income.

The research in active networks has been progressing steadily. Since active networks introduce programmability on the Internet, appropriate platforms on which the active applications execute should be established. These operating platforms are known as execution environments (EEs), and a few of them have been created, e.g., the Active Signaling Protocol (ASP) [12] and the Active Network Transport System (ANTS) [11].
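The paradigm sketched above, code and data traveling together and executing at each router on the path, can be caricatured in a toy Python simulation. All class and attribute names below are our own illustration, not ANTS or ASP code:

```python
class Node:
    """Toy active router: knows its next hops and can deliver payloads."""
    def __init__(self, address, next_hops):
        self.address = address
        self.next_hops = next_hops   # destination address -> Node
        self.delivered = []

    def deliver(self, payload):
        self.delivered.append(payload)

class Capsule:
    """Carries both code (evaluate) and data (payload); evaluate() runs
    at each node the capsule visits, mimicking per-hop execution."""
    def __init__(self, payload, destination):
        self.payload = payload
        self.destination = destination
        self.path = []               # per-hop side effect: record route

    def evaluate(self, node):
        self.path.append(node.address)
        if node.address == self.destination:
            node.deliver(self.payload)                       # arrival event
        else:
            self.evaluate(node.next_hops[self.destination])  # forward
```

For example, a capsule injected at node "A" with destination "B" records the path ["A", "B"] and hands its payload to "B"; in a real EE the forwarding and per-hop execution are, of course, performed by the routers themselves.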
Hence, different active applications can be implemented to test the active networking concept. With these EEs, some experiments have been carried out to examine the active network concept, for example, mobile networks [5], web proxies [6], and multicast routers [7]. Active networks introduce a lot of programming flexibility and extensibility into networks. Several research groups have proposed various designs of execution environments to offer network computation within routers. Their performance and potential benefits to the existing infrastructure are being evaluated [8, 9]. Unfortunately, they seldom address the interoperability problems that arise when an active network consists of multiple execution environments. For example, there are three EEs in the ABone. Active applications written for one particular EE cannot be operated on other platforms. This introduces a further problem of partitioning resources for the different EEs to operate. Moreover, there are always some critical network applications that need to run on all network routers, for example to collect information and deploy services at critical points to monitor the networks.

In this paper, a framework known as the Active Network Socket Programming (ANSP) model is proposed to work with all EEs. It offers the following primary objectives.

• One single programming interface is introduced for writing active applications.
• Since ANSP offers the programming interface, the design of an EE can be made independent of ANSP. This enables transparency in developing and enhancing future execution environments.
• ANSP addresses the interoperability issues among different execution environments.
• Through the design of ANSP, insight into the pros and cons of different EEs will be gained. This may help in designing a better EE with improved performance in future.

The primary objective of ANSP is to enable all active applications that are written in ANSP to operate in the ABone testbed.
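A socket-like facade over heterogeneous EEs could be structured roughly as below. This is only our sketch of the idea; the class and method names (AnspSocket, inject, and so on) are assumptions for illustration, not the actual ANSP API:

```python
class ExecutionEnvironment:
    """Minimal abstract view of an EE such as ANTS or ASP."""
    def inject(self, capsule, destination):
        raise NotImplementedError

class AntsLikeEE(ExecutionEnvironment):
    """Toy stand-in for one concrete EE that records injected capsules."""
    def __init__(self):
        self.sent = []

    def inject(self, capsule, destination):
        self.sent.append((destination, capsule))
        return True

class AnspSocket:
    """Socket-like facade: the application calls one interface, and this
    thin layer translates the call for whichever EE runs on the router."""
    def __init__(self, ee):
        self.ee = ee  # chosen per router; the application never sees it

    def send(self, code, data, destination):
        capsule = {"code": code, "data": data}  # simplified capsule
        return self.ee.inject(capsule, destination)

# Usage: the same application code works whatever EE backs the socket.
sock = AnspSocket(AntsLikeEE())
sock.send(code="count_hops", data=b"", destination="10.0.0.2")
```

Swapping AntsLikeEE for an adapter around another environment would leave the application untouched, which is the "write once, run everywhere" property claimed above.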
While the proposed ANSP framework is essential in unifying the network environments, we believe that the availability of different environments is beneficial to the development of a better execution environment in future. ANSP is not intended to replace all existing environments, but to enable the study of new network services which are orthogonal to the designs of execution environments. Therefore, ANSP is designed to be a thin and transparent layer on top of all execution environments. Currently, its deployment relies on automatic code loading with the underlying environments. As a result, the deployment of ANSP at a router is optional and does not require any change to the execution environments.

II. DESIGN ISSUES ON ANSP

ANSP unifies the existing programming interfaces among all EEs. Conceptually, the design of ANSP is similar to a middleware design that offers proper translation mechanisms to different EEs. The provisioning of a unified interface is only one part of the whole ANSP platform; there are many other issues that need to be considered. Apart from translating a set of programming interfaces to the corresponding executable calls in different EEs, other design issues should be covered, e.g.,

• a unified thread library that handles thread operations regardless of the thread libraries used in the EEs;
• a global soft-store that allows information sharing among capsules that may execute over different environments at a given router;
• a unified addressing scheme used across different environments; more importantly, a routing information exchange mechanism should be designed across EEs to obtain a global view of the unified networks;
• a programming model that should be independent of any programming languages in active networks;
• and finally, a translation mechanism to hide the heterogeneity of capsule header structures.

A.
A. Heterogeneity in the Programming Model

Each execution environment provides various abstractions of its services and resources in the form of program calls. The model consists of a set of well-defined components, each of which has its own programming interfaces. Among these abstractions, the capsule-based programming model [10] is the most popular design in active networks. It is used in ANTS [11] and ASP [12], both of which are supported in ABone. Although they are developed from the same capsule model, their respective components and interfaces are different. Therefore, programs written for one EE cannot run in another EE. The conceptual views of the programming models in ANTS and ASP are shown in Figure 1.

There are three distinct components in ANTS: application, capsule, and execution environment. User interfaces for the active applications exist only at the source and destination routers, where users can specify their customized actions to the networks. According to the program function, the applications send one or more capsules to carry out the operations. Both applications and capsules operate on top of an execution environment that exports an interface to its internal programming resources. A capsule executes its program at each router it visits. When it arrives at its destination, the application at the destination may either reply with another capsule or present the arrival event to the user. One drawback of ANTS is that it only allows "bootstrap" applications.

Figure 1. Programming Models in ASP and ANTS.

In contrast, ASP does not limit its users to "bootstrap" applications. Its program interfaces differ from those of ANTS, but there are likewise three components in ASP: application client, environment, and AAContext. The application client can run on an active or non-active host. It can start an active application by simply sending a request message to the EE.
The client presents information to users and allows them to trigger actions at a nearby active router. The AAContext is the core of the network service, and its specification is divided into two parts. One part specifies its actions at the source and destination routers; its role is similar to that of the application in ANTS, except that it does not provide a direct interface to the user. The other part defines its actions when it runs inside the active network, and it is similar in function to a capsule in ANTS.

To deal with the heterogeneity of these two models, ANSP needs to introduce a new set of programming interfaces and map its interfaces and execution model onto those within the routers' EEs.

B. Unified Thread Library

Each execution environment must ensure the isolation of instance executions, so that they do not affect each other or access each other's information. There are various ways to enforce this access control. One simple way is to have one virtual machine per instance of an active application, relying on the security design of the virtual machine to isolate services; ANTS is one example of this method. Nevertheless, the use of multiple virtual machines requires a relatively large amount of resources and may be inefficient in some cases. Therefore, certain environments, such as ASP, allow network services to run within one virtual machine but restrict the use of their services to a limited set of libraries in their packages. For instance, ASP provides its own thread library to enforce access control. Because of the differences between these types of thread mechanism, ANSP devises a new thread library to allow uniform access to the different thread mechanisms.

(Acknowledgment: the authors appreciate the Nortel Institute for Telecommunications (NIT) at the University of Toronto for allowing them access to its computing facilities.)
C. Soft-Store

The soft-store allows a capsule to insert and retrieve information at a router, thereby allowing multiple capsules to exchange information within the network. However, a problem arises when a network service can execute under different environments within a router. It occurs especially when a network service inserts its soft-store information in one environment and retrieves the data at a later time in another environment at the same router. Because execution environments are not allowed to exchange information, the network service cannot retrieve its previous data. Therefore, the ANSP framework needs to take this problem into account and provide a soft-store mechanism that allows universal access to its data at each router.

D. Global View of a Unified Network

When an active application is written with ANSP, it can execute in different environments seamlessly. The previously smaller networks, partitioned according to their EEs, can now be merged into one large active network. It is then necessary to advertise the network topology across the networks. However, different execution environments have different addressing schemes and proprietary routing protocols. In order to merge these partitions, ANSP must provide a new unified addressing scheme that is interpretable by any environment through appropriate translation within ANSP. On top of the new addressing scheme, a new routing protocol should be designed to exchange topology information among environments, allowing each environment in a network to have a complete view of the network topology.

E. Language-Independent Model

An execution environment can be programmed in any programming language. One of the most commonly used languages is Java [13], owing to its dynamic code loading capability. In fact, both ANTS and ASP are developed in Java.
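The dynamic code loading just mentioned can be sketched with Java's standard Class.forName() call. This is a minimal, generic illustration of the mechanism, not tied to the API of any particular execution environment; the class name "ants.core.NoSuchCapsule" is made up for the demonstration.

```java
// Minimal, self-contained illustration (not tied to any EE API) of the
// dynamic code loading that makes Java attractive for execution
// environments: Class.forName() resolves a class by name at run time,
// which is how an EE can pull in capsule code unknown at compile time.
public class DynamicLoadDemo {

    // Try to load a class by its fully qualified name; report success.
    public static boolean canLoad(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.util.ArrayList is always on the class path;
        // "ants.core.NoSuchCapsule" is a made-up name and is not.
        System.out.println(canLoad("java.util.ArrayList"));     // true
        System.out.println(canLoad("ants.core.NoSuchCapsule")); // false
    }
}
```

An EE would combine this lookup with its own class loader so that the byte code can also be fetched over the network rather than from the local class path.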
Nevertheless, the active network architecture shown in Figure 2 does not restrict the use of additional environments developed in other languages. For instance, the active network daemon, anted, in ABone provides a workspace in which multiple execution environments can execute within a router. PLAN, for example, is implemented in OCaml and will be deployable on ABone in the future. Although the current active network architecture is designed to deploy multiple environments that may be written in any programming language, there is no tool that allows active applications to run seamlessly across these environments. Hence, one of the issues ANSP needs to address is the design of a programming model that can work with different programming languages. Although our current prototype considers only ANTS and ASP in its design, PLAN will be the next target, both to address the programming-language issue and to improve the design of ANSP.

Figure 2. ANSP Framework Model.

F. Heterogeneity of Capsule Header Structures

The structures of the capsule headers differ between EEs. They carry capsule-related information, for example, the capsule types, sources, and destinations. This information is important when certain decisions need to be made within the target environment. A unified model should allow its program code to be executed in different environments; the capsule header, however, prevents an environment from interpreting the information of a foreign capsule successfully. Therefore, ANSP should carry out appropriate translation of the header information before the target environment receives the capsule.

III. ANSP PROGRAMMING MODEL

We have outlined the design issues encountered with ANSP. In the following, the design of the ANSP programming model is discussed. The proposed framework provides a set of unified programming interfaces that allows active applications to work on all execution environments. The framework is shown in Figure 2.
It is composed of two layers integrated within the active network architecture; each layer can operate independently of the other. The upper layer provides a unified programming model to active applications. The lower layer provides the appropriate translation procedures for ANSP applications when they are processed by different environments. This service is necessary because each environment has its own header definition.

The ANSP framework provides a set of programming calls that are abstractions of ANSP services and resources. A capsule-based model is used for ANSP, and it is currently extended to map onto the other capsule-based models used in ANTS and ASP. Mapping to other models remains future work. The mapping technique in ANSP allows any ANSP application to access the same programming resources in different environments through a single set of interfaces. The mapping has to be done in a consistent and transparent manner. As a result, ANSP appears as an execution environment that provides a complete set of functionalities to active applications, while in fact it is an overlay structure that makes use of the services provided by the underlying environments. In the following, the high-level functional description of the ANSP model is given, after which the implementation is discussed. The ANSP programming model is based upon the interactions between four components: application client, application stub, capsule, and active service base.

Figure 3. Information Flow with the ANSP.

• Application Client: In a typical scenario, an active application requires some means to present information to its users, e.g., the state of the networks. A graphical user interface (GUI) is designed to operate with the application client if the ANSP runs on a non-active host.
• Application Stub: When an application starts, it activates the application client to create a new instance of an application stub at its nearby active node.
The application stub has two responsibilities. One is to receive users' instructions from the application client. The other is to receive incoming capsules from the networks and perform the appropriate actions; typically, these are either to reply to or relay capsules through the networks, or to notify the users of the incoming capsule.
• Capsule: An active application may contain several capsule types, each of which carries program code (also referred to as a forwarding routine). The application defines a protocol that specifies the interactions among capsules as well as with the application stubs. Every capsule executes its forwarding routine at each router it visits along the path between the source and destination.
• Active Service Base: An active service base is designed to export the services of a router's environments and to execute program calls from application stubs and capsules of different EEs. The base is loaded automatically at each router whenever a capsule arrives.

The interactions among components within ANSP are shown in Figure 3. The designs of some key components of ANSP are discussed in the following subsections.

A. Capsule (ANSPCapsule)

New types of capsule are created by extending the abstract class ANSPCapsule. New extensions are required to define their own forwarding routines as well as their serialization procedures. These methods are indicated below:

    ANSPXdr decode ()
    ANSPXdr encode ()
    int length ()
    boolean execute ()

The execution of a capsule in ANSP is listed below; it is similar to the process in ANTS.
1. A capsule is in serial binary representation before it is sent to the network. When an active router receives a byte sequence, it invokes decode() to convert the sequence into a capsule.
2. The router invokes the forwarding routine of the capsule, execute().
3. When the capsule has finished its job, it forwards itself to its next hop by calling send(). This call implicitly invokes encode() to convert the capsule into a new serial byte representation; length() is used inside encode() to determine the length of the resulting byte sequence.

ANSP provides an XDR library called ANSPXdr to ease the jobs of encoding and decoding.

B. Active Service Base (ANSPBase)

In an active node, the active service base provides a unified interface that exports the available resources of the EEs to the rest of the ANSP components. The services include thread management, node query, and soft-store operation, as shown in Table I.

TABLE I. ACTIVE SERVICE BASE FUNCTION CALLS

    boolean send (Capsule, Address)
        Transmit a capsule towards its destination using the routing table of
        the underlying environment.
    ANSPAddress getLocalHost ()
        Return the address of the local host as an ANSPAddress structure. This
        is useful when a capsule wants to check its current location.
    boolean isLocal (ANSPAddress)
        Return true if the input argument matches the local host's address,
        and false otherwise.
    createThread ()
        Create a new thread that is a class of ANSPThreadInterface (discussed
        later in Section VI.A, "Unified Thread Abstraction").
    putSStore (key, Object)
    Object getSStore (key)
    removeSStore (key)
        The soft-store operations put, retrieve, and remove data, respectively.
    forName (PathName)
        Retrieve a class object corresponding to the given path name. This
        code retrieval may rely on the code loading mechanism of the
        underlying environment when necessary.

C. Application Client (ANSPClient)

    boolean start (args[])
    boolean start (args[], runningEEs)
    boolean start (args[], startClient)
    boolean start (args[], startClient, runningEE)

The application client is the interface between users and the nearby active source router. It has the following responsibilities.
1. Code registration: it may be necessary to specify the location and name of the application code in some execution environments, e.g., ANTS.
2. Application initialization: this includes selecting an execution environment in which to execute the application, among those available at the source router.

Each active application can create an application client instance by extending the abstract class ANSPClient. The extension inherits a method, start(), that automatically handles both the registration and initialization processes. All overloaded versions of start() accept a list of arguments, args, that are passed to the application stub during its initialization. An optional argument called runningEEs allows an application client to select a particular set of environments, specified by a list of standardized numerical environment IDs (the ANEP IDs), in which to perform code registration. If this argument is not specified, the default setting includes only ANTS and ASP.

D. Application Stub (ANSPApplication)

    receive (ANSPCapsule)

Application stubs reside at the source and destination routers to initialize the ANSP application after the application clients complete the initialization and registration processes. A stub is responsible for receiving and serving capsules from the networks as well as actions requested by the clients. A new instance is created by extending the abstract class ANSPApplication. This extension includes the definition of a handling routine called receive(), which is invoked when the stub receives a new capsule.

IV. ANSP EXAMPLE: TRACEROUTE

A testbed has been created to verify the design correctness of ANSP in heterogeneous environments. There are three types of router setting on this testbed:
1. routers that contain ANTS and an ANSP daemon running on behalf of ASP;
2. routers that contain ASP and an ANSP daemon that runs on behalf of ANTS;
3. routers that contain both ASP and ANTS.

The prototype is written in Java [13], with a traceroute testing program. The program records the execution environments of all intermediate routers that it visits between the source and destination, and it also measures the RTT between them. Figure 4 shows the GUI of the application client; here it finds three execution environments along the path: ASP, ANTS, and ASP. The execution sequence of the traceroute program is shown in Figure 5.

Figure 4. The GUI for the TRACEROUTE Program.

The TraceCapsule program code is created by extending the ANSPCapsule abstract class. When execute() starts, it checks the Boolean value returning to determine whether it is returning from the destination: returning is true if the TraceCapsule is traveling back to the source router, and false otherwise. When traveling towards the destination, the TraceCapsule keeps track of the environments and addresses of the routers it has visited in two arrays, path and trace, respectively. When it arrives at a new router, it calls addHop() to append the router's address and environment to these two arrays. When it finally arrives at the destination, it sets returning to true and forwards itself back to the source by calling send(). When it returns to the source, it invokes deliverToApp() to deliver itself to the application stub that has been running there. The TraceCapsule carries information in its data field through the networks by executing encode() and decode(), which encapsulate and de-capsulate its data using External Data Representation (XDR). The syntax of the ANSP XDR follows that of the XDR library in ANTS. length() in TraceCapsule returns the data length, which can be calculated using the primitive types of the XDR library.

Figure 5. Flow of the TRACEROUTE Capsules.

V. CONCLUSIONS

In this paper, we present a new unified layered architecture for active networks. The new model is known as Active Network Socket Programming (ANSP).
It allows each active application to be written once and run on multiple environments in active networks. Our experiments verify the design of the ANSP architecture, which has been successfully deployed to work harmoniously with ANTS and ASP without making any changes to their architectures. In fact, the unified programming interface layer is lightweight and can be dynamically deployed upon request.

REFERENCES
[1] D. D. Clark and D. L. Tennenhouse, "Architectural Considerations for a New Generation of Protocols," in Proc. ACM SIGCOMM '90, pp. 200-208, 1990.
[2] D. L. Tennenhouse, J. M. Smith, W. D. Sincoskie, D. J. Wetherall, and G. J. Minden, "A survey of active network research," IEEE Communications Magazine, pp. 80-86, Jan. 1997.
[3] D. Wetherall, U. Legedza, and J. Guttag, "Introducing new internet services: Why and how," IEEE Network Magazine, July/August 1998.
[4] M. Fry and A. Ghosh, "Application Layer Active Networking," Computer Networks, vol. 31, no. 7, pp. 655-667, 1999.
[5] K. W. Chin, "An Investigation into The Application of Active Networks to Mobile Computing Environments," Curtin University of Technology, March 2000.
[6] S. Bhattacharjee, K. L. Calvert, and E. W. Zegura, "Self Organizing Wide-Area Network Caches," in Proc. IEEE INFOCOM '98, San Francisco, CA, 29 March-2 April 1998.
[7] L. H. Lehman, S. J. Garland, and D. L. Tennenhouse, "Active Reliable Multicast," in Proc. IEEE INFOCOM '98, San Francisco, CA, 29 March-2 April 1998.
[8] D. Decasper, G. Parulkar, and B. Plattner, "A Scalable, High Performance Active Network Node," IEEE Network, January/February 1999.
[9] E. L. Nygren, S. J. Garland, and M. F. Kaashoek, "PAN: a high-performance active network node supporting multiple mobile code systems," in Proc. 2nd IEEE Conference on Open Architectures and Network Programming (OpenArch '99), March 1999.
[10] D. L. Tennenhouse and D. J. Wetherall,
"Towards an Active Network Architecture," in Proc. Multimedia Computing and Networking, January 1996.
[11] D. J. Wetherall, J. V. Guttag, and D. L. Tennenhouse, "ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols," in Proc. IEEE OPENARCH '98, pp. 117-129, 1998.
[12] B. Braden, A. Cerpa, T. Faber, B. Lindell, G. Phillips, and J. Kann, "Introduction to the ASP Execution Environment," /active-signal/ARP/index.html.
[13] "The Java language: A white paper," Tech. Rep., Sun Microsystems, 1998.
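As a closing illustration, the TraceCapsule behaviour described in Section IV can be sketched as a self-contained simulation. This is a reconstruction for illustration only: the real ANSP classes (ANSPCapsule, send(), deliverToApp()) are not reproduced here, and the simulate() harness below is our own stand-in that simply replays a fixed route out to the destination and back.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative reconstruction of the TraceCapsule forwarding logic from
// Section IV. The real ANSP classes (ANSPCapsule, send(), deliverToApp())
// are not reproduced; simulate() is our own stand-in "network" that visits
// a fixed route towards the destination and then back to the source.
public class TraceCapsuleSketch {
    public boolean returning = false;                    // true on the way home
    public final List<String> trace = new ArrayList<>(); // router addresses
    public final List<String> path  = new ArrayList<>(); // EE at each router

    // addHop(), as in the paper: record the router address and its EE.
    public void addHop(String address, String environment) {
        trace.add(address);
        path.add(environment);
    }

    // Forwarding routine: record hops on the way out; at the destination,
    // flip `returning` and head back; do nothing extra on the return trip.
    public void execute(String address, String environment, boolean atDestination) {
        if (!returning) {
            addHop(address, environment);
            if (atDestination) {
                returning = true;
            }
        }
    }

    // Stand-in for the network: run the capsule over a route and back.
    public static TraceCapsuleSketch simulate(String[][] route) {
        TraceCapsuleSketch c = new TraceCapsuleSketch();
        for (int i = 0; i < route.length; i++) {      // outbound trip
            c.execute(route[i][0], route[i][1], i == route.length - 1);
        }
        for (int i = route.length - 1; i >= 0; i--) { // return trip
            c.execute(route[i][0], route[i][1], false);
        }
        return c;
    }

    public static void main(String[] args) {
        TraceCapsuleSketch c = simulate(new String[][] {
            {"10.0.0.1", "ASP"}, {"10.0.0.2", "ANTS"}, {"10.0.0.3", "ASP"}});
        System.out.println(c.path); // [ASP, ANTS, ASP], as in Figure 4
    }
}
```

Running the sketch over a three-hop route reproduces the ASP/ANTS/ASP sequence reported by the GUI in Figure 4; the XDR serialization of the data field is omitted.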

One of the Three Graduation Project Documents: Foreign Literature Translation


Undergraduate Graduation Design (Thesis)
Foreign Literature Translation

Original title: Worlds Collide: Exploring the Use of Social Media Technologies for Online Learning
Translated title (Chinese): 世界的碰撞：探索社交媒体技术在在线学习的应用

Author's department: Department of Computer Science and Engineering
Author's major: Computer Science and Technology
Author's class:
Author's name:
Author's student ID:
Advisor's name:
Advisor's title: Lecturer
Date of completion: February 2013

Prepared by the Academic Affairs Office, North China Institute of Aerospace Engineering

Notes: 1. When reviewing the translation, the advisor should consider the following: (1) whether the translated foreign literature is highly relevant to the topic of the graduation design (thesis) and is included among its foreign-language references; (2) whether the translation meets the required length (at least 3,000 words); (3) whether the translation is accurate and fluent, and of reference value.

2. The original foreign-language text should be attached after the translation.
Graduation Project Foreign Literature + Translation 1


Foreign Translation: Original Text

CHANGING ROLES OF THE CLIENTS, ARCHITECTS AND CONTRACTORS THROUGH BIM

Abstract
Purpose – This paper aims to present a general review of the practical implications of building information modelling (BIM) based on literature and case studies. It seeks to address the necessity of applying BIM and of re-organising the processes and roles in hospital building projects. This type of project is complex due to complicated functional and technical requirements, decision making involving a large number of stakeholders, and long-term development processes.
Design/methodology/approach – Through desk research and reference to the ongoing European research project InPro, the framework for integrated collaboration and the use of BIM are analysed.
Findings – One of the main findings is the identification of the main factors for successful collaboration using BIM, which can be summarised as "POWER": product information sharing (P), organisational roles synergy (O), work processes coordination (W), environment for teamwork (E), and reference data consolidation (R).
Originality/value – This paper contributes to the current discussion in science and practice on the changing roles and processes required to develop and operate sustainable buildings with the support of integrated ICT frameworks and tools. It presents the state of the art of European research projects and some of the first real cases of BIM application in hospital building projects.
Keywords: Europe, Hospitals, The Netherlands, Construction works, Response flexibility, Project planning
Paper type: General review

1. Introduction

Hospital building projects are of key importance; they involve significant investment and usually take a long development period. They are also very complex due to the complicated requirements regarding hygiene, safety, special equipment, and the handling of a large amount of data.
The building process is very dynamic and comprises iterative phases and intermediate changes. Many actors with shifting agendas, roles and responsibilities are actively involved, such as healthcare institutions, national and local governments, project developers, financial institutions, architects, contractors, advisors, facility managers, and equipment manufacturers and suppliers. Such building projects are strongly influenced by healthcare policy, which changes rapidly in response to medical, societal and technological developments, and varies greatly between countries (World Health Organization, 2000). In The Netherlands, for example, the way a building project in the healthcare sector is organised is undergoing a major reform due to a fundamental change in Dutch health policy introduced in 2008.

The rapidly changing context poses a need for buildings that are flexible over their lifecycle. In order to incorporate life-cycle considerations in the building design, construction technique, and facility management strategy, multidisciplinary collaboration is required. Despite attempts to establish integrated collaboration, healthcare building projects still face serious problems in practice, such as budget overruns, delays, and sub-optimal quality in terms of flexibility, end-users' dissatisfaction, and energy inefficiency. It is evident that the lack of communication and coordination between the actors involved in the different phases of a building project is among the most important reasons behind these problems. The communication between different stakeholders becomes critical, as each stakeholder possesses a different set of skills. As a result, the processes for extraction, interpretation, and communication of complex design information from drawings and documents are often time-consuming and difficult.
Advanced visualisation technologies, like 4D planning, have tremendous potential to increase the communication efficiency and interpretation ability of the project team members. However, their use as an effective communication tool is still limited and not fully explored. There are also other barriers to information transfer and integration; for instance, many existing ICT systems do not support the openness of data and structure that is a prerequisite for effective collaboration between different building actors or disciplines.

Building information modelling (BIM) offers an integrated solution to the previously mentioned problems. Therefore, BIM is increasingly used as ICT support in complex building projects. Effective multidisciplinary collaboration supported by an optimal use of BIM requires changing the roles of the clients, architects, and contractors; new contractual relationships; and re-organised collaborative processes. Unfortunately, there are still gaps in the practical knowledge of how to manage the building actors so that they collaborate effectively in their changing roles, and of how to develop and utilise BIM as optimal ICT support for the collaboration.

This paper presents a general review of the practical implications of BIM based on a literature review and case studies. In the next sections, based on the literature and recent findings from the European research project InPro, the framework for integrated collaboration and the use of BIM are analysed. Subsequently, through the observation of two ongoing pilot projects in The Netherlands, the changing roles of clients, architects, and contractors through BIM application are investigated. In conclusion, the critical success factors as well as the main barriers to successful integrated collaboration using BIM are identified.
2. Changing roles through integrated collaboration and life-cycle design approaches

A hospital building project involves various actors, roles, and knowledge domains. In The Netherlands, the changing roles of clients, architects, and contractors in hospital building projects are inevitable due to the new healthcare policy. Previously, under the Healthcare Institutions Act (WTZi), healthcare institutions were required to obtain both a license and a building permit for new construction projects and major renovations. The permit was issued by the Dutch Ministry of Health, and the healthcare institutions were then eligible to receive financial support from the government. Since 2008, new legislation on the management of hospital building projects and real estate has come into force. Under this new legislation, a permit for a hospital building project under the WTZi is no longer obligatory, nor obtainable (Dutch Ministry of Health, Welfare and Sport, 2008). This change allows more freedom from state-directed policy and, correspondingly, allocates more responsibility to the healthcare organisations for the financing and management of their real estate. The new policy implies that the healthcare institutions are fully responsible for managing and financing their building projects and real estate. The government's support for the costs of healthcare facilities will no longer be given separately, but will be included in the fee for healthcare services. This means that healthcare institutions must earn back their investment in real estate through their services. The new policy intends to stimulate sustainable innovations in the design, procurement and management of healthcare buildings, which will contribute to effective and efficient primary healthcare services.

The new strategy for building projects and real estate management endorses an integrated collaboration approach.
In order to assure sustainability during construction, use, and maintenance, the end-users, facility managers, contractors and specialist contractors need to be involved in the planning and design processes. The implications of the new strategy are reflected in the changing roles of the building actors and in the new procurement method.

In the traditional procurement method, the design and its details are developed by the architect and design engineers. The client (the healthcare institution) then sends an application to the Ministry of Health to obtain approval of the building permit and financial support from the government. Following this, a contractor is selected through a tender process that emphasises the search for the lowest-price bidder. During the construction period, changes often take place due to constructability problems in the design and new requirements from the client. Because of the high level of technical complexity, and moreover the complexity of decision making, the whole process from initiation until delivery of a hospital building project can take up to ten years. After delivery, the healthcare institution is fully in charge of the operation of the facilities. Redesigns and changes also take place in the use phase to cope with new functions and developments in the medical world.

Integrated procurement pictures a new contractual relationship between the parties involved in a building project. Instead of a relationship between the client and the architect for design, and between the client and the contractor for construction, in integrated procurement the client holds a contractual relationship only with the main party, which is responsible for both design and construction. The traditional borders between tasks and occupational groups become blurred, since architects, consulting firms, contractors, subcontractors, and suppliers all stand on the supply side of the building process, while the client stands on the demand side.
Such a configuration puts the architect, engineer and contractor in a very different position that influences not only their roles, but also their responsibilities, tasks and communication with the client, the users, the team and other stakeholders.

The transition from the traditional to the integrated procurement method requires a shift of mindset of the parties on both the demand and supply sides. It is essential for the client and contractor to have a fair and open collaboration in which both can optimally use their competencies. The effectiveness of integrated collaboration is also determined by the client's capacity and strategy to organise innovative tendering procedures.

A new challenge emerges when an architect is positioned in a partnership with the contractor instead of with the client. If the architect enters a partnership with the contractor, an important issue is how to ensure the realisation of the architectural values as well as innovative engineering through an efficient construction process. Alternatively, the architect can stand at the client's side in a strategic advisory role instead of being the designer. In this case, the architect's responsibility is to translate the client's requirements and wishes into the architectural values to be included in the design specification, and to evaluate the contractor's proposal against these. In either of these new roles, the architect holds the responsibilities of stakeholder-interest facilitator, custodian of customer value, and custodian of design models.

The transition from the traditional to the integrated procurement method also has consequences for the payment schemes. In the traditional building process, the honorarium for the architect is usually based on a percentage of the project costs; this may simply mean that the more expensive the building is, the higher the honorarium will be. The engineer receives an honorarium based on the complexity of the design and the intensity of the assignment.
A highly complex building that goes through a number of redesigns is usually favourable for the engineers in terms of honorarium. A traditional contractor usually receives the commission based on the tender to construct the building at the lowest price while meeting the minimum specifications given by the client. Extra work due to modifications is charged separately to the client. After delivery, the contractor is no longer responsible for the long-term use of the building. In the traditional procurement method, all risks are placed with the client.

In the integrated procurement method, the payment is based on the achieved building performance; thus, the payment is non-adversarial. Since the architect, engineer and contractor have a wider responsibility for the quality of the design and the building, the payment is linked to a measurement system of the functional and technical performance of the building over a certain period of time. The honorarium becomes an incentive to achieve optimal quality. If the building actors succeed in delivering a higher added value that exceeds the client's minimum requirements, they receive a bonus in accordance with the client's extra gain. The level of transparency is also improved. Open-book accounting is an excellent instrument, provided that the stakeholders agree on the information to be shared and on its level of detail (InPro, 2009).

Next to the adoption of the integrated procurement method, the new real estate strategy for hospital building projects addresses innovative product development and life-cycle design approaches. A sustainable business case for the investment and exploitation of hospital buildings relies on dynamic life-cycle management that includes consideration and analysis of the market development over time, next to the building life-cycle costs (investment/initial cost, operational cost, and logistic cost).
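As a rough, purely hypothetical illustration of how the three cost categories combine in conventional life-cycle costing, the sketch below discounts the recurring costs to a single present-value total. The figures, the discount rate, and the horizon are invented for the example; they do not come from the paper or the InPro project.

```java
// Purely hypothetical arithmetic: a conventional life-cycle cost figure as
// the initial investment plus discounted annual operational and logistic
// costs. None of the numbers or the rate come from the paper.
public class LifeCycleCost {

    // Net present cost over `years` at a fixed annual discount rate.
    public static double lifeCycleCost(double investment, double annualOperational,
                                       double annualLogistic, double rate, int years) {
        double total = investment;
        for (int year = 1; year <= years; year++) {
            total += (annualOperational + annualLogistic) / Math.pow(1.0 + rate, year);
        }
        return total;
    }

    public static void main(String[] args) {
        // Invented hospital wing: 50M investment, 2M/yr operations,
        // 0.5M/yr logistics, 4% discount rate, 30-year horizon.
        double cost = lifeCycleCost(50_000_000, 2_000_000, 500_000, 0.04, 30);
        System.out.printf("Discounted life-cycle cost: %.0f%n", cost);
    }
}
```

Dynamic life-cycle management, as described next, would extend such a calculation with revenue and market scenarios so that the objective becomes maximising total benefit rather than minimising this cost figure alone.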
Compared to the conventional life-cycle costing method, dynamic life-cycle management encompasses a shift from focusing only on minimizing the costs to maximizing the total benefit that can be gained. One of the determining factors for a successful implementation of dynamic life-cycle management is the sustainable design of the building and building components, which means that the design carries sufficient flexibility to accommodate possible changes in the long term (Prins, 1992).

Designing based on the principles of life-cycle management affects the role of the architect, as he needs to be well informed about the usage scenarios and related financial arrangements, the changing social and physical environments, and new technologies. Design needs to integrate people's activities and business strategies over time. In this context, the architect is required to align the design strategies with organisational, local and global policies on finance, business operations, health and safety, the environment, etc.

The combination of process and product innovation, and the changing roles of the building actors, can be accommodated by integrated project delivery, or IPD (AIA California Council, 2007). IPD is an approach that integrates people, systems, business structures and practices into a process that collaboratively harnesses the talents and insights of all participants to reduce waste and optimize efficiency through all phases of design, fabrication and construction. IPD principles can be applied to a variety of contractual arrangements. IPD teams will usually include members well beyond the basic triad of client, architect, and contractor. At a minimum, though, an integrated project should include tight collaboration between the client, the architect, and the main contractor ultimately responsible for construction of the project, from the early design until the project handover.
The key to a successful IPD is assembling a team that is committed to collaborative processes and is capable of working together effectively. IPD is built on collaboration. As a result, it can only be successful if the participants share and apply common values and goals.

3. Changing roles through BIM application

A building information model (BIM) comprises ICT frameworks and tools that can support integrated collaboration based on a life-cycle design approach. BIM is a digital representation of the physical and functional characteristics of a facility. As such it serves as a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its life cycle from inception onward (National Institute of Building Sciences NIBS, 2007). BIM facilitates time- and place-independent collaborative working. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of each stakeholder. BIM in its ultimate form, as a shared digital representation founded on open standards for interoperability, can become a virtual information model to be handed from the design team to the contractor and subcontractors and then to the client.

BIM is not the same as the earlier known computer-aided design (CAD). BIM goes further than an application to generate digital (2D or 3D) drawings. BIM is an integrated model in which all process and product information is combined, stored, elaborated, and interactively distributed to all relevant building actors. As a central model for all involved actors throughout the project life cycle, BIM develops and evolves as the project progresses. Using BIM, the proposed design and engineering solutions can be measured against the client's requirements and expected building performance.
The functionalities of BIM to support the design process extend to multiple dimensions (nD), including: three-dimensional visualisation and detailing, clash detection, material schedules, planning, cost estimates, production and logistic information, and as-built documents. During the construction process, BIM can support the communication between the building site, the factory and the design office, which is crucial for effective and efficient prefabrication and assembly processes, as well as for preventing or solving problems related to unforeseen errors or modifications. When the building is in use, BIM can be used in combination with intelligent building systems to provide and maintain up-to-date information on the building performance, including the life-cycle cost.

To unleash the full potential of more efficient information exchange in the AEC/FM industry in collaborative working using BIM, both high-quality open international standards and high-quality implementations of these standards must be in place. The IFC open standard is generally agreed to be of high quality and is widely implemented in software. Unfortunately, the certification process allows poor-quality implementations to be certified, and essentially renders the certified software useless for any practical usage with IFC. IFC-compliant BIM is actually used less than manual drafting by architects and contractors, and shows about the same usage among engineers. A recent survey shows that CAD (as a closed system) is still the major technique used in design work (over 60 per cent), while BIM is used in around 20 per cent of projects by architects and in around 10 per cent of projects by engineers and contractors.

The application of BIM to support an optimal cross-disciplinary and cross-phase collaboration opens a new dimension in the roles and relationships between the building actors.
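Of the nD functionalities listed above, clash detection lends itself to a compact illustration. Production BIM tools check exact solid geometry with tolerances; the axis-aligned bounding-box test below is only the simplified broad-phase idea, and the element names and coordinates are invented for the example.

```python
from typing import NamedTuple

class Box(NamedTuple):
    """Axis-aligned bounding box of a building element (metres)."""
    xmin: float
    ymin: float
    zmin: float
    xmax: float
    ymax: float
    zmax: float

def clashes(a: Box, b: Box) -> bool:
    """Two elements clash if their boxes overlap on all three axes."""
    return (a.xmin < b.xmax and b.xmin < a.xmax and
            a.ymin < b.ymax and b.ymin < a.ymax and
            a.zmin < b.zmax and b.zmin < a.zmax)

# Hypothetical elements: a duct routed through a beam's envelope,
# and a door well clear of both.
beam = Box(0.0, 0.0, 3.0, 5.0, 0.3, 3.5)
duct = Box(2.0, 0.1, 3.2, 2.4, 0.2, 3.4)
door = Box(8.0, 0.0, 0.0, 9.0, 0.2, 2.1)
print(clashes(beam, duct))  # True: the duct sits inside the beam envelope
print(clashes(beam, door))  # False
```

In practice a pass like this only filters candidate pairs; flagged pairs are then checked against the precise element geometry.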
The most relevant issues are: the new role of a model manager; the agreement on access rights and intellectual property rights (IPR); the liability and payment arrangement according to the type of contract and in relation to integrated procurement; and the use of open international standards.

Collaborative working using BIM demands a new expert role of a model manager who possesses ICT as well as construction process know-how (InPro, 2009). The model manager deals with the system as well as with the actors. He provides and maintains the technological solutions required for BIM functionalities, manages the information flow, and improves the ICT skills of the stakeholders. The model manager does not take decisions on design and engineering solutions, nor on the organisational processes, but his roles in the chain of decision making are focused on: the development of BIM, the definition of the structure and detail level of the model, and the deployment of relevant BIM tools, such as for model checking, merging, and clash detection; the contribution to collaboration methods, especially decision-making and communication protocols, task planning, and risk management; and the management of information, in terms of data flow and storage, identification of communication errors, and decision or process (re-)tracking.

Regarding the legal and organisational issues, one of the actual questions is: "In what way does the intellectual property right (IPR) in collaborative working using BIM differ from the IPR in traditional teamwork?" In terms of combined work, the IPR of each element is attached to its creator. Although it seems to be a fully integrated design, BIM actually results from a combination of works/elements; for instance, the outline of the building design is created by the architect, the design for the electrical system is created by the electrical contractor, and so on. Thus, in the case of BIM as a combined work, the IPR is similar to traditional teamwork.
Working with BIM with authorship registration functionalities may actually make it easier to keep track of the IPR.

How does collaborative working using BIM affect the contractual relationship? On the one hand, collaborative working using BIM does not necessarily change the liability position in the contract, nor does it obligate an alliance contract. The General Principles of BIM Addendum confirm: "This does not effectuate or require a restructuring of contractual relationships or shifting of risks between or among the Project Participants other than as specifically required per the Protocol Addendum and its Attachments" (ConsensusDOCS, 2008). On the other hand, changes in terms of payment schemes can be anticipated. Collaborative processes using BIM will lead to the shifting of activities to the early design phase. Much, if not all, of the activity in the detailed engineering and specification phase will be done in the earlier phases. This means that a significant payment for the engineering phase, which may count for up to 40 per cent of the design cost, can no longer be expected. As engineering work is done concurrently with the design, a new proportion of the payment in the early design phase is necessary.

4. Review of ongoing hospital building projects using BIM

In the Netherlands, the changing roles in hospital building projects are part of a strategy which aims at achieving sustainable real estate in response to the changing healthcare policy. Referring to the literature and previous research, the main factors that influence the success of the changing roles can be summarised as: the implementation of an integrated procurement method and a life-cycle design approach for a sustainable collaborative process; the agreement on the BIM structure and the intellectual rights; and the integration of the role of a model manager. The preceding sections have discussed the conceptual thinking on how to deal with these factors effectively.
This section observes two actual projects and compares the practice with the conceptual view. The main issues observed in the case studies are: the selected procurement method and the roles of the involved parties within this method; the implementation of the life-cycle design approach; the type, structure, and functionalities of BIM used in the project; the openness in data sharing and transfer of the model, and the intended use of BIM in the future; and the roles and tasks of the model manager.

The pilot experience of hospital building projects using BIM in the Netherlands can be observed at University Medical Centre St Radboud (further referred to as UMC) and Maxima Medical Centre (further referred to as MMC). At UMC, the new building project for the Faculty of Dentistry in the city of Nijmegen has been dedicated as a BIM pilot project. At MMC, BIM is used in designing new buildings for the Medical Simulation and Mother-and-Child Centre in the city of Veldhoven.

The first case is a project at the University Medical Centre (UMC) St Radboud. UMC is more than just a hospital: it combines medical services, education and research. More than 8,500 staff and 3,000 students work at UMC. As part of its innovative real estate strategy, UMC has considered using BIM for its building projects.
The new development of the Faculty of Dentistry and the surrounding buildings on the Kapittelweg in Nijmegen has been chosen as a pilot project to gather practical knowledge and experience on collaborative processes with BIM support.

The main ambitions to be achieved through the use of BIM in the building projects at UMC can be summarised as follows: using 3D visualisation to enhance the coordination and communication among the building actors, and user participation in design; integrating the architectural design with structural analysis, energy analysis, cost estimation, and planning; interactively evaluating the design solutions against the programme of requirements and specifications; reducing redesign/remake costs through clash detection during the design process; and optimising the management of the facility through the registration of medical installations and equipment, fixed and flexible furniture, product and output specifications, and operational data.

The second case is a project at the Maxima Medical Centre (MMC). MMC is a large hospital resulting from a merger between the Diaconessenhuis in Eindhoven and St Joseph Hospital in Veldhoven. Annually, the 3,400 staff of MMC provide medical services to more than 450,000 visitors and patients. A large-scale extension of the hospital in Veldhoven is part of its real estate strategy. A medical simulation centre and a women-and-children medical centre are among the most important new facilities within this extension project. The design has been developed using 3D modelling with several functionalities of BIM.

The findings from both cases and the analysis are as follows. Both UMC and MMC opted for a traditional procurement method in which the client directly contracted an architect, a structural engineer, and a mechanical, electrical and plumbing (MEP) consultant in the design team. Once the design and detailed specifications are finished, a tender procedure will follow to select a contractor.
Despite the choice of this traditional method, many attempts have been made at a closer and more effective multidisciplinary collaboration. UMC dedicated a relatively long preparation phase with the architect, structural engineer and MEP consultant before the design commenced. This preparation phase was aimed at creating a common vision on the optimal way of collaborating using BIM as ICT support. Some results of this preparation phase are: a document that defines the common ambition for the project and the collaborative working process, and a semi-formal agreement that states the commitment of the building actors to collaboration. Unlike UMC, MMC selected an architecture firm with an in-house engineering department; thus, the collaboration between the architect and the structural engineer can take place within the same firm using the same software application.

Regarding the life-cycle design approach, the main attention is given to life-cycle costs, maintenance needs, and facility management. Using BIM, both hospitals intend to get a much better insight into these aspects over the life-cycle period. The life-cycle sustainability criteria are included in the assignments for the design teams. Multidisciplinary designers and engineers are asked to collaborate more closely and to interact with the end users to address life-cycle requirements. However, ensuring that the building actors engage in an integrated collaboration to generate sustainable design solutions that meet the life-cycle…

Graduation Project Foreign Literature Translation (Original Text + Translated Text)


Environmental problems caused by Istanbul subway excavation and suggestions for remediation (伊斯坦布尔地铁开挖引起的环境问题及补救建议)

Ibrahim Ocak

Abstract: Many environmental problems caused by subway excavations have inevitably become an important point in city life. These problems can be categorized as transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust and mud, and lack of supplies. Although these problems cause many difficulties, the most pressing for a big city like Istanbul is excavation, since the other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway and disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length, with over 200 km to be constructed in the near future. The amount of material excavated from ongoing construction projects is approximately 12 million m³. In this study, the problems caused by subway excavation, primarily the problem of excavation waste (EW), are analyzed and suggestions for remediation are offered.

Graduation Project Foreign Translation: Original Text


CLUTCH

The engine produces the power to drive the vehicle. The drive line, or drive train, transfers the power of the engine to the wheels. The drive train consists of the parts from the back of the flywheel to the wheels. These parts include the clutch, the transmission, the drive shaft, and the final drive assembly (Figure 8-1).

The clutch, which includes the flywheel, clutch disc, pressure plate, springs, pressure plate cover and the linkage necessary to operate the clutch, is a rotating mechanism between the engine and the transmission (Figure 8-2). It operates through friction which comes from contact between the parts. That is the reason why the clutch is called a friction mechanism. After engagement, the clutch must continue to transmit all the engine torque to the transmission, relying on friction without slippage. The clutch is also used to disengage the engine from the drive train whenever the gears in the transmission are being shifted from one gear ratio to another.

To start the engine or shift the gears, the driver has to depress the clutch pedal in order to disengage the transmission from the engine. At that time, the driven members connected to the transmission input shaft are either stationary or rotating at a speed that is slower or faster than the driving members connected to the engine crankshaft. There is no spring pressure on the clutch assembly parts, so there is no friction between the driving members and the driven members. As the driver releases the clutch pedal, spring pressure increases on the clutch parts, and the friction between the parts increases with it. The pressure exerted by the springs on the driven members is controlled by the driver through the clutch pedal and linkage. The positive engagement of the driving and driven members is made possible by the friction between the surfaces of the members. When full spring pressure is applied, the speed of the driving and driven members should be the same.
At that moment, the clutch must act as a solid coupling device and transmit all engine power to the transmission without slipping.

However, the transmission should be engaged to the engine gradually in order to operate the car smoothly and minimize torsional shock on the drive train, because an engine at idle develops only a little power. Otherwise, the driving members would be connected with the driven members too quickly and the engine would stall.

The flywheel is a major part of the clutch. The flywheel mounts to the engine's crankshaft and transmits engine torque to the clutch assembly. The flywheel, when coupled with the clutch disc and pressure plate, makes and breaks the flow of power from the engine to the transmission. The flywheel provides a mounting location for the clutch assembly as well. When the clutch is applied, the flywheel transfers engine torque to the clutch disc. Because of its weight, the flywheel helps to smooth engine operation. The flywheel also has a large ring gear at its outer edge, which engages with a pinion gear on the starter motor during engine cranking.

The clutch disc fits between the flywheel and the pressure plate. The clutch disc has a splined hub that fits over splines on the transmission input shaft. A splined hub has grooves that match the splines on the shaft, and these splines fit in the grooves, so the two parts are held together while back-and-forth movement of the disc on the shaft remains possible. Attached to the input shaft, the disc turns at the speed of the shaft.

The clutch pressure plate is generally made of cast iron. It is round and about the same diameter as the clutch disc. One side of the pressure plate is machined smooth; this side presses the clutch disc facing against the flywheel. The outer side has various shapes to facilitate attachment of the springs and release mechanisms.
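The friction-based torque transfer described above is commonly estimated with the standard plate-clutch formula T = n * mu * F * r_mean, where n is the number of friction surfaces (two for a single disc), mu the friction coefficient of the facing, F the spring clamping force, and r_mean the mean radius of the facing. The formula is textbook background rather than something stated in this text, and the numbers below are illustrative assumptions.

```python
def clutch_torque_capacity(n_surfaces, mu, clamp_force_n,
                           r_outer_m, r_inner_m):
    """Approximate torque capacity (N*m) of a plate clutch.

    Uses the uniform-wear assumption, under which the effective
    friction radius is the mean of the inner and outer facing radii.
    """
    r_mean = (r_outer_m + r_inner_m) / 2.0
    return n_surfaces * mu * clamp_force_n * r_mean

# Assumed values: single disc (2 surfaces), mu = 0.3, 4 kN clamp
# force, facing outer/inner radii of 0.12 m and 0.08 m.
capacity = clutch_torque_capacity(2, 0.3, 4000.0, 0.12, 0.08)
print(f"{capacity:.0f} N*m")  # 240 N*m
```

The formula makes the text's point concrete: capacity scales directly with the spring force, which is why full spring pressure is needed before the clutch can carry full engine torque without slipping.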
The two primary types of pressure plate assemblies are the coil spring assembly and the diaphragm spring assembly (Figure 8-3).

In a coil spring clutch the pressure plate is backed by a number of coil springs and housed with them in a pressed-steel cover bolted to the flywheel. The springs push against the cover. Neither the driven plate nor the pressure plate is connected rigidly to the flywheel, and both can move either towards it or away from it. When the clutch pedal is depressed, a thrust pad riding on a carbon or ball thrust bearing is forced towards the flywheel. Levers are pivoted so that they engage with the thrust pad at one end and the pressure plate at the other, pulling the pressure plate back against its springs. This releases the pressure on the driven plate, disconnecting the gearbox from the engine (Figure 8-4).

Diaphragm spring pressure plate assemblies are widely used in most modern cars. The diaphragm spring is a single thin sheet of metal which yields when pressure is applied to it. When the pressure is removed the metal springs back to its original shape. The centre portion of the diaphragm spring is slit into numerous fingers that act as release levers. When the clutch assembly rotates with the engine, these fingers are flung outwards by centrifugal force and cause the levers to press against the pressure plate. During disengagement of the clutch the fingers are moved forward by the release bearing. The spring pivots over the fulcrum ring and its outer rim moves away from the flywheel. The retracting spring pulls the pressure plate away from the clutch plate, thus disengaging the clutch (Figure 8-5).

When engaged, the release bearing and the fingers of the diaphragm spring move towards the transmission.
As the diaphragm pivots over the pivot ring, its outer rim forces the pressure plate against the clutch disc so that the clutch plate is engaged to the flywheel.

The advantages of a diaphragm-type pressure plate assembly are its compactness, lower weight, fewer moving parts, less effort to engage, reduced rotational imbalance (by providing a balanced force around the pressure plate) and less chance of clutch slippage.

The clutch pedal is connected to the disengagement mechanism either by a cable or, more commonly, by a hydraulic system. Either way, pushing the pedal down operates the disengagement mechanism, which puts pressure on the fingers of the clutch diaphragm via a release bearing and causes the diaphragm to release the clutch plate. With a hydraulic mechanism, the clutch pedal arm operates a piston in the clutch master cylinder. This forces hydraulic fluid through a pipe to the clutch release cylinder, where another piston operates the clutch disengagement mechanism. The alternative is to link the clutch pedal to the disengagement mechanism by a cable.

Other parts, including the clutch fork, release bearing, bell housing, bell housing cover, and pilot bushing, are needed to couple and uncouple the transmission. The clutch fork, which connects to the linkage, actually operates the clutch. The release bearing fits between the clutch fork and the pressure plate assembly. The bell housing covers the clutch assembly. The bell housing cover fastens to the bottom of the bell housing; this removable cover allows a mechanic to inspect the clutch without removing the transmission and bell housing. A pilot bushing fits into the back of the crankshaft and holds the transmission input shaft.

A Torque Converter

There are four components inside the very strong housing of the torque converter:

1. Pump;
2. Turbine;
3. Stator;
4.
Transmission fluid.

The housing of the torque converter is bolted to the flywheel of the engine, so it turns at whatever speed the engine is running at. The fins that make up the pump of the torque converter are attached to the housing, so they also turn at the same speed as the engine. The cutaway in Figure 8-6 shows how everything is connected inside the torque converter.

The pump inside a torque converter is a type of centrifugal pump. As it spins, fluid is flung to the outside, much as the spin cycle of a washing machine flings water and clothes to the outside of the wash tub. As fluid is flung to the outside, a vacuum is created that draws more fluid in at the center.

The fluid then enters the blades of the turbine, which is connected to the transmission. The turbine causes the transmission to spin, which basically moves the car. The blades of the turbine are curved. This means that the fluid, which enters the turbine from the outside, has to change direction before it exits the center of the turbine. It is this directional change that causes the turbine to spin.

The fluid exits the turbine at the center, moving in a different direction than when it entered: opposite to the direction in which the pump (and engine) is turning. If the fluid were allowed to hit the pump, it would slow the engine down, wasting power. This is why a torque converter has a stator.

The stator resides in the very center of the torque converter. Its job is to redirect the fluid returning from the turbine before it hits the pump again. This dramatically increases the efficiency of the torque converter. The stator has a very aggressive blade design that almost completely reverses the direction of the fluid. A one-way clutch (inside the stator) connects the stator to a fixed shaft in the transmission.
Because of this arrangement, the stator cannot spin with the fluid; it can spin only in the opposite direction, forcing the fluid to change direction as it hits the stator blades.

Something a little bit tricky happens when the car gets moving. There is a point, around 40 mph (64 km/h), at which both the pump and the turbine are spinning at almost the same speed (the pump always spins slightly faster). At this point, the fluid returns from the turbine already moving in the same direction as the pump, so the stator is not needed.

Even though the turbine changes the direction of the fluid and flings it out the back, the fluid still ends up moving in the direction that the turbine is spinning, because the turbine is spinning faster in one direction than the fluid is being pumped in the other direction. If you were standing in the back of a pickup moving at 60 mph, and you threw a ball out the back of that pickup at 40 mph, the ball would still be going forward at 20 mph. This is similar to what happens in the turbine: the fluid is being flung out the back in one direction, but not as fast as it was going to start with in the other direction. At these speeds, the fluid actually strikes the back sides of the stator blades, causing the stator to freewheel on its one-way clutch so it doesn't hinder the fluid moving through it.

Benefits and Weak Points

In addition to the very important job of allowing a car to come to a complete stop without stalling the engine, the torque converter actually gives the car more torque when you accelerate out of a stop. Modern torque converters can multiply the torque of the engine by two to three times. This effect only happens when the engine is turning much faster than the transmission. At higher speeds, the transmission catches up to the engine, eventually moving at almost the same speed.
Ideally, though, the transmission would move at exactly the same speed as the engine, because this difference in speed wastes power. This is part of the reason why cars with automatic transmissions get worse gas mileage than cars with manual transmissions. To counter this effect, some cars have a torque converter with a lockup clutch. When the two halves of the torque converter get up to speed, this clutch locks them together, eliminating the slippage and improving efficiency.
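The pickup-truck analogy in the passage above is just vector addition of velocities along one axis, and the same arithmetic explains why fluid flung backward out of a fast-spinning turbine can still move forward relative to the ground. A minimal sketch of the text's own numbers:

```python
def ground_speed(vehicle_speed_mph, throw_speed_mph):
    """Ground speed of an object thrown backward from a moving vehicle.

    The backward throw speed subtracts from the vehicle speed; a
    positive result means the object still travels in the vehicle's
    direction of motion.
    """
    return vehicle_speed_mph - throw_speed_mph

# The text's example: pickup at 60 mph, ball thrown backward at 40 mph.
print(ground_speed(60, 40))  # 20: the ball still moves forward at 20 mph
```

The same subtraction, applied to the turbine, shows when the fluid starts striking the back sides of the stator blades and the stator begins to freewheel.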

Graduation Project Foreign Material Translation: Translated Text


Graduation Project Foreign Material Translation (II). Source: Jules Houde, "Sustainable development slowed down by bad construction practices and natural and technological disasters". 2. Translated text:

Durability of Concrete Structures

Even concrete, which engineers regard as the most durable and rational of building materials, is under certain conditions vulnerable to a series of adverse factors such as cracking, reinforcement corrosion, and chemical attack.

In recent years, various cases of inadequate durability of concrete structures have been reported. Particularly alarming are the growing signs of premature deterioration of concrete structures. The annual cost of maintaining the durability of concrete keeps rising: recent domestic and international surveys reveal that these costs doubled during the 1980s and were expected to triple in the 1990s.

The increasing number of durability failures has caught the concrete industry off guard. Concrete structures represent not only a huge investment by society, but also the costs that may be incurred if durability problems are not resolved in time; moreover, since concrete is the principal construction material, its durability problems may lead to unfair global competition, damage to the industry's reputation, and similar issues.

The international concrete industry is therefore strongly urged to develop and implement sound measures against the current durability problems, a twofold challenge: to find effective measures against the threat that premature deterioration poses to the remaining service life of existing structures; and to incorporate new structural knowledge, experience, and research results into the monitoring of structural durability, so as to ensure the required service performance of future concrete structures.

Everyone involved in the planning, design, and construction process should have the opportunity to acquire at least a basic understanding of the possible deterioration processes and the parameters that govern them. This basic knowledge is the prerequisite for making the right decisions at the right time to ensure the durability requirements of concrete structures.

Protection of Reinforcement

The steel reinforcement in concrete is protected from corrosion by an alkaline passive layer (pH above 12.5). This passive layer prevents the steel from dissolving. Thus, even when all other conditions are met (mainly oxygen and moisture), corrosion of the reinforcement remains impossible. Carbonation of the concrete or the action of chloride ions can lower the pH locally or over larger areas. When the pH at the reinforcement drops below 9, or the chloride content exceeds a critical value, the passive layer and its corrosion protection break down, and corrosion of the reinforcement becomes possible.
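The depassivation conditions just described (a stable passive layer at pH above about 12.5; corrosion becoming possible once the pH at the steel drops below 9 or the chloride content exceeds a critical value) can be written as a simple screening check. The chloride limit of 0.4 % by weight of cement used below is a commonly quoted figure adopted here only as an illustrative assumption, not a value from this text.

```python
def corrosion_possible(ph_at_rebar, chloride_pct_cement,
                       chloride_limit_pct=0.4):
    """Screening check for loss of reinforcement passivation.

    Corrosion becomes possible once the passive layer is lost:
    either the pH at the steel falls below 9 (e.g. through
    carbonation of the cover) or the chloride content exceeds a
    critical value. The other prerequisites for actual corrosion
    (oxygen and moisture) are not modelled here.
    """
    return ph_at_rebar < 9.0 or chloride_pct_cement > chloride_limit_pct

print(corrosion_possible(12.8, 0.1))  # False: passive layer intact
print(corrosion_possible(8.5, 0.1))   # True: carbonated cover
print(corrosion_possible(12.8, 0.8))  # True: chloride above the limit
```

A real assessment would of course use measured carbonation depth and chloride profiles rather than single point values; the check above only encodes the two threshold rules stated in the text.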

Graduation Project Foreign Literature Translation [Template]


Graduation Project (Thesis) Foreign Material Translation. Department: ___ Major: ___ Class: ___ Name: ___ Student No.: ___ Source: (see below). Attachments: 1. Original text; 2. Translated text. March 2013.

Attachment 1: A Rapidly Deployable Manipulator System

Christiaan J. J. Paredis, H. Benjamin Brown, Pradeep K. Khosla

Abstract: A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly create a manipulator which is custom-tailored for a given task. This article describes two main aspects of such a system, namely, the Reconfigurable Modular Manipulator System (RMMS) hardware and the corresponding control software.

1 Introduction

Robot manipulators can be easily reprogrammed to perform different tasks, yet the range of tasks that can be performed by a manipulator is limited by its mechanical structure. For example, a manipulator well suited for precise movement across the top of a table would probably not be capable of lifting heavy objects in the vertical direction. Therefore, to perform a given task, one needs to choose a manipulator with an appropriate mechanical structure. We propose the concept of a rapidly deployable manipulator system to address the above-mentioned shortcomings of fixed-configuration manipulators.

Undergraduate Graduation Project Foreign Translation


Section 3: Design philosophy, design method and earth pressures

3.1 Design philosophy

3.1.1 General

The design of earth retaining structures requires consideration of the interaction between the ground and the structure. It requires the performance of two sets of calculations:

1) a set of equilibrium calculations to determine the overall proportions and the geometry of the structure necessary to achieve equilibrium under the relevant earth pressures and forces;

2) structural design calculations to determine the size and properties of the structural sections necessary to resist the bending moments and shear forces determined from the equilibrium calculations.

Both sets of calculations are carried out for specific design situations (see 3.2.2) in accordance with the principles of limit state design. The selected design situations should be sufficiently severe and varied so as to encompass all reasonable conditions which can be foreseen during the period of construction and the life of the retaining wall.

3.1.2 Limit state design

This code of practice adopts the philosophy of limit state design. This philosophy does not impose upon the designer any special requirements as to the manner in which the safety and stability of the retaining wall may be achieved, whether by overall factors of safety, or partial factors of safety, or by other measures. Limit states (see 1.3.13) are classified into:

a) ultimate limit states (see 3.1.3);
b) serviceability limit states (see 3.1.4).

Typical ultimate limit states are depicted in Figure 3. Rupture states which are reached before collapse occurs are, for simplicity, also classified and treated as ultimate limit states. Ultimate limit states include:

a) instability of the structure or any part of it, including supports and foundations, considered as a rigid body;
b) failure by rupture of the structure or any part of it, including supports and foundations.

3.1.3 Ultimate limit states

3.1.3.1 General

The following ultimate limit states should be considered.
Failure of a retaining wall as a result of:

a) instability of the earth mass, e.g. a slip failure, overturning or a rotational failure where the disturbing moments on the structure exceed the restoring moments, a translational failure where the disturbing forces (see 1.3.8) exceed the restoring forces, and a bearing failure. Instability of the earth mass involving a slip failure may occur where:

1) the wall is built on sloping ground which itself is close to limiting equilibrium; or
2) the structure is underlain by a significant depth of clay whose undrained strength increases only gradually with depth; or
3) the structure is founded on a relatively strong stratum underlain by weaker strata; or
4) the structure is underlain by strata within which high pore water pressures may develop from natural or artificial sources.

b) failure of structural members, including the wall itself, in bending or shear;

c) excessive deformation of the wall or ground such that adjacent structures or services reach their ultimate limit state.

3.1.3.2 Analysis method

Where the mode of failure involves a slip failure, the methods of analysis for the stability of slopes are described in BS 6031 and in BS 8081. Where the mode of failure involves a bearing capacity failure, the calculations should establish an effective width of foundation. The bearing pressures as determined from 4.2.2 should not exceed the ultimate bearing capacity in accordance with BS 8004.

Where the mode of failure is by translational movement, with passive resistance excluded, stable equilibrium should be achieved using the design shear strength of the soil in contact with the base of the earth retaining structure. Where the mode of failure involves a rotational or translational movement, the stable equilibrium of the earth retaining structure depends on the mobilization of shear stresses within the soil. The full mobilization of the soil shear strength gives rise to limiting active and passive thrusts.
These limiting thrusts act in concert on the structure only at the point of collapse, i.e. the ultimate limit state.

3.1.4 Serviceability limit states
The following serviceability limit states should be considered:
a) substantial deformation of the structure;
b) substantial movement of the ground.
The soil deformations which accompany the full mobilization of shear strength in the surrounding soil are large in comparison with the normally acceptable strains in service. Accordingly, for most earth retaining structures the serviceability limit state of displacement will be the governing criterion for a satisfactory equilibrium, and not the ultimate limit state of overall stability. However, although it is generally impossible or impractical to calculate displacements directly, serviceability can be sufficiently assured by limiting the proportion of available strength actually mobilized in service, by the methods given in 3.2.4 and 3.2.5.
The design earth pressures used for serviceability limit state calculations will differ from those used for ultimate limit state calculations only where structures are to be subjected to differing design values of external loads (generally surcharge and live loads) for the ultimate limit state and for the serviceability limit state.

3.1.5 Limit states and compatibility of deformations
The deformation of an earth retaining structure is important because it has a direct effect upon the forces on the structure, the forces from the retained soil and the forces which result when the structure moves against the soil. The structural forces and bending moments due to earth pressures reduce as deformation of the structure increases.
The maximum earth pressures on a retaining structure occur during working conditions, and the necessary equilibrium calculations (see 3.2.1) are based on the assumption that earth pressures greater than fully active pressure (see 1.3.11) and less than fully passive will act on the retaining structure during service.
As the ultimate limit state with respect to soil pressures is approached, with sufficient deformation of the structure, the active earth pressure (see 1.3.1) in the retained soil reduces to the fully active pressure and the passive resistance (see 1.3.15) tends to increase to the full available passive resistance (see 1.3.12).
The compatibility of deformation of the structure and the corresponding earth pressures is important where the form of structure, for example a propped cantilever wall, prevents the occurrence of fully active pressure at the prop. It is also particularly important where the structure behaves as a brittle material and loses strength as deformation increases, such as an unreinforced mass gravity structure, or where the soil is liable to strain softening as deformation increases.

3.1.6 Design values of parameters
These are applicable at the specified limit states in the specified design situations. All elements of safety and uncertainty should be incorporated into the design values.
The selection of design values for soil parameters should take account of:
a) the possibility of unfavorable variations in the values of the parameters;
b) the independence or interdependence of the various parameters involved in the calculation;
c) the quality of workmanship and level of control specified for the construction.

3.1.7 Applied loads
The design value for the density of fill materials should be a pessimistic or unfavorable assessment of actual density.
For surcharges and live loadings, different values may be appropriate for the differing conditions of serviceability and ultimate limit states and for different load combinations. The intention of this code of practice is to determine those earth pressures which will not be exceeded in a limit state, if external loads are correctly predicted. External loads, such as structural dead loads or vehicle surcharge loads, may be specified in other codes as nominal or characteristic values.
Some of the structural codes with which this code interfaces specify different load factors to be applied for serviceability or ultimate limit state checks and for different load combinations; see 3.2.7. Design values of loads, derived by factoring or otherwise, are intended here to be the most pessimistic or unfavorable loads which should be used in the calculations for the structure. Similarly, when external loads act on the active or retained side of the wall, these same external loads should be derived in the same way. The soil is then treated as forming part of the whole structural system.

3.1.8 Design soil strength (see 1.3.4)
Assessment of the design values depends on the required or anticipated life of the structure, but account should also be taken of the short-term conditions which apply during and immediately following the period of construction. Single design values of soil strength should be obtained from a consideration of the representative values for peak and ultimate strength. The value so selected will satisfy, simultaneously, the considerations of ultimate and serviceability limit states. The design value should be the lower of:
a) that value of soil strength, on the stress-strain relation leading to peak strength, which is mobilized at soil strains acceptable for serviceability. This can be expressed as the peak strength reduced by a mobilization factor M, as given in 3.2.4 or 3.2.5; or
b) that value which would be mobilized at collapse, after significant ground movements. This can generally be taken to be the critical state strength.
Design values selected in this way should be checked to ensure that they conform to 3.1.6.
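The "lower of" rule in 3.1.8 a) and b) can be sketched numerically. This is an illustrative helper only: the function name, the choice of tan φ' as the strength measure, and the default M = 1.2 (taken from 3.2.5 below) are assumptions of the sketch, not prescriptions of the code.

```python
import math

def design_strength_tan_phi(peak_phi_deg: float,
                            critical_state_phi_deg: float,
                            m: float = 1.2) -> float:
    """Design soil strength per the 'lower of' rule in 3.1.8, expressed
    as tan(phi'):
    a) the peak strength reduced by a mobilization factor M; versus
    b) the critical state strength; take the lower of the two.
    """
    mobilized_peak = math.tan(math.radians(peak_phi_deg)) / m
    critical_state = math.tan(math.radians(critical_state_phi_deg))
    return min(mobilized_peak, critical_state)
```

For a soil with a representative peak φ' of 35° and a critical state φ' of 30°, the critical state value governs (tan 30° ≈ 0.577 is below tan 35° / 1.2 ≈ 0.583); with peak 30° and critical state 28°, the mobilized peak value governs instead.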
Design values should not exceed representative values of the fully softened critical state soil strength.

3.1.9 Design earth pressures
The design values of lateral earth pressure are intended to give an overestimate of the earth pressure on the active or retained side and an underestimate of the earth resistance on the passive side for small deformations of the structure as a whole, in the working state. Earth pressures reduce as fully active conditions are mobilized at peak soil strength in the retained soil, under deformations larger than can be tolerated for serviceability. As collapse threatens, the retained soil approaches a critical state, in which its strength reduces to that of loose material, and the earth pressures consequently tend to increase once more to active values based on critical state strength.
The initial presumption should be that the design earth pressure will correspond to that arising from the design soil strength, see 3.1.8. But the mobilized earth pressure in service will, for some walls, exceed these values. This enhanced earth pressure will control the design, for example:
a) Where clays may swell in the retained soil zone, or be subject to the effects of compaction in layers, larger earth pressures may occur in that zone, causing corresponding resistance from the ground, propping forces, or anchor tensions to increase so as to maintain overall equilibrium.
b) Where clays may have lateral earth pressures in excess of the assessed values, taking account of earth pressures prior to construction and the effects of wall installation and soil excavation or filling, the earth pressure in retained soil zones will be increased to maintain overall equilibrium.
c) Where both the wall and backfill are placed on compressible soils, differential settlement due to consolidation may lead to rotation of the wall into the backfill.
This increases the earth pressures in the retained zone.
d) Where the structure is particularly stiff, for example fully piled box-shaped bridge abutments, higher earth pressures, caused for example by compaction, may be preserved, notwithstanding that the degree of wall displacement or flexibility required to reduce retained earth pressures to their fully active values in cohesionless materials is only of the order of a rotation of 10^-3 radians.
In each of these cases, mobilized soil strengths will increase as deformations continue, so the unfavorable earth pressure conditions will not persist as collapse approaches.
The design earth pressures are derived from design soil strengths using the usual methods of plastic analysis, with the earth pressure coefficients (see 1.3.9) given in this code of practice being based on Kerisel & Absi (1990). The same design earth pressures are used in the default condition for the design of structural sections, see 3.2.7.

3.2 Design method

3.2.1 Equilibrium calculations
In order to determine the geometry of the retaining wall, for example the depth of penetration of an embedded wall (see 1.3.10), equilibrium calculations should be carried out for carefully formulated design situations. The design calculations relate to a free-body diagram of forces and stresses for the whole retaining wall. The design calculations should demonstrate that there is global equilibrium of vertical and horizontal forces, and of moments. Separate calculations should be made for different design situations.
The structural geometry of the retaining wall and the equilibrium calculations should be determined from the design earth pressures derived from the design soil strength using the appropriate earth pressure coefficients.
Design earth pressures will lead to active and passive pressure diagrams of the type shown in figure 4. The earth pressure distribution should be checked for global equilibrium of the structure.
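As an illustration of the global equilibrium check just described, the following sketch computes the prop force for a highly simplified propped wall with a triangular active pressure diagram. The Rankine coefficient for a smooth vertical wall and level cohesionless backfill is an assumption of this sketch; the code itself takes its coefficients from Kerisel & Absi.

```python
import math

def rankine_ka(phi_deg: float) -> float:
    """Active earth pressure coefficient, Rankine form (assumed here:
    smooth vertical wall, level cohesionless backfill)."""
    phi = math.radians(phi_deg)
    return (1.0 - math.sin(phi)) / (1.0 + math.sin(phi))

def prop_force(gamma: float, h: float, phi_deg: float) -> float:
    """Prop force (kN per metre run) for a wall retaining height h with a
    single prop at the top, from moment equilibrium about the base.

    The resultant active thrust Pa = 0.5 * Ka * gamma * h**2 acts at h/3
    above the base, so F_prop * h = Pa * h / 3, i.e. F_prop = Pa / 3.
    """
    pa = 0.5 * rankine_ka(phi_deg) * gamma * h ** 2
    return pa / 3.0
```

For γ = 20 kN/m³, h = 6 m and φ' = 30° (Ka = 1/3), the active thrust is 120 kN/m and the prop carries 40 kN/m; the base reaction carries the remainder, which the horizontal and vertical force checks of 3.2.1 would confirm.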
Horizontal force equilibrium and moment equilibrium will give the prop force in figure 4a and the location of the point of reversed stress conditions near the toe in figure 4b. Vertical force equilibrium should also be checked.

3.2.2 Design situations

3.2.2.1 General
The specification of design situations should include the disposition and classification of the various zones of soil and rock and the elements of construction which could be involved in a limit state event. The specification of design situations should follow a consideration of all uncertainties and the risk factors involved, including the following:
a) the loads and their combinations, e.g. surcharge and/or external loads on the active or retained side of the wall;
b) the geometry of the structure and the neighbouring soil bodies, representing the worst credible conditions, for example over-excavation during or after construction;
c) the material characteristics of the structure, e.g. following corrosion;
d) effects due to the environment within which the design is set, such as:
- ground water levels, including their variations due to the effects of dewatering, possible flooding or failure of any drainage system;
- scour, erosion and excavation, leading to changes in the geometry of the ground surface;
- chemical corrosion;
- weathering;
- freezing;
- the presence of gases emerging from the ground;
- other effects of time and environment on the strength and other properties of materials;
e) earthquakes;
f) subsidence due to mining or other causes;
g) the tolerance of the structure to deformations;
h) the effect of the new structure on existing structures or services and the effect of existing structures or services on the new structure;
i) for structures resting on or near rock, the consideration of:
- interbedded hard and soft strata;
- faults, joints and fissures;
- solution cavities such as swallow holes or fissures filled with soft material, and continuing solution processes.

3.2.2.2 Minimum surcharge and minimum unplanned excavation
In checking the stable equilibrium and soil deformation, all walls should be designed for a minimum design surcharge loading of 10 kN/m2 and a minimum depth of excavation in front of the wall, which should be:
a) not less than 0.5 m; and
b) not less than 10 % of the total height retained for cantilever walls, or of the height retained below the lowest support level for propped or anchored walls.
These minimum values should be reviewed for each design and more adverse values adopted in particularly critical or uncertain circumstances. The requirement for an additional or unplanned excavation as a design criterion is to provide for unforeseen and accidental events. Foreseeable excavations, such as service or drainage trenches in front of a retaining wall, which may be required at some stage in the life of the structure, should be treated as a planned excavation. Actual excavation beyond the planned depth is outside the design considerations of this code.

3.2.2.3 Water pressure regime
The water pressure regime used in the design should be the most onerous that is considered to be reasonably possible.

3.2.3 Calculations based on total and effective stress parameters
The changes in loading associated with the construction of a retaining wall may result in changes in the strength of the ground in the vicinity of the wall. Where the mass permeability of the ground is low, these changes of strength take place over some time, and therefore the design should consider conditions in both the short- and long-term. Which condition will be critical depends on whether the changes in load applied to the soil mass cause an increase or decrease in soil strength. The long-term condition is likely to be critical where the soil mass undergoes a net reduction in load as a result of excavation, such as adjacent to a cantilever wall.
Conversely, where the soil mass is subject to a net increase in loading, such as beneath the foundation of a gravity or reinforced stem wall at ground level, the short-term condition is likely to be critical for stability. When considering long-term earth pressures and equilibrium, allowance should be made for changes in ground water conditions and pore water pressure regime which may result from the construction of the works or from other agencies.
Calculations for long-term conditions require shear strength parameters in terms of effective stress and should take account of a range of water pressures based on considerations of possible seepage flow conditions within the earth mass. Effective stress methods can also be used to assess the short-term conditions provided the pore water pressures developed during construction are known. A total stress method of analysis may be used to assess the short-term conditions in clays and soils of low permeability, but an inherent assumption of this method is that there will be no change in the soil strength as a result of the changes in load caused by the construction. For granular materials and soils of high permeability, all excess pore water pressure will dissipate rapidly, so that the relevant strength is always the drained strength and the earth pressure and equilibrium calculations are always in terms of effective stresses.

3.2.4 Design using total stress parameters
The retaining wall should be designed to be in equilibrium when based on a mobilized undrained strength (design cu) which does not exceed the representative undrained strength divided by a mobilization factor M.
The value of M should not be less than 1.5 if wall displacements are required to be less than 0.5 % of the wall height. The value of M should be larger than 1.5 for clays which require large strains to mobilize their peak strength.

3.2.5 Design using effective stress parameters
The retaining wall should be designed to be in equilibrium mobilizing a soil strength the lesser of:
a) the representative peak strength of the soil divided by a factor M = 1.2, that is:

design tan φ' = (representative tan φ'max) / M    (3)

design c' = (representative c') / M    (4)

or
b) the representative critical state strength of the soil.
This will ensure that, for soils which are medium dense or firm, the wall displacements in service will be limited to 0.5 % of the wall height. The mobilization factor of 1.2 should be used in conjunction with the 'unplanned' excavation in front of the wall, the minimum surcharge loading and the water pressure regime, see 3.2.2.2 and 3.2.2.3. A more detailed analysis of displacement should be undertaken where more stringent displacement criteria are to be applied or for soft or loose soils. The criteria a) and b), taken together, should provide a sufficient reserve of safety against small unforeseen loads and adverse conditions.
In stiff clays subject to cycles of strain, such as through seasonal variation of pore water pressure, the long-term peak strength may deteriorate to the critical state strength.
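Equations (3) and (4), together with the total stress rule of 3.2.4, can be applied numerically as below; the function names are illustrative.

```python
import math

def design_phi_deg(representative_phi_deg: float, m: float = 1.2) -> float:
    """Equation (3): design tan(phi') = representative tan(phi') / M,
    returned as an angle in degrees."""
    tan_design = math.tan(math.radians(representative_phi_deg)) / m
    return math.degrees(math.atan(tan_design))

def design_c_effective(representative_c_kpa: float, m: float = 1.2) -> float:
    """Equation (4): design c' = representative c' / M."""
    return representative_c_kpa / m

def design_cu(representative_cu_kpa: float, m: float = 1.5) -> float:
    """3.2.4: design cu = representative cu / M, with M not less than 1.5
    where wall displacements must stay below 0.5 % of wall height."""
    if m < 1.5:
        raise ValueError("M should not be less than 1.5")
    return representative_cu_kpa / m
```

A representative φ' of 30° (tan ≈ 0.577) factors down to a design angle of about 25.7°; note that the factor applies to tan φ', not to the angle itself, so the reduction in degrees is smaller than dividing the angle by 1.2 would suggest.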
The requirements of a) and b) above are sufficiently cautious to accommodate this possibility.

3.2.6 Design values of wall friction, base friction and undrained wall adhesion
These should be derived from the representative strength determined in accordance with 2.2.8, using the same mobilization factors as for the adjacent soil.
The design value of the friction or adhesion mobilized at an interface with the structure should be the lesser of:
a) the representative value determined by tests as described in 2.2.8, if such test results are available; or
b) 75 % of the design shear strength to be mobilized in the soil itself, that is using:

design tan δ = 0.75 × design tan φ'    (5)

design cw = 0.75 × design cu    (6)

Since for the soil mass:

design tan φ' = (representative tan φ') / 1.2    (7)

this is equivalent to:

design δ ≈ (2/3) × representative φ'    (8)

Similarly, in total stress analysis:

design cw = 0.5 × representative cu, after taking M = 1.5    (9)

The friction or adhesion which can be mobilized in practice is generally less than the value deduced on the basis of soil sliding against the relevant surface. It is unlikely, for example, that a cantilever wall will remain at constant elevation while the active soil zone subsides, creating full downward wall friction on the retained side, and the passive zone heaves, creating full upward wall friction on the excavated side. It is more likely that the wall would move vertically with one or other soil zone, reducing friction on that side, and thereby attaining vertical force equilibrium. The 25 % reduction in the design shear strength in b) above makes an allowance for this possibility. Further reductions, and even the elimination of wall friction or its reversal, may be necessary when soil structure interaction is taken into account.
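Equations (5) to (9) chain together: the 0.75 interface factor on the design strength, combined with M = 1.2, gives a combined factor of 0.625 and hence the familiar δ ≈ (2/3) φ' of equation (8). A numerical sketch with illustrative names:

```python
import math

def design_wall_friction_deg(representative_phi_deg: float, m: float = 1.2) -> float:
    """Equations (5) and (7): tan(delta_design) = 0.75 * tan(phi'_design)
    = 0.75 * tan(phi'_representative) / M. With M = 1.2 the combined
    factor is 0.625, so delta_design is close to (2/3) of the
    representative phi' for typical friction angles (equation (8))."""
    tan_delta = 0.75 * math.tan(math.radians(representative_phi_deg)) / m
    return math.degrees(math.atan(tan_delta))

def design_wall_adhesion_kpa(representative_cu_kpa: float, m: float = 1.5) -> float:
    """Equations (6) and (9): cw_design = 0.75 * cu_design
    = 0.75 * cu_representative / M = 0.5 * cu_representative at M = 1.5."""
    return 0.75 * representative_cu_kpa / m
```

For a representative φ' of 30° the design wall friction comes out near 19.9°, against (2/3) × 30° = 20° from the approximation in equation (8).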
Wall friction on the retained or active side should be excluded when the wall is capable of penetrating deeper, due to the vertical thrust imparted by inclined anchors on an embedded wall or by structural loads on a basement wall, or where a clay soil may heave due to swelling during outward movement of the wall. Wall friction on the passive side should be excluded when the wall is prevented from sinking but the adjacent soil may fail to heave, due for example to settlement of loose granular soils induced by cyclic loads, or when the wall is free to move upwards with the passive soil zone, as may happen with buried anchor blocks.

3.2.7 Design to structural codes
The earth pressures to be used in structural design calculations are the most severe earth pressures determined for the serviceability limit state, see 3.1.9. These are the most severe that can credibly occur under the design situations, see 3.2.2. Accordingly, the application of partial load factors to the bending moments and internal forces derived from these earth pressures is not normally required. Having determined the earth pressures using design loads and design soil strengths, the structural load effects (bending moments and shears) can be calculated using equilibrium principles in the usual way, without applying any further factors. Finally, the material properties and sections should be derived from the load effects according to the structural codes. Reference should be made to the documentary source for the loadings, such as BS 5400: Part 4, for guidance on the respective design values.
Structural design calculations based upon the ultimate limit state assume that the moments and forces applicable at the ultimate limit state are significantly larger than at the serviceability limit state. BS 8110: Part 1 and Part 2, BS 5400: Part 4 and BS 5950: Part 1 and Part 5 make this assumption.
At the ultimate limit state, the earth pressures on the active or retained side are not a maximum. Because the structural forces and bending moments due to earth pressures reduce as deformation of the structure increases, the most severe earth pressures, which are usually determined for the serviceability limit state, also apply to the ultimate limit state structural design calculations. The design at the serviceability limit state for flexible structures, such as steel or reinforced and prestressed concrete, may be undertaken in a like manner to the analysis in 3.1 to 3.4 of BS 8110: Part 2: 1985.
For gravity mass walls such as masonry structures, which are relatively rigid, the earth pressures on the retained or active side are likely to be higher than the fully active values in the working state. The earth pressures at serviceability and ultimate limit states will be similar, because the displacement criteria will be similar.

3.3 Disturbing forces

3.3.1 General
The disturbing forces to be taken into account in the equilibrium calculations are the earth pressures on the active or retained side of the wall, together with loads due to the compaction of the fill (if any) behind the wall, surcharge loads, external loads and, last but by no means least, the water pressure.

3.3.2 At-rest earth pressures
The earth pressures which act on retaining walls, or parts of retaining walls, below existing ground depend on the initial or at-rest state of stress in the ground. For an undisturbed soil at a state of rest, the ratio of the horizontal to vertical stress depends on the type of soil, its geological origin, the temporary loads which may have acted on the surface of the soil and the topography. It may be assessed from soil suction and from empirical correlations with in situ tests, including the static cone and the dilatometer.
The value of Ki depends on the type of soil, its geological history, the temporary loads which may have acted on the ground surface, the topography, and changes in ground strain or ground water regime due to natural or artificial causes.
Where there has been no lateral strain within the ground, Ki can be equated with K0, the coefficient of earth pressure at rest, determinable from one-dimensional consolidation and swelling tests conducted in a stress-path triaxial test using appropriate stress cycles. For normally consolidated soils, both granular and cohesive:

K0 = 1 - sin φ'    (10)

For overconsolidated soils, K0 is larger and may approach the passive value at shallow depths in a heavily overconsolidated clay (see for example Lambe and Whitman, quoting Hendron, and Wroth 1975).
Ki is not used directly in earth retaining structure design because the construction process always modifies this initial value. The value of Ki is, however, important in assessing the degree of deformation which will be induced as the earth pressure tends towards active or passive states. In normally consolidated soil the ground deformation necessary to mobilize the active condition will be small in relation to that required to mobilize the full passive resistance, while in heavily overconsolidated soil the required ground deformation will be of similar magnitude. Additional ground deformation is necessary for the structure to approach a failure condition, with the earth pressures moving further towards their limiting active and passive values.
Where a stressed support system is employed (e.g. ground anchorage), the partial mobilization of the active state on the retained side is reversed during installation of the system and, in the zone of support, the effective stress ratio in the soil may pass back through the original value of K0 and tend toward the value of Kp.

3.3.3 Active earth pressures

3.3.3.1 General
Active earth pressures are generally assumed to increase linearly with increasing depth.
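Equation (10) and the overconsolidation remark can be illustrated as follows. The normally consolidated form is equation (10) itself; the OCR extension shown is the common Mayne and Kulhawy empirical correlation, which is an assumption of this sketch and not part of the code text.

```python
import math

def k0_normally_consolidated(phi_eff_deg: float) -> float:
    """Equation (10): K0 = 1 - sin(phi') for normally consolidated soil,
    both granular and cohesive."""
    return 1.0 - math.sin(math.radians(phi_eff_deg))

def k0_overconsolidated(phi_eff_deg: float, ocr: float) -> float:
    """Common empirical extension (assumed, not from the code text):
    K0 = (1 - sin phi') * OCR ** sin(phi'). Larger OCR pushes K0 upward,
    consistent with the remark that K0 may approach the passive value in
    heavily overconsolidated clay."""
    s = math.sin(math.radians(phi_eff_deg))
    return (1.0 - s) * ocr ** s
```

For φ' = 30° the normally consolidated K0 is 0.5; at OCR = 4 the extension gives K0 = 1.0, already at the isotropic stress state and heading toward passive values as OCR grows.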
However, there may be variations from a linear relationship as a consequence, for example, of wall flexure. This can result in reduced bending moments in the structure where the structure is flexible.
Where deformations of the retaining structure are caused by transient loads, as encountered in highway structures, locked-in moments may remain after the load has been removed. These locked-in stresses will accumulate under repeated loading. This effect will limit the application of reduced bending moments in such structures.
The design soil strength, derived in accordance with 3.1.8, should be used in evaluating the active earth pressure.

3.3.3.2 Cohesionless soil
The basic formula for active pressure is applicable in the following simple situation:
- uniform cohesionless soil;
- no water pressure;
- mode of deformation such that earth pressure increases linearly with

Graduation Project Foreign Literature Translation (Sample)


Dalian University of Science and Technology, Graduation Project (Thesis) Foreign Literature Translation. Student name; major and class; supervisor; title; affiliation; head of teaching and research section; date of completion: 15 April 2016.

Translation Equivalence
Despite the fact that the world is becoming a global village, translation remains a major way for languages and cultures to interact and influence each other. And name translation, especially government name translation, occupies a quite significant place in international exchange.
Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text. While interpreting (the facilitation of oral or sign-language communication between users of different languages) antedates writing, translation began only after the appearance of written literature. There exist partial translations of the Sumerian Epic of Gilgamesh (ca. 2000 BCE) into Southwest Asian languages of the second millennium BCE. Translators always risk inappropriate spill-over of source-language idiom and usage into the target-language translation. On the other hand, spill-overs have imported useful source-language calques and loanwords that have enriched the target languages. Indeed, translators have helped substantially to shape the languages into which they have translated. Due to the demands of business documentation consequent to the Industrial Revolution that began in the mid-18th century, some translation specialties have become formalized, with dedicated schools and professional associations. Because of the laboriousness of translation, since the 1940s engineers have sought to automate translation (machine translation) or to mechanically aid the human translator (computer-assisted translation). The rise of the Internet has fostered a world-wide market for translation services and has facilitated language localization.
It is generally accepted that translation, rather than being a separate entity, blossoms under such circumstances as culture, societal functions, politics and power relations.
Nowadays, the field of translation studies is immersed in abundantly diversified translation standards, some of them presented by renowned figures and rather authoritative. In translation practice, however, how should we select the so-called translation standards to serve as our guidelines in the translation process, and how should we adopt translation standards to evaluate a translation product?
In the macro-context of the flourishing of linguistic theories, theorists in the translation circle keep to the golden law of the principle of equivalence. The theory of Translation Equivalence is the central issue in western translation theories, and the presentation of this theory has given great impetus to the development and improvement of translation theory. It is not difficult for us to discover that it is the theory of Translation Equivalence that serves as the guideline in government name translation in China. Name translation, as defined, is the replacement of the name in the source language by an equivalent name or other words in the target language. Translating Chinese government names into English, similarly, is replacing the Chinese government name with an equivalent in English.
Metaphorically speaking, translation is often described as a moving trajectory going from A to B along a path, or as a container carrying something across from A to B. This view is commonly held by both translation practitioners and theorists in the West. In this view, they do not expect that this trajectory or content will change its identity as it moves or as it is carried. In China, to translate is also normally understood by many people as "to translate the whole text sentence by sentence and paragraph by paragraph, without any omission, addition, or other changes". In both views, the source text and the target text must be "the same". This helps explain the etymological source of the term "translation equivalence".
It is in essence a word which describes the relationship between the ST and the TT.
Equivalence means the state, fact or property of being equivalent. It is widely used in several scientific fields such as chemistry and mathematics. Therefore, it has come to have a strong scientific meaning that is rather absolute and concise. Influenced by this, translation equivalence has also come to have an absolute denotation, though it was first applied in translation study as a general word. From a linguistic point of view, it can be divided into three sub-types, i.e., formal equivalence, semantic equivalence, and pragmatic equivalence. In actual translation, it frequently happens that they cannot be obtained at the same time, thus forming a kind of relative translation equivalence in terms of quality. In terms of quantity, the ST and TT are sometimes not equivalent either. Absolute translation equivalence, both in quality and quantity, even though obtainable, is limited to a few cases.
The following is a brief discussion of translation equivalence study conducted by three influential western scholars: Eugene Nida, Andrew Chesterman and Peter Newmark. It is expected that their studies can inform GNT study in China and provide translators with insightful methods.
Nida's definition of translation is: "Translation consists in reproducing in the receptor language the closest natural equivalent of the source language message, first in terms of meaning and secondly in terms of style." It is a replacement of textual material in one language (SL) by equivalent textual material in another language (TL). The translator must strive for equivalence rather than identity. In a sense, this is just another way of emphasizing the reproduction of the message rather than the conservation of the form of the utterance.
The message in the receptor language should match as closely as possible the different elements in the source language, to reproduce as literally and meaningfully as possible the form and content of the original. Translation equivalence is an empirical phenomenon discovered by comparing SL and TL texts, and it is a useful operational concept like the term "unit of translation".
Nida argues that there are two different types of equivalence, namely formal equivalence and dynamic equivalence. Formal correspondence focuses attention on the message itself, in both form and content, whereas dynamic equivalence is based upon "the principle of equivalent effect".
Formal correspondence consists of a TL item which represents the closest equivalent of an ST word or phrase. Nida and Taber make it clear that there are not always formal equivalents between language pairs. Therefore, formal equivalents should be used wherever possible if the translation aims at achieving formal rather than dynamic equivalence. The use of formal equivalents might at times have serious implications in the TT, since the translation will not be easily understood by the target readership. According to Nida and Taber, formal correspondence distorts the grammatical and stylistic patterns of the receptor language, and hence distorts the message, causing the receptor to misunderstand or to labor unduly hard.
Dynamic equivalence is based on what Nida calls "the principle of equivalent effect", where the relationship between receptor and message should be substantially the same as that which existed between the original receptors and the message. The message has to be modified to the receptor's linguistic needs and cultural expectations, and aims at complete naturalness of expression. Naturalness is a key requirement for Nida. He defines the goal of dynamic equivalence as seeking the closest natural equivalent to the SL message.
This receptor-oriented approach considers adaptations of grammar, of lexicon, and of cultural references to be essential in order to achieve naturalness; the TL text should not show interference from the SL, and the 'foreignness' of the ST setting is minimized. Nida is in favor of the application of dynamic equivalence as a more effective translation procedure. Thus the product of the translation process, that is, the text in the TL, must have the same impact on the different readers it addresses. Only in Nida and Taber's edition is it clearly stated that dynamic equivalence in translation is far more than mere correct communication of information.

As Andrew Chesterman points out in his recent book Memes of Translation, equivalence is one of the five elements of translation theory, standing shoulder to shoulder with source-target, untranslatability, free-vs-literal, and all-writing-is-translating in importance. Pragmatically speaking, Chesterman observes, "the only true examples of equivalence (i.e., absolute equivalence) are those in which an ST item X is invariably translated into a given TL as Y, and vice versa. Typical examples would be words denoting numbers (with the exception of contexts in which they have culture-bound connotations, such as 'magic' or 'unlucky'), certain technical terms (oxygen, molecule) and the like. From this point of view, the only true test of equivalence would be invariable back-translation. This, of course, is unlikely to occur except in the case of a small set of lexical items, or perhaps simple isolated syntactic structures."

Peter Newmark, departing from Nida's receptor-oriented line, argues that the success of equivalent effect is "illusory" and that the conflict of loyalties and the gap between emphasis on source and target language will always remain the overriding problem in translation theory and practice. He suggests narrowing the gap by replacing the old terms with those of semantic and communicative translation.
The former attempts to render, as closely as the semantic and syntactic structures of the second language allow, the exact contextual meaning of the original, while the latter "attempts to produce on its readers an effect as close as possible to that obtained on the readers of the original." Newmark's description of communicative translation resembles Nida's dynamic equivalence in the effect it tries to create on the TT reader, while semantic translation has similarities to Nida's formal equivalence.

Meanwhile, Newmark points out that only by combining both semantic and communicative translation can we achieve the goal of keeping the 'spirit' of the original. Semantic translation requires the translator to retain the aesthetic value of the original, doing his best to keep the linguistic features and characteristic style of the author. According to semantic translation, the translator should always retain the semantic and syntactic structures of the original; deletion and abridgement lead to distortion of the author's intention and his writing style.

Translation Equivalence: Although the world is gradually becoming a global village, translation remains one of the principal means of interaction and mutual influence between languages and cultures.

Graduation Design Foreign-Language Translation (English)

Bid Compensation Decision Model for Projects with Costly Bid Preparation

S. Ping Ho, A.M.ASCE 1

Abstract: For projects with high bid preparation cost, it is often suggested that the owner should consider paying bid compensation to the most highly ranked unsuccessful bidders to stimulate extra effort or inputs in bid preparation. Whereas the underlying idea of using bid compensation is intuitively sound, there is no theoretical basis or empirical evidence for such a suggestion. Because costly bid preparation often implies a larger project scale, the issue of bid compensation strategy is important to practitioners and an interest of study. This paper aims to study the impacts of bid compensation and to develop appropriate bid compensation strategies. Game theory is applied to analyze the behavioral dynamics between competing bidders and project owners. A bid compensation model based on game-theoretic analysis is developed in this study. The model provides equilibrium solutions under bid compensation, quantitative formulas, and qualitative implications for the formation of bid compensation strategies. DOI: 10.1061/(ASCE)0733-9364(2005)131:2(151)

CE Database subject headings: Bids; Project management; Contracts; Decision making; Design/build; Build/Operate/Transfer; Construction industry.

Introduction

An often-seen suggestion in practice for projects with high bid preparation cost is that the owner should consider paying bid compensation, also called a stipend or honorarium, to the unsuccessful bidders. For example, according to the Design–Build Manual of Practice Document Number 201 by the Design–Build Institute of America (DBIA) (1996a), it is suggested that "the owner should consider paying a stipend or honorarium to the unsuccessful proposers" because "excessive submittal requirements without some compensation is abusive to the design–build industry and discourages quality teams from participating." In another publication by DBIA (1995), it is also stated that "it is strongly recommended that honorariums be offered to the unsuccessful proposers" and that "the provision of reasonable compensation will encourage the more sought-after design–build teams to apply and, if short listed, to make an extra effort in the preparation of their proposal." Whereas bid preparation costs depend on project scale, delivery method, and other factors, the cost of preparing a proposal is often relatively high in some particular project delivery schemes, such as design–build or build–operate–transfer (BOT) contracting. Plus, since costly bid preparation often implies a large project scale, the issue of bid compensation strategy should be important to practitioners and of great interest for study.

Existing research on the procurement process in construction has addressed the selection of projects that are appropriate for certain project delivery methods (Molenaar and Songer 1998; Molenaar and Gransberg 2001), the design–build project procurement process (Songer et al. 1994; Gransberg and Senadheera 1999; Palaneeswaran and Kumaraswamy 2000), and the BOT project procurement process (United Nations Industrial Development Organization 1996). However, the bid compensation strategy for projects with a relatively high bid preparation cost has not been studied. Among the issues concerning the bidder's response to the owner's procurement or bid compensation strategy, it is in the owner's interest to understand how the owner can stimulate high-quality inputs or extra effort from the bidder during bid preparation. Whereas the argument for using bid compensation is intuitively sound, there is no theoretical basis or empirical evidence for such an argument. Therefore, it is crucial to study under what conditions bid compensation is effective, and how much compensation is adequate with respect to different bidding situations. This paper focuses on theoretically studying the impacts of bid compensation and tries to develop appropriate compensation strategies for projects with costly bid preparation. Game theory
will be applied to analyze the behavioral dynamics between competing bidders. Based on the game-theoretic analysis and numeric trials, a bid compensation model is developed. The model provides a quantitative framework, as well as qualitative implications, on bid compensation strategies.

Research Methodology: Game Theory

Game theory can be defined as "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers" (Myerson 1991). Among economic theories, game theory has been successfully applied to many important issues such as negotiations, finance, and imperfect markets. Game theory has also been applied to construction management in two areas. Ho (2001) applied game theory to analyze the information asymmetry problem during the procurement of a BOT project and its implication in project financing and government policy. Ho and Liu (2004) develop a game-theoretic model for analyzing the behavioral dynamics of builders and owners in construction claims. In competitive bidding, the strategic interactions among competing bidders and those between bidders and owners are common, and thus game theory is a natural tool to analyze the problem of concern.

1 Assistant Professor, Dept. of Civil Engineering, National Taiwan Univ., Taipei 10617, Taiwan. E-mail: spingho@.tw. Note: Discussion open until July 1, 2005. Separate discussions must be submitted for individual papers. To extend the closing date by one month, a written request must be filed with the ASCE Managing Editor. The manuscript for this paper was submitted for review and possible publication on March 5, 2003; approved on March 1, 2004. This paper is part of the Journal of Construction Engineering and Management, Vol. 131, No. 2, February 1, 2005. © ASCE, ISSN 0733-9364/2005/2-151–159/$25.00.

Downloaded from ascelibrary.org by NANJING UNIVERSITY OF on 01/06/14. Copyright ASCE. For personal use only; all rights reserved.

A well-known example of a game is the "prisoner's dilemma" shown in Fig. 1. Two suspects are arrested and held in separate cells. If both of them confess, then they will be sentenced to jail for 6 years. If neither confesses, each will be sentenced for only 1 year. However, if one of them confesses and the other does not, then the honest one will be rewarded by being released (in jail for 0 years) and the other will be punished with 9 years in jail. Note that in each cell, the first number represents player No. 1's payoff and the second one represents player No. 2's. The prisoner's dilemma is called a "static game," in which the players act simultaneously; i.e., each player does not know the other player's decision before making his own decision. If the payoff matrix shown in Fig. 1 is known to all players, then the payoff matrix is "common knowledge" to all players and this game is called a game of "complete information." Note that the players of a game are assumed to be rational, i.e., to maximize their payoffs.

To answer what each prisoner will play in this game, we introduce the concept of "Nash equilibrium," one of the most important concepts in game theory. A Nash equilibrium is a set of actions that will be chosen by each player. In a Nash equilibrium, each player's strategy should be the best response to the other player's strategy, and no player wants to deviate from the equilibrium solution. Thus, the equilibrium or solution is "strategically stable" or "self-enforcing" (Gibbons 1992). Conversely, a nonequilibrium solution is not stable, since at least one of the players can be better off by deviating from the nonequilibrium solution. In the prisoner's dilemma, only the (confess, confess) solution, where both players choose to confess, satisfies the stability test or requirement of Nash equilibrium. Note that although the (not confess, not confess) solution seems better for both players compared to the Nash equilibrium, this solution is unstable, since either player can obtain extra benefit by deviating from it. Interested readers can refer to Gibbons (1992), Fudenberg and Tirole (1992), and Myerson (1991).

Bid Compensation Model

In this section, the bid compensation model is developed on the basis of game-theoretic analysis. The model could help the owner form bid compensation strategies under various competition situations and project characteristics. Illustrative examples with numerical results are given when necessary to show how the model can be used in various scenarios.

Assumptions and Model Setup

To perform a game-theoretic study, it is critical to make necessary simplifications so that one can focus on the issues of concern and obtain insightful results. The setup of the model then follows. The assumptions made in this model are summarized as follows; note that these assumptions can be relaxed in future studies for more general purposes.

1. Average bidders: The bidders are equally good in terms of their technical and managerial capabilities. Since design–build and BOT focus on quality issues, the prequalification process imposed during procurement reduces the variation of the quality of bidders. As a result, it is not unreasonable to make the "average bidders" assumption.

2. Complete information: If all players consider each other to be an average bidder, as suggested in the first assumption, it is natural to assume that the payoffs of each player in each potential solution are known to all players.

3. Bid compensation for the second-best bidder: Since DBIA's (1996b) manual, document number 103, suggests that
"the stipend is paid only to the most highly ranked unsuccessful offerors to prevent proposals being submitted simply to obtain a stipend," we shall assume that the bid compensation will be offered to the second-best bidder.

4. Two levels of effort: It is assumed that there are two levels of effort in preparing a proposal, high and average, denoted by H and A, respectively. The effort A is defined as the level of effort that does not incur extra cost to improve quality. Contrarily, the effort H is defined as the level of effort that will incur extra cost, denoted as E, to improve the quality of a proposal, where the improvement is detectable by an effective proposal evaluation system. Typically, the standard of quality would be transformed into the evaluation criteria and their respective weights specified in the Request for Proposal.

5. Fixed amount of bid compensation, S: The fixed amount can be expressed as a certain percentage of the average profit, denoted as P, assumed during the procurement by an average bidder.

6. Absorption of extra cost, E: For convenience, it is assumed that E will not be included in the bid price, so that the high-effort bidder will win the contract under price–quality competition, such as the best-value approach. This assumption simplifies the tradeoff between quality improvement and bid price increase.

(Fig. 1: Prisoner's dilemma)

Two-Bidder Game

In this game, there are only two qualified bidders. The possible payoffs for each bidder in the game are shown in normal form in Fig. 2. If both bidders choose "H", denoted by (H, H), both bidders will have a 50% probability of winning the contract and, at the same time, a 50% probability of losing the contract but being rewarded with the bid compensation, S. As a result, the expected payoffs for the bidders in the (H, H) solution are (S/2 + P/2 − E, S/2 + P/2 − E). Note that the computation of the expected payoff is based on the assumption of average bidders. Similarly, if the bidders choose (A, A), the expected payoffs will be (S/2 + P/2, S/2 + P/2). If the bidders choose (H, A), bidder No. 1 will have a 100% probability of winning the contract, and thus the expected payoffs are (P − E, S). Similarly, if the bidders choose (A, H), the expected payoffs will be (S, P − E). Payoffs of an n-bidder game can be obtained by the same reasoning.

Nash Equilibrium

Since the payoffs in each equilibrium are expressed as functions of S, P, and E, instead of particular numbers, the model will focus on the conditions for each possible Nash equilibrium of the game. Here, the approach to solving for Nash equilibrium is to find conditions that ensure the stability or self-enforcing requirement of Nash equilibrium. This technique will be applied throughout this paper.

First, check the payoffs of the (H, H) solution. For bidder No. 1 or 2 not to deviate from this solution, we must have

S/2 + P/2 − E > S  →  S < P − 2E   (1)

Therefore, condition (1) guarantees (H, H) to be a Nash equilibrium. Second, check the payoffs of the (A, A) solution. For bidder No. 1 or 2 not to deviate from (A, A), condition (2) must be satisfied:

S/2 + P/2 > P − E  →  S > P − 2E   (2)

Thus, condition (2) guarantees (A, A) to be a Nash equilibrium. Note that the condition "S = P − 2E" will be ignored, since the condition can become (1) or (2) by adding or subtracting an infinitely small positive number. Thus, since S must satisfy either condition (1) or condition (2), either (H, H) or (A, A) must be a unique Nash equilibrium. Third, check the payoffs of the (H, A) solution. For bidder No. 1 not to deviate from H to A, we must have P − E > S/2 + P/2, i.e., S < P − 2E. For bidder No. 2 not to deviate from A to H, we must have S > S/2 + P/2 − E, i.e., S > P − 2E. Since S cannot be greater than and less than P − 2E at the same time, the (H, A) solution cannot exist. Similarly, the (A, H) solution cannot exist either. This also confirms the previous conclusion that either (H, H) or (A, A) must be a unique Nash equilibrium.

Impacts of Bid Compensation

Bid compensation is designed to serve as an incentive to induce bidders to make high effort. Therefore, the concerns of bid compensation strategy should focus on whether S can induce high effort and how effective it is. According to the equilibrium solutions, the bid compensation decision should depend on the magnitude of P − 2E, or the relative magnitude of E compared to P. If E is relatively small such that P > 2E, then P − 2E will be positive and condition (1) will be satisfied even when S = 0. This means that bid compensation is not an incentive for high effort when the extra cost of high effort is relatively low. Moreover, surprisingly, S can be damaging when S is high enough that S > P − 2E. On the other hand, if E is relatively large, so that P − 2E is negative, then condition (2) will always be satisfied, since S cannot be negative. In this case, (A, A) will be a unique Nash equilibrium. In other words, when E is relatively large, it is not in the bidder's interest to incur extra cost for improving the quality of the proposal, and therefore S cannot provide any incentive for high effort.

To summarize, when E is relatively low, it is in the bidder's interest to make high effort even if there is no bid compensation. When E is relatively high, the bidder will be better off making average effort. In other words, bid compensation cannot promote extra effort in a two-bidder game, and ironically, bid compensation may discourage high effort if the compensation is too much. Thus, in two-bidder procurement, the owner should not use bid compensation as an incentive to induce high effort.

Three-Bidder Game

Nash Equilibrium

Fig. 3 shows all the combinations of actions and their respective payoffs in a three-bidder game. Similar to the two-bidder game, here the Nash equilibrium can be solved by ensuring the stability of the solution. For equilibrium (H, H, H), condition (3) must be
satisfied for the stability requirement:

S/3 + P/3 − E > 0  →  S > 3E − P   (3)

For equilibrium (A, A, A), condition (4) must be satisfied so that no one has any incentive to choose H:

S/3 + P/3 > P − E  →  S > 2P − 3E   (4)

In a three-bidder game, it is possible that S will satisfy conditions (3) and (4) at the same time. This is different from the two-bidder game, where S can only satisfy either condition (1) or (2). Thus, there will be two pure-strategy Nash equilibria when S satisfies conditions (3) and (4). However, since the payoff of (A, A, A), S/3 + P/3, is greater than the payoff of (H, H, H), S/3 + P/3 − E, for all bidders, the bidders will choose (A, A, A) eventually, provided that a consensus between bidders on making effort A can be reached. The process of reaching such a consensus is called "cheap talk," where the agreement is beneficial to all players, and no player will want to deviate from such an agreement. In design–build or BOT procurement, it is reasonable to believe that cheap talk can occur. Therefore, as long as condition (4) is satisfied, (A, A, A) will be a unique Nash equilibrium. An important implication is that the cheap-talk condition must not be satisfied for any equilibrium solution other than (A, A, A). In other words, condition (5) must be satisfied for all equilibrium solutions except (A, A, A):

S < 2P − 3E   (5)

(Fig. 2: Two-bidder game; Fig. 3: Three-bidder game)

Following this result, for (H, H, H) to be unique, conditions (3) and (5) must be satisfied; i.e., we must have

3E − P < S < 2P − 3E   (6)

Note that by definition S is a non-negative number; thus, if one cannot find a non-negative number to satisfy the equilibrium condition, then the respective equilibrium does not exist and the equilibrium condition will be marked as "N/A" in the illustrative figures and tables. Next, check the solution where two bidders make high effort and one bidder makes average effort, e.g., (H, H, A). The expected payoffs for (H, H, A) are (S/2 + P/2 − E, S/2 + P/2 − E, 0). For (H, H, A) to be a Nash equilibrium, S/3 + P/3 − E < 0 must be satisfied so that the bidder with average effort will not deviate from A to H; S/2 + P/2 − E > S/2 must be satisfied so that a bidder with high effort will not deviate from H to A; and condition (5) must be satisfied, as argued previously. The three conditions can be rewritten as

S < min[3E − P, 2P − 3E]  and  P − 2E > 0   (7)

Note that because of the average-bidder assumption, if (H, H, A) is a Nash equilibrium, then (H, A, H) and (A, H, H) will also be Nash equilibria. The three Nash equilibria constitute a so-called mixed-strategy Nash equilibrium, denoted by 2H + 1A, where each bidder randomizes actions between H and A with certain probabilities. The concept of a mixed-strategy Nash equilibrium shall be explained in more detail in the next section. Similarly, we can obtain the requirements for solution 1H + 2A: condition (5) and S/2 + P/2 − E < S/2 must be satisfied. The requirements can be reorganized as

S < 2P − 3E  and  P − 2E < 0   (8)

Note that the conflicting relationship between "P − 2E > 0" in condition (7) and "P − 2E < 0" in condition (8) seems to show that the two types of Nash equilibria are exclusive. Nevertheless, the only difference between 2H + 1A and 1H + 2A is that a bidder in the 2H + 1A equilibrium has a higher probability of playing H, whereas a bidder in 1H + 2A also mixes actions H and A but with a lower probability of playing H. From this perspective, the difference between 2H + 1A and 1H + 2A is not very distinctive. In other words, one should not consider, for example, 2H + 1A to be two bidders playing H and one bidder playing A; instead, one should consider each bidder to be playing H with higher probability. Similarly, 1H + 2A means that each bidder has a lower probability of playing H, compared to 2H + 1A.

Illustrative Example: Effectiveness of Bid Compensation

The equilibrium conditions for a three-bidder game are numerically illustrated in Table 1, where P is arbitrarily assumed to be 10% for numerical computation purposes and E varies to represent different costs of higher effort. The "*" in Table 1 indicates that zero compensation is the best strategy; i.e., bid compensation is ineffective in terms of stimulating extra effort. According to the numerical results, Table 1 shows that bid compensation can promote higher effort only when E is within the range P/3 < E < P/2, where zero compensation is not necessarily the best strategy. The question is whether it is beneficial to the owner to incur the cost of bid compensation when P/3 < E < P/2. The answer to this question lies in the concept and definition of the mixed-strategy Nash equilibrium, 2H + 1A, as explained previously. Since 2H + 1A indicates that each bidder will play H with significantly higher probability, 2H + 1A may already be good enough, knowing that we only need one bidder out of three to actually play H. We shall elaborate on this concept later in a more general setting. As a result, if the 2H + 1A equilibrium is good enough, the use of bid compensation in a three-bidder game will not be recommended.

Four-Bidder Game and n-Bidder Game

Nash Equilibrium of Four-Bidder Game

The equilibrium of four-bidder procurement can also be obtained. As the number of bidders increases, the number of potential equilibria increases as well. Due to the length limitation, we shall only show
the major equilibria and their conditions, which are derived following the same technique applied previously. The condition for the pure-strategy equilibrium 4H is

4E − P < S < 3P − 4E   (9)

The condition for the other pure-strategy equilibrium, 4A, is

S > 3P − 4E   (10)

Other potential equilibria are mainly mixed strategies, such as 3H + 1A, 2H + 2A, and 1H + 3A, where the number associated with H or A represents the number of bidders with effort H or A in an equilibrium. The condition for the 3H + 1A equilibrium is

3E − P < S < min[4E − P, 3P − 4E]   (11)

For the 2H + 2A equilibrium the condition is

6E − 3P < S < min[3E − P, 3P − 4E]   (12)

The condition for the 1H + 3A equilibrium is

S < min[6E − 3P, 3P − 4E]   (13)

Illustrative Example of Four-Bidder Game

Table 2 numerically illustrates the impacts of bid compensation on four-bidder procurement under different relative magnitudes of E. When E is very small, bid compensation is not needed to promote effort H. However, as E grows gradually, bid compensation becomes more effective. As E grows to a larger magnitude, greater than P/2, the 4H equilibrium becomes impossible, no matter how large S is. In fact, if S is too large, bidders will be encouraged to take effort A. When E is extremely large, e.g., E > 0.6P, the best strategy is to set S = 0. The "*" in Table 2 also indicates the cases in which bid compensation is ineffective.

Table 1. Compensation Impacts on a Three-Bidder Game (P = 10%)

E; P = 10%                       | 3H           | 2H + 1A      | 1H + 2A    | 3A
E < P/3 (e.g., E = 2%)           | S < 14%*     | N/A          | N/A        | 14% < S
P/3 < E < P/2 (e.g., E = 4%)     | 2% < S < 8%  | S < 2%       | N/A        | 8% < S
P/2 < E < (2/3)P (e.g., E = 5.5%)| N/A          | N/A          | S < 3.5%*  | 3.5% < S
(2/3)P < E (e.g., E = 7%)        | N/A          | N/A          | N/A        | Always*

Note: * denotes that zero compensation is the best strategy; N/A = the respective equilibrium does not exist.

To conclude, in four-bidder procurement, bid compensation is not effective when E is relatively small or large. Again, similar to the three-bidder game, when bid compensation becomes more effective, it does not mean that offering bid compensation is the best strategy, since more variables need to be considered. Further analysis shall be performed later.

Nash Equilibrium of n-Bidder Game

It is desirable to generalize our model to the n-bidder game, although only a very limited number of qualified bidders will be involved in most design–build or BOT procurements, since for other project delivery methods it is possible to have many bidders. Interested readers can follow the numerical illustrations for the three- and four-bidder games to obtain numerical solutions of the n-bidder game. Here, only analytical equilibrium solutions will be solved.

For "nA" to be the Nash equilibrium, we must have P − E < S/n + P/n for bidder A not to deviate. In other words, condition (14) must be satisfied:

S > (n − 1)P − nE   (14)

Note that condition (14) can be rewritten as S > n(P − E) − P, which implies that it is not likely for nA to be the Nash equilibrium when there are many bidders, unless E is very close to or larger than P.

Similar to the previous analysis, for "nH" to be the equilibrium, we must have S/n + P/n − E > 0 for the stability requirement, and condition (15) for excluding the possibility of cheap talk or the nA equilibrium. The condition for the nH equilibrium can be reorganized as condition (16):

S < (n − 1)P − nE   (15)

nE − P < S < (n − 1)P − nE   (16)

Note that if E < P/n, condition (16) will always be satisfied and nH will be a unique equilibrium even when S = 0. In other words, nH will not be the Nash equilibrium when there are many bidders, unless E is extremely small, i.e., E < P/n.

For "aH + (n − a)A, where 2 < a < n" to be the equilibrium solution, we must have S/a + P/a − E > 0 for bidder H not to deviate, S/(a + 1) + P/(a + 1) − E < 0 for bidder A not to deviate, and condition (15). These requirements can be rewritten as

aE − P < S < min[(a + 1)E − P, (n − 1)P − nE]   (17)

Similarly, for "2H + (n − 2)A," the stability requirements for bidders H and A are S/(n − 1) < S/2 + P/2 − E and S/3 + P/3 − E < 0, respectively, and thus the equilibrium condition can be written as

[(n − 1)/(n − 3)](2E − P) < S < min[3E − P, (n − 1)P − nE]   (18)

For the "1H + (n − 1)A" equilibrium, we must have

S < min{[(n − 1)/(n − 3)](2E − P), (n − 1)P − nE}   (19)

An interesting question is: what conditions would warrant that the only possible equilibrium of the game is either "1H + (n − 1)A" or nA, no matter how large S is? A logical response to the question is: when the equilibria "aH + (n − a)A, where a > 2" and the equilibrium 2H + (n − 2)A are not possible solutions. Thus, a sufficient condition here is that for any S > [(n − 1)/(n − 3)](2E − P), the condition "S < (n − 1)P − nE" is not satisfied. This can be guaranteed if we have

(n − 1)P − nE < [(n − 1)/(n − 3)](2E − P)  →  E > [(n − 1)/(n + 1)]P   (20)

Conditions (19) and (20) show that when E is greater than [(n − 1)/(n + 1)]P, the only possible equilibrium of the game is either 1H + (n − 1)A or nA, no matter how large S is. Two important practical implications can be drawn from this finding. First, when n is small in a design–build contract, it is not unusual that E will be greater than [(n − 1)/(n + 1)]P, and in that case bid compensation cannot help to promote higher effort. For example, in three-bidder procurement, bid compensation will not be effective when E is greater than (2/4)P. Second, when the number of bidders increases, bid compensation becomes more effective, since it will be more unlikely that E is greater than [(n − 1)/(n + 1)]P. The two implications confirm the previous analyses of the two-, three-, and four-bidder games. After the game equilibria and the effective range of bid compensation have been solved, the next important task is to develop the bid compensation strategy with respect to various procurement situations.

Table 2. Compensation Impacts on a Four-Bidder Game (P = 10%)

E; P = 10%                        | 4H            | 3H + 1A        | 2H + 2A       | 1H + 3A  | 4A
E < P/4 (e.g., E = 2%)            | S < 22%*      | N/A            | N/A           | N/A      | S > 22%
P/4 < E < P/3 (e.g., E = 3%)      | 2% < S < 18%  | S < 2%         | N/A           | N/A      | S > 18%
P/3 < E < P/2 (e.g., E = 4%)      | 6% < S < 14%  | 2% < S < 6%    | S < 2%        | N/A      | S > 14%
P/2 < E < (3/5)P (e.g., E = 5.5%) | N/A           | 6.5% < S < 8%  | 3% < S < 6.5% | S < 3%   | S > 8%
(3/5)P < E < (3/4)P (e.g., E = 6.5%)| N/A         | N/A            | N/A           | S < 4%*  | S > 4%
(3/4)P < E (e.g., E = 8%)         | N/A           | N/A            | N/A           | N/A      | Always*

Note: * denotes that zero compensation is the best strategy; N/A = the respective equilibrium does not exist.
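As a quick numerical sanity check, the equilibrium conditions (3)–(8) for the three-bidder game can be encoded directly. The following Python sketch is an illustration, not part of the original paper; the function name and the branch ordering are my own. It reproduces the ranges reported in Table 1 for P = 10%.

```python
# Illustrative check of the three-bidder equilibrium conditions (3)-(8).
# P = average profit, E = extra cost of high effort, S = bid compensation.
# Names are illustrative, not from the paper; ties (S = boundary) ignored,
# as the paper does.

def three_bidder_equilibrium(S, P, E):
    """Return the unique equilibrium predicted by conditions (3)-(8)."""
    if S > 2 * P - 3 * E:          # condition (4): cheap talk -> all average effort
        return "3A"
    if S > 3 * E - P:              # conditions (3) and (5) both hold: all high effort
        return "3H"
    if P - 2 * E > 0:              # condition (7): mixed strategy, mostly high
        return "2H+1A"
    return "1H+2A"                 # condition (8): mixed strategy, mostly average

P = 0.10
# E = 2% (E < P/3): high effort needs no compensation; too much S backfires.
assert three_bidder_equilibrium(0.00, P, 0.02) == "3H"
assert three_bidder_equilibrium(0.15, P, 0.02) == "3A"
# E = 4% (P/3 < E < P/2): only compensation in (2%, 8%) sustains 3H.
assert three_bidder_equilibrium(0.01, P, 0.04) == "2H+1A"
assert three_bidder_equilibrium(0.05, P, 0.04) == "3H"
# E = 7% (E > 2P/3): no amount of compensation induces high effort.
assert three_bidder_equilibrium(0.05, P, 0.07) == "3A"
```

The branch order mirrors the paper's argument: the cheap-talk condition (4) is tested first because, whenever it holds, (A, A, A) dominates any other candidate equilibrium.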

Graduation Design Foreign Literature Translation

Graduation Design Foreign Literature Translation (700 words)

Title: The Impact of Artificial Intelligence on the Job Market

Introduction: Artificial Intelligence (AI) is a rapidly growing field that has the potential to revolutionize various industries and job markets. With advancements in technologies such as machine learning and natural language processing, AI has become capable of performing tasks traditionally done by humans. This has raised concerns about the future of jobs and the impact AI will have on the job market. This literature review aims to explore the implications of AI for employment and job opportunities.

AI in the Workplace: AI technologies are increasingly being integrated into the workplace with the aim of automating routine and repetitive tasks. For example, automated chatbots are being used to handle customer-service queries, while machine learning algorithms are being employed to analyze large data sets. This has resulted in increased efficiency and productivity in many industries. However, it has also led to concerns about job displacement and unemployment.

Job Displacement: The rise of AI has raised concerns about job displacement, as AI technologies are becoming increasingly capable of performing tasks previously done by humans. For example, automated machines can now perform complex surgeries with greater precision than human surgeons. This has led to fears that certain jobs will become obsolete, leading to unemployment for those who were previously employed in these industries.

New Job Opportunities: While AI might potentially replace certain jobs, it also creates new job opportunities. As AI technologies continue to evolve, there will be a greater demand for individuals with technical skills in AI development and programming.
Additionally, jobs that require human interaction and emotional intelligence, such as social work or counseling, may become even more in demand, as they cannot be easily automated.Job Transformation:Another potential impact of AI on the job market is job transformation. AI technologies can augment human abilities rather than replacing them entirely. For example, AI-powered tools can assist professionals in making decisions, augmenting their expertise and productivity. This may result in changes in job roles and the need for individuals to adapt their skills to work alongside AI technologies.Conclusion:The impact of AI on the job market is still being studied and debated. While AI has the potential to automate certain tasks and potentially lead to job displacement, it also presents opportunities for new jobs and job transformation. It is essential for individuals and organizations to adapt and acquire the necessary skills to navigate these changes in order to stay competitive in the evolvingjob market. Further research is needed to fully understand the implications of AI on employment and job opportunities.。

(Complete Version) Graduation Design (Thesis) Foreign Translation (Original Text)


Graduation Design (Thesis): Foreign Translation (Original Text)

NEW APPLICATIONS OF DATABASES

Relational databases have been in use for over two decades. A large portion of the applications of relational databases are in the commercial world, supporting such tasks as transaction processing for banks and stock exchanges, sales and reservations for a variety of businesses, and inventory and payroll for almost all companies. We study several new applications, which have become important in recent years.

First. Decision-support systems

As the online availability of data has grown, businesses have begun to exploit the available data to make better decisions, such as how to increase sales. We can extract much information for decision support by using simple SQL queries. Recently, tools have been developed to support decision making based on data analysis and data mining, or knowledge discovery, using data from a variety of sources.

Database applications can be broadly classified into transaction processing and decision support. Transaction-processing systems are widely used today, and companies have accumulated a vast amount of information generated by these systems.

The term data mining refers loosely to finding relevant information, or "discovering knowledge," from a large volume of data. Like knowledge discovery in artificial intelligence, data mining attempts to discover statistical rules and patterns automatically from data. However, data mining differs from machine learning in that it deals with large volumes of data, stored primarily on disk.

Knowledge discovered from a database can be represented by a set of rules. We can discover rules from a database using one of two models:

In the first model, the user is involved directly in the process of knowledge discovery.

In the second model, the system is responsible for automatically discovering knowledge from the database, by detecting patterns and correlations in the data.

Work on automatic discovery of rules has been influenced strongly by work in the artificial-intelligence community on machine learning. The main differences lie in the volume of data handled by databases, and in the need to access disk.
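The association-rule flavour of data mining mentioned above can be made concrete with a small sketch. The following is an illustrative example only (the transactions and thresholds are invented, not from the text): it counts support and confidence for one-item-to-one-item rules over a set of market-basket transactions held in memory.

```python
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_confidence=0.6):
    """Find simple rules lhs => rhs whose support and confidence
    clear the given thresholds (a toy, in-memory rule miner)."""
    n = len(transactions)

    def support(itemset):
        # Fraction of transactions containing every item in itemset
        return sum(1 for t in transactions if set(itemset) <= t) / n

    items = sorted({item for t in transactions for item in t})
    rules = []
    for a, b in combinations(items, 2):
        pair_support = support((a, b))
        if pair_support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            confidence = pair_support / support((lhs,))
            if confidence >= min_confidence:
                rules.append((lhs, rhs, pair_support, confidence))
    return rules

# Hypothetical market-basket data
transactions = [
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"bread", "butter"},
    {"milk", "butter"},
]
for lhs, rhs, s, c in association_rules(transactions):
    print(f"{lhs} => {rhs}: support={s:.2f}, confidence={c:.2f}")
```

Real data-mining algorithms differ mainly in how they avoid rescanning a large disk-resident dataset for every candidate rule, which is exactly the volume issue the text raises.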
Specialized data-mining algorithms have been developed to handle large volumes of data. The manner in which rules are discovered depends on the class of data-mining application. We illustrate rule discovery using two application classes: classification and associations.

Second. Spatial and Geographic Databases

Spatial databases store information related to spatial locations, and provide support for efficient querying and indexing based on spatial locations. Two types of spatial databases are particularly important:

Design databases, or computer-aided-design (CAD) databases, are spatial databases used to store design information about how objects are constructed; important examples are integrated-circuit and electronic-device layouts.

Geographic databases are spatial databases used to store geographic information, such as maps. Geographic databases are often called geographic information systems.

Geographic data are spatial in nature, but differ from design data in certain ways. Maps and satellite images are typical examples of geographic data. Maps may provide not only location information (such as boundaries, rivers and roads) but also much more detailed information associated with locations, such as elevation, soil type, land usage, and annual rainfall.

Geographic data can be categorized into two types: raster data (such data consist of bit maps or pixel maps, in two or more dimensions) and vector data (vector data are constructed from basic geographic objects). Map data are often represented in vector format.

Third. Multimedia Databases

Recently, there has been much interest in databases that store multimedia data, such as images, audio, and video. Today multimedia data typically are stored outside the database, in file systems. When the number of multimedia objects is relatively small, features provided by databases are usually not important. Database functionality becomes important when the number of multimedia objects stored is large. Issues such as transactional updates, querying facilities, and indexing then become important.
Multimedia objects often have descriptive attributes, such as when they were created, who created them, and to what category they belong. One approach to building a database for such multimedia objects is to use a database for storing the descriptive attributes and for keeping track of the files in which the multimedia objects are stored.

However, storing multimedia data outside the database makes it harder to provide database functionality on the basis of actual multimedia data content. It can also lead to inconsistencies, such as a file that is noted in the database, but whose contents are missing, or vice versa. It is therefore desirable to store the data themselves in the database.

Fourth. Mobility and Personal Databases

Large-scale commercial databases have traditionally been stored in central computing facilities. In the case of distributed database applications, there has usually been strong central database and network administration. Two technology trends have created applications in which this assumption of central control and administration is not entirely correct:

1. The increasingly widespread use of personal computers, and, more important, of laptop or "notebook" computers.

2. The development of a relatively low-cost wireless digital communication infrastructure, based on wireless local-area networks, cellular digital packet networks, and other technologies.

Wireless computing creates a situation where machines no longer have fixed locations, so it is no longer clear at which site to materialize the result of a query. In some cases, the location of the user is a parameter of the query. An example is a traveler's information system that provides data on the current route; such queries must be processed based on knowledge of the user's location, direction of motion, and speed.

Energy (battery power) is a scarce resource for mobile computers. This limitation influences many aspects of system design. Among the more interesting consequences of the need for energy efficiency is the use of scheduled data broadcasts to reduce the need for mobile systems to transmit queries. Increasing amounts of data may reside on machines administered by users, rather than by database administrators.
Furthermore, these machines may, at times, be disconnected from the network.

Summary

Decision-support systems are gaining importance, as companies realize the value of the on-line data collected by their on-line transaction-processing systems. Proposed extensions to SQL, such as the cube operation, ease the computation of summary data. Data mining seeks to discover knowledge automatically, in the form of statistical rules and patterns, from large databases. Data visualization systems help users examine large volumes of data and detect patterns visually. Spatial databases store design data as well as geographic data. Design data are stored primarily as vector data; geographic data consist of a combination of vector and raster data.

Multimedia databases are growing in importance. Issues such as similarity-based retrieval and delivery of data at guaranteed rates are topics of current research.

Mobile computing systems have become common, leading to interest in database systems that can run on such systems. Query processing in such systems may involve lookups on server databases.

Graduation Design (Thesis): Foreign Translation (Translated Text): New Applications of Databases

We have been using relational databases for more than 20 years. A large portion of relational database applications are in the commercial world, supporting transaction processing for banks and stock exchanges, sales and reservations for various businesses, and the inventory and payroll management needed by almost all companies.

Graduation Design Foreign Translation: Original Text


Int J Adv Manuf Technol (2014) 72:277–288
DOI 10.1007/s00170-014-5664-3

Workpiece roundness profile in the frequency domain: an application in cylindrical plunge grinding

Andre D. L. Batako & Siew Y. Goh

Received: 21 August 2013 / Accepted: 21 January 2014 / Published online: 14 February 2014
© Springer-Verlag London 2014

Abstract In grinding, most control strategies are based on the spindle power measurement, but recently, acoustic emission has been widely used for wheel wear and gap elimination. This paper explores a potential use of acoustic emission (AE) to detect workpiece lobes. This was achieved by sectioning and analysing the AE signal in the frequency domain. For the first time, the profile of the ground workpiece was predicted mathematically using key frequencies extracted from the AE signals. The results were validated against actual workpiece profile measurements. The relative shift of the wave formed on the surface of the part was expressed using the wheel-workpiece frequency ratio. A comparative study showed that the workpiece roundness profile could be monitored in the frequency domain using the AE signal during grinding.

Keywords Plunge grinding · Roundness · Waviness · Frequency · Acoustic emission

1 Introduction

Grinding is mostly used as the last stage of a manufacturing process for fine finishing. However, recently, high efficiency deep grinding (HEDG) was introduced as a process that achieves high material removal rates exceeding 1,100 mm³/mm/s [1–5]. Grinding is mainly used to achieve high dimensional and geometrical accuracy. However, in cylindrical plunge grinding, vibration is a key problem in keeping tight tolerances and form accuracy (roundness) of ground parts.

Machine tools are designed and installed to have minimum vibration (with anti-vibration pads when required). Nevertheless, in grinding, the interaction between the wheel and the workpiece generates persistent vibration. This leads to variation of the forces acting in the contact zone, which in turn causes a variation in the depth of cut on the ground workpiece. Consequently, this creates waviness on the circumference of the workpiece. The engendered uneven profile on the workpiece surface leads to a modulation of the grinding conditions of the following successive rotations; this is called the workpiece regenerative effect. The building up of this effect can take place in grinding cycles of longer duration. Similar effects occur on the grinding wheel surface; however, the process of the build-up is slow [6–9].

It is generally difficult to get a grinding wheel perfectly balanced manually, which is acceptable for general-purpose grinding. For precision grinding, automatic dynamic wheel balancing devices are used. Though current grinding machines have automatic balancing systems to reduce the out-of-balance of grinding wheels, in actual grinding, forced vibration is still caused by dynamically unbalanced grinding wheels [10]. This is because any eccentricity in the rotating grinding wheel generates a vibratory motion.

The stiffness of the wheel spindle and the tailstock also affects the wheel-workpiece-tailstock subsystem, which oscillates due to the interaction of the wheel with the workpiece. In practice, the generated forced vibration is hard to eliminate completely. This type of vibration has a greater influence on the formation of the workpiece profile. During the grinding process, the out-of-balance of the wheel behaves as a sinusoidal waveform that is imprinted on the workpiece surface. This, as in the previous case, leads to the variation of the depth of cut and creates low-frequency lobes around the workpiece, and this is the key target of the study presented here.

A. D. L. Batako (*) · S. Y. Goh
AMTReL, The General Engineering Research Institute, Liverpool John Moores University (LJMU), Byrom Street, Liverpool L3 3AF, UK
e-mail: a.d.batako@

Other factors such as grinding parameters have to be taken into consideration in the study of grinding vibration because these aspects affect the stability of the process. This is because the resulting workpiece profile is the combined effect of different types of vibration in grinding [7, 11]. The studies carried out by Inasaki, Tonou and Yonetsu showed that the grinding parameters have a strong influence on the amplitude and growth rate of the workpiece and wheel regenerative vibration [12].

The actual measurement of the workpiece profile is an integral part of the manufacturing process due to the uncertainty in wheel wear and the complexity of the grinding process. Contactless measurement and contact stylus systems were developed to record the variations of the workpiece size and roundness. However, these techniques can be used only as post-process checking, as they are limited to a particular set-up and must be used without the disturbance of the cutting fluid, in a clean air-conditioned environment with stable temperature [13–16].

In industry, random samples from batches are usually inspected after the grinding process. Any rejection of parts, or sometimes batches, increases the manufacturing time and cost. Therefore, it becomes important to develop online monitoring systems to cut down inspection time and to minimise rejected parts in grinding. Some of the existing monitoring systems in grinding are based on the wheel spindle power. However, sensors such as acoustic emission sensors and accelerometers are also used to gather information on the grinding process for different applications.
Dornfeld has given a comprehensive view of the application of acoustic emission (AE) sensors in manufacturing [17]. Most reported applications of AE in grinding are for gap elimination, touch dressing and thermal burn detection [18–21].

In cylindrical grinding processes, the generated chatter vibration causes the loss of form and dimensional accuracy of ground workpieces. The effect of vibration induces the formation of lobes on the workpiece surface, which are usually detected using roundness measurement equipment. High-precision parts with tight tolerances are increasingly in demand, and short cycle times put pressure on manufacturing processes. This leads to the need for developing in-process roundness monitoring systems for cylindrical grinding processes.

The potential of using acoustic emission to detect the formation of lobes on a workpiece during a cylindrical plunge grinding process is investigated in this work. The aim is to extract the workpiece roundness profile from the acoustic emission signal in the frequency domain. The extracted frequencies are compared with actual measurements in the frequency domain, i.e. harmonic components. The key frequencies of the harmonic content are used to predict the expected profile on the ground part.

2 The study of acoustic emission in plunge grinding

AE is an elastic wave that is generated when the workpiece is under the loading action of the cutting grits, due to interfacial and internal friction and structural modification. The wave generated is transmitted from the contact zone through the components of the machine structure [22, 23]. In grinding processes, the main source of the AE signal is the mechanical stress applied by the wheel on the workpiece in the grinding zone [24]. The chipping action of the abrasive grits on the workpiece surface generates a multitude of acoustic waves, which are transmitted to the sensor through the centres and the tailstock of the machine. The machining condition is reflected in the signal through the magnitude of the acoustic emission, which varies with the intensity of the cutting, e.g. rough, medium or fine grinding. The key information of the machining process and its condition is buried in the AE signal. To extract any information of interest from the AE signals, it is important to identify the frequency bandwidth and study the signal in detail.

Susic and Grabec showed that intensive changes of the AE signal relate to the grinding condition; thus, the ground surface roughness could be estimated based on the measured signal with a profile correlation function [25]. A strong chatter vibration in grinding is also reflected in the recorded RMS AE signal. As vibration can generate waviness on the workpiece, the AE signal was also used to study the roundness profile [26]. A comprehensive study of the chatter vibration, wheel surface and workpiece quality in cylindrical plunge grinding based on the AE signal was carried out recently [27].

In roundness measurement systems, the roundness of the part is also given as harmonic components. Generally, the frequency span given by the measurement machine is of low frequency (500 Hz and below). This is because the roundness profile deals with the waviness but not with the surface roughness, which is always of higher frequency. Fricker [8] and Li and Shin [28] also indicated part profiles of frequency below 300 Hz. The part roundness profile is expressed in undulations per revolution. Therefore, lower frequency components are mainly targeted by the measurement equipment, but higher frequency components tend to ride on top of lower carriers. In most cases, the provided frequency profile is in the range of 300 Hz [8, 28]. Therefore, this work studies the AE signal along the grinding process using the fast Fourier transform (FFT) with a particular focus on frequencies below 300 Hz.
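The sub-300 Hz screening can be sketched numerically. The code below is a hypothetical illustration, not the paper's implementation: it uses a naive DFT (in place of the FFT) on a synthetic signal built from a 54 Hz and a 243 Hz component (the frequencies reported later for fine infeed) plus a 1,252 Hz term standing in for the workpiece natural frequency, and shows that only the low-frequency components fall inside the 300 Hz window.

```python
import math

def dft_magnitude(signal, sample_rate, max_freq=300):
    """Magnitude spectrum at 1 Hz steps from 0 to max_freq.
    Naive O(N*K) DFT, adequate for a short illustration signal."""
    n = len(signal)
    spectrum = {}
    for f in range(max_freq + 1):
        re = sum(x * math.cos(2 * math.pi * f * k / sample_rate)
                 for k, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * f * k / sample_rate)
                 for k, x in enumerate(signal))
        spectrum[f] = 2.0 * math.hypot(re, im) / n
    return spectrum

fs = 3000                        # Hz, illustrative sampling rate
t = [k / fs for k in range(fs)]  # 1 s of data
signal = [math.sin(2 * math.pi * 54 * x)
          + 0.5 * math.sin(2 * math.pi * 243 * x)
          + 0.3 * math.sin(2 * math.pi * 1252 * x)  # outside the window
          for x in t]

spectrum = dft_magnitude(signal, fs)
peaks = sorted(f for f, m in spectrum.items() if m > 0.2)
print(peaks)
```

With 1 s of data the bins fall on integer frequencies, so the 54 Hz and 243 Hz components are recovered exactly, while the 1,252 Hz natural-frequency term never enters the scanned band, mirroring the argument made in the text.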
This allowed for a direct comparison between the results from this investigation and the actual roundness measurements.

Figure 1 illustrates the equipment used in this study, where (a) is the configuration of the grinding machine with the location of the sensors and (b) is the roundness measurement machine. To improve signal transmission, the coating of the tailstock was removed from the location of the sensors, as shown in this figure.

Fig. 1 Experimental equipment: (a) grinding machine and sensor configuration, (b) Talyrond 210 roundness measurement system

During this study, observations of the shape of the recorded AE and the signal of spindle power indicated that there are three main phases in a typical cylindrical plunge grinding cycle, i.e. before grinding, actual grinding and dwell. In this work, the words "dwell" and "dwelling" are used to describe the "spark out" phase where the infeed stops and the grinding wheel enters a dwelling stage. For short notation, "dwell" is used in most figures.

First phase (before grinding): at the beginning of the process, the grinding wheel approaches the workpiece in a fast infeed without any physical contact between the wheel and the workpiece.

Second phase (actual grinding): when the grinding wheel gets very close to the workpiece, the rapid feed changes to the programmed infeed value, then the grinding wheel gradually gets into contact with the workpiece. The phase starts with the first contact of the wheel with the part and runs until the targeted diameter is reached.

Third phase (dwell or spark out): when the target diameter is reached, the infeed stops and the wheel stays in contact with the part. The duration of the dwelling process varies depending on the grinding conditions and is intended to remove the leftover material on the part due to mechanical and thermal deflection and to reduce the out-of-roundness.
The grinding wheel retracts from the workpiece at the end of the programmed spark out (dwell).

In this study, the power and AE signals were recorded simultaneously; however, the acceleration of the tailstock was also recorded for further investigation. The recorded signals are illustrated in Fig. 2 with a delimitation of the three phases.

Fig. 2 Recorded power and acoustic emission signals with process phases

In addition, the actual grinding phase was subdivided to introduce the notions of "grinding-in", "steady grinding" and "pre-dwell" as depicted in Fig. 3. The steady grinding ends with a pre-dwell period. There is a transition state between the grinding-in and the steady grinding states; this is where the cutting process starts entering the steady state. This is illustrated by an ellipse in Fig. 2. During the grinding-in, the depth of cut increases from zero to a constant value per revolution, then the steady-state grinding runs under a constant depth of cut. The pre-dwell section is not an obvious technological phase; rather, it is a tool used in this study.

To aid the signal processing techniques, especially the fast Fourier transform and Yule Walker methods, a referenced sampling was introduced using an RPM pickup (see workpiece rotation in Fig. 3). Recording the workpiece rotation simultaneously with the AE signal helped partitioning the signal to reduce processing time and to study the time-varying process in the grinding.

Fig. 3 Typical AE signal for one full grinding cycle with RPM output

3 Simulation and modelling

3.1 Workpiece response

In this investigation, it was necessary to filter out from the recorded signals the frequencies of other parts of the grinding machine, especially the natural frequency of the workpiece. Therefore, the workpiece response was studied using finite element analysis (FEA) and an experimental impact test to identify its natural frequency. The result of this study is depicted in Fig. 4, where it is seen that the natural frequency of the workpiece is 1,252 Hz. The outputs of the impact test and the FEA are in good agreement and show that the natural frequency of the workpiece is over 1 kHz; consequently, it will not appear in the range of low frequencies of interest.

Fig. 4 Workpiece response: (a) experiment and (b) FEA

3.2 Process modelling

Designating the wheel rotational frequency by fs and the workpiece rotational frequency by fw, the ratio of these two entities was expressed as follows:

β = fs / fw   (1)

The notion of the frequency ratio (β) helps in understanding the generation of the workpiece profile, as it relates key process parameters and defines the fundamental harmonic, which affects the part profile. In this study, it was found that the wheel-workpiece frequency ratio has a direct effect on the workpiece roundness, as it constitutes the fundamental harmonic for this specific machining configuration.

During the grinding process, there is a relative lag between the grinding wheel and the workpiece due to the difference in their rotational frequencies. This difference (δ) is numerically equal to the decimal part of the frequency ratio. This causes the currently forming wave to creep with reference to the wave formed in the previous revolution of the part. By expressing the wheel angular speed as ω, and the decimal part of the frequency ratio in Eq. (1) as δ, the relative shift φ of the wave on the workpiece surface was defined as follows:

φ = 2πδ / β   (2)

Consequently, the dynamics of the waviness Ω formed with time t at the surface of the part can be expressed as follows:

Ω = sin(ωt + φ)   (3)

Therefore, the equation of the wave generated by the wheel at the surface of the workpiece was derived as follows:

Ω = sin(ωt + 2πδ/β)   (4)

3.3 Simulation of the workpiece profile

During the roundness measurement process, the machine uses a single trace of the stylus on the workpiece circumference to generate the profile of the workpiece. Here, the stylus is in direct contact with the measured part. However, in this study, an attempt is made for the first time to predict the final workpiece profile using process signatures extracted from the recorded signals. The link between the prediction model and the grinding process is the sensor, which collects the signal from the entire process. Therefore, the model predicts an average workpiece profile, in contrast to the measuring machine, which gives only a single trace on the part. The procedure of capturing and extracting the process signature is schematically illustrated in Fig. 5.

The procedure works as follows: throughout the grinding process, the acoustic emission, vibration and RPM sensors record the signals. The signals are processed using various techniques (e.g. FFT) to obtain the system response in the frequency domain. The model extracts process-inherent key dominant frequencies, and uses these frequencies and their respective amplitudes to generate the expected profile of the workpiece.

The following expression in Eq. (5) is used to predict the final profile of the ground part:

Π = Σ_{i=1}^{n} [αi cos(2π fi t)] + rand(t)   (5)

where fi is the i-th dominant frequency with an amplitude of αi, and t is the time. rand(t) is the added random noise to incorporate the randomness of the grits' cutting actions.

4 Experimental work

In this investigation, the response of the machine tool was studied at different stages, namely idle; running, by switching its components on one by one and recording the signal from one single location; and finally in operating conditions while grinding.
This allowed identifying and discriminating frequency components belonging to the machine tool structure from those frequencies induced by noise and interference from nearby operating machinery.

An analogue-to-digital (A/D) converter (NI 6110) was used to record the analogue signals from the power of the motor, the acceleration and the acoustic emission sensors through the tailstock. This A/D device had four channels with a sampling rate of up to 5 MS/s per channel, providing a total sampling rate of 20 MS/s. This device allowed for simultaneous four-channel sampling of analogue input voltages in the range of ±5 mV to 42 V. The LabView software was used to control the data acquisition process during the experiments. To identify the most suitable sampling, the signals were recorded at various sampling rates. The recorded signal was processed using MATLAB.

Sets of workpiece batches were ground using rough, medium and fine infeed. The grinding wheel speed was 35 m/s, and the workpiece was rotating at 100 rpm. In this experiment, a dwell or spark out of 10 s was applied to all grinding cycles. In total, 220 μm of material was removed from each part.

The ground parts were allowed to settle down for 24 h at 19±1 °C in a temperature-controlled room before the measurements were taken. The roundness profiles of the ground parts were measured using a Talyrond 210, illustrated in Fig. 1. A typical measured workpiece roundness is illustrated in Fig. 6, where (a) is the roundness profile and (b) is the corresponding linear profile obtained by dissecting the round profile and expanding it in a line.

Fig. 5 Modelling pseudo-algorithm
Fig. 6 Typical workpiece roundness measurement using Talyrond 210: (a) roundness profile, (b) corresponding linear profile

5 Results

In this work, various signal processing techniques were used to study the recorded signals as described above. In order to extract the information of the workpiece roundness profile, the acoustic emission signal was scrutinised using the above-mentioned partitioning technique. Each portion of the signal was analysed using the FFT and Yule Walker (YW) methods. However, the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT) were able to handle full grinding cycle signals.

Figures 7 and 8 illustrate a typical power spectrum using the STFT and CWT of a full grinding cycle. Similar outputs were obtained with the FFT and YW methods; however, these last two methods required signal partitioning due to the computational window size.

Figure 8 illustrates the three phases of a grinding cycle, where the frequency spectrum is given with time span along the grinding cycle. It is seen in this picture, as in Fig. 7, that process-inherent frequencies appeared only during the "actual grinding" phase and partially in the "dwell or spark out" period. Comparing STFT and CWT, it is observed that the STFT (Fig. 7) provided an aggregate frequency spectrum, whereas the CWT resolved each individual frequency (Fig. 8). This improved resolution allows identifying the birth of lobes in time within the grinding cycle. It opens an opportunity for studying the workpiece profile in the frequency domain. It is seen in both figures that the parasitic 50 Hz can be well discriminated from the process frequency. Various frequencies up to 500 Hz that characterise the workpiece profile are picked up in the actual grinding phase. Frequencies that dominate the spectrum towards the end of the actual grinding will potentially form and reside on the final part profile.

Figure 9 presents the results of AE, where the sectioned signal was analysed in the frequency domain using FFT to extract the frequencies of interest. This picture displays a waterfall plot of the frequency spectrum in each phase of the grinding cycle.

Fig. 7 Full grinding cycle AE signal frequency spectrum using STFT
Fig. 8 Full grinding cycle AE signal frequency spectrum using CWT
Fig. 9 Waterfall plot of frequency spectrum of a full grinding cycle

This study focused on the detection of process-inherent frequencies, with less attention to the actual value of the magnitude, as it is the subject of another investigation. It is observed in the "before grinding" section of the signal that nothing happens in the frequency domain; hence, no frequency peaks were detected. In the "grinding-in" section, once the wheel hits the workpiece, several frequency peaks appear in the signal, characterising special events in the grinding process. The amplitudes of these frequencies increase as the process evolves into "actual grinding" due to the cutting intensity, and diminish towards the end of the grinding cycle (spark out). In this figure, the transition between the grinding phases can be observed from the variations in the frequencies and their amplitudes along the progression of the process. During the actual grinding, due to the generated vibration and the increase of the material removal, high peaks were detected. Fewer frequency components and small amplitudes in the dwell period are due to reduced grain activity, because there is no actual infeed of the wheel and the workpiece enters a relaxation stage while the wheel removes only leftover material caused by wheel/workpiece deflection.

Comparing the detected frequencies in the AE signal with the off-line measurement, it was identified that there is a factor of 0.6 between the two sets of results in this particular test. This factor varies depending on the machining configuration. Using this factor, all the detected harmonics from the frequency analysis were correlated to those from the roundness measurement.
Figure 12 gives a sample of comparative results for fine infeed grinding, showing the detected frequencies and their corresponding measured harmonics.

Fig. 10 Frequency content of the signal in dwell phase (fine infeed)
Fig. 11 Harmonic profile from the actual measurement (fine infeed)
Fig. 12 Extracted harmonics and measured values (fine infeed)

For example, multiplying the frequency 54 in Fig. 10 (frequency analysis) by this factor (0.6) provides the value 32.4 (33), which is the harmonic detected by the measurement machine in Fig. 11. It is seen that the major lobes (33) formed on the ground workpiece in Fig. 12, as well as the other components, i.e. 48, 82, 115 and 148, were clearly detected in the AE signal as 78, 136, 190 and 248 Hz.

It is worth mentioning that the actual magnitude of the power spectrum of the detected frequencies is not in the scope of the work presented here. This is because this work focused on the detection of process-inherent frequencies in order to develop a control strategy to improve the roundness of the part. The control strategy and the actual magnitude are considered in the next phase, where the system will be calibrated.

The study of the frequencies in fine infeed showed that during the dwelling period (spark out), the amplitude of the detected harmonics decreases drastically. It is observed that in spark out (dwell), the amplitude of the 243 Hz component, which was dominating throughout the cycle, dropped, leaving 54 Hz as the dominant component in the last phase in Fig. 10. This section carries important information on the final workpiece profile. The number of lobes formed on the workpiece is now defined as the product of the extracted frequencies with the defined factor. This holds true for any frequency detected and for a given machining parameter configuration. The factor of 0.6 given here is adequate for the specific experimental set-up used in these particular tests.
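The frequency-to-lobe conversion just described is a single multiplication per detected component. The helper below is our own naming (a minimal sketch); the 0.6 factor and the detected frequencies are the values quoted in the text for fine infeed.

```python
def undulations_per_rev(detected_hz, factor=0.6):
    """Convert AE-detected frequencies (Hz) into expected undulations
    per revolution using the set-up-specific wheel/workpiece factor."""
    return [round(f * factor, 1) for f in detected_hz]

# Frequencies detected in the AE signal for fine infeed
detected = [54, 78, 136, 190, 248]
print(undulations_per_rev(detected))
# These correspond to the measured harmonics 33, 48, 82, 115 and 148.
```

Because the factor is specific to one machining configuration, the same helper would be re-parameterised for a different set-up.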
However, it was identified that this factor varies as a function of process settings. The origin of this factor was identified but is not stated here, as it is of commercial value to the companies pursuing further development of this work. This study confirmed that the final profile of the workpiece is the result of overlapping waves of different frequencies in an additive process, as schematically illustrated in Fig. 13. In addition, these waves have a relative translation with reference to each other due to a shifting effect caused by the relative creep of the grinding wheel with reference to the rotating workpiece, as described in Eqs. (2)–(4). It is worth stating that this work does not study roughness, which is characterised by high frequency; it focuses instead on the formation of lobes, which are of lower frequency. This is evidenced in Fig. 14 for rough infeed grinding, where the AE signal was analysed per workpiece revolution during actual steady-state grinding.

[Fig. 13: Additive effect of key harmonics forming a profile on a workpiece. Fig. 14: Process frequency content in steady-state grinding with rough infeed.]

This figure shows how different frequencies appear or disappear from revolution to revolution due to the shifting and overlapping effect. Also, the amplitude of these frequencies varies along the process. However, in fine infeed, it was observed that the process is dominated by two high peaks at 54 and 243 Hz (the second and ninth harmonics in Fig. 12), which appeared throughout the full grinding cycle. Using the wave-additive property and applying Eq. (5) allowed the expected workpiece profile to be predicted from the information extracted from the signal in the frequency domain. One example is shown in Fig. 15 in the form of a linear profile, obtained by dissecting the circular profile and extending it along a line of 360°. The measurement machine chooses an arbitrary point and dissects the profile. Here, Fig. 15a is the linear roundness profile of the workpiece obtained from the actual roundness measurement system, and Fig. 15b is the predicted (simulated) workpiece profile using the frequency components extracted from the AE signal. The point where the measuring machine cuts the profile and sets the origin of the axis at 0.0° is unknown to the machine operator; therefore, the results in Fig. 15a are shifted relative to Fig. 15b by an unknown value. Nevertheless, there is good agreement between the two profiles in terms of the surface undulation per revolution. This prediction will be used in the control strategy to improve the part profile well before the process enters the spark-out phase.

[Fig. 15: Linear workpiece profile: (a) actual measurement; (b) predicted (simulated) profile.]

6 Discussion

The results show that the major dominant frequency detected in the AE signal is of importance because it indicates the number of major lobes formed on the workpiece. The other frequency components represent the small peaks on the workpiece surface. Actual workpiece measurements supported this finding. The workpiece profile information extracted using the techniques presented here provides room for the development of a control strategy to improve workpiece roundness. This study showed that the formation of the workpiece profile is a function of the process parameters, in which the wheel and the workpiece play the key roles. At high rotating speeds, a slight unbalance of the wheel leads to a high eccentric force and hence uneven stock removal. This is magnified by the effect of regular or irregular imprints on the workpiece surface. The shifting of the wheel relative to the workpiece leads to the generation of various waves on the workpiece.
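The additive formation of the profile illustrated in Fig. 13 can be sketched by superposing a few lobing harmonics around one revolution. In the snippet below only the lobe counts come from the measurements discussed above; the amplitudes and phases are invented for the illustration:

```python
import numpy as np

def predict_profile(components, n=360):
    """Superpose (lobes, amplitude, phase) harmonics into a simulated
    linear roundness profile sampled at n points over one revolution."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    profile = np.zeros(n)
    for lobes, amplitude, phase in components:
        profile += amplitude * np.sin(lobes * theta + phase)
    return theta, profile

# Dominant 33-lobe wave plus two smaller components; the amplitudes
# (in micrometres) and phases are illustrative values only.
components = [(33, 3.0, 0.0), (48, 0.8, 1.1), (82, 0.4, 2.3)]
theta, profile = predict_profile(components)
print(round(float(profile.max()), 2), round(float(profile.min()), 2))
```

Plotting `profile` against degrees (0–360°) yields the same kind of dissected linear profile shown in Fig. 15b; shifting the phases reproduces the unknown origin offset between measurement and prediction.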
It was observed in fine infeed that the machining conditions are relatively stable; therefore, there were no drastic changes in the AE signal in terms of frequencies and amplitudes. However, in rough infeed, with increased depth of cut and longer contact length, there is a tendency towards vibrations of high amplitude. This leads to radical changes in the cutting intensity in a regular pattern and at the pace of the fundamental frequency. An example is observed in Fig. 14, where certain frequencies appear constantly in the last three rotations (see the sixth, seventh and eighth rotations). If there is no shift between successive rotations, the matching of dominant frequencies may cause a beating effect, as the wheel and the workpiece would make their contacts at the same points. Consequently, the formed lobes would become more apparent around the workpiece. An example of the beating effect is seen in the AE signal in Fig. 3, where the amplitude of the signal is modulated. A low infeed rate has a small depth of cut and short contact length and produces an increased number of lobes. The opposite is true for a high infeed rate, where a small number of lobes is generated with higher amplitude. The higher the number of lobes, the smaller the interval between them and the smoother the profile formed. Thus, a workpiece produced using the rough infeed has higher peaks with a lower number of lobes, while with the fine infeed the number of lobes is higher and the peaks are of smaller height.

7 Conclusions

This paper provides some key relations between the process and the acoustic emissions generated during machining. Process-inherent frequencies were successfully extracted from the AE signal and compared with the information from the measured workpiece profile. The obtained results were verified using data from the actual roundness measurement. A range of grinding parameters was covered, and the outcomes correlated well with the measurements.
A fundamental frequency ratio was established. A mathematical expression was derived to predict the expected profile of the machined part. The relationship between the frequencies buried in the AE and those

Graduation Project Foreign Literature and Translation 4


What is a circulating fluidized bed boiler?

Answer: CIRCULATING FLUIDIZED BED (CFB) BOILER TECHNOLOGY ... a unique type of technology that converts various sources of fuel into energy.

During normal operation, CFB technology does not use high-temperature gas, coal or oil burners in its furnace; instead, it uses fluidization to circulate the fuel particles as they burn in a low-temperature combustion process. The low burning temperature minimizes the formation of nitrogen oxides. The fuel is recycled over and over, which results in high fuel-burning efficiency, capture of certain gaseous emissions, and transfer of the fuel's heat energy into high-quality steam used to produce power. The vigorous mixing, long burning time, and low-temperature combustion process allow CFBs to cleanly burn virtually any combustible material. CFBs capture and control gaseous emissions as required by the EPA during the conversion process, generally eliminating the need for additional emission control equipment. CFB technology has proven to be very capable of converting fuels with substantially lower BTU (British Thermal Unit) heating values, such as waste coal. Simply put, by suspending (circulating) low-quality fuel in air, it can be ignited and swirled inside the boiler like a fluid, hence the "fluidized bed" part of the name.
By circulating the burning fuel in a tall boiler furnace until all of the available carbon is converted to energy, even a low-BTU source such as coal refuse can be effectively and efficiently utilized. Accordingly, even coal refuse that had been randomly discarded and unused for decades can now be used and converted into viable alternative energy ... coal that had never been considered "useful fuel" prior to the development of CFB technology. CFB units are inherently designed, and have proven over time, to cleanly convert low-BTU fuels into viable alternative energy.

Graduation Project Foreign Literature Original Text and Translation


Beijing Union University Graduation Project (Thesis) Task Sheet

Title: Design and Simulation of OFDM Modulation and Demodulation Technology
Major: Communication Engineering
Supervisor: Zhang Xuefen
College: College of Information
Student ID: 2011080331132
Class: 1101B
Name: Xu Jiaming

I. Foreign Language Original Text

Evolution Towards 5G Multi-tier Cellular Wireless Networks: An Interference Management Perspective

Ekram Hossain, Mehdi Rasti, Hina Tabassum, and Amr Abdelnasser

Abstract—The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, e.g., higher data rates, excellent end-to-end performance, and user coverage in hot-spots and crowded areas with lower latency, energy consumption and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum- and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g., power control, cell association) in these networks with shared spectrum access (i.e., when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context, a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.

Index Terms—5G cellular wireless, multi-tier networks, interference management, cell association, power control.

I. INTRODUCTION

To satisfy the ever-increasing demand for mobile broadband communications, the IMT-Advanced (IMT-A) standards were ratified by the International Telecommunications Union (ITU) in November 2010, and fourth generation (4G) wireless communication systems are currently being deployed worldwide. The standardization for LTE Rel-12, also known as LTE-B, is also ongoing and expected to be finalized in 2014. Nonetheless, existing wireless systems will not be able to deal with the thousand-fold increase in total mobile broadband data [1] contributed by new applications and services such as pervasive 3D multimedia, HDTV, VoIP, gaming, e-Health, and Car2x communication. In this context, fifth generation (5G) wireless communication technologies are expected to attain 1000 times higher mobile data volume per unit area, 10-100 times higher numbers of connected devices and user data rates, 10 times longer battery life, and 5 times reduced latency [2]. While for 4G networks the single-user average data rate is expected to be 1 Gbps, it is postulated that a cell data rate of the order of 10 Gbps will be a key attribute of 5G networks. 5G wireless networks are expected to be a mixture of network tiers of different sizes, transmit powers, backhaul connections, and different radio access technologies (RATs), accessed by unprecedented numbers of smart and heterogeneous wireless devices. This architectural enhancement, along with advanced physical communications technology such as high-order spatial multiplexing multiple-input multiple-output (MIMO) communications, will provide higher aggregate capacity for more simultaneous users, or a higher level of spectral efficiency, when compared to 4G networks. Radio resource and interference management will be a key research challenge in multi-tier and heterogeneous 5G cellular networks.
The traditional methods for radio resource and interference management (e.g., channel allocation, power control, cell association or load balancing) in single-tier networks (even some of those developed for two-tier networks) may not be efficient in this environment, and a new look into the interference management problem will be required. First, the article outlines the visions and requirements of 5G cellular wireless systems. Major research challenges are then highlighted from the perspective of interference management when the different network tiers share the same radio spectrum. A comparative analysis of the existing approaches for distributed cell association and power control (CAPC) is then provided, followed by a discussion of their limitations for 5G multi-tier cellular networks. Finally, a number of suggestions are provided to modify the existing CAPC schemes to overcome these limitations.

II. VISIONS AND REQUIREMENTS FOR 5G MULTI-TIER CELLULAR NETWORKS

5G mobile and wireless communication systems will require a mix of new system concepts to boost the spectral and energy efficiency. The visions and requirements for 5G wireless systems are outlined below.

· Data rate and latency: For dense urban areas, 5G networks are envisioned to enable an experienced data rate of 300 Mbps in the downlink and 60 Mbps in the uplink in 95% of locations and time [2]. The end-to-end latencies are expected to be of the order of 2 to 5 milliseconds.
The detailed requirements for different scenarios are listed in [2].

· Machine-type communication (MTC) devices: The number of traditional human-centric wireless devices with Internet connectivity (e.g., smart phones, super-phones, tablets) may be outnumbered by MTC devices, which can be used in vehicles, home appliances, surveillance devices, and sensors.

· Millimeter-wave communication: To satisfy the exponential increase in traffic and the addition of different devices and services, additional spectrum beyond what was previously allocated to the 4G standard is sought. The use of millimeter-wave frequency bands (e.g., the 28 GHz and 38 GHz bands) is a potential candidate to overcome the problem of scarce spectrum resources, since it allows transmission at wider bandwidths than the conventional 20 MHz channels of 4G systems.

· Multiple RATs: 5G is not about replacing the existing technologies; it is about enhancing and supporting them with new technologies [1]. In 5G systems, the existing RATs, including GSM (Global System for Mobile Communications), HSPA+ (Evolved High-Speed Packet Access), and LTE, will continue to evolve to provide superior system performance. They will also be accompanied by some new technologies (e.g., beyond LTE-Advanced).

· Base station (BS) densification: BS densification is an effective methodology to meet the requirements of 5G wireless networks. Specifically, in 5G networks there will be deployments of a large number of low-power nodes, relays, and device-to-device (D2D) communication links with much higher density than today's macrocell networks. Fig. 1 shows such a multi-tier network with a macrocell overlaid by relays, picocells, femtocells, and D2D links.
The adoption of multiple tiers in the cellular network architecture will result in better performance in terms of capacity, coverage, spectral efficiency, and total power consumption, provided that the inter-tier and intra-tier interference is well managed.

· Prioritized spectrum access: The notions of both traffic-based and tier-based priorities will exist in 5G networks. Traffic-based priority arises from the different requirements of the users (e.g., reliability and latency requirements, energy constraints), whereas tier-based priority is for users belonging to different network tiers. For example, with shared spectrum access among macrocells and femtocells in a two-tier network, femtocells create "dead zones" around them in the downlink for macro users. Protection should, thus, be guaranteed for the macro users. Consequently, the macro and femto users play the roles of high-priority users (HPUEs) and low-priority users (LPUEs), respectively. In the uplink direction, macrocell users at the cell edge typically transmit with high power, which generates high uplink interference to nearby femtocells. Therefore, in this case, the user priorities should be reversed. Another example is a D2D transmission, where different devices may opportunistically access the spectrum to establish a communication link between them, provided that the interference introduced to the cellular users remains below a given threshold. In this case, the D2D users play the role of LPUEs, whereas the cellular users play the role of HPUEs.

· Network-assisted D2D communication: In LTE Rel-12 and beyond, the focus will be on network-controlled D2D communications, where the macrocell BS performs control signaling in terms of synchronization, beacon signal configuration, and identity and security management [3]. This feature will extend in 5G networks to allow nodes other than the macrocell BS to have the control.
For example, consider a D2D link at the cell edge where the direct link between the D2D transmitter UE and the macrocell is in deep fade; then the relay node can be responsible for the control signaling of the D2D link (i.e., relay-aided D2D communication).

· Energy harvesting for energy-efficient communication: One of the main challenges in 5G wireless networks is to improve the energy efficiency of the battery-constrained wireless devices. To prolong battery lifetime as well as to improve energy efficiency, an appealing solution is to harvest energy from environmental energy sources (e.g., solar and wind energy). Also, energy can be harvested from ambient radio signals (i.e., RF energy harvesting) with reasonable efficiency over small distances. The harvested energy could be used for D2D communication or communication within a small cell. In this context, simultaneous wireless information and power transfer (SWIPT) is a promising technology for 5G wireless networks. However, practical circuits for harvesting energy are not yet available, since the conventional receiver architecture is designed for information transfer only and, thus, may not be optimal for SWIPT. This is due to the fact that information and power transfer operate with different power sensitivities at the receiver (e.g., -10 dBm and -60 dBm for energy and information receivers, respectively) [4]. Also, due to the potentially low efficiency of energy harvesting from ambient radio signals, a combination of different energy harvesting technologies may be required for macrocell communication.

III. INTERFERENCE MANAGEMENT CHALLENGES IN 5G MULTI-TIER NETWORKS

The key challenges for interference management in 5G multi-tier networks will arise due to the following reasons, which affect the interference dynamics in the uplink and downlink of the network: (i) heterogeneity and dense deployment of wireless devices, (ii) coverage and traffic load imbalance due to varying transmit powers of different BSs in the downlink, (iii) public or private access restrictions in different tiers that lead to diverse interference levels, and (iv) the priorities in accessing channels of different frequencies and resource allocation strategies. Moreover, the introduction of carrier aggregation, cooperation among BSs (e.g., by using coordinated multi-point transmission (CoMP)), as well as direct communication among users (e.g., D2D communication), may further complicate the dynamics of the interference. The above factors translate into the following key challenges.

[Fig. 1: A multi-tier network composed of macrocells, picocells, femtocells, relays, and D2D links. Arrows indicate wireless links, whereas the dashed lines denote the backhaul connections.]

· Designing optimized cell association and power control (CAPC) methods for multi-tier networks: Optimizing the cell associations and transmit powers of users in the uplink, or the transmit powers of BSs in the downlink, are classical techniques to simultaneously enhance the system performance in various aspects such as interference mitigation, throughput maximization, and reduction in power consumption. Typically, the former is needed to maximize spectral efficiency, whereas the latter is required to minimize the power (and hence minimize the interference to other links) while keeping the desired link quality. Since it is not efficient to connect to a congested BS despite its high achieved signal-to-interference ratio (SIR), cell association should also consider the status (load) of each BS and the channel state of each UE.
The increase in the number of available BSs, along with multi-point transmissions and carrier aggregation, provides multiple degrees of freedom for resource allocation and cell-selection strategies. For power control, the priority of different tiers also needs to be maintained by incorporating the quality constraints of HPUEs. Unlike the downlink, the transmission power in the uplink depends on the user's battery power, irrespective of the type of BS with which the user is connected. The battery power does not vary significantly from user to user; therefore, the problems of coverage and traffic load imbalance may not exist in the uplink. This leads to considerable asymmetries between the uplink and downlink user association policies. Consequently, the optimal solutions for downlink CAPC problems may not be optimal for the uplink. It is therefore necessary to develop joint optimization frameworks that can provide near-optimal, if not optimal, solutions for both uplink and downlink. Moreover, to deal with this issue of asymmetry, separate uplink and downlink optimal solutions are also useful insofar as mobile users can connect to two different BSs for uplink and downlink transmissions, which is expected to be the case in 5G multi-tier cellular networks [3].

· Designing efficient methods to support simultaneous association to multiple BSs: Compared to existing CAPC schemes, in which each user can associate to a single BS, simultaneous connectivity to several BSs could be possible in a 5G multi-tier network. This would enhance the system throughput and reduce the outage ratio by effectively utilizing the available resources, particularly for cell-edge users.
Thus the existing CAPC schemes should be extended to efficiently support simultaneous association of a user to multiple BSs and to determine under which conditions a given UE is associated to which BSs in the uplink and/or downlink.

· Designing efficient methods for cooperation and coordination among multiple tiers: Cooperation and coordination among different tiers will be a key requirement to mitigate interference in 5G networks. Cooperation between the macrocell and small cells was proposed for LTE Rel-12 in the context of the soft cell, where UEs are allowed to have dual connectivity by simultaneously connecting to the macrocell and the small cell for uplink and downlink communications, or vice versa [3]. As mentioned before in the context of the asymmetry of transmission power in the uplink and downlink, a UE may experience the highest downlink power transmission from the macrocell, whereas the highest uplink path gain may be to a nearby small cell. In this case, the UE can associate to the macrocell in the downlink and to the small cell in the uplink. CoMP schemes based on cooperation among BSs in different tiers (e.g., cooperation between macrocells and small cells) can be developed to mitigate interference in the network. Such schemes need to be adaptive and consider user locations as well as channel conditions to maximize the spectral and energy efficiency of the network. This cooperation, however, requires tight integration of low-power nodes into the network through the use of reliable, fast and low-latency backhaul connections, which will be a major technical issue for upcoming multi-tier 5G networks. In the remainder of this article, we will focus on the review of existing power control and cell association strategies to demonstrate their limitations for interference management in 5G multi-tier prioritized cellular networks (i.e., where users in different tiers have different priorities depending on location, application requirements, and so on).
Design guidelines will then be provided to overcome these limitations. Note that issues such as channel scheduling in the frequency domain, time-domain interference coordination techniques (e.g., based on almost blank subframes), coordinated multi-point transmission, and spatial-domain techniques (e.g., based on smart antenna techniques) are not considered in this article.

IV. DISTRIBUTED CELL ASSOCIATION AND POWER CONTROL SCHEMES: CURRENT STATE OF THE ART

A. Distributed Cell Association Schemes

The state-of-the-art cell association schemes that are currently under investigation for multi-tier cellular networks are reviewed, and their limitations are explained below.

· Reference Signal Received Power (RSRP)-based scheme [5]: A user is associated with the BS whose signal is received with the largest average strength. A variant of RSRP, i.e., Reference Signal Received Quality (RSRQ), is also used for cell selection in LTE single-tier networks; this is similar to signal-to-interference ratio (SIR)-based cell selection, where a user selects the BS that provides the highest SIR. In single-tier networks with uniform traffic, such a criterion may maximize the network throughput. However, due to the varying transmit powers of different BSs in the downlink of multi-tier networks, such cell association policies can create a huge traffic load imbalance. This phenomenon leads to overloading of high-power tiers while leaving low-power tiers underutilized.

· Bias-based Cell Range Expansion (CRE) [6]: The idea of CRE has emerged as a remedy to the problem of load imbalance in the downlink. It aims to increase the downlink coverage footprint of low-power BSs by adding a positive bias to their signal strengths (i.e., RSRP or RSRQ). Such BSs are referred to as biased BSs. This biasing allows more users to associate with low-power (biased) BSs and thereby achieves better cell load balancing.
Nevertheless, such off-loaded users may experience an unfavorable channel from the biased BSs and strong interference from the unbiased high-power BSs. The trade-off between cell load balancing and system throughput therefore strictly depends on the selected bias values, which need to be optimized in order to maximize the system utility. In this context, a baseline approach in LTE-Advanced is to "orthogonalize" the transmissions of the biased and unbiased BSs in the time/frequency domain such that an interference-free zone is created.

· Association based on Almost Blank Sub-frame (ABS) ratio [7]: The ABS technique uses time-domain orthogonalization, in which specific sub-frames are left blank by the unbiased BS and off-loaded users are scheduled within these sub-frames to avoid inter-tier interference. This improves the overall throughput of the off-loaded users by sacrificing the time sub-frames and throughput of the unbiased BS. Larger bias values result in a higher degree of offloading and thus require more blank sub-frames to protect the off-loaded users. Given a specific number of ABSs, or the ratio of blank to total sub-frames (i.e., the ABS ratio) that ensures the minimum throughput of the unbiased BSs, this criterion allows a user to select the cell with the maximum ABS ratio and may even associate the user with the unbiased BS if the ABS ratio decreases significantly.

A qualitative comparison among these cell association schemes is given in Table I. The specific key terms used in Table I are defined as follows: channel-aware schemes depend on knowledge of the instantaneous channel and transmit power at the receiver. Interference-aware schemes depend on knowledge of the instantaneous interference at the receiver. Load-aware schemes depend on the traffic load information (e.g., number of users). Resource-aware schemes require the resource allocation information (i.e., the chance of getting a channel or the proportion of resources available in a cell).
Priority-aware schemes require information regarding the priority of different tiers and allow protection of HPUEs. All of the above-mentioned schemes are independent, distributed, and can be combined with any type of power control scheme. Although simple and tractable, the standard cell association schemes, i.e., RSRP, RSRQ, and CRE, are unable to guarantee optimum performance in multi-tier networks unless critical parameters, such as the bias values, the transmit powers of users in the uplink and of BSs in the downlink, resource partitioning, etc., are optimized.

B. Distributed Power Control Schemes

From a user's point of view, the objective of power control is to support the user with its minimum acceptable throughput, whereas from a system's point of view it is to maximize the aggregate throughput. In the former case, it is required to compensate for the near-far effect by allocating higher power levels to users with poor channels than to UEs with good channels. In the latter case, high power levels are allocated to users with the best channels, and very low (even zero) power levels are allocated to the others. The aggregate transmit power, the outage ratio, and the aggregate throughput (i.e., the sum of the rates achievable by the UEs) are the most important measures for comparing the performance of different power control schemes. The outage ratio of a particular tier can be expressed as the ratio of the number of UEs in that tier supported at their minimum target SIRs to the total number of UEs in that tier. Numerous power control schemes have been proposed in the literature for single-tier cellular wireless networks. According to their objective functions and assumptions, these schemes can be classified into the following four types.

· Target-SIR-tracking power control (TPC) [8]: In TPC, each UE tracks its own predefined fixed target SIR.
TPC enables the UEs to achieve their fixed target SIRs at minimal aggregate transmit power, assuming that the target SIRs are feasible. However, when the system is infeasible, all non-supported UEs (those that cannot reach their target SIRs) transmit at their maximum power, which causes unnecessary power consumption and interference to other users and, therefore, increases the number of non-supported UEs.

[Table I: Qualitative comparison of existing cell association schemes for multi-tier networks.]

· TPC with gradual removal (TPC-GR) [9], [10], [11]: To decrease the outage ratio of TPC in an infeasible system, a number of TPC-GR algorithms were proposed, in which non-supported users reduce their transmit power [10] or are gradually removed [9], [11].

· Opportunistic power control (OPC) [12]: From the system's point of view, OPC allocates high power levels to users with good channels (experiencing high path gains and low interference levels) and very low power to users with poor channels. In this algorithm, a small difference in path gains between two users may lead to a large difference in their actual throughputs [12]. OPC improves the system performance at the cost of reduced fairness among users.

· Dynamic-SIR-tracking power control (DTPC) [13]: When the target-SIR requirements of the users are feasible, TPC causes users to exactly hit their fixed target SIRs even if additional resources are still available that could otherwise be used to achieve higher SIRs (and thus better throughputs). Besides, the fixed-target-SIR assignment is suitable only for voice service, for which reaching a SIR value higher than the given target does not affect the service quality significantly. In contrast, for data services, a higher SIR results in a better throughput, which is desirable. The DTPC algorithm was proposed in [13] to address the problem of system throughput maximization subject to a given feasible lower bound on the achieved SIRs of all users in cellular networks.
In DTPC, each user dynamically sets its target SIR by using TPC and OPC in a selective manner. It was shown that when the minimum acceptable target SIRs are feasible, the actual SIRs received by some users can be dynamically increased (to values higher than their minimum acceptable target SIRs) in a distributed manner, so long as the required resources are available and the system remains feasible (meaning that the minimum target SIRs of the remaining users are still guaranteed). This enhances the system throughput (at the cost of higher power consumption) compared to TPC.

The aforementioned state-of-the-art distributed power control schemes, designed to satisfy various objectives in single-tier wireless cellular networks, cannot address the interference management problem in prioritized 5G multi-tier networks. This is because they do not guarantee that the total interference caused by the LPUEs to the HPUEs remains within tolerable limits, which can lead to SIR outage for some HPUEs. Thus, the existing schemes need to be modified so that LPUEs track their objectives while limiting their transmit power to maintain a given interference threshold at the HPUEs. A qualitative comparison among various state-of-the-art power control problems with different objectives and constraints, and their corresponding existing distributed solutions, is shown in Table II. This table also shows how these schemes can be modified and generalized for designing CAPC schemes for prioritized 5G multi-tier networks.

C. Joint Cell Association and Power Control Schemes

Very few works in the literature have considered the problem of distributed joint CAPC (e.g., [14]) with guaranteed convergence.
For single-tier networks, a distributed framework for the uplink was developed in [14], which performs cell selection based on the effective interference (the ratio of instantaneous interference to channel gain) at the BSs and minimizes the aggregate uplink transmit power while attaining the users' desired SIR targets. Following this approach, a unified distributed algorithm was designed in [15] for two-tier networks. Its cell association is based on the effective-interference metric and is integrated with a hybrid power control (HPC) scheme, which combines the TPC and OPC algorithms.

Although the above frameworks are distributed and optimal or suboptimal with guaranteed convergence in conventional networks, they may not be directly applicable to 5G multi-tier networks. The interference dynamics in multi-tier networks depend significantly on the channel access protocols (or scheduling), QoS requirements, and priorities at the different tiers. Thus, the existing CAPC optimization problems should be modified to include various types of cell selection methods (some examples are provided in Table I) and power control methods with different objectives and interference constraints (e.g., interference constraints for macrocell UEs, picocell UEs, or D2D receiver UEs). A qualitative comparison of the existing CAPC schemes, along with the open research areas, is given in Table II. A discussion of how these open problems can be addressed is provided in the next section.

V. DESIGN GUIDELINES FOR DISTRIBUTED CAPC SCHEMES IN 5G MULTI-TIER NETWORKS

Interference management in 5G networks requires efficient distributed CAPC schemes such that each user can possibly connect simultaneously to multiple BSs (which can differ between uplink and downlink), while achieving load balancing across cells and guaranteeing interference protection for the HPUEs. In what follows, we provide a number of suggestions for modifying the existing schemes.
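The effective-interference cell selection of [14] and [15] can be illustrated with a small sketch. The formulation below, in which the interference seen at a BS is computed by excluding the user's own received signal, is an assumption made for illustration, not the exact metric of those papers:

```python
import numpy as np

def effective_interference_association(G, p, noise):
    """Each user u picks the BS b minimizing its effective interference
    I_b / g_{u,b}: interference seen at BS b divided by the user's gain to b.
    G[u, b] is the uplink gain from user u to BS b; p[u] is u's current power."""
    n_users, _ = G.shape
    total_rx = G.T @ p + noise                 # total received power at each BS
    choice = np.empty(n_users, dtype=int)
    for u in range(n_users):
        interference = total_rx - G[u] * p[u]  # exclude user u's own signal
        choice[u] = np.argmin(interference / G[u])
    return choice

# Two users, two BSs: each user is much stronger at its own nearby BS.
G = np.array([[1.0, 0.1],
              [0.1, 1.0]])
choice = effective_interference_association(G, np.array([1.0, 1.0]), noise=0.01)
print(choice)  # each user selects its nearby BS
```

In a full CAPC iteration this selection step would alternate with a power control update such as TPC or HPC until both the association and the powers stop changing.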
A. Prioritized Power Control

To guarantee interference protection for HPUEs, a possible strategy is to modify the existing power control schemes listed in the first column of Table II so that the LPUEs limit their transmit power to keep the interference caused to the HPUEs below a predefined threshold, while tracking their own objectives. In other words, as long as the HPUEs are protected against the existence of LPUEs, the LPUEs can employ an existing distributed power control algorithm to satisfy a predefined goal. This offers some fruitful directions for future research and investigation, as stated in Table II. To address these open problems in a distributed manner, the existing schemes should be modified so that the LPUEs, in addition to setting their transmit power to track their objectives, limit their transmit power to keep their interference at the receivers of HPUEs below a given threshold. This could be implemented by having an HPUE send a command to its nearby LPUEs (similar to the closed-loop power control commands used to address the near-far problem) when the interference caused by the LPUEs to that HPUE exceeds a given threshold. We refer to this type of power control as prioritized power control. Note that the notion of priority, and thus the need for prioritized power control, exists implicitly in different scenarios of 5G networks, as briefly discussed in Section II. Along this line, some modified power control optimization problems for 5G multi-tier networks are formulated in the second column of Table II.

To compare the performance of the existing distributed power control algorithms, consider a prioritized multi-tier cellular wireless network in which a high-priority tier consisting of 3×3 macro cells, each covering an area of 1000 m × 1000 m, coexists with a low-priority tier consisting of n small cells per high-priority macro cell, each …
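A minimal sketch of this idea follows, under the simplifying assumption that each LPUE knows a single gain `g_to_hpue[i]` to the HPUE receiver it affects most, together with a threshold `i_thresh` signalled by that HPUE; both quantities are hypothetical and introduced here only for illustration:

```python
import numpy as np

def prioritized_tpc_step(p, gamma, target, p_max, g_to_hpue, i_thresh):
    """One TPC update for LPUEs with an interference cap: LPUE i's power is
    limited so that g_to_hpue[i] * p[i], its interference at the protected
    HPUE receiver, stays below the threshold i_thresh signalled by the HPUE."""
    p_cap = np.minimum(p_max, i_thresh / g_to_hpue)   # priority-imposed ceiling
    return np.minimum(p_cap, target / gamma * p)      # ordinary TPC step, capped

# Two LPUEs at SIR 2 tracking target 4: plain TPC would double both powers,
# but LPUE 0 sits close to an HPUE (high gain) and is capped instead.
p = np.array([1.0, 1.0])
new_p = prioritized_tpc_step(p, gamma=np.array([2.0, 2.0]), target=4.0,
                             p_max=10.0, g_to_hpue=np.array([0.5, 0.05]),
                             i_thresh=0.2)
print(new_p)  # LPUE 0 is capped at 0.4; LPUE 1 takes the full TPC step to 2.0
```

An LPUE close to an HPUE is thus forced well below the power that TPC alone would choose, trading its own SIR for HPUE protection, while distant LPUEs are unaffected; the same cap can be grafted onto OPC or DTPC updates.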

Graduation Project Foreign Literature Translation (Electronic Version)


7.1 INTRODUCTION

After lathes, milling machines are the most widely used machines in manufacturing. In milling, the workpiece is fed into a rotating milling cutter, which is a multi-point tool as shown in Fig. 7.1, unlike the lathe, which uses a single-point cutting tool. The tool used in milling is called the milling cutter.

Fig. 7.1 Schematic diagram of a milling operation

The milling process is characterised by:

(i) Interrupted cutting. Each of the cutting edges removes material for only a part of the rotation of the milling cutter. As a result, the cutting edge has time to cool before it removes material again. The milling operation is therefore much cooler than the turning operation, which allows much larger material removal rates.

(ii) Small size of chips. Though the size of the chips is small, a large amount of material is removed because of the multiple cutting edges in contact, and as a result the component is generally completed in a single pass, unlike the turning process, which requires a large number of cuts for finishing.

(iii) Variation in chip thickness. This contributes to non-steady-state cyclic cutting conditions: the cutting force varies during the contact of each cutting edge as the chip thickness varies from zero to its maximum size, or vice versa. This cyclic variation of the force can excite any of the natural frequencies of the machine tool system and is harmful to tool life and the surface finish generated.

A milling machine is one of the most versatile machine tools. It is adaptable to quantity production as well as to job shops and tool rooms. The versatility of milling comes from the large variety of accessories and tools available with milling machines. The typical tolerance expected from the process is about ±0.050 mm.

7.2 TYPES OF MILLING MACHINES

To satisfy various requirements, milling machines come in a number of sizes and varieties. In view of the large material removal rates, milling machines come with a very rigid spindle and large power.
The varieties of milling machines available are:

(i) Knee and column type
(a) horizontal
(b) vertical
(c) universal
(d) turret type

These are the general-purpose milling machines, which have a high degree of flexibility and are employed for all types of work, including batch manufacturing. A large variety of attachments to improve flexibility are available for this class of milling machines.

(ii) Production (bed) type
(a) simplex
(b) duplex
(c) triplex

These machines are generally meant for regular production involving large batch sizes. Their flexibility is relatively lower, which suits productivity enhancement.

(iii) Plano millers

These machines are used only for very large workpieces involving table travels measured in metres.

(iv) Special type
(a) rotary table
(b) drum type
(c) copy milling (die-sinking machines)
(d) keyway milling machines
(e) spline shaft milling machines

These machines provide special facilities to suit specific applications that are not catered for by the other classes of milling machines.

7.2.1 Knee and Column Milling Machines

The knee and column type is the most commonly used machine in view of its flexibility and easier setup. A typical machine construction is shown in Fig. 7.2 for the horizontal axis. The knee houses the feed mechanism and mounts the saddle and table. The table has T-slots running along the X-axis for the purpose of work holding. The table moves along the X-axis on the saddle, while the saddle moves along the Y-axis on the guideways provided on the knee. The feed is provided either manually with a hand wheel or automatically by the lead screw, which in turn is coupled to the main spindle drive. The knee can move up and down (Z-axis) on a dovetail provided on the column.

Fig. 7.2 Horizontal knee and column type milling machine

The massive column at the back of the machine houses the entire power train, including the motor and the spindle gearbox.
The power for feeding the table lead screw is taken from the main motor through a separate feed gearbox; sometimes a separate feed motor is provided for the feed gearbox as well. While the longitudinal and traverse motions are provided with automatic motion, the raising of the knee is generally done manually.

The spindle is located at the top end of the column. The arbour used to mount the milling cutters is mounted in the spindle and is supported at the other end, to take care of the heavy cutting forces, by means of an overarm with a bearing. As shown in Fig. 7.2, the overarm extends from the column with a rigid design. The spindle nose has a standard Morse taper of suitable size depending upon the machine size. The milling cutters are mounted on the arbour at any desired position, the rest of the length being filled by standard hardened collars of varying widths to fix the position of the cutter. The arbour is clamped in the spindle with the help of a draw bar and then fixed with nuts.

Milling machines are generally specified on the following basis:

(i) Size of the table, which specifies the actual working area on the table and relates to the maximum size of workpiece that can be accommodated.
(ii) Amount of table travel, which gives the maximum axis movement that is possible.
(iii) Horsepower of the spindle, which specifies the power of the spindle motor used. Smaller machines may come with 1 to 3 hp, while production machines may go from 10 to 50 hp.

Another type of knee and column milling machine is the vertical-axis type. Its construction is very similar to the horizontal-axis type, except for the spindle type and location. The vertical-axis milling machine is relatively more flexible (Fig. 7.4) and suitable for machining complex cavities such as die cavities in tool rooms. The vertical head is provided with a swivelling facility in the horizontal direction, whereby the cutter axis can be swivelled.
This is useful in tool rooms where more complex milling operations are carried out. The spindle is located in the vertical direction and is suitable for shank-mounted milling cutters such as end mills. In view of the location of the tool, setting up the workpiece and observing the machining operation are more convenient.

Fig. 7.3 Vertical knee and column type milling machine

Fig. 7.4 Some of the milling operations normally carried out on vertical-axis machines

The universal machine has a table that can be swivelled in a horizontal plane by about 45° to either the left or the right. This makes the universal machine suitable for milling spur and helical gears as well as worm gears and cams.

7.2.2 Bed Type Milling Machines

In production milling machines it is desirable to increase the metal removal rate. If this is done on conventional machines by increasing the depth of cut, there is a possibility of chatter. Hence another variety of milling machine, the bed type, is used; these are made more rugged and are capable of removing more material. The ruggedness is obtained as a consequence of reduced versatility. The table of a bed type machine is mounted directly on the bed and is provided with longitudinal motion only. The spindle moves along with the column to provide the cutting action. Simplex machines (Fig. 7.5) have only one spindle head, while duplex machines have two spindles (Fig. 7.6). The two spindles are located on either side of a heavy workpiece and remove material from both sides simultaneously.

Fig. 7.5 Simplex bed type milling machine

Fig. 7.6 Duplex bed type milling machine

7.3 MILLING CUTTERS

There is a large variety of milling cutters available to suit specific requirements.
The versatility of the milling machine is contributed to, to a great extent, by the variety of milling cutters that are available.

7.3.1 Types of Milling Cutters

Milling cutters are classified into various types based on a variety of methods.

(i) Based on construction:
(a) solid
(b) inserted-tooth type

(ii) Based on mounting:
(a) arbor mounted
(b) shank mounted
(c) nose mounted

(iii) Based on rotation:
(a) right-hand rotation (counter-clockwise)
(b) left-hand rotation (clockwise)

(iv) Based on helix:
(a) right-hand helix
(b) left-hand helix

Milling cutters are generally made of high-speed steel or cemented carbide. Cemented carbide cutters can be of the brazed-tip variety or have indexable tips. The indexable variety is more common, since it is normally less expensive to replace worn-out cutting edges than to regrind them.

Plain milling cutters. These are also called slab milling cutters and are basically cylindrical with the cutting teeth on the periphery, as shown in Fig. 7.7. They are generally used for machining flat surfaces.

Fig. 7.7 Arbor mounted milling cutters for general purpose

Light-duty slab milling cutters generally have a small face width, of the order of 25 mm. They generally have straight teeth and a large number of teeth. Heavy-duty slab milling cutters come with a smaller number of teeth to allow more chip space; this allows deeper cuts and consequently high material removal rates. Helical milling cutters have a very small number of teeth but a large helix angle. This type of cutter cuts with a shearing action, which can produce a very fine finish. The large helix angle allows the cutter to absorb most of the end load, so the cutter enters and leaves the workpiece very smoothly.

Side and face milling cutters. These have cutting edges not only on the face, like the slab milling cutters, but also on both sides.
As a result, these cutters are more versatile, since they can be used for side milling as well as for slot milling. Staggered-tooth side milling cutters are a variation in which the teeth are arranged in an alternate helix pattern. This type is generally used for milling deep slots, since the staggering of the teeth provides greater chip space. Another variation of the side and face cutter is the half side milling cutter, which has cutting edges on one side only. This arrangement provides a positive rake angle and is useful for machining on one side only. These cutters have a much smoother cutting action and a long tool life; the power consumed is also less.

Fig. 7.8 Special forms of arbor mounted milling cutters

Slitting saws. The other common form of milling cutter in the arbor-mounted category is the slitting saw. This is very similar to a saw blade in appearance as well as in function. Most slitting saws have teeth around the circumference, while some have side teeth as well. The thickness of these cutters is generally very small; they are used for cutting-off operations or for deep slots.

Special form cutters. In addition to the general types of milling cutters described above, there are a large number of special form milling cutters available, which are used for machining specific profiles. Angular milling cutters are made as single- or double-angle cutters for milling any angle, such as 30°, 45°, or 60°. Form-relieved cutters are made in various shapes, such as circular, corner-rounding, convex, or concave shapes. T-slot milling cutters are used for milling T-slots such as those in the milling machine table; the central slot has to be milled first with an end mill before the T-slot milling cutter is used. Woodruff keyseat milling cutters are used, as the name suggests, for milling Woodruff keyseats. Other special form cutters include dovetail milling cutters and gear milling cutters.

End mills. These are shank mounted, as shown in Fig.
7.9, and are generally used in vertical-axis milling machines. They are used for milling slots, keyways, and pockets where other types of milling cutters cannot be used. A depth of cut of almost half the diameter can be taken with end mills. End mills have cutting edges running along the length of the cutting portion as well as radially on the face up to a certain length. The helix angle of the cutting edge promotes smooth and efficient cutting even at high cutting speeds and feed rates; high cutting speeds are generally recommended for this type of milling cutter.

Fig. 7.9 Shank mounted milling cutters and various types of end mills

There is a large variety of end mills. One distinction is based on the method of holding: the end mill shank can be straight or tapered. The straight shank is used on end mills of small size and is held in the milling machine spindle with the help of a suitable collet. The tapered shank can be mounted directly in the spindle by means of the self-holding taper. If the taper is small compared to the spindle taper, an adapter accommodating both tapers is used. The end teeth of an end mill may terminate at a distance from the cutter centre or may proceed to the centre (Fig. 7.9 f). Those with the cutting edge up to the centre are called slot drills or end-cutting end mills, since they have the ability to cut into solid material (Fig. 7.9 g). The other type of end mill, which has a larger number of teeth, cannot cut into solid material and hence requires a pilot hole to be drilled before a pocket is machined. The cutting edge along the side of an end mill is generally straight, and can sometimes be tapered by grinding on a tool and cutter grinder so that the draft required for mould and die cavities is automatically generated.
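As a rough illustration of the removal rates such cutters sustain, the material removal rate in a slot cut equals the slot width times the depth of cut times the table feed; all numbers below are illustrative and not taken from the text:

```python
# Material removal rate for a slot cut with an end mill (illustrative numbers).
diameter_mm = 12.0
depth_mm = diameter_mm / 2        # "almost half the diameter" rule noted above
feed_mm_per_min = 120.0           # table feed
mrr_mm3_per_min = diameter_mm * depth_mm * feed_mm_per_min
print(mrr_mm3_per_min)            # removal rate in mm^3/min
```

Doubling the feed or the depth of cut doubles the removal rate, which is why rigidity and chip space, rather than the cutter itself, usually set the practical limit.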


Undergraduate Graduation Project Foreign Literature Translation: Present and Future Trends

School (Department): School of Mechatronic Engineering
Major: Electrical Engineering and Automation
Student name:
Student number: 0413090208
Supervisor:
April 21, 2013
Present and Future Trends
Present trends indicate that the United States is becoming increasingly electrified as it shifts away from dependence on the direct use of fossil fuels.

The electric power industry drives economic growth, promotes business development and expansion, provides solid employment, improves the quality of life of its customers, and provides power to the world.

In the United States, increasing electrification is evidenced in part by the ongoing digital revolution.

According to the Edison Electric Institute, researchers use the term "electricity intensity" to relate electricity use to gross domestic product (GDP); since 1960, electricity intensity in the United States, measured as electricity consumption per real dollar of GDP, has increased by more than 25%.

By comparison, over the same period, overall energy intensity (including both electricity and the direct use of fossil fuels) has decreased by more than 40% [4].

As shown in Figure 1.2, the growth rate of U.S. electricity consumption is projected to be about 2% per year from 2004 to 2015 [3,5].

Even though electricity forecasts for the coming decade rest on volatile economic and social factors, a 2% annual growth rate can be considered necessary to produce the GDP expected over this period.

In the longer-term forecast of 1.5 to 2.5% annual growth from 2004 to 2030, the variation in electricity consumption reflects the range from high to low economic growth.
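These growth rates compound multiplicatively over the forecast horizon; a short arithmetic check (only the rates and years come from the text, the code is merely illustrative) shows the consumption ratios they imply relative to 2004:

```python
# Compound growth implied by the forecasts: ratio of future to 2004 consumption.
def growth_factor(rate, years):
    return (1 + rate) ** years

print(round(growth_factor(0.02, 11), 3))    # 2004 -> 2015 at 2%/yr
print(round(growth_factor(0.015, 26), 3))   # 2004 -> 2030, low-growth case
print(round(growth_factor(0.025, 26), 3))   # 2004 -> 2030, high-growth case
```

The spread between the low and high 2030 cases, roughly 1.5x versus 1.9x of 2004 consumption, is what makes long-range capacity planning so sensitive to the assumed economic growth.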

Figure 1.2 Growth of U.S. electric energy consumption [1,2,3,5]

Because of falling natural gas and coal prices, the average delivered price of electricity is projected to decline from 7.6 cents per kWh in 2004 (in 2004 dollars) to below 7.1 cents per kWh in 2015; after 2015, it is projected to rise to 7.5 cents per kWh by 2030 [5].

Figure 1.4 shows the percentages of the various fuels used to meet U.S. electric energy requirements in 2004 and the projected percentages for 2015 and 2030.

Some trends are evident in this figure.

One is the growth in the use of coal.

Most of this growth is attributable to the large U.S. coal reserves, which, by some estimates, are sufficient to meet U.S. energy needs for the next 500 years.

The implementation of public policies that have been proposed to reduce carbon dioxide emissions and air pollution could, however, reverse this trend.

Another trend is the growth in natural gas consumption by gas turbines, which are safe, clean, and more efficient than competing technologies.

Regulatory policies aimed at reducing greenhouse gas emissions may accelerate the switch from coal to natural gas, but this will require an increase in the deliverable supply of natural gas [10].

The declining share of nuclear fuel consumption is also evident.

No new nuclear power plants have been ordered in the United States for the past 25 years.

Nuclear generation is projected to grow from 0.7×10^12 kWh in 2004 to 0.87×10^12 kWh in 2030, based on uprates of existing plants and some new, higher-cost nuclear plants.

Safety concerns will require passive or inherently safe reactor designs with standardized, modular nuclear units.

Figure 1.4 also shows that hydroelectric and renewable sources account for a small share of generation; the renewables include geothermal energy, wood and waste burning, and solar energy.

Figure 1.4 U.S. electric energy generation by principal fuel type [3,5]
Figure 1.5 shows U.S. generating capacity by principal fuel type in 2004 and as projected for 2015.

As shown, total U.S. generating capacity is projected to reach 1,002 GW (1 GW = 1000 MW) by 2015.

This represents a projected annual growth in generating capacity of 0.4%, which is less than the 2% annual projected growth in electric energy production and consumption.

Figure 1.5 U.S. generating capacity by principal fuel type [3,5]
As a result, generating capacity reserve margins are shrinking.

Assuming that new generating units are ordered as planned, generating capacity reserve margins are adequate in the short term (2005-2009) to meet customer demand throughout North America.

Generating capacity reserve margins in the longer term (2010-2014) are more uncertain and depend on, among other factors, whether additions to the 172,932 miles of U.S. transmission lines (230 kV and above) occur in a timely manner.

The North American Electric Reliability Council (NERC) generally expects the North American transmission system to perform reliably during the 2005-2014 period.

This is despite the fact that the transmission system has come under increasing stress in recent years owing to a lack of transmission investment, aging transmission infrastructure, increasing load demand, and tighter transmission operating margins.

NERC believes that if the electric power industry adheres to NERC reliability standards, reliability can be maintained and should not be threatened [6].

However, specific areas and issues have been identified in each of NERC's seasonal and long-term reliability assessments.

Transmission bottlenecks and constraints impede the transfer of electric power within or between geographic regions.

As the electric power industry continues its transformation, the transmission system, as the delivery system, is adapting to the commercial model of energy sales.

With the installation of new generators and the shift to market-driven energy trading, new transmission constraints may appear in unexpected places.

Restructuring policies that encourage greater investment in the transmission network will improve transmission reliability.

Owning and maintaining a reliable, regional, uncongested transmission system will also make market competition more vigorous [6,7].

Growth in distribution construction roughly parallels the growth in electric energy consumption.

Over the past 20 years, many U.S. utilities have converted older 2.4-, 4.1-, and 5-kV primary distribution systems to 12 or 15 kV.

The 15-kV voltage class is widely used by U.S. utilities for new installations.

Primary distribution voltages of 25 kV, 34.5 kV, and higher are also in wide use.

Secondary distribution reduces the voltage for commercial and residential customers.

Common secondary distribution voltages in the United States are 240/120 V single-phase three-wire, 208Y/120 V three-phase four-wire, and 480Y/277 V three-phase four-wire.

Utility executives polled by the Electric Power Research Institute in 2003 estimated that 50% of the U.S. electric utility technical workforce will retire within the next five to ten years, and according to the IEEE, the number of U.S. power system engineering graduates has fallen from nearly 2,000 per year in the 1980s to 500 in 2006.

The continued availability of qualified power system engineers is an important resource for ensuring the efficient and reliable operation and maintenance of transmission and distribution systems.

References

1. G. W. Stagg and A. H. El-Abiad, Computer Methods in Power Systems (New York: McGraw-Hill, 1968).
2. O. I. Elgerd, Electric Energy Systems Theory, 2nd ed. (New York: McGraw-Hill, 1982).
3. C. A. Gross, Power System Analysis (New York: Wiley, 1979).
4. W. D. Stevenson, Jr., Elements of Power System Analysis, 4th ed. (New York: McGraw-Hill, 1982).
5. E. W. Kimbark, "Suppression of Ground-Fault Arcs on Single-Pole Switched EHV Lines by Shunt Reactors," IEEE Trans. PAS, 83 (March 1964), pp. 285-290.
6. K. R. McClymont et al., "Experience with High Speed Rectifier Excitation Systems," IEEE Trans. PAS, vol. PAS-87 (June 1968), pp. 1464-1470.
7. E. W. Cushing et al., "Fast Valving as an Aid to Power System Transient Stability and Prompt Resynchronization and Rapid Reload after Full Load Rejection," IEEE Trans. PAS, vol. PAS-90 (November/December 1971), pp. 2517-2527.
8. M. L. Shelton et al., "Bonneville Power Administration 1400 MW Braking Resistor," IEEE Trans. PAS, vol. PAS-94 (March/April 1975), pp. 602-611.

from Power System Analysis and Design, Fourth Edition, by J. Duncan Glover (Failure Electrical, LLC), Mulukutla S. Sarma (Northeastern University), and Thomas J. Overbye (University of Illinois).
