Graduation Project Foreign Literature + Translation 1
Graduation Design Foreign Literature Translation
Graduation Design (Thesis) Foreign Literature Translation
Department: Major: Class: Name: Student ID:
Source of foreign text:
Attachments: 1. Original text; 2. Translation
March 2013

Attachment 1: A Rapidly Deployable Manipulator System

Christiaan J.J. Paredis, H. Benjamin Brown, Pradeep K. Khosla

Abstract: A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly create a manipulator which is custom-tailored for a given task. This article describes two main aspects of such a system, namely, the Reconfigurable Modular Manipulator System (RMMS) hardware and the corresponding control software.

1 Introduction

Robot manipulators can be easily reprogrammed to perform different tasks, yet the range of tasks that can be performed by a manipulator is limited by its mechanical structure. For example, a manipulator well-suited for precise movement across the top of a table would probably not be capable of lifting heavy objects in the vertical direction. Therefore, to perform a given task, one needs to choose a manipulator with an appropriate mechanical structure.

We propose the concept of a rapidly deployable manipulator system to address the above-mentioned shortcomings of fixed-configuration manipulators. As is illustrated in Figure 1, a rapidly deployable manipulator system consists of software and hardware that allow the user to rapidly build and program a manipulator which is custom-tailored for a given task.

The central building block of a rapidly deployable system is a Reconfigurable Modular Manipulator System (RMMS). The RMMS utilizes a stock of interchangeable link and joint modules of various sizes and performance specifications. One such module is shown in Figure 2. By combining these general-purpose modules, a wide range of special-purpose manipulators can be assembled. Recently, there has been considerable interest in the idea of modular manipulators [2, 4, 5, 7, 9, 10, 14], for research applications as well as for industrial applications.
However, most of these systems lack the property of reconfigurability, which is key to the concept of rapidly deployable systems. The RMMS is particularly easy to reconfigure thanks to its integrated quick-coupling connectors, described in Section 3.

Effective use of the RMMS requires Task-Based Design software. This software takes as input descriptions of the task and of the available manipulator modules; it generates as output a modular assembly configuration optimally suited to perform the given task. Several different approaches have been used successfully to solve simplified instances of this complicated problem.

A third important building block of a rapidly deployable manipulator system is a framework for the generation of control software. To reduce the complexity of software generation for real-time sensor-based control systems, a software paradigm called software assembly has been proposed in the Advanced Manipulators Laboratory at CMU. This paradigm combines the concept of reusable and reconfigurable software components, as is supported by the Chimera real-time operating system [15], with a graphical user interface and a visual programming language, implemented in Onika. Although the software assembly paradigm provides the software infrastructure for rapidly programming manipulator systems, it does not solve the programming problem itself. Explicit programming of sensor-based manipulator systems is cumbersome due to the extensive amount of detail which must be specified for the robot to perform the task. The software synthesis problem for sensor-based robots can be simplified dramatically by providing robust robotic skills, that is, encapsulated strategies for accomplishing common tasks in the robot's task domain [11].
Such robotic skills can then be used at the task-level planning stage without having to consider any of the low-level details.

As an example of the use of a rapidly deployable system, consider a manipulator in a nuclear environment where it must inspect material and space for radioactive contamination, or assemble and repair equipment. In such an environment, widely varied kinematic (e.g., workspace) and dynamic (e.g., speed, payload) performance is required, and these requirements may not be known a priori. Instead of preparing a large set of different manipulators to accomplish these tasks (an expensive solution), one can use a rapidly deployable manipulator system. Consider the following scenario: as soon as a specific task is identified, the task-based design software determines an optimal manipulator configuration for the task. This optimal configuration is then assembled from the RMMS modules by a human or, in the future, possibly by another manipulator. The resulting manipulator is rapidly programmed by using the software assembly paradigm and our library of robotic skills. Finally, the manipulator is deployed to perform its task.

Although such a scenario is still futuristic, the development of the reconfigurable modular manipulator system described in this paper is a major step towards our goal of a rapidly deployable manipulator system. Our approach could form the basis for the next generation of autonomous manipulators, in which the traditional notion of sensor-based autonomy is extended to configuration-based autonomy. Indeed, although a deployed system can have all the sensory and planning information it needs, it may still not be able to accomplish its task because the task is beyond the system's physical capabilities.
A rapidly deployable system, on the other hand, could adapt its physical capabilities based on task specifications and, with advanced sensing, control, and planning strategies, accomplish the task autonomously.

2 Design of self-contained hardware modules

In most industrial manipulators, the controller is a separate unit housing the sensor interfaces, power amplifiers, and control processors for all the joints of the manipulator. A large number of wires is necessary to connect this control unit with the sensors, actuators and brakes located in each of the joints of the manipulator. The large number of electrical connections and the non-extensible nature of such a system layout make it infeasible for modular manipulators. The solution we propose is to distribute the control hardware to each individual module of the manipulator. These modules then become self-contained units which include sensors, an actuator, a brake, a transmission, a sensor interface, a motor amplifier, and a communication interface, as is illustrated in Figure 3. As a result, only six wires are required for power distribution and data communication.

2.1 Mechanical design

The goal of the RMMS project is to have a wide variety of hardware modules available. So far, we have built four kinds of modules: the manipulator base, a link module, three pivot joint modules (one of which is shown in Figure 2), and one rotate joint module. The base module and the link module have no degrees-of-freedom; the joint modules have one degree-of-freedom each. The mechanical design of the joint modules compactly fits a DC motor, a fail-safe brake, a tachometer, a harmonic drive, and a resolver.

The pivot and rotate joint modules use different outside housings to provide the right-angle or in-line configuration respectively, but are identical internally. Figure 4 shows in cross-section the internal structure of a pivot joint.
Each joint module includes a DC torque motor and a 100:1 harmonic-drive speed reducer, and is rated at a maximum speed of 1.5 rad/s and a maximum torque of 270 Nm. Each module has a mass of approximately 10.7 kg. A single, compact, X-type bearing connects the two joint halves and provides the needed overturning rigidity. A hollow motor shaft passes through all the rotary components, and provides a channel for passage of cabling with minimal flexing.

2.2 Electronic design

The custom-designed on-board electronics are also designed according to the principle of modularity. Each RMMS module contains a motherboard which provides the basic functionality and onto which daughtercards can be stacked to add module-specific functionality.

The motherboard consists of a Siemens 80C166 microcontroller, 64K of ROM, 64K of RAM, an SMC COM20020 universal local area network controller with an RS-485 driver, and an RS-232 driver. The function of the motherboard is to establish communication with the host interface via an RS-485 bus and to perform the low-level control of the module, as is explained in more detail in Section 4. The RS-232 serial bus driver allows for simple diagnostics and software prototyping.

A stacking connector permits the addition of an indefinite number of daughtercards with various functions, such as sensor interfaces, motor controllers, RAM expansion, etc. In our current implementation, only modules with actuators include a daughtercard. This card contains a 16-bit resolver-to-digital converter, a 12-bit A/D converter to interface with the tachometer, and a 12-bit D/A converter to control the motor amplifier; we have used an off-the-shelf motor amplifier (Galil Motion Control model SSA-8/80) to drive the DC motor.
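The daughtercard interfaces above amount to simple count-to-unit conversions between raw converter codes and physical quantities. The following sketch illustrates the idea; the tachometer scale factor and the clamping policy are illustrative assumptions, not specifications from the paper.

```python
import math

RESOLVER_BITS = 16   # 16-bit resolver-to-digital converter
ADC_BITS = 12        # 12-bit A/D for the tachometer
DAC_BITS = 12        # 12-bit D/A for the motor amplifier

def resolver_to_angle(counts: int) -> float:
    """Map a 16-bit resolver reading to a joint angle in radians."""
    return counts * 2.0 * math.pi / (1 << RESOLVER_BITS)

def adc_to_velocity(counts: int, full_scale_rad_s: float = 3.0) -> float:
    """Map a signed 12-bit tachometer reading to rad/s.

    full_scale_rad_s is a hypothetical calibration constant."""
    return counts * full_scale_rad_s / (1 << (ADC_BITS - 1))

def torque_to_dac(torque_nm: float, max_torque_nm: float = 270.0) -> int:
    """Map a commanded torque to a signed 12-bit D/A code, with clamping."""
    code = round(torque_nm / max_torque_nm * ((1 << (DAC_BITS - 1)) - 1))
    return max(-(1 << (DAC_BITS - 1)), min((1 << (DAC_BITS - 1)) - 1, code))
```

At 16 bits the resolver interface resolves roughly 0.0001 rad per count, which is consistent with using the resolver as the position sensor for local joint control.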
For modules with more than one degree-of-freedom, for instance a wrist module, more than one such daughtercard can be stacked onto the same motherboard.

3 Integrated quick-coupling connectors

To make a modular manipulator reconfigurable, it is necessary that the modules can be easily connected with each other. We have developed a quick-coupling mechanism with which a secure mechanical connection between modules can be achieved by simply turning a ring hand-tight; no tools are required. As shown in Figure 5, keyed flanges provide precise registration of the two modules. Turning the locking collar on the male end produces two distinct motions: first, the fingers of the locking ring rotate (with the collar) about 22.5 degrees and capture the fingers on the flanges; second, the collar rotates relative to the locking ring, while a cam mechanism forces the fingers inward to securely grip the mating flanges. A ball-transfer mechanism between the collar and locking ring automatically produces this sequence of motions.

At the same time the mechanical connection is made, pneumatic and electronic connections are also established. Inside the locking ring is a modular connector that has 30 male electrical pins plus a pneumatic coupler in the middle. These correspond to matching female components on the mating connector. Sets of pins are wired in parallel to carry the 72 V, 25 A power for motors and brakes, and the 48 V, 6 A power for the electronics. Additional pins carry signals for two RS-485 serial communication busses and four video busses. A plastic guide collar plus six alignment pins prevent damage to the connector pins and assure proper alignment. The plastic block holding the female pins can rotate in the housing to accommodate the eight different possible connection orientations (8 x 45 degrees).
The relative orientation is automatically registered by means of an infrared LED in the female connector and eight photodetectors in the male connector.

4 ARMbus communication system

Each of the modules of the RMMS communicates with a VME-based host interface over a local area network called the ARMbus; each module is a node of the network. The communication is done in a serial fashion over an RS-485 bus which runs through the length of the manipulator. We use the ARCNET protocol [1] implemented on a dedicated IC (SMC COM20020). ARCNET is a deterministic token-passing network scheme which avoids network collisions and guarantees each node its time to access the network. Blocks of information called packets may be sent from any node on the network to any one of the other nodes, or to all nodes simultaneously (broadcast). Each node may send one packet each time it gets the token. The maximum network throughput is 5 Mb/s.

The first node of the network resides on the host interface card, as is depicted in Figure 6. In addition to a VME address decoder, this card contains essentially the same hardware one can find on a module motherboard. The communication between the VME side of the card and the ARCNET side occurs through dual-port RAM.

There are two kinds of data passed over the local area network. During the manipulator initialization phase, the modules connect to the network one by one, starting at the base and ending at the end-effector. On joining the network, each module sends a data packet to the host interface containing its serial number and its relative orientation with respect to the previous module. This information allows us to automatically determine the current manipulator configuration.

During the operation phase, the host interface communicates with each of the nodes at 400 Hz. The data that is exchanged depends on the control mode (centralized or distributed).
In centralized control mode, the torques for all the joints are computed on the VME-based real-time processing unit (RTPU), assembled into a data packet by the microcontroller on the host interface card, and broadcast over the ARMbus to all the nodes of the network. Each node extracts its torque value from the packet and replies by sending a data packet containing the resolver and tachometer readings. In distributed control mode, on the other hand, the host computer broadcasts the desired joint values and feed-forward torques. Locally, in each module, the control loop can then be closed at a frequency much higher than 400 Hz. The modules still send sensor readings back to the host interface to be used in the computation of the subsequent feed-forward torque.

5 Modular and reconfigurable control software

The control software for the RMMS has been developed using the Chimera real-time operating system, which supports reconfigurable and reusable software components [15]. The software components used to control the RMMS are listed in Table 1. The trjjline, dls, and grav_comp components require knowledge of certain configuration-dependent parameters of the RMMS, such as the number of degrees-of-freedom, the Denavit-Hartenberg parameters, etc. During the initialization phase, the RMMS interface establishes contact with each of the hardware modules to determine automatically which modules are being used and in which order and orientation they have been assembled. For each module, a data file with a parametric model is read. By combining this information for all the modules, kinematic and dynamic models of the entire manipulator are built.

After the initialization, the rmms software component operates in a distributed control mode in which the microcontrollers of each of the RMMS modules perform PID control locally at 1900 Hz. The communication between the modules and the host interface is at 400 Hz, which can differ from the cycle frequency of the rmms software component.
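In this distributed mode each module closes a PID loop locally at 1900 Hz while the host refreshes the setpoint and feed-forward torque at only 400 Hz. A minimal sketch of such a local loop follows; the gains, units, and class layout are illustrative assumptions, not values from the paper.

```python
class JointPID:
    """Local joint controller, nominally run at 1900 Hz in each module.

    The host broadcasts a desired joint value and a feed-forward torque
    at 400 Hz; between host updates the local loop keeps regulating
    toward the most recently received setpoint.
    """

    def __init__(self, kp: float, ki: float, kd: float, dt: float = 1.0 / 1900.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.q_des = 0.0       # latest desired joint position from the host
        self.tau_ff = 0.0      # latest feed-forward torque from the host

    def host_update(self, q_des: float, tau_ff: float) -> None:
        """Called at 400 Hz when a host packet arrives."""
        self.q_des, self.tau_ff = q_des, tau_ff

    def step(self, q_meas: float) -> float:
        """One 1900 Hz control cycle: returns the commanded torque."""
        error = self.q_des - q_meas
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.tau_ff + self.kp * error
                + self.ki * self.integral + self.kd * derivative)
```

Because the inner loop runs roughly five times per host packet, disturbances between packets are rejected locally rather than waiting for the next ARMbus exchange.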
Since we use a triple-buffer mechanism [16] for the communication through the dual-port RAM on the ARMbus host interface, no synchronization or handshaking is necessary.

Because closed-form inverse kinematics do not exist for all possible RMMS configurations, we use a damped least-squares kinematic controller to do the inverse kinematics computation numerically.

6 Seamless integration of simulation

To assist the user in evaluating whether an RMMS configuration can successfully complete a given task, we have built a simulator. The simulator is based on the TeleGrip robot simulation software from Deneb Inc., and runs on an SGI Crimson which is connected with the real-time processing unit through a Bit3 VME-to-VME adaptor, as is shown in Figure 6.

A graphical user interface allows the user to assemble simulated RMMS configurations very much like assembling the real hardware. Completed configurations can be tested and programmed using the TeleGrip functions for robot devices. The configurations can also be interfaced with the Chimera real-time software running on the same RTPUs used to control the actual hardware. As a result, it is possible to evaluate not only the movements of the manipulator but also the real-time CPU usage and load balancing. Figure 7 shows an RMMS simulation compared with the actual task execution.

7 Summary

We have developed a Reconfigurable Modular Manipulator System which currently consists of six hardware modules, with a total of four degrees-of-freedom. These modules can be assembled in a large number of different configurations to tailor the kinematic and dynamic properties of the manipulator to the task at hand. The control software for the RMMS automatically adapts to the assembly configuration by building kinematic and dynamic models of the manipulator; this is totally transparent to the user.
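As a concrete illustration of the damped least-squares computation mentioned in Section 5: the damped scheme replaces the plain Jacobian pseudo-inverse with Jᵀ(JJᵀ + λ²I)⁻¹, which keeps joint steps bounded near singular configurations. A minimal numerical sketch (the damping value is an illustrative assumption, not taken from the paper):

```python
import numpy as np

def dls_ik_step(jacobian: np.ndarray, dx: np.ndarray,
                damping: float = 0.05) -> np.ndarray:
    """One damped least-squares step: dq = J^T (J J^T + lambda^2 I)^-1 dx.

    Unlike the plain pseudo-inverse, the damping term keeps the joint
    increment bounded near singularities, at the cost of a small
    tracking error away from them.
    """
    J = jacobian
    m = J.shape[0]  # dimension of the task space
    return J.T @ np.linalg.solve(J @ J.T + (damping ** 2) * np.eye(m), dx)
```

Iterating this step until the task-space error dx is small yields the numerical inverse kinematics for whatever module assembly is currently in use.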
To assist the user in evaluating whether a manipulator configuration is well suited for a given task, we have also built a simulator.

Acknowledgment

This research was funded in part by DOE under grant DE-F902-89ER14042, by Sandia National Laboratories under contract AL-3020, by the Department of Electrical and Computer Engineering, and by The Robotics Institute, Carnegie Mellon University. The authors would also like to thank Randy Casciola, Mark DeLouis, Eric Hoffman, and Jim Moody for their valuable contributions to the design of the RMMS system.

Attachment 2: A Rapidly Deployable Manipulator System (translation)

Authors: Christiaan J.J. Paredis, H. Benjamin Brown, Pradeep K. Khosla

Abstract: A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly build a manipulator that is custom-tailored for a given task.
Graduation Thesis (Design) Foreign Literature Translation and Original Text
Financial Systems, Financing Constraints and Investment: Empirical Evidence from the OECD

R. Semenov, Department of Economics, University of Nijmegen, Nijmegen (The Netherlands)

This paper examines the effect of cash flow on firm investment in 11 OECD countries. We find that the sensitivity of investment to internally available funds differs significantly across countries, and that it is lower in countries with close bank-firm relationships than in countries with arm's-length bank-firm relationships. At the same time, we find no relationship between financing constraints and overall indicators of financial development. Our results are consistent with the view that information and incentive problems in capital markets have an important effect on firm investment, and that close bank-firm relationships mitigate these problems and thereby improve firms' access to external financing.
I. Introduction

Firms in different countries operate under markedly different financial systems.
Differences in the level of financial development (for example, credit relative to GDP and stock market capitalization relative to GDP), in the patterns of relationships between owners and managers and between firms and creditors, and in the level of market activity for corporate control are well documented. In a perfect capital market, a firm with positive net present value investment opportunities would always obtain funding.
However, economic theory suggests that market frictions, such as information asymmetries and incentive problems, make external capital more expensive, so that firms with profitable investment opportunities may not be able to obtain the capital they need. This implies that financing factors, such as the amount of internally generated funds and the availability of new debt and equity, help determine firms' investment decisions. There is by now a large body of empirical work examining the effect of the availability of external funds on investment decisions (see, for example, Fazzari (1998), Hoshi (1991), Chapman (1996), Samuel (1998)). Most of these studies find that financial variables such as cash flow help explain firms' levels of investment.
This finding is interpreted as evidence that firm investment is constrained by the availability of external funds.
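The investment-cash flow sensitivity discussed above is commonly estimated as the slope of a regression of investment on cash flow, with both typically scaled by the capital stock. A minimal sketch on made-up data follows; the specification is deliberately simplified (published studies add controls such as Tobin's q and firm fixed effects), and the numbers are illustrative, not from the paper.

```python
import numpy as np

def cash_flow_sensitivity(investment: np.ndarray, cash_flow: np.ndarray) -> float:
    """OLS slope of investment on cash flow (with an intercept).

    A larger slope is read as greater sensitivity of investment to
    internally available funds, i.e. tighter financing constraints.
    """
    X = np.column_stack([np.ones_like(cash_flow), cash_flow])
    beta, *_ = np.linalg.lstsq(X, investment, rcond=None)
    return float(beta[1])

# Illustrative data: investment/capital and cash flow/capital ratios.
cf = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
inv = 0.05 + 0.8 * cf  # noiseless example: true sensitivity is 0.8
print(cash_flow_sensitivity(inv, cf))
```

Comparing such slopes across countries is the kind of exercise the paper performs: a lower slope in bank-oriented countries is read as looser financing constraints.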
Many models emphasize that well-functioning financial intermediaries and financial markets help reduce information asymmetries and transaction costs, channel savings toward long-term, high-return projects, and improve the efficiency of resource allocation (see the survey by Levine (1997)).
We would therefore expect firms in countries with more developed financial systems to have easier access to external financing. Several authors have pointed out that relationships between firms and financial intermediaries can further mitigate financial market frictions.
Graduation Design Thesis Foreign Literature Translation
Graduation Design (Thesis) Foreign Literature Translation
School: School of Finance and Accounting
Year and major: Class of 201*, Financial Management
Name: Student ID: 132148***
Attachment: Financial Risk Management

【Abstract】 Although financial risk has increased significantly in recent years, risk and risk management are not contemporary issues.
The result of increasingly global markets is that risk may originate with events thousands of miles away that have nothing to do with the domestic market. Information is available instantaneously, which means that change and subsequent market reactions occur very quickly. The economic climate and markets can be affected very quickly by changes in exchange rates, interest rates, and commodity prices. Counterparties can rapidly become problematic. As a result, it is important to ensure that financial risks are identified and managed appropriately. Preparation is a key component of risk management.

【Key Words】 Financial risk, Risk management, Yields

I. Financial risks arising

1.1 What Is Risk

1.1.1 The concept of risk

Risk provides the basis for opportunity. The terms risk and exposure have subtle differences in their meaning. Risk refers to the probability of loss, while exposure is the possibility of loss, although they are often used interchangeably.
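The distinction drawn above, risk as the probability of loss and exposure as the amount that can be lost, is often combined into a single expected-loss figure. A toy sketch follows; the function and the numbers are illustrative, not a method from this text.

```python
def expected_loss(probability_of_loss: float, exposure: float) -> float:
    """Expected loss = probability of loss x amount exposed to loss.

    'probability_of_loss' captures risk in the sense defined above;
    'exposure' is the possible loss amount.
    """
    if not 0.0 <= probability_of_loss <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return probability_of_loss * exposure

# A counterparty owing $2,000,000 with an assumed 3% default probability:
print(expected_loss(0.03, 2_000_000))  # approximately 60000.0
```

The same exposure can thus carry very different expected losses depending on the assessed probability, which is why the two terms are worth keeping distinct.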
Foreign Original Text

CHANGING ROLES OF THE CLIENTS, ARCHITECTS AND CONTRACTORS THROUGH BIM

Abstract:

Purpose – This paper aims to present a general review of the practical implications of building information modelling (BIM) based on literature and case studies. It seeks to address the necessity for applying BIM and re-organising the processes and roles in hospital building projects. This type of project is complex due to complicated functional and technical requirements, decision making involving a large number of stakeholders, and long-term development processes.

Design/methodology/approach – Through desk research and referring to the ongoing European research project InPro, the framework for integrated collaboration and the use of BIM are analysed.

Findings – One of the main findings is the identification of the main factors for a successful collaboration using BIM, which can be recognised as "POWER": product information sharing (P), organisational roles synergy (O), work processes coordination (W), environment for teamwork (E), and reference data consolidation (R).

Originality/value – This paper contributes to the actual discussion in science and practice on the changing roles and processes that are required to develop and operate sustainable buildings with the support of integrated ICT frameworks and tools. It presents the state-of-the-art of European research projects and some of the first real cases of BIM application in hospital building projects.

Keywords: Europe, Hospitals, The Netherlands, Construction works, Response flexibility, Project planning

Paper type: General review

1. Introduction

Hospital building projects are of key importance, involve significant investment, and usually take a long-term development period. Hospital building projects are also very complex due to the complicated requirements regarding hygiene, safety, special equipment, and handling of a large amount of data.
The building process is very dynamic and comprises iterative phases and intermediate changes. Many actors with shifting agendas, roles and responsibilities are actively involved, such as: the healthcare institutions, national and local governments, project developers, financial institutions, architects, contractors, advisors, facility managers, and equipment manufacturers and suppliers. Such building projects are very much influenced by the healthcare policy, which changes rapidly in response to medical, societal and technological developments, and varies greatly between countries (World Health Organization, 2000). In The Netherlands, for example, the way a building project in the healthcare sector is organised is undergoing a major reform due to a fundamental change in the Dutch health policy that was introduced in 2008.

The rapidly changing context poses a need for buildings with flexibility over their lifecycle. In order to incorporate life-cycle considerations in the building design, construction technique, and facility management strategy, a multidisciplinary collaboration is required. Despite the attempts to establish integrated collaboration, healthcare building projects still face serious problems in practice, such as: budget overrun, delay, and sub-optimal quality in terms of flexibility, end-users' dissatisfaction, and energy inefficiency. It is evident that the lack of communication and coordination between the actors involved in the different phases of a building project is among the most important reasons behind these problems. The communication between different stakeholders becomes critical, as each stakeholder possesses a different set of skills. As a result, the processes for extraction, interpretation, and communication of complex design information from drawings and documents are often time-consuming and difficult.
Advanced visualisation technologies, like 4D planning, have tremendous potential to increase the communication efficiency and interpretation ability of the project team members. However, their use as an effective communication tool is still limited and not fully explored. There are also other barriers to information transfer and integration, for instance: many existing ICT systems do not support the openness of the data and structure that is a prerequisite for an effective collaboration between different building actors or disciplines.

Building information modelling (BIM) offers an integrated solution to the previously mentioned problems. Therefore, BIM is increasingly used as an ICT support in complex building projects. An effective multidisciplinary collaboration supported by an optimal use of BIM requires changing roles of the clients, architects, and contractors; new contractual relationships; and re-organised collaborative processes. Unfortunately, there are still gaps in the practical knowledge on how to manage the building actors to collaborate effectively in their changing roles, and to develop and utilise BIM as an optimal ICT support of the collaboration.

This paper presents a general review of the practical implications of building information modelling (BIM) based on literature review and case studies. In the next sections, based on literature and recent findings from the European research project InPro, the framework for integrated collaboration and the use of BIM are analysed. Subsequently, through the observation of two ongoing pilot projects in The Netherlands, the changing roles of clients, architects, and contractors through BIM application are investigated. In conclusion, the critical success factors as well as the main barriers of a successful integrated collaboration using BIM are identified.

2. Changing roles through integrated collaboration and life-cycle design approaches

A hospital building project involves various actors, roles, and knowledge domains. In The Netherlands, the changing roles of clients, architects, and contractors in hospital building projects are inevitable due to the new healthcare policy. Previously, under the Healthcare Institutions Act (WTZi), healthcare institutions were required to obtain both a license and a building permit for new construction projects and major renovations. The permit was issued by the Dutch Ministry of Health. The healthcare institutions were then eligible to receive financial support from the government. Since 2008, new legislation on the management of hospital building projects and real estate has come into force. In this new legislation, a permit for a hospital building project under the WTZi is no longer obligatory, nor obtainable (Dutch Ministry of Health, Welfare and Sport, 2008). This change allows more freedom from the state-directed policy and, correspondingly, allocates more responsibilities to the healthcare organisations to deal with the financing and management of their real estate. The new policy implies that the healthcare institutions are fully responsible to manage and finance their building projects and real estate. The government's support for the costs of healthcare facilities will no longer be given separately, but will be included in the fee for healthcare services. This means that healthcare institutions must earn back their investment on real estate through their services. This new policy intends to stimulate sustainable innovations in the design, procurement and management of healthcare buildings, which will contribute to effective and efficient primary healthcare services.

The new strategy for building projects and real estate management endorses an integrated collaboration approach.
In order to assure sustainability during construction, use, and maintenance, the end-users, facility managers, contractors and specialist contractors need to be involved in the planning and design processes. The implications of the new strategy are reflected in the changing roles of the building actors and in the new procurement method.

In the traditional procurement method, the design and its details are developed by the architect and design engineers. Then, the client (the healthcare institution) sends an application to the Ministry of Health to obtain approval of the building permit and the financial support from the government. Following this, a contractor is selected through a tender process that emphasises the search for the lowest-price bidder. During the construction period, changes often take place due to constructability problems of the design and new requirements from the client. Because of the high level of technical complexity, and moreover, decision-making complexities, the whole process from initiation until delivery of a hospital building project can take up to ten years' time. After the delivery, the healthcare institution is fully in charge of the operation of the facilities. Redesigns and changes also take place in the use phase to cope with new functions and developments in the medical world.

The integrated procurement pictures a new contractual relationship between the parties involved in a building project. Instead of a relationship between the client and architect for design, and the client and contractor for construction, in an integrated procurement the client only holds a contractual relationship with the main party that is responsible for both design and construction. The traditional borders between tasks and occupational groups become blurred since architects, consulting firms, contractors, subcontractors, and suppliers all stand on the supply side in the building process, while the client stands on the demand side.
Such a configuration puts the architect, engineer and contractor in a very different position that influences not only their roles, but also their responsibilities, tasks and communication with the client, the users, the team and other stakeholders.

The transition from the traditional to the integrated procurement method requires a shift of mindset of the parties on both the demand and supply sides. It is essential for the client and contractor to have a fair and open collaboration in which both can optimally use their competencies. The effectiveness of integrated collaboration is also determined by the client's capacity and strategy to organize innovative tendering procedures.

A new challenge emerges when positioning an architect in a partnership with the contractor instead of with the client. If the architect enters a partnership with the contractor, an important issue is how to ensure the realisation of the architectural values as well as innovative engineering through an efficient construction process. In another case, the architect can stand at the client's side in a strategic advisory role instead of being the designer. In this case, the architect's responsibility is translating the client's requirements and wishes into the architectural values to be included in the design specification, and evaluating the contractor's proposal against this. In any of these new roles, the architect holds the responsibilities of stakeholder interest facilitator, custodian of customer value, and custodian of design models.

The transition from the traditional to the integrated procurement method also brings consequences for the payment schemes. In the traditional building process, the honorarium for the architect is usually based on a percentage of the project costs; this may simply mean that the more expensive the building is, the higher the honorarium will be. The engineer receives the honorarium based on the complexity of the design and the intensity of the assignment.
A highly complex building, which takes a number of redesigns, is usually favourable for the engineers in terms of honorarium. A traditional contractor usually receives the commission based on the tender to construct the building at the lowest price while meeting the minimum specifications given by the client. Extra work due to modifications is charged separately to the client. After the delivery, the contractor is no longer responsible for the long-term use of the building. In the traditional procurement method, all risks are placed with the client.

In the integrated procurement method, the payment is based on the achieved building performance; thus, the payment is non-adversarial. Since the architect, engineer and contractor have a wider responsibility for the quality of the design and the building, the payment is linked to a measurement system of the functional and technical performance of the building over a certain period of time. The honorarium becomes an incentive to achieve the optimal quality. If the building actors succeed in delivering a higher added value that exceeds the client's minimum requirements, they will receive a bonus in accordance with the client's extra gain. The level of transparency is also improved. Open-book accounting is an excellent instrument provided that the stakeholders agree on the information to be shared and on its level of detail (InPro, 2009).

Next to the adoption of the integrated procurement method, the new real estate strategy for hospital building projects addresses innovative product development and life-cycle design approaches. A sustainable business case for the investment and exploitation of hospital buildings relies on dynamic life-cycle management that includes considerations and analysis of the market development over time next to the building life-cycle costs (investment/initial cost, operational cost, and logistic cost).
Compared to the conventional life-cycle costing method, dynamic life-cycle management encompasses a shift from focusing only on minimizing the costs to maximizing the total benefit that can be gained. One of the determining factors for a successful implementation of dynamic life-cycle management is the sustainable design of the building and building components, which means that the design carries sufficient flexibility to accommodate possible changes in the long term (Prins, 1992). Designing based on the principles of life-cycle management affects the role of the architect, as he needs to be well informed about the usage scenarios and related financial arrangements, the changing social and physical environments, and new technologies. Design needs to integrate people's activities and business strategies over time. In this context, the architect is required to align the design strategies with the organisational, local and global policies on finance, business operations, health and safety, the environment, etc. The combination of process and product innovation, and the changing roles of the building actors, can be accommodated by integrated project delivery, or IPD (AIA California Council, 2007). IPD is an approach that integrates people, systems, business structures and practices into a process that collaboratively harnesses the talents and insights of all participants to reduce waste and optimize efficiency through all phases of design, fabrication and construction. IPD principles can be applied to a variety of contractual arrangements. IPD teams will usually include members well beyond the basic triad of client, architect, and contractor. At a minimum, though, an integrated project should include a tight collaboration between the client, the architect, and the main contractor ultimately responsible for construction of the project, from the early design until the project handover.
The key to a successful IPD is assembling a team that is committed to collaborative processes and is capable of working together effectively. IPD is built on collaboration. As a result, it can only be successful if the participants share and apply common values and goals.

3. Changing roles through BIM application

A building information model (BIM) comprises ICT frameworks and tools that can support integrated collaboration based on a life-cycle design approach. BIM is a digital representation of the physical and functional characteristics of a facility. As such, it serves as a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its life cycle from inception onward (National Institute of Building Sciences NIBS, 2007). BIM facilitates time- and place-independent collaborative working. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of each stakeholder. BIM in its ultimate form, as a shared digital representation founded on open standards for interoperability, can become a virtual information model to be handed from the design team to the contractor and subcontractors and then to the client. BIM is not the same as the earlier known computer-aided design (CAD). BIM goes further than an application to generate digital (2D or 3D) drawings. BIM is an integrated model in which all process and product information is combined, stored, elaborated, and interactively distributed to all relevant building actors. As a central model for all involved actors throughout the project life cycle, BIM develops and evolves as the project progresses. Using BIM, the proposed design and engineering solutions can be measured against the client's requirements and expected building performance.
The functionalities of BIM to support the design process extend to multiple dimensions (nD), including: three-dimensional visualisation and detailing, clash detection, material schedules, planning, cost estimates, production and logistic information, and as-built documents. During the construction process, BIM can support the communication between the building site, the factory and the design office, which is crucial for effective and efficient prefabrication and assembly processes, as well as for preventing or solving problems related to unforeseen errors or modifications. When the building is in use, BIM can be used in combination with intelligent building systems to provide and maintain up-to-date information on the building's performance, including the life-cycle cost. To unleash the full potential of more efficient information exchange in the AEC/FM industry in collaborative working using BIM, both high-quality open international standards and high-quality implementations of these standards must be in place. The IFC open standard is generally agreed to be of high quality and is widely implemented in software. Unfortunately, the certification process allows poor-quality implementations to be certified, which essentially renders the certified software useless for any practical usage with IFC. IFC-compliant BIM is actually used less than manual drafting by architects and contractors, and shows about the same usage among engineers. A recent survey shows that CAD (as a closed system) is still the major technique used in design work (over 60 per cent), while BIM is used in around 20 per cent of projects by architects and in around 10 per cent of projects by engineers and contractors. The application of BIM to support an optimal cross-disciplinary and cross-phase collaboration opens a new dimension in the roles and relationships between the building actors.
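Clash detection, listed above among the BIM design functionalities, is at bottom a geometric intersection test between building elements. A minimal sketch in Python using axis-aligned bounding boxes, a common first-pass approximation (the duct and beam coordinates are invented for illustration; production BIM tools test exact geometry):

```python
def boxes_clash(a, b):
    """True if two axis-aligned boxes overlap on all three axes.

    Each box is a pair of corner points:
    ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    """
    (amin, amax), (bmin, bmax) = a, b
    # Boxes clash only if their extents overlap on every axis.
    return all(amin[i] < bmax[i] and bmin[i] < amax[i] for i in range(3))

# Hypothetical elements: a duct routed through the space occupied by a beam.
duct = ((0.0, 0.0, 2.0), (5.0, 1.0, 2.5))
beam = ((2.0, 0.0, 2.2), (3.0, 1.0, 2.8))
clash = boxes_clash(duct, beam)  # overlapping boxes: flag for redesign
```

Running such a test over all element pairs during the design phase is what lets redesign/remake costs be caught on screen rather than on site.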
Several of the most relevant issues are: the new role of a model manager; the agreement on access rights and Intellectual Property Rights (IPR); the liability and payment arrangements according to the type of contract and in relation to integrated procurement; and the use of open international standards. Collaborative working using BIM demands a new expert role of a model manager, who possesses ICT as well as construction process know-how (InPro, 2009). The model manager deals with the system as well as with the actors. He provides and maintains the technological solutions required for BIM functionalities, manages the information flow, and improves the ICT skills of the stakeholders. The model manager does not take decisions on design and engineering solutions, nor on the organisational processes, but his roles in the chain of decision making are focused on: the development of BIM, the definition of the structure and detail level of the model, and the deployment of relevant BIM tools, such as for model checking, merging, and clash detection; the contribution to collaboration methods, especially decision-making and communication protocols, task planning, and risk management; and the management of information, in terms of data flow and storage, identification of communication errors, and decision or process (re-)tracking. Regarding the legal and organisational issues, one of the actual questions is: "In what way does the intellectual property right (IPR) in collaborative working using BIM differ from the IPR in traditional teamwork?". In terms of combined work, the IPR of each element is attached to its creator. Although it seems to be a fully integrated design, BIM actually results from a combination of works/elements; for instance, the outline of the building design is created by the architect, the design for the electrical system is created by the electrical contractor, etc. Thus, in the case of BIM as a combined work, the IPR is similar to traditional teamwork.
Working with BIM with authorship registration functionalities may actually make it easier to keep track of the IPR. How does collaborative working using BIM affect the contractual relationship? On the one hand, collaborative working using BIM does not necessarily change the liability position in the contract, nor does it require an alliance contract. The General Principles of the BIM Addendum confirm: "This does not effectuate or require a restructuring of contractual relationships or shifting of risks between or among the Project Participants other than as specifically required per the Protocol Addendum and its Attachments" (ConsensusDOCS, 2008). On the other hand, changes in payment schemes can be anticipated. Collaborative processes using BIM will lead to a shifting of activities to the early design phase. Many, if not all, activities in the detailed engineering and specification phase will be done in the earlier phases. This means that a significant payment for the engineering phase, which may count for up to 40 per cent of the design cost, can no longer be expected. As engineering work is done concurrently with the design, a new proportioning of the payment towards the early design phase is necessary.

4. Review of ongoing hospital building projects using BIM

In the Netherlands, the changing roles in hospital building projects are part of a strategy which aims at achieving sustainable real estate in response to the changing healthcare policy. Referring to the literature and previous research, the main factors that influence the success of the changing roles can be summarised as: the implementation of an integrated procurement method and a life-cycle design approach for a sustainable collaborative process; the agreement on the BIM structure and the intellectual property rights; and the integration of the role of a model manager. The preceding sections have discussed the conceptual thinking on how to deal with these factors effectively.
This section observes two actual projects and compares the actual practice with the conceptual view. The main issues observed in the case studies are: the selected procurement method and the roles of the involved parties within this method; the implementation of the life-cycle design approach; the type, structure, and functionalities of BIM used in the project; the openness in data sharing and transfer of the model, and the intended use of BIM in the future; and the roles and tasks of the model manager. The pilot experience of hospital building projects using BIM in the Netherlands can be observed at University Medical Centre St Radboud (further referred to as UMC) and Maxima Medical Centre (further referred to as MMC). At UMC, the new building project for the Faculty of Dentistry in the city of Nijmegen has been dedicated as a BIM pilot project. At MMC, BIM is used in designing new buildings for the Medical Simulation and Mother-and-Child Centre in the city of Veldhoven. The first case is a project at the University Medical Centre (UMC) St Radboud. UMC is more than just a hospital: it combines medical services, education and research. More than 8,500 staff and 3,000 students work at UMC. As a part of its innovative real estate strategy, UMC has considered using BIM for its building projects.
The new development of the Faculty of Dentistry and the surrounding buildings on the Kapittelweg in Nijmegen has been chosen as a pilot project to gather practical knowledge and experience on collaborative processes with BIM support. The main ambitions to be achieved through the use of BIM in the building projects at UMC can be summarised as follows: using 3D visualisation to enhance the coordination and communication among the building actors, and the user participation in design; integrating the architectural design with structural analysis, energy analysis, cost estimation, and planning; interactively evaluating the design solutions against the programme of requirements and specifications; reducing redesign/remake costs through clash detection during the design process; and optimising the management of the facility through the registration of medical installations and equipment, fixed and flexible furniture, product and output specifications, and operational data. The second case is a project at the Maxima Medical Centre (MMC). MMC is a large hospital resulting from a merger between the Diaconessenhuis in Eindhoven and St Joseph Hospital in Veldhoven. Annually, the 3,400 staff of MMC provide medical services to more than 450,000 visitors and patients. A large-scale extension project of the hospital in Veldhoven is a part of its real estate strategy. A medical simulation centre and a women-and-children medical centre are among the most important new facilities within this extension project. The design has been developed using 3D modelling with several functionalities of BIM. The findings from both cases and the analysis are as follows. Both UMC and MMC opted for a traditional procurement method in which the client directly contracted an architect, a structural engineer, and a mechanical, electrical and plumbing (MEP) consultant in the design team. Once the design and detailed specifications are finished, a tender procedure will follow to select a contractor.
Despite the choice for this traditional method, many attempts have been made at a closer and more effective multidisciplinary collaboration. UMC dedicated a relatively long preparation phase with the architect, structural engineer and MEP consultant before the design commenced. This preparation phase was aimed at creating a common vision on the optimal way of collaborating using BIM as ICT support. Some results of this preparation phase are: a document that defines the common ambition for the project and the collaborative working process, and a semi-formal agreement that states the commitment of the building actors to collaboration. Unlike UMC, MMC selected an architecture firm with an in-house engineering department; thus, the collaboration between the architect and structural engineer can take place within the same firm using the same software application. Regarding the life-cycle design approach, the main attention is given to life-cycle costs, maintenance needs, and facility management. Using BIM, both hospitals intend to get much better insight into these aspects over the life-cycle period. The life-cycle sustainability criteria are included in the assignments for the design teams. Multidisciplinary designers and engineers are asked to collaborate more closely and to interact with the end users to address life-cycle requirements. However, ensuring that the building actors engage in an integrated collaboration to generate sustainable design solutions that meet the life-cycle
Environmental Problems Caused by Istanbul Subway Excavation and Suggestions for Remediation

Ibrahim Ocak

Abstract: Many environmental problems caused by subway excavations have inevitably become an important point in city life. These problems can be categorized as transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust and mud, and lack of supplies. Although these problems cause many difficulties, the most pressing for a big city like Istanbul is excavation, since the other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway, and disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length, with over 200 km to be constructed in the near future. The amount of material excavated from ongoing construction projects covers approximately 12 million m3. In this study, the problems caused by subway excavation, primarily the problem of excavation waste (EW), are analyzed and suggestions for remediation are offered.
Computer Networking Summary

Networking can be defined as the linking of people, resources and ideas. Networking occurs via casual encounters, meetings, telephone conversations, and the printed word. Now computer networking provides us with new networking capabilities. Computer networks are important for services because service tasks are information intensive: information is transmitted between clients, coworkers, management, funding sources, and policy makers, and tools that speed up communication will dramatically affect services. Computer networking is growing explosively. Two decades ago, few people had access to a network; now computer networks have become an essential part of our infrastructure. Networking is used in every aspect of business, including advertising, production, shipping, planning, billing, and accounting. Consequently, most corporations have multiple networks, and users can reach on-line libraries around the world. Federal, state, and local government offices use networks, as do military organizations. In short, computer networks are everywhere. The growth in networking has an economic impact as well. An entire industry has emerged, creating jobs for people with networking expertise. Companies need workers to plan, acquire, install, operate, and manage the hardware and software systems that constitute computer networks. In addition, computer programming is no longer restricted to individual computers; programmers are expected to design and implement application software that can communicate with software on other computers. Computer networks link computers by communication lines and software protocols, allowing data to be exchanged rapidly and reliably. Traditionally, they are split between wide area networks (WANs) and local area networks (LANs). A WAN is a network connected over long-distance telephone lines, and a LAN is a localized network, usually in one building or a group of buildings close together. The distinction, however, is blurring: today networks carry e-mail, provide access to public databases, and are beginning to be used for distributed systems.
Networks also allow users in one locality to share expensive resources, such as printers and disk systems. Distributed computer systems are built using networked computers that cooperate to perform tasks. In this environment, each part of the networked system does what it does best: a personal computer or workstation provides a good user interface, while the mainframe stores and processes the data and returns the results to the users. In a distributed environment, a user might phrase a query in a special language (e.g. Structured Query Language, SQL) and send it to the mainframe, which then parses the query, returning to the user only the data requested. The user might then analyze the data locally. By passing back to the user's PC only the specific information requested, network traffic is reduced; if the whole file were transmitted, the PC would have to do the processing itself. A gateway allows users of one network to access the resources on a different type of network. For example, a gateway could be used to connect a local area network of personal computers to a mainframe computer network. A bridge, by contrast, joins networks of the same type. If a company has two departmental LANs, using a bridge makes more sense than joining all the personal computers together in one large network, because the individual departments only occasionally need to access information on the other network. Computer networking technology can be divided into four major aspects. The first is data transmission: at the lowest level, electrical signals traveling across wires are used to carry information, and data must be encoded using such signals. The second focuses on packet transmission: it explains why computer networks use packets, and how the LANs and WANs discussed above carry them. The third covers internetworking, the important idea that allows heterogeneous networks to be connected into a single system, and TCP/IP, the protocol technology used in the global Internet. The fourth explains networking applications.
It focuses on how application programs use the underlying network, and how programs provide services such as electronic mail and Web browsing. Continued growth of the global Internet is one of the most interesting and exciting phenomena in networking. A decade ago, the Internet was a research project that involved a few dozen sites. Today, the Internet has grown into a production communication system that reaches millions of people in almost all countries on all continents around the world. In the United States, the Internet connects most corporations, colleges and universities, as well as federal, state, and local government offices. It will soon reach most elementary, junior, and senior schools. In addition, many private residences can reach the Internet through a dial-up telephone connection. Evidence of the Internet's impact on society can be seen in advertisements in magazines and on television, which often contain a reference to an Internet Web site that provides additional information about the advertiser's products and services. A large organization with diverse networking requirements needs multiple physical networks. More important, if the organization chooses the type of network that is best for each task, it ends up with networks whose computers can only communicate with other computers attached to the same network. The problem became evident in the 1970s as large organizations began to acquire multiple networks. Each network in the organization formed an island. In many early installations, each computer attached to a single network; an employee who needed access to multiple networks was given multiple screens and keyboards, and was forced to move from one computer to another to send a message across the appropriate network. Users are neither satisfied nor productive when they must use a separate computer for each network. Consequently, most modern computer communication systems allow communication between any two computers, analogous to the way a telephone system provides communication between any two telephones.
Known as universal service, this concept is a fundamental part of networking. With universal service, a user on any computer in any part of an organization can send messages or data to any other user. Furthermore, a user does not need to change computer systems when changing tasks; all information is available to all computers. As a result, users are more productive. The basic approach used to connect heterogeneous networks is to let the organization choose network technologies appropriate for each need, and to use routers to connect all the networks into a single internet. The goal of internetworking is universal service across the internet: routers must agree to forward information from a source on one network to a specified destination on another. The task is complex because the frame formats and addressing schemes used by the underlying networks can differ. As a result, protocol software is needed on computers and routers to make universal service possible. Internet protocols overcome differences in frame formats and physical addresses to make communication possible among networks that use different technologies. In general, internet software provides the appearance of a single, seamless communication system to which many computers attach. The system offers universal service: each computer is assigned an address, and any computer can send a packet to any other computer. Furthermore, internet protocol software hides the details of the physical connections; neither users nor application programs are aware of the underlying physical networks or the routers that connect them. We say that an internet is a virtual network system because the communication system is an abstraction: although a combination of hardware and software provides the illusion of a uniform network system, no such network exists. Research on internetworking has shaped modern networking. In fact, internet technology has become pervasive: most large organizations already use internetworking as their primary computer communication mechanism, and smaller organizations and individuals are beginning to do so as well.
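The universal service idea above, that any computer can send data to any other through a uniform addressing scheme, is exactly what the sockets API exposes to application programs. A minimal self-contained sketch in Python, sending one message over a loopback TCP connection (the OS picks the port; this is an illustration, not production networking code):

```python
import socket
import threading

# Receiver: bind to an ephemeral loopback port and start listening
# before the sender runs, so there is no race with connect().
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

results = []

def serve_once():
    # Accept one connection and record the received message.
    conn, _ = srv.accept()
    with conn:
        results.append(conn.recv(1024).decode())

t = threading.Thread(target=serve_once)
t.start()

# Sender: any reachable host is addressed the same way; here, loopback.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(("127.0.0.1", port))
    c.sendall(b"hello")

t.join()
srv.close()
```

The sender never deals with frame formats or physical addresses; the protocol software underneath provides the seamless system the text describes.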
More important, TCP/IP technology has made possible a global Internet that connects computers in schools, commercial organizations, government and military sites, and the homes of individuals in almost all countries around the world.
Foreign Literature 1: Software Engineering Practices in Industry: A Case Study

Abstract

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The software engineering practices were identified through a survey questionnaire and a series of interviews with the company's software development managers, software engineers, and testers. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company follows a set of software engineering practices that ensure the quality, reliability, and maintainability of its software products. The findings of this study provide valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Introduction

Software engineering is the discipline of designing, developing, testing, and maintaining software products. A number of software engineering practices are used in industry to ensure that software products are of high quality, reliable, and maintainable. These practices include software development processes, software configuration management, software testing, requirements engineering, and project management. Software engineering practices have evolved over the years as a result of the growth of the software industry and the increasing demand for high-quality software products.
The software industry has developed a number of software development models, such as the Capability Maturity Model Integration (CMMI), which provides a framework for software development organizations to improve their software development processes and practices. This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The objective of the study was to identify the software engineering practices used by the company and to investigate how these practices contribute to the production of quality software.

Research Methodology

The case study was conducted with a large US software development company that produces software for aerospace and medical applications. The study was conducted over a period of six months, during which a survey questionnaire was administered to the company's software development managers, software engineers, and testers. In addition, a series of interviews were conducted with the company's software development managers, software engineers, and testers to gain a deeper understanding of the software engineering practices used by the company. The survey questionnaire and the interview questions were designed to investigate the software engineering practices used by the company in relation to software development processes, software configuration management, software testing, requirements engineering, and project management.

Findings

The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company's software development process consists of five levels of maturity, starting with an ad hoc process (Level 1) and progressing to a fully defined and optimized process (Level 5). The company has achieved Level 3 maturity in its software development process.
The company follows a set of software engineering practices that ensure the quality, reliability, and maintainability of its software products. The software engineering practices used by the company include:

Software Configuration Management (SCM): The company uses SCM tools to manage software code, documentation, and other artifacts. The company follows a branching and merging strategy to manage changes to the software code.

Software Testing: The company has adopted a formal testing approach that includes unit testing, integration testing, system testing, and acceptance testing. The testing process is automated where possible, and the company uses a range of testing tools.

Requirements Engineering: The company has a well-defined requirements engineering process, which includes requirements capture, analysis, specification, and validation. The company uses a range of tools, including use case modeling, to capture and analyze requirements.

Project Management: The company has a well-defined project management process that includes project planning, scheduling, monitoring, and control. The company uses a range of tools to support project management, including project management software, which is used to track project progress.

Conclusion

This paper has reported a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company uses a set of software engineering practices that ensure the quality, reliability, and maintainability of its software products.
The findings of this study provide valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Foreign Literature 2: Agile Software Development: Principles, Patterns, and Practices

Abstract

Agile software development is a set of values, principles, and practices for developing software. The Agile Manifesto represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. This paper presents an overview of agile software development, including its principles, patterns, and practices, and discusses the benefits and challenges of the approach.

Introduction

Agile software development is a set of values, principles, and practices for developing software, based on the Agile Manifesto. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile practices include iterative development, test-driven development, continuous integration, and frequent releases.

Agile Software Development Principles

Agile software development is based on the following principles:

- Customer satisfaction through early and continuous delivery of useful software.
- Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
- Deliver working software frequently, with a preference for the shorter timescale.
- Collaboration between the business stakeholders and developers throughout the project.
- Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
- The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
- Working software is the primary measure of progress.
- Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
- Continuous attention to technical excellence and good design enhances agility.
- Simplicity, the art of maximizing the amount of work not done, is essential.
- The best architectures, requirements, and designs emerge from self-organizing teams.

Agile Software Development Patterns

Agile software development patterns are reusable solutions to common software development problems. Typical examples include:

- The Single Responsibility Principle (SRP)
- The Open/Closed Principle (OCP)
- The Liskov Substitution Principle (LSP)
- The Dependency Inversion Principle (DIP)
- The Interface Segregation Principle (ISP)
- The Model-View-Controller (MVC) Pattern
- The Observer Pattern
- The Strategy Pattern
- The Factory Method Pattern

Agile Software Development Practices

Agile software development practices are a set of activities and techniques used in agile software development. Typical practices include:

- Iterative Development
- Test-Driven Development (TDD)
- Continuous Integration
- Refactoring
- Pair Programming

Agile Software Development Benefits and Challenges

Agile software development has many benefits, including increased customer satisfaction, increased quality, increased productivity, increased flexibility, increased visibility, and reduced risk. It also has some challenges: it requires discipline and training, an experienced team, good communication, and a supportive management culture.

Conclusion

Agile software development is a set of values, principles, and practices for developing software.
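Of the patterns listed above, the Strategy pattern is perhaps the easiest to show compactly: an algorithm is selected at run time behind a common callable interface. A minimal Python sketch; the pricing functions and the 10 per cent discount are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

# Two interchangeable pricing strategies (the policies are hypothetical).
def regular_price(amount: float) -> float:
    return amount

def member_discount(amount: float) -> float:
    return amount * 0.9  # hypothetical 10% member discount

@dataclass
class Checkout:
    pricing: Callable[[float], float]  # the strategy is injected, not hard-coded

    def total(self, amount: float) -> float:
        # Delegate to whichever strategy was supplied at construction time.
        return self.pricing(amount)
```

`Checkout(member_discount)` swaps the algorithm without modifying `Checkout` itself, which is also an instance of the Open/Closed Principle from the same list.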
Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. Its benefits include increased customer satisfaction, quality, productivity, flexibility, and visibility, as well as reduced risk. Its challenges include the need for discipline and training, an experienced team, good communication, and a supportive management culture.
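One of the patterns listed above can be made concrete in a few lines. The sketch below illustrates the Strategy pattern; the class and method names (PricingStrategy, Checkout, and so on) are invented for illustration and are not taken from any particular codebase:

```java
// Strategy pattern sketch: the pricing algorithm is chosen at runtime
// instead of being hard-coded into the client class.
interface PricingStrategy {
    double price(double base);
}

class RegularPricing implements PricingStrategy {
    public double price(double base) { return base; }
}

class DiscountPricing implements PricingStrategy {
    public double price(double base) { return base * 0.9; } // 10% off
}

class Checkout {
    private final PricingStrategy strategy; // injected, not constructed here
    Checkout(PricingStrategy strategy) { this.strategy = strategy; }
    double total(double base) { return strategy.price(base); }
}
```

Swapping RegularPricing for DiscountPricing changes behavior without touching Checkout, which is also the Open/Closed Principle at work.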
毕业设计(论文)外文文献翻译 专业: 计算机科学与技术 学生姓名 班级 学号 指导教师 信息工程学院

1、外文文献 The History of the Internet

The Beginning - ARPAnet
The Internet started as a project by the US government. The object of the project was to create a means of communication between long-distance points in the event of a nationwide emergency or, more specifically, nuclear war. The project was called ARPAnet, and it is what the Internet started as. Funded specifically for military communication, the engineers responsible for ARPAnet built what would become the "Internet."

By definition, an "Internet" is four or more computers connected by a network. ARPAnet achieved its network by using a protocol called TCP/IP. The basic idea behind this protocol was that if information sent over a network failed to get through on one route, it would find another route to work with, and the protocol also established a means for one computer to "talk" to another computer, regardless of whether it was a PC or a Macintosh.

By the 80s, ARPAnet, just years away from becoming the more well-known Internet, was steadily expanding its network, and by the year 1984 it had grown further still. In 1986 ARPAnet (supposedly) shut down, but only the organization shut down; the existing networks among the more than 1,000 computers remained. It shut down after a failed link-up with NSF, which wanted to connect its five countrywide supercomputers into ARPAnet.

With the funding of NSF, a new network was established in 1988. By that time, there were 28,174 computers on the (by then decided) Internet. In 1989 there were 80,000 computers on it. Another network was built to support the incredible number of people joining; it was constructed in 1992.

Today - The Internet
Today, millions of people go 'on line' to experience the wealth of information of the Internet, and it has been predicted that by the year 2003 every single person on the planet will be connected. The Internet is a defining technology of our time and era, and it is evolving quickly.

The Internet is not a 'thing' itself. The Internet cannot just "crash."
It functions the same way as the telephone system, only there is no single Internet company that runs the Internet. The Internet is a collection of millions of computers that are all connected to each other, much like a home or office network, only on a worldwide scale. How does a computer in Houston know how to reach a computer in Tokyo to view a webpage?

Internet communication, communication among computers connected to the Internet, is based on a language. This language is called TCP/IP. TCP/IP establishes a language for a computer to access and transmit data over the Internet system.

But TCP/IP assumes that there is a physical connection between one computer and another. This is not usually the case. The connection that is required is established by way of modems, phone lines, and other modem cable connections (like cable modems or DSL). Modems on computers read and transmit data over established lines, which could be phone lines or data lines.

To explain this better, let's look at how a typical request travels:
1. The user connects to an Internet Service Provider (ISP). The ISP might in turn be connected to another ISP, or have a straight connection into the Internet backbone.
2. The user launches a web browser like Netscape or Internet Explorer and types in an Internet location to go to.
3. Here's where the tricky part comes in. First, the computer sends data about its data request to a router. A router is a specialized computer that directs traffic; the collection of routers in the world makes up what is called a "backbone," on which all the data on the Internet is transferred. The backbone presently operates at a speed of several gigabytes per second; compared to a normal modem, the difference is enormous. Routers handle packets somewhat like envelopes: when the request for the webpage goes through, it uses TCP/IP protocols to tell the router what to do with the data, where it's going, and overall where the user wants to go.
4. The router sends these packets to other routers, eventually leading to the target computer. It's like whisper down the lane (only the information remains intact).
5.
When the information reaches the target web server, the web server then begins to send the web page back. A web server is the computer where the web page is stored, running a program that serves the page in packets; these are sent through routers and arrive at the user's computer, where the user can view the web page once it is assembled. The packets which contain the data also contain special information that lets routers and other computers put them back in the right order.

With millions of web pages, and millions of users, using the Internet is not always easy for a beginning user, especially for someone who is not entirely comfortable with using computers. Below you can find tips, tricks, and services of the Internet.

Before you access web pages, you must have a web browser. ISPs usually give the software to customers, and the major browsers, Netscape and MSIE, can also be downloaded from their makers' websites. The fact that you are viewing this page means that you already have one installed.

Sometimes websites produce errors. An error on a website is not the user's fault, of course. A 404 error means that the page you tried to go to does not exist. This could be because the site is still being constructed and the page has not been created yet, or because the site author made a typo in the page link. There's nothing much to do about a 404 error except e-mail the site administrator (of the page you wanted to go to) and tell them about it.

Other errors can come from the JavaScript code of a website. Not all websites utilize JavaScript, but many do. JavaScript is different from Java, and most browsers now support JavaScript. If you are using an old version of a web browser (Netscape 3.0 for example), you might get JavaScript errors because sites utilize JavaScript versions that your browser does not support. So, you can try getting a newer version of your web browser.

E-mail stands for Electronic Mail, and that's what it is.
E-mail enables people to send letters, and even files and pictures, to each other. To use e-mail, you must have an e-mail client, which is just like a personal post office, since it retrieves and stores e-mail. Secondly, you must have an e-mail account. Most Internet Service Providers provide e-mail accounts for free, and some services, like Hotmail and Geocities, offer free e-mail as well. After configuring your e-mail client with your POP3 and SMTP server addresses (your e-mail provider will give you that information), you are ready to receive mail.

An attachment is a file sent with a letter. If someone sends you an attachment and you don't know who it is from, don't run the file, ever. It could be a virus or some other kind of nasty program. You can't get a virus just by reading e-mail; the danger is in running attachments. In a signature, a bit of text appended to your messages, you can put a text graphic, your business information, anything you want.

Imagine that a computer on the Internet is an island in the sea. The sea is filled with millions of islands. This is the Internet. An island communicates with other islands by sending ships to them and receiving ships in return. The ships are the packets that carry data across a network (or the Internet). This method is similar to the island-ocean symbolism above.

Telnet refers to accessing ports on a server directly with a text connection. Almost every kind of Internet function, like accessing web pages, "chatting," and e-mailing, is done over a Telnet-style connection. Telnetting requires a Telnet client. A telnet program comes with the Windows system, so Windows users can access telnet by typing "telnet" (without the quotes) in the run dialog. Many server programs (web daemon, chat daemon) can be accessed via telnet, although they are not usually meant to be accessed in such a manner. For instance, it is possible to connect directly to a mail server and check your mail by interfacing with the e-mail server software, but it's easier to use an e-mail client (of course).

There are millions of web pages that come from all over the world, and finding them requires a searchable database of websites, which is what a search engine provides.
For instance, if you wanted to find a website on dogs, you'd search for "dog" or "dogs" or "dog information." A few search engines are Altavista, Excite, Lycos, and Metasearch. A web spider follows any links it can possibly find. This means that a search engine can literally map out as much of the Internet as its own time and speed allow. An indexed collection uses a categorized hierarchy, like Yahoo's site: you can click on Computers & the Internet, then on Hardware, then on Modems, etc., and along the way through sections there are sites available which relate to the section you're in. Metasearch searches many search engines at the same time, finding the top choices from about 10 search engines, making searching a lot more effective. Once you are able to use search engines, you can effectively find the pages you want.

With the arrival of networking and multi-user systems, security has been on the mind of system developers and system operators since the dawn of AT&T and its phone network. Why should you be careful while making purchases via a website? Let's look at what happens when you submit, say, credit card information to a webpage. Looks safe, right? Not necessarily. As the user submits the information, it is streamed through a series of computers that make up the Internet backbone. The information travels in little chunks, in packages called packets. Here's the problem: while the information is being transferred through this big backbone, what is preventing someone in between from reading it? There are methods of enforcing security, like password protection, and most importantly, encryption. Encryption means scrambling data into a code that can only be unscrambled on the "other end." Browsers like Netscape Communicator and Internet Explorer feature encryption support for making on-line transfers. Some encryptions work better than others.
The most advanced encryption system is called DES (Data Encryption Standard), and it was adopted by the US Defense Department because it was deemed so difficult to 'crack' that they considered it a security risk if it were to fall into another country's hands. A single key is used to unlock an entire document, and with 2^56 (roughly 72 quadrillion) possible keys, guessing the right one is impractical.

Search engines of each type include:
3. Excite () - Web spider & Indexed
4. Lycos () - Web spider & Indexed
5. Metasearch () - Multiple search

A web spider is a program used by search engines that follows any link it can find, moving from one web page to another.
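The encryption idea described above, scrambling data with a key that also unscrambles it, can be illustrated with a deliberately insecure toy cipher. This is a repeating-key XOR written for this translation as an illustration only; real systems use vetted algorithms such as DES and its successors, never anything like this:

```java
// Toy symmetric "cipher" for illustration only: XOR with a repeating key.
// Applying the same key twice restores the original bytes, which is the
// essence of symmetric encryption. This is NOT secure.
class ToyCipher {
    static byte[] apply(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key[i % key.length]);
        }
        return out;
    }
}
```

Encrypting with the key and then encrypting again with the same key returns the original data; anyone without the key sees only scrambled bytes.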
毕业设计外文文献翻译 Graduation design of foreign literature translation

Title: The Impact of Artificial Intelligence on the Job Market

Abstract: With the rapid development of artificial intelligence (AI), concerns arise about its impact on the job market. This paper explores the potential effects of AI on various industries, including healthcare, manufacturing, and transportation, and the implications for employment. The findings suggest that while AI has the potential to automate repetitive tasks and increase productivity, it may also lead to job displacement and a shift in job requirements. The paper concludes with a discussion on the importance of upskilling and retraining for workers to adapt to the changing job market.

1. Introduction
Artificial intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence. AI has made significant advancements in recent years, with applications in various industries, such as healthcare, manufacturing, and transportation. As AI technology continues to evolve, concerns arise about its impact on the job market. This paper aims to explore the potential effects of AI on employment and discuss the implications for workers.

2. Potential Effects of AI on the Job Market
2.1 Automation of Repetitive Tasks
One of the major impacts of AI on the job market is the automation of repetitive tasks. AI systems can perform tasks faster and more accurately than humans, particularly in industries that involve routine and predictable tasks, such as manufacturing and data entry. This automation has the potential to increase productivity and efficiency, but also poses a risk to jobs that can be easily replicated by AI.

2.2 Job Displacement
Another potential effect of AI on the job market is job displacement. As AI systems become more sophisticated and capable of performing complex tasks, there is a possibility that workers may be replaced by machines.
This is particularly evident in industries such as transportation, where autonomous vehicles may replace human drivers, and customer service, where chatbots can handle customer inquiries. While job displacement may lead to short-term unemployment, it also creates opportunities for new jobs in industries related to AI.

2.3 Shifting Job Requirements
With the introduction of AI, job requirements are expected to shift. While AI may automate certain tasks, it also creates a demand for workers with the knowledge and skills to develop and maintain AI systems. This shift in job requirements may require workers to adapt and learn new skills to remain competitive in the job market.

3. Implications for Employment
The impact of AI on employment is complex and multifaceted. On one hand, AI has the potential to increase productivity, create new jobs, and improve overall economic growth. On the other hand, it may lead to job displacement and a shift in job requirements. To mitigate the negative effects of AI on employment, it is essential for workers to upskill and retrain themselves to meet the changing demands of the job market.

4. Conclusion
In conclusion, the rapid development of AI has significant implications for the job market. While AI has the potential to automate repetitive tasks and increase productivity, it may also lead to job displacement and a shift in job requirements. To adapt to the changing job market, workers should focus on upskilling and continuous learning to remain competitive. Overall, the impact of AI on employment will depend on how it is integrated into various industries and how workers and policymakers respond to these changes.
_毕业设计外文文献及翻译_Graduation Thesis Foreign Literature Review and Chinese Translation1. Title: "The Impact of Artificial Intelligence on Society"Abstract:人工智能对社会的影响摘要:人工智能技术的快速发展引发了关于其对社会影响的讨论。
本文探讨了人工智能正在重塑不同行业(包括医疗保健、交通运输和教育)的各种方式。
还讨论了AI实施的潜在益处和挑战,以及伦理考量。
总体而言,本文旨在提供对人工智能对社会影响的全面概述。
2. Title: "The Future of Work: Automation and Job Displacement"Abstract:With the rise of automation technologies, there is growing concern about the potential displacement of workers in various industries. This paper examines the trends in automation and its impact on jobs, as well as the implications for workforce development and retraining programs. The ethical and social implications of automation are also discussed, along with potential strategies for mitigating job displacement effects.工作的未来:自动化和失业摘要:随着自动化技术的兴起,人们越来越担心各行业工人可能被替代的问题。
xxxx大学xxx学院毕业设计(论文)外文文献翻译 系部 xxxx 专业 xxxx 学生姓名 xxxx 学号 xxxx 指导教师 xxxx 职称 xxxx 2013年3月

Introducing the Spring Framework
The Spring Framework is a popular open source application framework that addresses many of the issues outlined in this book. This chapter will introduce the basic ideas of Spring and discuss the central "bean factory" lightweight Inversion-of-Control (IoC) container in detail.

Spring makes it particularly easy to implement lightweight, yet extensible, J2EE architectures. It provides an out-of-the-box implementation of the fundamental architectural building blocks we recommend. Spring provides a consistent way of structuring your applications, and provides numerous middle tier features that can make J2EE development significantly easier and more flexible than in traditional approaches.

The basic motivations for Spring are:

To address areas not well served by other frameworks. There are numerous good solutions to specific areas of J2EE infrastructure: web frameworks, persistence solutions, remoting tools, and so on. However, integrating these tools into a comprehensive architecture can involve significant effort, and can become a burden. Spring aims to provide an end-to-end solution, integrating specialized frameworks into a coherent overall infrastructure. Spring also addresses some areas that other frameworks don't. For example, few frameworks address generic transaction management, data access object implementation, and gluing all those things together into an application, while still allowing for best-of-breed choice in each area. Hence we term Spring an application framework, rather than a web framework, IoC or AOP framework, or even middle tier framework.

To allow for easy adoption. A framework should be cleanly layered, allowing the use of individual features without imposing a whole worldview on the application.
Many Spring features, such as the JDBC abstraction layer or Hibernate integration, can be used in a library style or as part of the Spring end-to-end solution.

To deliver ease of use. As we've noted, J2EE out of the box is relatively hard to use to solve many common problems. A good infrastructure framework should make simple tasks simple to achieve, without forcing tradeoffs for future complex requirements (like distributed transactions) on the application developer. It should allow developers to leverage J2EE services such as JTA where appropriate, but to avoid dependence on them in cases when they are unnecessarily complex.

To make it easier to apply best practices. Spring aims to reduce the cost of adhering to best practices such as programming to interfaces, rather than classes, almost to zero. However, it leaves the choice of architectural style to the developer.

Non-invasiveness. Application objects should have minimal dependence on the framework. If leveraging a specific Spring feature, an object should depend only on that particular feature, whether by implementing a callback interface or using the framework as a class library. IoC and AOP are the key enabling technologies for avoiding framework dependence.

Consistent configuration. A good infrastructure framework should keep application configuration flexible and consistent, avoiding the need for custom singletons and factories. A single style should be applicable to all configuration needs, from the middle tier to web controllers.

Ease of testing. Testing either whole applications or individual application classes in unit tests should be as easy as possible. Replacing resources or application objects with mock objects should be straightforward.

To allow for extensibility. Because Spring is itself based on interfaces, rather than classes, it is easy to extend or customize it.
Many Spring components use strategy interfaces, allowing easy customization.

A Layered Application Framework
Chapter 6 introduced the Spring Framework as a lightweight container, competing with IoC containers such as PicoContainer. While the Spring lightweight container for JavaBeans is a core concept, this is just the foundation for a solution for all middleware layers.

Basic Building Blocks
Spring is a full-featured application framework that can be leveraged at many levels. It consists of multiple sub-frameworks that are fairly independent but still integrate closely into a one-stop shop, if desired. The key areas are:

Bean factory. The Spring lightweight IoC container, capable of configuring and wiring up JavaBeans and most plain Java objects, removing the need for custom singletons and ad hoc configuration. Various out-of-the-box implementations include an XML-based bean factory. The lightweight IoC container and its Dependency Injection capabilities will be the main focus of this chapter.

Application context. A Spring application context extends the bean factory concept by adding support for message sources and resource loading, and providing hooks into existing environments. Various out-of-the-box implementations include standalone application contexts and an XML-based web application context.

AOP framework. The Spring AOP framework provides AOP support for method interception on any class managed by a Spring lightweight container. It supports easy proxying of beans in a bean factory, seamlessly weaving in interceptors and other advice at runtime. Chapter 8 discusses the Spring AOP framework in detail. The main use of the Spring AOP framework is to provide declarative enterprise services for POJOs.

Auto-proxying. Spring provides a higher level of abstraction over the AOP framework and low-level services, which offers similar ease-of-use to .NET within a J2EE context.
In particular, the provision of declarative enterprise services can be driven by source-level metadata.

Transaction management. Spring provides a generic transaction management infrastructure, with pluggable transaction strategies (such as JTA and JDBC) and various means for demarcating transactions in applications. Chapter 9 discusses its rationale and the power and flexibility that it offers.

DAO abstraction. Spring defines a set of generic data access exceptions that can be used for creating generic DAO interfaces that throw meaningful exceptions independent of the underlying persistence mechanism. Chapter 10 illustrates the Spring support for DAOs in more detail, examining JDBC, JDO, and Hibernate as implementation strategies.

JDBC support. Spring offers two levels of JDBC abstraction that significantly ease the effort of writing JDBC-based DAOs: the org.springframework.jdbc.core package (a template/callback approach) and the org.springframework.jdbc.object package (modeling RDBMS operations as reusable objects). Using the Spring JDBC packages can deliver much greater productivity and eliminate the potential for common errors such as leaked connections, compared with direct use of JDBC. The Spring JDBC abstraction integrates with the transaction and DAO abstractions.

Integration with O/R mapping tools. Spring provides support classes for O/R mapping tools like Hibernate, JDO, and iBATIS Database Layer to simplify resource setup, acquisition, and release, and to integrate with the overall transaction and DAO abstractions. These integration packages allow applications to dispense with custom ThreadLocal sessions and native transaction handling, regardless of the underlying O/R mapping approach they work with.

Web MVC framework. Spring provides a clean implementation of web MVC, consistent with the JavaBean configuration approach.
The Spring web framework enables web controllers to be configured within an IoC container, eliminating the need to write any custom code to access business layer services. It provides a generic DispatcherServlet and out-of-the-box controller classes for command and form handling. Request-to-controller mapping, view resolution, locale resolution and other important services are all pluggable, making the framework highly extensible. The web framework is designed to work not only with JSP, but with any view technology, such as Velocity, without the need for additional bridges. Chapter 13 discusses web tier design and the Spring web MVC framework in detail.

Remoting support. Spring provides a thin abstraction layer for accessing remote services without hard-coded lookups, and for exposing Spring-managed application beans as remote services. Out-of-the-box support is included for RMI, Caucho's Hessian and Burlap web service protocols, and WSDL Web Services via JAX-RPC. Chapter 11 discusses lightweight remoting.

While Spring addresses areas as diverse as transaction management and web MVC, it uses a consistent approach everywhere. Once you have learned the basic configuration style, you will be able to apply it in many areas. Resources, middle tier objects, and web components are all set up using the same bean configuration mechanism. You can combine your entire configuration in one single bean definition file or split it by application modules or layers; the choice is up to you as the application developer. There is no need for diverse configuration files in a variety of formats, spread out across the application.

Spring on J2EE
Although many parts of Spring can be used in any kind of Java environment, it is primarily a J2EE application framework. For example, there are convenience classes for linking JNDI resources into a bean factory, such as JDBC DataSources and EJBs, and integration with JTA for distributed transaction management.
In most cases, application objects do not need to work with J2EE APIs directly, improving reusability and meaning that there is no need to write verbose, hard-to-test JNDI lookups. Thus Spring allows application code to seamlessly integrate into a J2EE environment without being unnecessarily tied to it. You can build upon J2EE services where it makes sense for your application, and choose lighter-weight solutions if there are no complex requirements. For example, you need to use JTA as transaction strategy only if you face distributed transaction requirements. For a single database, there are alternative strategies that do not depend on a J2EE container. Switching between those transaction strategies is merely a matter of configuration; Spring's consistent abstraction avoids any need to change application code.

Spring offers support for accessing EJBs. This is an important feature (and relevant even in a book on "J2EE without EJB") because the use of dynamic proxies as codeless client-side business delegates means that Spring can make using a local stateless session EJB an implementation-level, rather than a fundamental architectural, choice. Thus if you want to use EJB, you can within a consistent architecture; however, you do not need to make EJB the cornerstone of your architecture. This Spring feature can make developing EJB applications significantly faster, because there is no need to write custom code in service locators or business delegates. Testing EJB client code is also much easier, because it only depends on the EJB's Business Methods interface (which is not EJB-specific), not on JNDI or the EJB API.

Spring also provides support for implementing EJBs, in the form of convenience superclasses for EJB implementation classes, which load a Spring lightweight container based on an environment variable specified in the ejb-jar.xml deployment descriptor.
This is a powerful and convenient way of implementing SLSBs or MDBs that are facades for fine-grained POJOs: a best practice if you do choose to implement an EJB application. Using this Spring feature does not conflict with EJB in any way; it merely simplifies following good practice.

The main aim of Spring is to make J2EE easier to use and promote good programming practice. It does not reinvent the wheel; thus you'll find no logging packages in Spring, no connection pools, no distributed transaction coordinator. All these features are provided by other open source projects, such as Jakarta Commons Logging (which Spring uses for all its log output), Jakarta Commons DBCP (which can be used as a local DataSource), and ObjectWeb JOTM (which can be used as a transaction manager), or by your J2EE application server. For the same reason, Spring doesn't provide an O/R mapping layer: there are good solutions for this problem area, such as Hibernate and JDO.

Spring does aim to make existing technologies easier to use. For example, although Spring is not in the business of low-level transaction coordination, it does provide an abstraction layer over JTA or any other transaction strategy. Spring is also popular as middle tier infrastructure for Hibernate, because it provides solutions to many common issues like SessionFactory setup, ThreadLocal sessions, and exception handling. With the Spring HibernateTemplate class, implementation methods of Hibernate DAOs can be reduced to one-liners while properly participating in transactions.

The Spring Framework does not aim to replace J2EE middle tier services as a whole. It is an application framework that makes accessing low-level J2EE container services easier. Furthermore, it offers lightweight alternatives for certain J2EE services in some scenarios, such as a JDBC-based transaction strategy instead of JTA when just working with a single database.
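The template/callback idea behind helpers like HibernateTemplate (and the org.springframework.jdbc.core package) can be sketched in a few lines. The classes below are illustrative stand-ins written for this chapter, not Spring's real API: FakeSession plays the role of a Hibernate Session or JDBC Connection, and the template guarantees acquisition and release so the callback holds only the real work:

```java
import java.util.function.Function;

// Illustrative template/callback sketch (not Spring's actual classes):
// the template owns the resource lifecycle; callers supply only the work.
class FakeSession {
    boolean open = true;
    String find(String query) { return "result of " + query; }
    void close() { open = false; } // stands in for releasing a connection
}

class SessionTemplate {
    <T> T execute(Function<FakeSession, T> callback) {
        FakeSession session = new FakeSession(); // acquire the resource
        try {
            return callback.apply(session);      // the DAO's one-liner
        } finally {
            session.close();                     // always released: no leaks
        }
    }
}
```

A DAO method then reduces to something like `template.execute(s -> s.find("from Product"))`, with cleanup (and, in the real framework, transaction participation) handled outside the DAO.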
Essentially, Spring enables you to write applications that scale down as well as up.

Spring for Web Applications
A typical usage of Spring in a J2EE environment is to serve as backbone for the logical middle tier of a J2EE web application. Spring provides a web application context concept, a powerful lightweight IoC container that seamlessly adapts to a web environment: it can be accessed from any kind of web tier, whether Struts, WebWork, Tapestry, JSF, Spring web MVC, or a custom solution.

The following code shows a typical example of such a web application context. In a typical Spring web app, an applicationContext.xml file will reside in the WEB-INF directory, containing bean definitions according to the "spring-beans" DTD. In such a bean definition XML file, business objects and resources are defined, for example, a "myDataSource" bean, a "myInventoryManager" bean, and a "myProductManager" bean. Spring takes care of their configuration, their wiring up, and their lifecycle.

<beans>
  <bean id="myDataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName">
      <value>com.mysql.jdbc.Driver</value>
    </property>
    <property name="url">
      <value>jdbc:mysql:myds</value>
    </property>
  </bean>
  <bean id="myInventoryManager" class="ebusiness.DefaultInventoryManager">
    <property name="dataSource">
      <ref bean="myDataSource"/>
    </property>
  </bean>
  <bean id="myProductManager" class="ebusiness.DefaultProductManager">
    <property name="inventoryManager">
      <ref bean="myInventoryManager"/>
    </property>
    <property name="retrieveCurrentStock">
      <value>true</value>
    </property>
  </bean>
</beans>

By default, all such beans have "singleton" scope: one instance per context. The "myInventoryManager" bean will automatically be wired up with the defined DataSource, while "myProductManager" will in turn receive a reference to the "myInventoryManager" bean.
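What that XML asks the container to do can be written out by hand in plain Java. The classes below are simplified stand-ins for the example's beans (the real DriverManagerDataSource and ebusiness classes are Spring and application code), showing the setter injection the container automates:

```java
// Hand-written equivalent of the bean definitions above, using
// simplified stand-in classes to make the wiring visible.
class MyDataSource {
    String driverClassName, url;
    void setDriverClassName(String name) { this.driverClassName = name; }
    void setUrl(String url) { this.url = url; }
}

class InventoryManager {
    MyDataSource dataSource;
    void setDataSource(MyDataSource dataSource) { this.dataSource = dataSource; }
}

class ProductManager {
    InventoryManager inventoryManager;
    boolean retrieveCurrentStock;
    void setInventoryManager(InventoryManager im) { this.inventoryManager = im; }
    void setRetrieveCurrentStock(boolean flag) { this.retrieveCurrentStock = flag; }
}

class ManualWiring {
    static ProductManager wire() {
        MyDataSource dataSource = new MyDataSource();      // "myDataSource" bean
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql:myds");

        InventoryManager inventoryManager = new InventoryManager();
        inventoryManager.setDataSource(dataSource);        // <ref bean="myDataSource"/>

        ProductManager productManager = new ProductManager();
        productManager.setInventoryManager(inventoryManager);
        productManager.setRetrieveCurrentStock(true);
        return productManager;
    }
}
```

The container performs exactly this construction and setter-calling from the declarative definitions, so none of this wiring code has to appear in the application.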
Those objects (traditionally called "beans" in Spring terminology) need to expose only the corresponding bean properties or constructor arguments (as you'll see later in this chapter); they do not have to perform any custom lookups.

A root web application context will be loaded by a ContextLoaderListener that is defined in web.xml as follows:

<web-app>
  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>
  ...
</web-app>

After initialization of the web app, the root web application context will be available as a ServletContext attribute to the whole web application, in the usual manner. It can be retrieved from there easily via fetching the corresponding attribute, or via a convenience method in org.springframework.web.context.support.WebApplicationContextUtils. This means that the application context will be available in any web resource with access to the ServletContext, like a Servlet, Filter, JSP, or Struts Action, as follows:

WebApplicationContext wac = WebApplicationContextUtils.getWebApplicationContext(servletContext);

The Spring web MVC framework allows web controllers to be defined as JavaBeans in child application contexts, one per dispatcher servlet. Such controllers can express dependencies on beans in the root application context via simple bean references. Therefore, typical Spring web MVC applications never need to perform a manual lookup of an application context or bean factory, or do any other form of lookup. Neither do other client objects that are managed by an application context themselves: they can receive collaborating objects as bean references.

The Core Bean Factory
In the previous section, we have seen a typical usage of the Spring IoC container in a web environment: the provided convenience classes allow for seamless integration without having to worry about low-level container details. Nevertheless, it does help to look at the inner workings to understand how Spring manages the container.
Therefore, we will now look at the Spring bean container in more detail, starting at the lowest building block: the bean factory. Later, we'll continue with resource setup and details on the application context concept. One of the main incentives for a lightweight container is to dispense with the multitude of custom factories and singletons often found in J2EE applications. The Spring bean factory provides one consistent way to set up any number of application objects, whether coarse-grained components or fine-grained business objects. Applying reflection and Dependency Injection, the bean factory can host components that do not need to be aware of Spring at all. Hence we call Spring a non-invasive application framework.

Fundamental Interfaces
The fundamental lightweight container interface is org.springframework.beans.factory.BeanFactory. This is a simple interface, which is easy to implement directly in the unlikely case that none of the implementations provided with Spring suffices. The BeanFactory interface offers two getBean() methods for looking up bean instances by String name, with the option to check for a required type (and throw an exception if there is a type mismatch).

public interface BeanFactory {
    Object getBean(String name) throws BeansException;
    Object getBean(String name, Class requiredType) throws BeansException;
    boolean containsBean(String name);
    boolean isSingleton(String name) throws NoSuchBeanDefinitionException;
    String[] getAliases(String name) throws NoSuchBeanDefinitionException;
}

The isSingleton() method allows calling code to check whether the specified name represents a singleton or prototype bean definition. In the case of a singleton bean, all calls to the getBean() method will return the same object instance. In the case of a prototype bean, each call to getBean() returns an independent object instance, configured identically. The getAliases() method will return alias names defined for the given bean name, if any.
This mechanism is used to provide more descriptive alternative names for beans than are permitted in certain bean factory storage representations, such as XML id attributes. The methods in most BeanFactory implementations are aware of a hierarchy that the implementation may be part of. If a bean is not found in the current factory, the parent factory will be asked, up until the root factory. From the point of view of a caller, all factories in such a hierarchy will appear to be merged into one. Bean definitions in ancestor contexts are visible to descendant contexts, but not the reverse. All exceptions thrown by the BeanFactory interface and sub-interfaces extend org.springframework.beans.BeansException, and are unchecked. This reflects the fact that low-level configuration problems are not usually recoverable: hence, application developers can choose to write code to recover from such failures if they wish to, but should not be forced to write code in the majority of cases where configuration failure is fatal. Most implementations of the BeanFactory interface do not merely provide a registry of objects by name; they provide rich support for configuring those objects using IoC. For example, they manage dependencies between managed objects, as well as simple properties. In the next section, we'll look at how such configuration can be expressed in a simple and intuitive XML structure. The sub-interface org.springframework.beans.factory.ListableBeanFactory supports listing beans in a factory.
It provides methods to retrieve the number of beans defined, the names of all beans, and the names of beans that are instances of a given type:

public interface ListableBeanFactory extends BeanFactory {
    int getBeanDefinitionCount();
    String[] getBeanDefinitionNames();
    String[] getBeanDefinitionNames(Class type);
    boolean containsBeanDefinition(String name);
    Map getBeansOfType(Class type, boolean includePrototypes,
                       boolean includeFactoryBeans) throws BeansException;
}

The ability to obtain such information about the objects managed by a ListableBeanFactory can be used to implement objects that work with a set of other objects known only at runtime. In contrast to the BeanFactory interface, the methods in ListableBeanFactory apply to the current factory instance and do not take account of a hierarchy that the factory may be part of. The org.springframework.beans.factory.BeanFactoryUtils class provides analogous methods that traverse an entire factory hierarchy. There are various ways to leverage a Spring bean factory, ranging from simple bean configuration to J2EE resource integration and AOP proxy generation. The bean factory is the central, consistent way of setting up any kind of application objects in Spring, whether DAOs, business objects, or web controllers. Note that application objects seldom need to work with the BeanFactory interface directly, but are usually configured and wired by a factory without the need for any Spring-specific code. For standalone usage, the Spring distribution provides a tiny spring-core.jar file that can be embedded in any kind of application. Its only third-party dependency beyond J2SE 1.3 (plus JAXP for XML parsing) is the Jakarta Commons Logging API. The bean factory is the core of Spring and the foundation for many other services that the framework offers. Nevertheless, the bean factory can easily be used standalone if no other Spring services are required.

Introduction to the Spring Framework: Spring is a popular open-source application framework that can solve many common problems.
1. Graduation Project Foreign Literature Translation (Original)
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. AC-19, NO. 5, OCTOBER 1974

Hans P. Geering (S'70-M'71) was born on June 7, 1942. He received the degree of Diplomierter Elektroingenieur from the Eidgenössische Technische Hochschule in Zurich, Switzerland, in 1966, and the M.S. and Ph.D. degrees from the Massachusetts Institute of Technology, Cambridge, in 1969 and 1971, respectively. In 1967 and 1968 he worked with Sprecher & Schuh AG in Suhr, Switzerland, and Oerlikon-Buehrle AG in Zurich, Switzerland. From September 1968 to August 1971 he was a Research Assistant in the M.I.T. Electronic Systems Laboratory and a Teaching Assistant in the Department of Electrical Engineering at M.I.T. He is presently with Oerlikon-Buehrle AG. His research interests are in estimation and control theory. Dr. Geering is a member of the Schweizerische Gesellschaft für Automatik (IFAC) and the Schweizerischer Elektrotechnischer Verein.

Michael Athans (S'58-M'61-SM'69-F'73): for a photograph and biography see page 30 of the February issue of this TRANSACTIONS.
Graduation Design Foreign Literature Translation: Sample Text with Analysis
Dalian University of Science and Technology, Graduation Design (Thesis) Foreign Literature Translation. Student name / Major and class / Advisor and title / Affiliation / Head of teaching section / Date of completion: April 15, 2016.

Translation Equivalence
Despite the fact that the world is becoming a global village, translation remains a major way for languages and cultures to interact and influence each other. And name translation, especially government name translation, occupies a quite significant place in international exchange. Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text. While interpreting (the facilitating of oral or sign-language communication between users of different languages) antedates writing, translation began only after the appearance of written literature. There exist partial translations of the Sumerian Epic of Gilgamesh (ca. 2000 BCE) into Southwest Asian languages of the second millennium BCE. Translators always risk inappropriate spill-over of source-language idiom and usage into the target-language translation. On the other hand, spill-overs have imported useful source-language calques and loanwords that have enriched the target languages. Indeed, translators have helped substantially to shape the languages into which they have translated. Due to the demands of business documentation consequent to the Industrial Revolution that began in the mid-18th century, some translation specialties have become formalized, with dedicated schools and professional associations. Because of the laboriousness of translation, since the 1940s engineers have sought to automate translation (machine translation) or to mechanically aid the human translator (computer-assisted translation). The rise of the Internet has fostered a worldwide market for translation services and has facilitated language localization. It is generally accepted that translation, not as a separate entity, blooms into flower under circumstances such as culture, societal functions, politics, and power relations.
Nowadays, the field of translation studies is immersed in abundantly diversified translation standards, some of them presented by renowned figures and rather authoritative. In translation practice, however, how should we select the so-called translation standards to serve as our guidelines in the translation process, and how should we adopt translation standards to evaluate a translation product? In the macro-context of the flourishing of linguistic theories, theorists in the translation circle keep to the golden law of the principle of equivalence. The theory of Translation Equivalence is the central issue in western translation theories, and the presentation of this theory has given great impetus to the development and improvement of translation theory. It is not difficult for us to discover that it is the theory of Translation Equivalence that serves as the guideline in government name translation in China. Name translation, as defined, is the replacement of the name in the source language by an equivalent name or other words in the target language. Translating Chinese government names into English, similarly, is replacing the Chinese government name with an equivalent in English. Metaphorically speaking, translation is often described as a moving trajectory going from A to B along a path, or as a container carrying something across from A to B. This view is commonly held by both translation practitioners and theorists in the West. In this view, they do not expect that this trajectory or content will change its identity as it moves or as it is carried. In China, to translate is also understood by many people as "to translate the whole text sentence by sentence and paragraph by paragraph, without any omission, addition, or other changes." In both views, the source text and the target text must be "the same". This helps explain the etymological source of the term "translation equivalence".
It is in essence a word which describes the relationship between the ST and the TT. Equivalence means the state, fact, or property of being equivalent. It is widely used in several scientific fields such as chemistry and mathematics. Therefore, it has come to have a strong scientific meaning that is rather absolute and concise. Influenced by this, translation equivalence has also come to have an absolute denotation, though it was first applied in translation study as a general word. From a linguistic point of view, it can be divided into three sub-types, i.e., formal equivalence, semantic equivalence, and pragmatic equivalence. In actual translation, it frequently happens that they cannot be obtained at the same time, thus forming a kind of relative translation equivalence in terms of quality. In terms of quantity, sometimes the ST and TT are not equivalent either. Absolute translation equivalence, both in quality and quantity, even though obtainable, is limited to a few cases. The following is a brief discussion of translation equivalence study conducted by three influential western scholars: Eugene Nida, Andrew Chesterman, and Peter Newmark. It is expected that their studies can inform GNT study in China and provide translators with insightful methods. Nida's definition of translation is: "Translation consists in reproducing in the receptor language the closest natural equivalent of the source language message, first in terms of meaning and secondly in terms of style." It is a replacement of textual material in one language (SL) by equivalent textual material in another language (TL). The translator must strive for equivalence rather than identity. In a sense, this is just another way of emphasizing the reproduction of the message rather than the conservation of the form of the utterance.
The message in the receptor language should match as closely as possible the different elements in the source language, to reproduce as literally and meaningfully as possible the form and content of the original. Translation equivalence is an empirical phenomenon discovered by comparing SL and TL texts, and it is a useful operational concept like the term "unit of translation". Nida argues that there are two different types of equivalence, namely formal equivalence and dynamic equivalence. Formal correspondence focuses attention on the message itself, in both form and content, whereas dynamic equivalence is based upon "the principle of equivalent effect". Formal correspondence consists of a TL item which represents the closest equivalent of an ST word or phrase. Nida and Taber make it clear that there are not always formal equivalents between language pairs. Therefore, formal equivalents should be used wherever possible if the translation aims at achieving formal rather than dynamic equivalence. The use of formal equivalents might at times have serious implications in the TT, since the translation will not be easily understood by the target readership. According to Nida and Taber, formal correspondence distorts the grammatical and stylistic patterns of the receptor language, and hence distorts the message, causing the receptor to misunderstand or to labor unduly hard. Dynamic equivalence is based on what Nida calls "the principle of equivalent effect", where the relationship between receptor and message should be substantially the same as that which existed between the original receptors and the message. The message has to be modified to the receptor's linguistic needs and cultural expectations, and aims at complete naturalness of expression. Naturalness is a key requirement for Nida. He defines the goal of dynamic equivalence as seeking the closest natural equivalent to the SL message.
This receptor-oriented approach considers adaptations of grammar, of lexicon, and of cultural references to be essential in order to achieve naturalness; the TL should not show interference from the SL, and the "foreignness" of the ST setting is minimized. Nida is in favor of the application of dynamic equivalence as a more effective translation procedure. Thus, the product of the translation process, that is, the text in the TL, must have the same impact on the different readers it is addressing. Only in Nida and Taber's edition is it clearly stated that dynamic equivalence in translation is far more than mere correct communication of information. As Andrew Chesterman points out in his recent book Memes of Translation, equivalence is one of the five elements of translation theory, standing shoulder to shoulder with source-target, untranslatability, free-vs-literal, and all-writing-is-translating in importance. Pragmatically speaking, observed Chesterman, "the only true examples of equivalence (i.e., absolute equivalence) are those in which an ST item X is invariably translated into a given TL as Y, and vice versa. Typical examples would be words denoting numbers (with the exception of contexts in which they have culture-bound connotations, such as 'magic' or 'unlucky'), certain technical terms (oxygen, molecule) and the like. From this point of view, the only true test of equivalence would be invariable back-translation. This, of course, is unlikely to occur except in the case of a small set of lexical items, or perhaps simple isolated syntactic structures." Peter Newmark, departing from Nida's receptor-oriented line, argues that the success of equivalent effect is "illusory" and that the conflict of loyalties and the gap between emphasis on source and target language will always remain the overriding problem in translation theory and practice. He suggests narrowing the gap by replacing the old terms with those of semantic and communicative translation.
The former attempts to render, as closely as the semantic and syntactic structures of the second language allow, the exact contextual meaning of the original, while the latter "attempts to produce on its readers an effect as close as possible to that obtained on the readers of the original." Newmark's description of communicative translation resembles Nida's dynamic equivalence in the effect it is trying to create on the TT reader, while semantic translation has similarities to Nida's formal equivalence. Meanwhile, Newmark points out that only by combining both semantic and communicative translation can we achieve the goal of keeping the "spirit" of the original. Semantic translation requires that the translator retain the aesthetic value of the original, trying his best to keep the linguistic features and characteristic style of the author. According to semantic translation, the translator should always retain the semantic and syntactic structures of the original. Deletion and abridgement lead to distortion of the author's intention and his writing style.

Translation Equivalence
Although the world is gradually becoming a global village, translation remains one of the most important ways in which languages and cultures communicate, interact, and influence one another.
Graduation Design Literature Translation
Graduation Design Literature Translation. Part One: Graduation Design Foreign Literature Translation. Major / Student name / Class / Student ID / Advisor / UG College. Title of foreign material: Knowledge-Based Engineering Design Methodology. Source of foreign material: Int. J. Engng Ed., Vol. 16, No. 1. Attachment: 1. Translated text.

Knowledge-Based Engineering Design Methodology
D. E. CALKINS

1. Background
The development of complex systems requires a great deal of engineering and management knowledge and decision-making, and must satisfy many competing requirements. Design is regarded as the primary factor determining a product's final form, cost, reliability, and market acceptance. The high-level engineering design and analysis stage is especially important, because most of the life-cycle cost and the overall quality of the system are committed at this stage. The greatest opportunity for reducing product cost occurs in the earliest stages of product design. Roughly seventy percent of the total life-cycle cost has been committed by the end of the conceptual design phase; the key to shortening the design cycle is therefore to shorten the conceptual design phase, which at the same time reduces the amount of engineering redesign. Conceptual design relies on good estimates and informal heuristics in the engineering trade-off process. Traditional CAD tools offer very limited support for the conceptual design phase. Communication and collaboration across multiple disciplines are needed to carry out design analysis rapidly. Finally, large amounts of domain-specific knowledge must be managed. The solution is to bring more resources into the conceptual design phase and to shorten overall product time by eliminating redesign. All of these factors argue for integrated design tools and environments that provide support in the early, integrated design stages. Such an integrated design tool enables engineers and designers from different disciplines to reach a shared understanding of design intent in the face of complex requirements and constraints. The design tool lets the design team study more configuration details at a higher level. The problem is to architect a design tool that satisfies all of these requirements.

2. The Virtual Prototype Model
What is needed now is a way of representing the product design that yields a true virtual prototype, allowing early development and evaluation of the product. The virtual prototype replaces the traditional physical prototype and allows design engineers to study "what-if" scenarios while iteratively updating their designs. A true virtual prototype represents not only shape and form, i.e. geometry; it also represents non-geometric attributes such as weight, material, performance, and manufacturing process.
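The idea of a virtual prototype that carries both geometry and non-geometric attributes, and that supports "what-if" variants, can be sketched as a small data model. All class and field names below are illustrative assumptions; the article does not prescribe any schema.

```java
// A toy virtual-prototype record: a geometric description plus a map of
// non-geometric attributes (weight, material, performance, process, ...).
import java.util.LinkedHashMap;
import java.util.Map;

public class VirtualPrototype {
    // Placeholder for the geometric model, e.g. a reference to a CAD file.
    final String geometryModel;
    // Non-geometric attributes, keyed by name.
    final Map<String, Object> attributes = new LinkedHashMap<>();

    VirtualPrototype(String geometryModel) { this.geometryModel = geometryModel; }

    VirtualPrototype with(String name, Object value) {
        attributes.put(name, value);
        return this;
    }

    // A "what-if" study: derive a variant with one attribute changed,
    // leaving the original design untouched.
    VirtualPrototype whatIf(String name, Object newValue) {
        VirtualPrototype variant = new VirtualPrototype(geometryModel);
        variant.attributes.putAll(attributes);
        variant.attributes.put(name, newValue);
        return variant;
    }

    public static void main(String[] args) {
        VirtualPrototype bracket = new VirtualPrototype("bracket-rev-a.step")
                .with("material", "aluminum 6061")
                .with("weightKg", 1.2)
                .with("process", "CNC milling");
        VirtualPrototype alt = bracket.whatIf("material", "titanium");
        System.out.println(alt.attributes.get("material"));
    }
}
```

The immutable-variant style mirrors the iterative updating the text describes: each what-if study is a cheap copy, so earlier design states remain available for comparison.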
Undergraduate Graduation Design: Foreign Literature and Translation 1
Undergraduate Graduation Design: Foreign Literature and Translation. Title of literature: Transit Route Network Design Problem: Review. Source: Internet. Date of publication: January 2007. School (department): xxx. Major: xxx. Class: xxx. Name: xxx. Student ID: xxx. Advisor: xxx. Date of translation: xxx.

Foreign literature:

Transit Route Network Design Problem: Review

Abstract: Efficient design of public transportation networks has attracted much interest in the transport literature and practice, with many models and approaches for formulating the associated transit route network design problem (TRNDP) having been developed. The present paper systematically presents and reviews research on the TRNDP based on the three distinctive parts of the TRNDP setup: design objectives, operating environment parameters, and solution approach.

Introduction
Public transportation is largely considered a viable option for sustainable transportation in urban areas, offering advantages such as mobility enhancement, traffic congestion and air pollution reduction, and energy conservation, while still preserving social equity considerations. Nevertheless, in the past decades, factors such as socioeconomic growth, the need for personalized mobility, the increase in private vehicle ownership, and urban sprawl have led to a shift towards private vehicles and a decrease in public transportation's share of daily commuting (Sinha 2003; TRB 2001; EMTA 2004; ECMT 2002; Pucher et al. 2007). Efforts for encouraging public transportation use focus on improving provided services such as line capacity, service frequency, coverage, reliability, comfort, and service quality, which are among the most important parameters for an efficient public transportation system (Sinha 2003; Vuchic 2004). In this context, planning and designing a cost- and service-efficient public transportation network is necessary for improving its competitiveness and market share.
The problem that formally describes the design of such a public transportation network is referred to as the transit route network design problem (TRNDP); it focuses on the optimization of a number of objectives representing the efficiency of public transportation networks under operational and resource constraints, such as the number and length of public transportation routes, allowable service frequencies, and number of available buses (Chakroborty 2003; Fan and Machemehl 2006a, b). The practical importance of designing public transportation networks has attracted considerable interest in the research community, which has developed a variety of approaches and models for the TRNDP, including different levels of design detail and complexity as well as interesting algorithmic innovations. In this paper we offer a structured review of approaches for the TRNDP; researchers will obtain a basis for evaluating existing research and identifying future research paths for further improving TRNDP models. Moreover, practitioners will acquire a detailed presentation of both the process and potential tools for automating the design of public transportation networks, their characteristics, capabilities, and strengths.

Design of Public Transportation Networks
Network design is an important part of the public transportation operational planning process (Ceder 2001). It includes the design of route layouts and the determination of associated operational characteristics such as frequencies, rolling stock types, and so on. As noted by Ceder and Wilson (1986), network design elements are part of the overall operational planning process for public transportation networks; the process includes five steps: (1) design of routes; (2) setting frequencies; (3) developing timetables; (4) scheduling buses; and (5) scheduling drivers.
Route layout design is guided by passenger flows: routes are established to provide direct or indirect connection between locations and areas that generate and attract demand for transit travel, such as residential and activity-related centers (Levinson 1992). For example, passenger flows between a central business district (CBD) and suburbs dictate the design of radial routes, while demand for trips between different neighborhoods may lead to the selection of a circular route connecting them. Anticipated service coverage, transfers, desirable route shapes, and available resources usually determine the structure of the route network. Route shapes are usually constrained by their length and directness (route directness implies that route shapes are as straight as possible between connected points), the usage of given roads, and the overlapping with other transit routes. The desirable outcome is a set of routes connecting locations within a service area, conforming to given design criteria. For each route, frequencies and bus types are the operational characteristics typically determined through design.
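The frequency-setting step can be sketched with the standard load-factor formula: the required frequency follows from the expected peak passenger volume on the route, the bus capacity, and a desired load factor, clipped to the minimum and maximum allowable frequencies. The formula is a common textbook formulation consistent with the constraints this review names, not taken verbatim from the paper, and the variable names are illustrative.

```java
// Minimal sketch of per-route frequency setting under frequency constraints.
public class FrequencySetting {

    /**
     * @param peakVolume  expected passengers per hour past the peak load point
     * @param busCapacity places per bus
     * @param loadFactor  desired average occupancy, e.g. 0.8
     * @param minFreq     minimum buses per hour (policy headway / waiting time)
     * @param maxFreq     maximum buses per hour (fleet / safety limit)
     * @return buses per hour for the route
     */
    static double frequency(double peakVolume, double busCapacity,
                            double loadFactor, double minFreq, double maxFreq) {
        double required = peakVolume / (busCapacity * loadFactor);
        // Clip to the allowable frequency range.
        return Math.min(maxFreq, Math.max(minFreq, required));
    }

    public static void main(String[] args) {
        // 400 pax/h with 50-place buses at 80% load -> 10 buses/h, within [4, 20].
        System.out.println(frequency(400, 50, 0.8, 4, 20));
    }
}
```

The clipping step encodes the review's two-sided constraint: the minimum frequency guarantees tolerable waiting times, the maximum reflects safety and fleet availability.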
Calculations are based on expected passenger volumes along routes that are estimated empirically or by applying transit assignment techniques, under frequency requirement constraints (minimum and maximum allowed frequencies guaranteeing safety and tolerable waiting times, respectively), desired load factors, fleet size, and availability. These steps, as well as the overall design process, have been largely based upon practical guidelines, the expert judgment of transit planners, and operators' experience (Baaj and Mahmassani 1991). Two handbooks, by Black (1995) and Vuchic (2004), outline frameworks to be followed by planners when designing a public transportation network that include: (1) establishing the objectives for the network; (2) defining the operational environment of the network (road structure, demand patterns, and characteristics); (3) developing; and (4) evaluating alternative public transportation networks. Despite the extensive use of practical guidelines and experience for designing transit networks, researchers have argued that empirical rules may not be sufficient for designing an efficient transit network and improvements may lead to better quality and more efficient services.
For example, Fan and Machemehl (2004) noted that researchers and practitioners have been realizing that systematic and integrated approaches are essential for designing economically and operationally efficient transit networks. A systematic design process implies clear and consistent steps and associated techniques for designing a public transportation network, which is the scope of the TRNDP.

TRNDP: Overview
Research has extensively examined the TRNDP since the late 1960s. In 1979, Newell discussed previous research on the optimal design of bus routes, and Hasselström (1981) analyzed relevant studies and identified the major features of the TRNDP as demand characteristics, objective functions, constraints, passenger behavior, solution techniques, and computational time for solving the problem. An extensive review of existing work on transit network design was provided by Chua (1984), who reported five types of transit system planning: (1) manual; (2) market analysis; (3) systems analysis; (4) systems analysis with interactive graphics; and (5) mathematical optimization approach. Axhausen and Smith (1984) analyzed existing heuristic algorithms for formulating the TRNDP in Europe, tested them, and discussed their potential implementation in the United States. Ceder and Wilson (1986) reported prior work on the TRNDP and distinguished studies into those that deal with idealized networks and those that focus on actual routes, suggesting that the main features of the TRNDP include demand characteristics, objectives and constraints, and solution methods. In the same period, Van Nes et al. (1988) grouped TRNDP models into six categories: (1) analytical models for relating parameters of the public transportation system; (2) models determining the links to be used for public transportation route construction; (3) models determining routes only; (4) models assigning frequencies to a set of routes; (5) two-stage models for constructing routes and then assigning frequencies; and (6) models for simultaneously determining routes and
frequencies. Spacovic et al. (1994) and Spacovic and Schonfeld (1994) proposed a matrix organization and classified each study according to design parameters examined, objectives anticipated, network geometry, and demand characteristics. Ceder and Israeli (1997) suggested broad categorizations of TRNDP models into passenger flow simulation and mathematical programming models. Russo (1998) adopted the same categorization and noted that mathematical programming models guarantee optimal transit network design but sacrifice the level of detail in passenger representation and design parameters, while simulation models address passenger behavior but use heuristic procedures for obtaining a TRNDP solution. Ceder (2001) enhanced his earlier categorization by classifying TRNDP models into simulation, ideal network, and mathematical programming models. Finally, in a recent series of studies, Fan and Machemehl (2004, 2006a, b) divided TRNDP approaches into practical approaches, analytical optimization models for idealized conditions, and metaheuristic procedures for practical problems. The TRNDP is an optimization problem where objectives are defined, its constraints are determined, and a methodology is selected and validated for obtaining an optimal solution. The TRNDP is described by the objectives of the public transportation network service to be achieved, the operational characteristics and environment under which the network will operate, and the methodological approach for obtaining the optimal network design. Based on this description of the TRNDP, we propose a three-layer structure for organizing TRNDP approaches (Objectives, Parameters, and Methodology). Each layer includes one or more items that characterize each study. The "Objectives" layer incorporates the goals set when designing a public transportation system, such as the minimization of the costs of the system or the maximization of the quality of services provided. The "Parameters" layer describes the operating environment and includes both the design
variables expected to be derived for the transit network (route layouts, frequencies) as well as environmental and operational parameters affecting and constraining that network (for example, allowable frequencies, desired load factors, fleet availability, demand characteristics and patterns, and so on). Finally, the "Methodology" layer covers the logical-mathematical framework and algorithmic tools necessary to formulate and solve the TRNDP. The proposed structure follows the basic concepts of setting up a TRNDP: deciding upon the objectives, selecting the transit network items and characteristics to be designed, setting the necessary constraints for the operating environment, and formulating and solving the problem.

TRNDP: Objectives
Public transportation serves a very important social role while attempting to do so at the lowest possible operating cost. Objectives for designing daily operations of a public transportation system should encompass both angles. The literature suggests that most studies actually focus on both service and economic efficiency when designing such a system. Practical goals for the TRNDP can be briefly summarized as follows (Fielding 1987; van Oudheusden et al. 1987; Black 1995): (1) user benefit maximization; (2) operator cost minimization; (3) total welfare maximization; (4) capacity maximization; (5) energy conservation and protection of the environment; and (6) individual parameter optimization. Mandl (1980) indicated that public transportation systems have different objectives to meet.
He commented, "even a single objective problem is difficult to attack" (p. 401). Often, these objectives are controversial, since cutbacks in operating costs may require reductions in the quality of services. Van Nes and Bovy (2000) pointed out that the selected objectives influence the attractiveness and performance of a public transportation network. According to Ceder and Wilson (1986), minimization of generalized cost or time or maximization of consumer surplus were the most common objectives selected when developing transit network design models. Berechman (1993) agreed that maximization of total welfare is the most suitable objective for designing a public transportation system, while Van Nes and Bovy (2000) argued that the minimization of total user and system costs seems the most suitable and least complicated objective (compared to total welfare), while profit maximization leads to unattractive public transportation networks. As can be seen in Table 1, most studies seek to optimize total welfare, which incorporates benefits to the user and to the operator. User benefits may include travel, access, and waiting cost minimization, minimization of transfers, and maximization of coverage, while benefits for the system are maximum utilization and quality of service, minimization of operating costs, maximization of profits, and minimization of the fleet size used. Most commonly, total welfare is represented by the minimization of user and system costs. Some studies address specific objectives from the user, operator, or environmental perspective. Passenger convenience, the number of transfers, profit and capacity maximization, travel time minimization, and fuel consumption minimization are such objectives. These studies either attempt to simplify the complex objective functions needed to set up the TRNDP (Newell 1979; Baaj and Mahmassani 1991; Chakroborty and Dwivedi 2002), or investigate specific aspects of the problem, such as objectives (Delle Site and Filippi 2001) and the solution methodology (Zhao and
Yang 2006). Total welfare is, in a sense, a compromise between objectives. Moreover, as reported by researchers such as Baaj and Mahmassani (1991), Bielli et al. (2002), Chakroborty and Dwivedi (2002), and Chakroborty (2003), transit network design is inherently a multiobjective problem. Multiobjective models for solving the TRNDP have been based on the calculation of indicators representing different objectives for the problem at hand, both from the user and operator perspectives, such as travel and waiting times (user), and capacity and operating costs (operator). In their multiobjective model for the TRNDP, Baaj and Mahmassani (1991) relied on the planner's judgment and experience for selecting the optimal public transportation network, based on a set of indicators. In contrast, Bielli et al. (2002) and Chakroborty and Dwivedi (2002) combined indicators into an overall weighted-sum value, which served as the criterion for determining the optimal transit network.

TRNDP: Parameters

There are multiple characteristics and design attributes to consider for a realistic representation of a public transportation network. These form the parameters of the TRNDP. Part of these parameters is the problem's set of decision variables, which define its layout and operational characteristics (frequencies, vehicle size, etc.). Another set of design parameters represents the operating environment (network structure, demand characteristics and patterns), operational strategies and rules, and available resources for the public transportation network.
These form the constraints needed to formulate the TRNDP and are fixed a priori, decided upon, or assumed.

Decision Variables

The most common decision variables for the TRNDP are the routes and frequencies of the public transportation network (Table 1). Simplified early studies derived optimal route spacing between predetermined parallel or radial routes, along with optimal frequencies per route (Holroyd 1967; Byrne and Vuchic 1972; Byrne 1975, 1976; Kocur and Hendrickson 1982; Vaughan 1986), while later models dealt with the development of optimal route layouts and frequency determination. Other studies additionally considered fares (Kocur and Hendrickson 1982; Morlok and Viton 1984; Chang and Schonfeld 1991; Chien and Spacovic 2001), zones (Tsao and Schonfeld 1983; Chang and Schonfeld 1993a), stop locations (Black 1979; Spacovic and Schonfeld 1994; Spacovic et al. 1994; Van Nes 2003; Yu and Yang 2006), and bus types (Delle Site and Filippi 2001).

Network Structure

Some early studies focused on the design of systems in simplified radial (Byrne 1975; Black 1979; Vaughan 1986) or rectangular grid road networks (Hurdle 1973; Byrne and Vuchic 1972; Tsao and Schonfeld 1984). However, most approaches since the 1980s were either applied to realistic, irregular grid networks, or the network structure was of no importance for the proposed model and therefore not specified at all.

Demand Patterns

Demand patterns describe the nature of the passenger flows expected to be accommodated by the public transportation network and therefore dictate its structure. For example, transit trips from a number of origins (for example, stops in a neighborhood) to a single destination (such as a bus terminal in the CBD of a city), and vice versa, are characterized as many-to-one (or one-to-many) transit demand patterns. These patterns are typically encountered in public transportation systems connecting CBDs with suburbs and imply a structure of radial or parallel routes ending at a single point; models for patterns of that type have been proposed by Byrne and
Vuchic (1972), Salzborn (1972), Byrne (1975, 1976), Kocur and Hendrickson (1982), Morlok and Viton (1984), Chang and Schonfeld (1991, 1993a), Spacovic and Schonfeld (1994), Spacovic et al. (1994), Van Nes (2003), and Chien et al. (2003). On the other hand, many-to-many demand patterns correspond to flows between multiple origins and destinations within an urban area, suggesting that the public transportation network is expected to connect various points in an area.

Demand Characteristics

Demand can be characterized either as "fixed" (or "inelastic") or as "elastic"; the latter means that demand is affected by the performance and services provided by the public transportation network. Lee and Vuchic (2005) distinguished between two types of elastic demand: (1) demand per mode affected by transportation services, with total demand for travel kept constant; and (2) total demand for travel varying as a result of the performance of the transportation system and its modes. Fan and Machemehl (2006b) noted that the complexity of the TRNDP has led researchers into assuming fixed demand, despite its inherently elastic nature. However, since the early 1980s, studies have included aspects of elastic demand in modeling the TRNDP (Hasselstrom 1981; Kocur and Hendrickson 1982). Van Nes et al. (1988) applied a simultaneous distribution-modal split model based on transit deterrence for estimating demand for public transportation. In a series of studies, Chang and Schonfeld (1991, 1993a, b) and Spacovic et al. (1994) estimated demand as a direct function of travel times and fares with respect to their elasticities, while Chien and Spacovic (2001) followed the same approach, assuming that demand is additionally affected by headways, route spacing, and fares. Finally, studies by Leblanc (1988), Imam (1998), Cipriani et al. (2005), Lee and Vuchic (2005), and Fan and Machemehl (2006a) based demand estimation on mode choice models for estimating transit demand as a function of total demand for
travel.

Chinese translation (rendered into English): The Transit Route Network Design Problem: A Review. Abstract: The effective design of public transportation networks has become a focus of transportation theory and practice, and many models and methods for the transit route network design problem (TRNDP) have been developed accordingly.
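The elastic-demand formulations surveyed above (demand estimated as a direct function of travel times and fares with respect to their elasticities) can be illustrated with a minimal constant-elasticity sketch. The function name, the numeric elasticities, and the baseline values are illustrative assumptions, not taken from any of the cited models.

```python
# Minimal constant-elasticity demand sketch (illustrative only; the surveyed
# models differ in functional form). Demand responds to travel time and fare
# with constant, negative elasticities.

def elastic_demand(base_demand, time, base_time, fare, base_fare,
                   time_elasticity=-0.5, fare_elasticity=-0.3):
    """Transit demand as a function of travel time and fare.

    Negative elasticities: demand falls as time or fare rises.
    """
    return (base_demand
            * (time / base_time) ** time_elasticity
            * (fare / base_fare) ** fare_elasticity)

# A 20% travel-time increase at constant fare reduces demand:
d0 = elastic_demand(1000.0, 30.0, 30.0, 2.0, 2.0)   # baseline conditions
d1 = elastic_demand(1000.0, 36.0, 30.0, 2.0, 2.0)   # slower service
```

At the baseline (time and fare equal to their reference values) the sketch returns the base demand unchanged; degrading travel time lowers it, consistent with the elastic-demand behavior the cited studies model.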
1. Foreign original (photocopy)  2. Translated text

Energy-Saving Smart Lighting Control System
Sherif Matta and Syed Masud Mahmud, Senior Member, IEEE, Wayne State University, Detroit, Michigan 48202. Sherif.Matta@, smahmud@

Abstract: Saving energy has become one of today's most challenging issues. A major source of wasted energy is the inefficient consumption of electricity by artificial lighting devices (lamps or bulbs). This paper presents a system, with detailed design steps, that saves electrical energy by regulating the intensity of artificial lighting to a satisfactory level. During daytime, the use of lighting devices is reduced as far as possible. A dimming system that improves daylight harvesting and control is engaged when readings exceed the preset lighting profile. The design principle is to exploit daylight, whenever it is available, by controlling the blinds or curtains; otherwise, artificial light sources inside the building are used. Luminous flux is regulated by controlling the opening angle of the blinds, while the intensity of the artificial light sources is controlled by pulse-width modulation (PWM) of the power delivered to DC lamps, or by chopping the AC waveform for AC bulbs. The system uses a Controller Area Network (CAN) as the communication medium for sensors and actuators. The system is modular and can span large buildings. An advantage of the design is that it gives the user a single point of operation: the desired light level. The controller's task is to determine a way to deliver the required amount of light with minimum energy consumption. One of the main considerations is that the system components be easy to install and low in cost. The system demonstrates significant energy savings and is feasible to implement in practice.
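The control principle summarized in this abstract (use daylight via the blinds when it suffices, otherwise make up the shortfall with dimmed artificial light) can be sketched roughly as follows. The function, the linear daylight model, and the 600 lux lamp-output figure are hypothetical simplifications for illustration, not the authors' actual controller.

```python
# Illustrative sketch of the control principle described above: prefer daylight
# (by opening the blinds) and fall back to PWM-dimmed artificial light only for
# the remainder. All names, units, and the linear daylight model are assumed.

def control_step(setpoint_lux, daylight_lux_at_full_open):
    """Return (blind_open_fraction, pwm_duty) meeting the setpoint
    with minimum artificial light."""
    if daylight_lux_at_full_open >= setpoint_lux:
        # Daylight alone suffices: open blinds just enough, lamps off.
        return setpoint_lux / daylight_lux_at_full_open, 0.0
    # Daylight insufficient: open blinds fully, make up the rest with PWM.
    shortfall = setpoint_lux - daylight_lux_at_full_open
    max_artificial_lux = 600.0          # assumed lamp output at 100% duty
    duty = min(1.0, shortfall / max_artificial_lux)
    return 1.0, duty

blinds, duty = control_step(500.0, 800.0)    # bright day: lamps stay off
blinds2, duty2 = control_step(500.0, 200.0)  # dull day: lamps fill the gap
```

The single user input is the desired light level (the setpoint), matching the "single point of operation" described in the abstract; everything else is decided by the controller.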
Keywords: smart lighting control system, energy saving, luminous flux, blind control, Controller Area Network (CAN), light intensity control

I. Introduction

Over the years, as the number of buildings and of rooms within them has grown dramatically, wasted energy, inefficient lighting control, and lighting distribution have become difficult to manage. Moreover, relying on users to control lighting manually in order to save energy is impractical. Many technologies and sensors have recently been directed at managing excessive energy consumption, for example using motion detection to sense activity within an area. Such systems provide convenience by automatically turning the lights on when someone enters a room, and they reduce lighting energy use by turning the lights off shortly after the last occupant leaves.
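The occupancy-based switching just described can be sketched as a simple hold-time rule; the function name and the 300-second hold time are assumptions for illustration, not values from the paper.

```python
# Sketch of occupancy-based switching: lights are on while motion has been
# detected within the last HOLD_SECONDS, and off otherwise. The hold time
# is an assumed value.

HOLD_SECONDS = 300

def lights_on(now, last_motion_time):
    """True while the room is considered occupied (times in seconds)."""
    return last_motion_time is not None and now - last_motion_time <= HOLD_SECONDS

on1 = lights_on(now=100, last_motion_time=50)    # recent motion: lights on
on2 = lights_on(now=1000, last_motion_time=50)   # long idle: lights off
```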
Graduation Project Foreign Literature Translation: English Original
Harmonic Source Identification and Current Separation in Distribution Systems

Yong Zhao a,b, Jianhua Li a, Daozhi Xia a,*
a Department of Electrical Engineering, Xi'an Jiaotong University, 28 West Xianning Road, Xi'an, Shaanxi 710049, China
b Fujian Electric Power Dispatch and Telecommunication Center, 264 Wusi Road, Fuzhou, Fujian 350003, China

Abstract

To effectively diminish harmonic distortions, the locations of harmonic sources have to be identified and their currents have to be separated from that absorbed by conventional linear loads connected to the same CCP. In this paper, based on the intrinsic difference between linear and nonlinear loads in their V–I characteristics and by utilizing a new simplified harmonic source model, a new principle for harmonic source identification and harmonic current separation is proposed. By using this method, not only the existence of harmonic source can be determined, but also the contributions of the harmonic source and the linear loads to harmonic voltage distortion can be distinguished. The detailed procedure based on least squares approximation is given. The effectiveness of the approach is illustrated by test results on a composite load.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: Distribution system; Harmonic source identification; Harmonic current separation; Least squares approximation

1. Introduction

Harmonic distortion has experienced a continuous increase in distribution systems owing to the growing use of nonlinear loads. Many studies have shown that harmonics may cause serious effects on power systems, communication systems, and various apparatus [1–3]. Harmonic voltages at each point on a distribution network are not only determined by the harmonic currents produced by harmonic sources (nonlinear loads), but also related to all linear loads (harmonic current sinks) as well as the structure and parameters of the network.
To effectively evaluate and diminish the harmonic distortion in power systems, the locations of harmonic sources have to be identified and the responsibility for the distortion caused by the related individual customers has to be separated. As to harmonic source identification, most commonly negative harmonic power is considered essential evidence of an existing harmonic source [4–7]. Several approaches aiming at evaluating the contribution of an individual customer can also be found in the literature. Schemes based on power factor measurement to penalize the customer's harmonic currents are discussed in Ref. [8]. However, it would be unfair to apply economic penalties if we could not distinguish whether the measured harmonic current comes from a nonlinear load or from a linear load. In fact, the intrinsic difference between linear and nonlinear loads lies in their V–I characteristics. Harmonic currents of a linear load are in linear proportion to the supply harmonic voltages of the same order, whereas the harmonic currents of a nonlinear load are complex nonlinear functions of the fundamental and harmonic voltage components of all orders in its supply.
To successfully identify and isolate harmonic sources in an individual customer, or in several customers connected at the same point in the network, the V–I characteristics should be involved, and measurements of voltages and currents under several different supply conditions should be carried out. As the existing approaches based on measurements of the voltage and current spectrum, or of harmonic power at a certain instant, cannot reflect the V–I characteristics, they may not provide reliable information about the existence and contribution of harmonic sources, which has been substantiated by theoretical analysis and experimental research [9,10]. In this paper, to approximate the nonlinear characteristics and to facilitate the work in harmonic source identification and harmonic current separation, a new simplified harmonic source model is proposed. Then, based on the difference between linear and nonlinear loads in their V–I characteristics, and by utilizing the harmonic source model, a new principle for harmonic source identification and harmonic current separation is presented. By using the method, not only can the existence of harmonic sources be determined, but the contributions of the harmonic sources and the linear loads can also be separated. A detailed procedure for harmonic source identification and harmonic current separation based on least squares approximation is presented. Finally, test results on a composite load containing linear and nonlinear loads are given to illustrate the effectiveness of the approach.

2. New principle for harmonic source identification and current separation

Consider a composite load to be studied in a distribution system, which may represent an individual consumer or a group of customers supplied by a common feeder in the system.
To identify whether it contains any harmonic source, and to separate the harmonic currents generated by the harmonic sources from that absorbed by conventional linear loads in the measured total harmonic currents of the composite load, the following assumptions are made.

(a) The supply voltage and the load currents are both periodical waveforms with period T, so that they can be expressed by Fourier series as

v(t) = \sum_{h=1}^{\infty} V_h \sin(2\pi h t/T + \theta_h), \qquad i(t) = \sum_{h=1}^{\infty} I_h \sin(2\pi h t/T + \phi_h)    (1)

The fundamental frequency and harmonic components can further be represented by the corresponding phasors

\dot V_h = V_{hr} + jV_{hi} = V_h \angle \theta_h, \qquad \dot I_h = I_{hr} + jI_{hi} = I_h \angle \phi_h, \qquad h = 1, 2, 3, \ldots, n    (2)

(b) During the period of identification, the composite load is stationary, i.e. both its composition and the circuit parameters of all individual loads keep unchanged.

Under the above assumptions, the relationship between the total harmonic currents of the harmonic sources (denoted by subscript N) in the composite load and the supply voltage, i.e. the V–I characteristics, can be described by the following nonlinear equation

i_N(t) = f(v(t))    (3)

and can also be represented in terms of phasors as

\begin{bmatrix} I_{Nhr} \\ I_{Nhi} \end{bmatrix} = F_h(V_1, V_{2r}, V_{2i}, \ldots, V_{nr}, V_{ni}), \qquad h = 2, 3, \ldots, n    (4)

Note that in Eq. (4), the initial time (reference time) of the voltage waveform has been properly selected such that the phase angle \theta_1 becomes 0, so that V_{1i} = 0 and V_{1r} = V_1 in Eq. (2), for simplicity.

The V–I characteristics of the linear part (denoted by subscript L) of the composite load can be represented by its equivalent harmonic admittance Y_{Lh} = G_{Lh} + jB_{Lh}, and the total harmonic currents absorbed by the linear part can be described as

\begin{bmatrix} I_{Lhr} \\ I_{Lhi} \end{bmatrix} = \begin{bmatrix} G_{Lh} & -B_{Lh} \\ B_{Lh} & G_{Lh} \end{bmatrix} \begin{bmatrix} V_{hr} \\ V_{hi} \end{bmatrix}, \qquad h = 2, 3, \ldots, n    (5)

From Eqs.
(4) and (5), the whole harmonic currents absorbed by the composite load can be expressed as

\begin{bmatrix} I_{hr} \\ I_{hi} \end{bmatrix} = \begin{bmatrix} I_{Lhr} \\ I_{Lhi} \end{bmatrix} - \begin{bmatrix} I_{Nhr} \\ I_{Nhi} \end{bmatrix} = \begin{bmatrix} G_{Lh} & -B_{Lh} \\ B_{Lh} & G_{Lh} \end{bmatrix} \begin{bmatrix} V_{hr} \\ V_{hi} \end{bmatrix} - F_h(V_1, V_{2r}, V_{2i}, \ldots, V_{nr}, V_{ni}), \qquad h = 2, 3, \ldots, n    (6)

As the V–I characteristics of harmonic sources are nonlinear, Eq. (6) can be used directly neither for harmonic source identification nor for harmonic current separation. To facilitate the work in practice, simplified methods should be involved. The common practice in harmonic studies is to represent nonlinear loads by means of current harmonic sources or equivalent Norton models [11,12]. However, these models are not of sufficient precision, and a new simplified model is needed.

From the engineering point of view, the variations of V_{hr} and V_{hi} ordinarily fall within a ±3% bound of the rated bus voltage, while the change of V_1 is usually less than ±5%. Within such a range of supply voltages, the following simplified linear relation is used in this paper to approximate the harmonic source characteristics of Eq. (4):

I_{Nh} = \begin{bmatrix} a_{h0} + a_{h1} V_1 + a_{h2r} V_{2r} + a_{h2i} V_{2i} + \cdots + a_{hnr} V_{nr} + a_{hni} V_{ni} \\ b_{h0} + b_{h1} V_1 + b_{h2r} V_{2r} + b_{h2i} V_{2i} + \cdots + b_{hnr} V_{nr} + b_{hni} V_{ni} \end{bmatrix}, \qquad h = 2, 3, \ldots, n    (7)

The precision and superiority of this simplified model will be illustrated in Section 4 by test results on several kinds of typical harmonic sources.

The total harmonic current, Eq. (6), then becomes

\begin{bmatrix} I_{hr} \\ I_{hi} \end{bmatrix} = \begin{bmatrix} G_{Lh} & -B_{Lh} \\ B_{Lh} & G_{Lh} \end{bmatrix} \begin{bmatrix} V_{hr} \\ V_{hi} \end{bmatrix} - \begin{bmatrix} a_{h0} + a_{h1} V_1 + a_{h2r} V_{2r} + a_{h2i} V_{2i} + \cdots + a_{hnr} V_{nr} + a_{hni} V_{ni} \\ b_{h0} + b_{h1} V_1 + b_{h2r} V_{2r} + b_{h2i} V_{2i} + \cdots + b_{hnr} V_{nr} + b_{hni} V_{ni} \end{bmatrix}, \qquad h = 2, 3, \ldots, n    (8)

It can be seen from the above equations that the harmonic currents of the harmonic sources (nonlinear loads) and those of the linear loads differ from each other intrinsically in their V–I characteristics. The harmonic current component drawn by the linear loads is uniquely determined by the harmonic voltage component of the same order in the supply voltage.
On the other hand, the harmonic current component of the nonlinear loads contains not only a term caused by the same order harmonic voltage but also a constant term and the terms caused by fundamental and harmonic voltages of all other orders. This property will be used for identifying the existence of harmonic source sin composite load.As the test results shown in Section 4 demonstrate that the summation of the constant term and the component related to fundamental frequency voltage in the harmonic current of nonlinear loads is dominant whereas other components are negligible, further approximation for Eq. (7) can be made as follows.Let112'012()()nh h hkr kr hki ki k k h Nhnh h hkr kr hki kik k h a a V a V a V I b b V b V b V =≠=≠⎡⎤+++⎢⎥⎢⎥=⎢⎥⎢⎥+++⎢⎥⎢⎥⎣⎦∑∑ hhr hhi hr Nhhhr hhi hi a a V I b b V ⎡⎤⎡⎤''=⎢⎥⎢⎥⎣⎦⎣⎦hhrhhihr Lh Lh Nh hhrhhi hi a a V I I I b b V ''⎡⎤⎡⎤'''=-=⎢⎥⎢⎥''⎣⎦⎣⎦,2,3,...,hhr hhiLh Lh hhrhhi hhr hhi Lh Lh hhr hhi a a G B a a h n b b B G b b ''-⎡⎤⎡⎤⎡⎤=-=⎢⎥⎢⎥⎢⎥''⎣⎦⎣⎦⎣⎦The total harmonic current of the composite load becomes112012(),()2,3,...,nh h hkr kr hki ki k k hhhrhhi hr h Lh NhLhNh n hhrhhi hi h h hkr kr hki kik k h a a V a V a V a a V I I I I I b b V b b V b V b V h n=≠=≠⎡⎤+++⎢⎥⎢⎥''⎡⎤⎡⎤''=-=-=-⎢⎥⎢⎥⎢⎥''⎣⎦⎣⎦⎢⎥+++⎢⎥⎢⎥⎣⎦=∑∑ (9)By neglecting ''Nh I in the harmonic current of nonlinear load and adding it to the harmonic current of linear load, 'Nh I can then be deemed as harmonic current of thenonlinear load while ''Lh I can be taken as harmonic current of the linear load. ''Nh I =0 means the composite load contains no harmonic sources, while ''0NhI ≠signify that harmonic sources may exist in this composite load. As the neglected term ''Nh I is not dominant, it is obviousthat this simplification does not make significant error on the total harmonic current of nonlinear load. However, it makes the possibility or the harmonic source identification and current separation.3. 
Identification procedure

In order to identify the existence of harmonic sources in a composite load, the parameters in Eq. (9) should be determined first, i.e.

C_{hr} = [a_{h0}, a_{h1}, a_{h2r}, a_{h2i}, \ldots, a''_{hhr}, a''_{hhi}, \ldots, a_{hnr}, a_{hni}]

C_{hi} = [b_{h0}, b_{h1}, b_{h2r}, b_{h2i}, \ldots, b''_{hhr}, b''_{hhi}, \ldots, b_{hnr}, b_{hni}]

For this purpose, measurement of different supply voltages and the corresponding harmonic currents of the composite load should be repeatedly performed several times in some short period while keeping the composite load stationary. The change of supply voltage can, for example, be obtained by switching some shunt capacitors in or out, disconnecting a parallel transformer, or changing the tap position of transformers with OLTC. Then the least squares approach can be used to estimate the parameters from the measured voltages and currents. The identification procedure is explained as follows.

(1) Perform the test m (m ≥ 2n) times to get the measured fundamental-frequency and harmonic voltage and current phasors V_h^{(k)} \angle \theta_h^{(k)}, I_h^{(k)} \angle \phi_h^{(k)}, k = 1, 2, \ldots, m; h = 1, 2, \ldots, n.

(2) For k = 1, 2, \ldots, m, transform the phasors so that they correspond to a zero fundamental-voltage phase angle (\theta_1^{(k)} = 0), and change them into orthogonal components, i.e.

V_{1r}^{(k)} = V_1^{(k)}, \qquad V_{1i}^{(k)} = 0

V_{hr}^{(k)} = V_h^{(k)} \cos(\theta_h^{(k)} - h\theta_1^{(k)}), \qquad V_{hi}^{(k)} = V_h^{(k)} \sin(\theta_h^{(k)} - h\theta_1^{(k)})

I_{hr}^{(k)} = I_h^{(k)} \cos(\phi_h^{(k)} - h\theta_1^{(k)}), \qquad I_{hi}^{(k)} = I_h^{(k)} \sin(\phi_h^{(k)} - h\theta_1^{(k)}), \qquad h = 2, 3, \ldots, n

(3) Let

V^{(k)} = [1, V_1^{(k)}, V_{2r}^{(k)}, V_{2i}^{(k)}, \ldots, V_{hr}^{(k)}, V_{hi}^{(k)}, \ldots, V_{nr}^{(k)}, V_{ni}^{(k)}]^T, \qquad k = 1, 2, \ldots, m

X = [V^{(1)}, V^{(2)}, \ldots, V^{(m)}]^T

W_{hr} = [I_{hr}^{(1)}, I_{hr}^{(2)}, \ldots, I_{hr}^{(m)}]^T, \qquad W_{hi} = [I_{hi}^{(1)}, I_{hi}^{(2)}, \ldots, I_{hi}^{(m)}]^T

Minimize \sum_{k=1}^{m} (I_{hr}^{(k)} - C_{hr} V^{(k)})^2 and \sum_{k=1}^{m} (I_{hi}^{(k)} - C_{hi} V^{(k)})^2, and determine the parameters C_{hr} and C_{hi} by the least squares approach as [13]:

C_{hr} = (X^T X)^{-1} X^T W_{hr}, \qquad C_{hi} = (X^T X)^{-1} X^T W_{hi}    (10)

(4) By using Eq.
(9), calculate I''_{Lh} and I'_{Nh} with the obtained C_{hr} and C_{hi}; the existence of harmonic sources is thereby identified and the harmonic current is separated.

It can be seen that in the course of model construction, harmonic source identification, and harmonic current separation, m changes of the supply system operating condition, each with measurement of the harmonic voltages and currents, are needed. The more accurate the model, the more manipulations are necessary. To compromise between the required number of switching operations and the accuracy of the results, the proposed models for the nonlinear load (Eq. (7)) and the composite load (Eq. (9)) can be further simplified by considering only the dominant terms in Eq. (7), i.e.

I_{Nh} = \begin{bmatrix} I_{Nhr} \\ I_{Nhi} \end{bmatrix} = \begin{bmatrix} a_{h0} + a_{h1} V_1 \\ b_{h0} + b_{h1} V_1 \end{bmatrix} + \begin{bmatrix} a_{hhr} & a_{hhi} \\ b_{hhr} & b_{hhi} \end{bmatrix} \begin{bmatrix} V_{hr} \\ V_{hi} \end{bmatrix}, \qquad h = 2, 3, \ldots, n    (11)

I'_{Nh} = \begin{bmatrix} a_{h0} + a_{h1} V_1 \\ b_{h0} + b_{h1} V_1 \end{bmatrix}

I_h = \begin{bmatrix} I_{hr} \\ I_{hi} \end{bmatrix} = I''_{Lh} - I'_{Nh} = \begin{bmatrix} a''_{hhr} & a''_{hhi} \\ b''_{hhr} & b''_{hhi} \end{bmatrix} \begin{bmatrix} V_{hr} \\ V_{hi} \end{bmatrix} - \begin{bmatrix} a_{h0} + a_{h1} V_1 \\ b_{h0} + b_{h1} V_1 \end{bmatrix}, \qquad h = 2, 3, \ldots, n    (12)

In this case, the corresponding quantities in the previous procedure become

C_{hr} = [a_{h0}, a_{h1}, a''_{hhr}, a''_{hhi}], \qquad C_{hi} = [b_{h0}, b_{h1}, b''_{hhr}, b''_{hhi}], \qquad V^{(k)} = [1, V_1^{(k)}, V_{hr}^{(k)}, V_{hi}^{(k)}]^T

Similarly, I'_{Nh} and I''_{Lh} can still be taken as the harmonic currents caused by the nonlinear load and by the linear load, respectively.

4. Experimental validation

4.1. Model accuracy

To demonstrate the validity of the proposed harmonic source models, simulations are performed on the following three kinds of typical nonlinear loads: a three-phase six-pulse rectifier, a single-phase capacitor-filtered rectifier, and an AC arc furnace under stationary operating conditions. Diagrams of the three-phase six-pulse rectifier and the single-phase capacitor-filtered rectifier are shown in Figs. 1 and 2 [14,15], respectively; the V–I characteristic of the arc furnace is simplified as shown in Fig. 3 [16]. The harmonic currents used in the simulation test are precisely calculated from their mathematical models.
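The least-squares estimation step of Eq. (10) can be sketched numerically for the simplified four-parameter model of Eq. (12). The measurement data below are synthetic and the specific parameter values, noise level, and variable names are illustrative assumptions, not values from the paper.

```python
# Numerical sketch of the least-squares step in Eq. (10), applied to the
# simplified model of Eq. (12): I_hr ~ a_h0 + a_h1*V_1 + a''_hhr*V_hr + a''_hhi*V_hi.
# All data are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# "True" parameters C_hr = [a_h0, a_h1, a''_hhr, a''_hhi] to be recovered.
c_true = np.array([0.02, 0.35, 0.25, -0.10])

m = 12                                   # number of operating-point measurements
V1 = rng.uniform(0.95, 1.05, m)          # fundamental voltage magnitude (p.u.)
Vhr = rng.uniform(-0.03, 0.03, m)        # h-th harmonic voltage, real part
Vhi = rng.uniform(-0.03, 0.03, m)        # h-th harmonic voltage, imaginary part

X = np.column_stack([np.ones(m), V1, Vhr, Vhi])   # regressor matrix X
W_hr = X @ c_true + rng.normal(0, 1e-4, m)        # "measured" I_hr plus noise

# C_hr = (X^T X)^{-1} X^T W_hr, computed via a numerically stable solver.
c_est, *_ = np.linalg.lstsq(X, W_hr, rcond=None)
```

With the supply voltage varied over the ranges stated in the paper (fundamental within ±5% of nominal, harmonics within ±3%), the four parameters are recovered to within the measurement noise, which is the mechanism the identification procedure relies on.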
As to the supply voltage, V_1^{(k)} is assumed to be uniformly distributed between 0.95 and 1.05, and V_{hr}^{(k)} and V_{hi}^{(k)} (k = 1, 2, \ldots, m) are uniformly distributed between −0.03 and 0.03, with base voltage 10 kV and base power 1 MVA.

Fig. 1. Diagram of the three-phase six-pulse rectifier.
Fig. 2. Diagram of the single-phase capacitor-filtered rectifier.
Fig. 3. Approximate V–I characteristics of the arc furnace.

Three different models, namely the harmonic current source (constant current) model, the Norton model, and the proposed simplified model, are simulated and estimated by the least squares approach for comparison. For the three-phase six-pulse rectifier with fundamental current I_1 = 1.7621, the parameters of the simplified model for the fifth and seventh harmonic currents are listed in Table 1.

To compare the accuracy of the three different models, the means and standard deviations of the errors in I_{hr}, I_{hi}, and I_h between the estimated values and the simulated actual values are calculated for each model. The error comparison of the three models on the three-phase six-pulse rectifier is shown in Table 2, where \mu_{hr}, \mu_{hi}, and \mu_{ha} denote the means, and \sigma_{hr}, \sigma_{hi}, and \sigma_{ha} represent the standard deviations. Note that I_1 and I_h in Table 2 are the current values caused by the rated, purely sinusoidal supply voltage. Error comparisons for the single-phase capacitor-filtered rectifier and the arc furnace load are listed in Tables 3 and 4, respectively.

It can be seen from the above test results that the accuracy of the proposed model differs for different nonlinear loads, and that for a given load the accuracy decreases as the harmonic order increases. However, the proposed model is always more accurate than the other two models. It can also be seen from Table 1 that the components a_{50} + a_{51}V_1 and b_{50} + b_{51}V_1 are around −0.0074 + 0.3939 ≈ 0.3865 and 0.0263 + 0.0623 ≈ 0.0886, while the components a_{55}V_{5r} and b_{55}V_{5i} will not exceed 0.2676 × 0.03 ≈ 0.008 and 0.9675 × 0.03 ≈ 0.029, respectively.
The result shows that the fifth harmonic current caused by the sum of the constant term and the fundamental-voltage term is about 10 times that caused by the harmonic voltage of the same order, so the former is dominant in the harmonic current of the three-phase six-pulse rectifier. The same situation holds for other harmonic orders and other nonlinear loads.

4.2. Effectiveness of harmonic source identification and current separation

To show the effectiveness of the proposed harmonic source identification method, simulations are performed on a composite load containing a linear load (30%) and nonlinear loads consisting of a three-phase six-pulse rectifier (30%), a single-phase capacitor-filtered rectifier (20%), and an AC arc furnace load (20%). For simplicity, only the errors of the third-order harmonic currents of the linear and nonlinear loads are listed in Table 5, where I_{N3} denotes the third-order harmonic current corresponding to the rated, purely sinusoidal supply voltage; \mu_{N3r}, \mu_{N3i}, \mu_{N3a} and \mu_{L3r}, \mu_{L3i}, \mu_{L3a} are the error means of I_{N3r}, I_{N3i}, I_{N3} and I_{L3r}, I_{L3i}, I_{L3} between the simulated actual values and the estimated values; \sigma_{N3r}, \sigma_{N3i}, \sigma_{N3a} and \sigma_{L3r}, \sigma_{L3i}, \sigma_{L3a} are the corresponding standard deviations.

It can be seen from Table 5 that the current errors of the linear load are smaller than those of the nonlinear loads. This is because the errors in the nonlinear load currents are due both to the model error and to neglecting the components related to harmonic voltages of the same order, whereas only the latter components introduce errors into the linear load currents. Moreover, it can be found that the more precise the composite load model is, the less error is introduced. However, even with the very simple model (12), the existence of harmonic sources can be correctly identified and the harmonic currents of the linear and nonlinear loads can be effectively separated.

Table 4. Error comparison on the arc furnace.

5.
Conclusions

In this paper, from an engineering point of view, a new linear model is first presented for representing harmonic sources. On the basis of the intrinsic difference between linear and nonlinear loads in their V–I characteristics, and by using the proposed harmonic source model, a new, concise principle for identifying harmonic sources and separating harmonic source currents from those of linear loads is proposed. The detailed modeling and identification procedure is also developed, based on the least squares approximation approach. Test results on several kinds of typical harmonic sources reveal that the simplified model is of sufficient precision and is superior to other existing models. The effectiveness of the harmonic source identification approach is illustrated using a composite nonlinear load.

Acknowledgements

The authors wish to acknowledge the financial support of the National Natural Science Foundation of China for this project, under Research Program Grant No. 59737140.

References

[1] IEEE Working Group on Power System Harmonics. The effects of power system harmonics on power system equipment and loads. IEEE Trans Power Apparatus Syst 1985;9:2555–63.
[2] IEEE Working Group on Power System Harmonics. Power line harmonic effects on communication line interference. IEEE Trans Power Apparatus Syst 1985;104(9):2578–87.
[3] IEEE Task Force on the Effects of Harmonics. Effects of harmonics on equipment. IEEE Trans Power Deliv 1993;8(2):681–8.
[4] Heydt GT. Identification of harmonic sources by a state estimation technique. IEEE Trans Power Deliv 1989;4(1):569–75.
[5] Ferach JE, Grady WM, Arapostathis A. An optimal procedure for placing sensors and estimating the locations of harmonic sources in power systems. IEEE Trans Power Deliv 1993;8(3):1303–10.
[6] Ma H, Girgis AA. Identification and tracking of harmonic sources in a power system using Kalman filter. IEEE Trans Power Deliv 1996;11(3):1659–65.
[7] Hong YY, Chen YC.
Application of algorithms and artificial intelligence approach for locating multiple harmonics in distribution systems. IEE Proc—Gener Transm Distrib 1999;146(3):325–9.
[8] McEachern A, Grady WM, Moncrief WA, Heydt GT, McGranaghan M. Revenue and harmonics: an evaluation of some proposed rate structures. IEEE Trans Power Deliv 1995;10(1):474–82.
[9] Xu W. Power direction method cannot be used for harmonic source detection. Power Engineering Society Summer Meeting, IEEE; 2000. p. 873–6.
[10] Sasdelli R, Peretto L. A VI-based measurement system for sharing the customer and supply responsibility for harmonic distortion. IEEE Trans Instrum Meas 1998;47(5):1335–40.
[11] Arrillaga J, Bradley DA, Bodger PS. Power system harmonics. New York: Wiley; 1985.
[12] Thunberg E, Soder L. A Norton approach to distribution network modeling for harmonic studies. IEEE Trans Power Deliv 1999;14(1):272–7.
[13] Giordano AA, Hsu FM. Least squares estimation with applications to digital signal processing. New York: Wiley; 1985.
[14] Xia D, Heydt GT. Harmonic power flow studies. Part I. Formulation and solution. IEEE Trans Power Apparatus Syst 1982;101(6):1257–65.
[15] Mansoor A, Grady WM, Thallam RS, Doyle MT, Krein SD, Samotyj MJ. Effect of supply voltage harmonics on the input current of single-phase diode bridge rectifier loads. IEEE Trans Power Deliv 1995;10(3):1416–22.
[16] Varadan S, Makram EB, Girgis AA. A new time domain voltage source model for an arc furnace using EMTP. IEEE Trans Power Deliv 1996;11(3):1416–22.
Graduation Project (Thesis) Foreign Literature: Original and Translation
Chapter 11. Cipher Techniques

11.1 Problems

The use of a cipher without consideration of the environment in which it is to be used may not provide the security that the user expects. Three examples will make this point clear.

11.1.1 Precomputing the Possible Messages

Simmons discusses the use of a "forward search" to decipher messages enciphered for confidentiality using a public key cryptosystem [923]. His approach is to focus on the entropy (uncertainty) in the message. To use an example from Section 10.1 (page 246), Cathy knows that Alice will send one of two messages—BUY or SELL—to Bob. The uncertainty is which one Alice will send. So Cathy enciphers both messages with Bob's public key. When Alice sends the message, Cathy intercepts it and compares the ciphertext with the two she computed. From this, she knows which message Alice sent.

Simmons' point is that if the plaintext corresponding to intercepted ciphertext is drawn from a (relatively) small set of possible plaintexts, the cryptanalyst can encipher the set of possible plaintexts and simply search that set for the intercepted ciphertext. Simmons demonstrates that the size of the set of possible plaintexts may not be obvious. As an example, he uses digitized sound. The initial calculations suggest that the number of possible plaintexts for each block is 2^32. Using forward search on such a set is clearly impractical, but after some analysis of the redundancy in human speech, Simmons reduces the number of potential plaintexts to about 100,000. This number is small enough that forward searches become a threat.

This attack is similar to attacks to derive the cryptographic key of symmetric ciphers based on chosen plaintext (see, for example, Hellman's time-memory tradeoff attack [465]). However, Simmons' attack is for public key cryptosystems and does not reveal the private key.
It only reveals the plaintext message.

11.1.2 Misordered Blocks

Denning [269] points out that in certain cases, parts of a ciphertext message can be deleted, replayed, or reordered.

11.1.3 Statistical Regularities

The independence of parts of ciphertext can give information relating to the structure of the enciphered message, even if the message itself is unintelligible. The regularity arises because each part is enciphered separately, so the same plaintext always produces the same ciphertext. This type of encipherment is called code book mode, because each part is effectively looked up in a list of plaintext-ciphertext pairs.

11.1.4 Summary

Despite the use of sophisticated cryptosystems and random keys, cipher systems may provide inadequate security if not used carefully. The protocols directing how these cipher systems are used, and the ancillary information that the protocols add to messages and sessions, overcome these problems. This emphasizes that ciphers and codes are not enough. The methods, or protocols, for their use also affect the security of systems.

11.2 Stream and Block Ciphers

Some ciphers divide a message into a sequence of parts, or blocks, and encipher each block with the same key.

Definition 11–1. Let E be an encipherment algorithm, and let E_k(b) be the encipherment of message b with key k. Let a message m = b_1 b_2 …, where each b_i is of a fixed length. Then a block cipher is a cipher for which E_k(m) = E_k(b_1) E_k(b_2) ….

Other ciphers use a nonrepeating stream of key elements to encipher characters of a message.

Definition 11–2. Let E be an encipherment algorithm, and let E_k(b) be the encipherment of message b with key k. Let a message m = b_1 b_2 …, where each b_i is of a fixed length, and let k = k_1 k_2 …. Then a stream cipher is a cipher for which E_k(m) = E_k1(b_1) E_k2(b_2) ….

If the key stream k of a stream cipher repeats itself, it is a periodic cipher.

11.2.1 Stream Ciphers

The one-time pad is a cipher that can be proven secure (see Section 9.2.2.2, "One-Time Pad").
Bit-oriented ciphers implement the one-time pad by exclusive-oring each bit of the key with one bit of the message. For example, if the message is 00101 and the key is 10010, the ciphertext is 0⊕1 || 0⊕0 || 1⊕0 || 0⊕1 || 1⊕0, or 10111. But how can one generate a random, infinitely long key?

11.2.1.1 Synchronous Stream Ciphers

To simulate a random, infinitely long key, synchronous stream ciphers generate bits from a source other than the message itself. The simplest such cipher extracts bits from a register to use as the key. The contents of the register change on the basis of the current contents of the register.

Definition 11–3. An n-stage linear feedback shift register (LFSR) consists of an n-bit register r = r_0…r_{n–1} and an n-bit tap sequence t = t_0…t_{n–1}. To obtain a key bit, r_{n–1} is used, the register is shifted one bit to the right, and the new bit r_0 t_0 ⊕ … ⊕ r_{n–1} t_{n–1} is inserted.

The LFSR method is an attempt to simulate a one-time pad by generating a long key sequence from a little information. As with any such attempt, if the key is shorter than the message, breaking part of the ciphertext gives the cryptanalyst information about other parts of the ciphertext. For an LFSR, a known plaintext attack can reveal parts of the key sequence. If the known plaintext is of length 2n, the tap sequence for an n-stage LFSR can be determined completely.

Nonlinear feedback shift registers do not use tap sequences; instead, the new bit is any function of the current register bits.

Definition 11–4. An n-stage nonlinear feedback shift register (NLFSR) consists of an n-bit register r = r_0…r_{n–1}. Whenever a key bit is required, r_{n–1} is used, the register is shifted one bit to the right, and the new bit is set to f(r_0…r_{n–1}), where f is any function of n inputs.

NLFSRs are not common because there is no body of theory about how to build NLFSRs with long periods.
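The LFSR of Definition 11–3 can be sketched directly. The 4-bit register and tap sequence below are arbitrary illustrative choices; this particular tap sequence happens to give the maximal period of 2^4 − 1 = 15.

```python
# Sketch of the n-stage LFSR of Definition 11-3: the key bit is r[n-1], the
# register shifts right, and the new bit r0*t0 xor ... xor r(n-1)*t(n-1) is
# inserted at r0. Register and taps are given as bit lists (r0 ... rn-1).

def lfsr_keystream(register, taps, nbits):
    """Return `nbits` key bits produced by the LFSR."""
    r = list(register)
    n = len(r)
    out = []
    for _ in range(nbits):
        out.append(r[n - 1])                 # key bit is r_{n-1}
        new_bit = 0
        for ri, ti in zip(r, taps):          # r0*t0 xor ... xor r_{n-1}*t_{n-1}
            new_bit ^= ri & ti
        r = [new_bit] + r[:-1]               # shift right, insert new bit at r0
    return out

# 4-stage LFSR; these taps give the maximal period 2^4 - 1 = 15.
stream = lfsr_keystream([1, 0, 0, 1], taps=[1, 0, 0, 1], nbits=30)
```

Because the register state eventually repeats, so does the keystream, which is exactly why a known plaintext of modest length suffices to recover the tap sequence, as noted above.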
By contrast, it is known how to design n-stage LFSRs with a period of 2^n – 1, and that period is maximal.

A second technique for eliminating linearity is called output feedback mode. Let E be an encipherment function. Define k as a cryptographic key, and define r as a register. To obtain a bit for the key, compute Ek(r) and put that value into the register. The rightmost bit of the result is exclusive-or'ed with one bit of the message. The process is repeated until the message is enciphered. The key k and the initial value in r are the keys for this method. This method differs from the NLFSR in that the register is never shifted. It is repeatedly enciphered.

A variant of output feedback mode is called the counter method. Instead of using a register r, simply use a counter that is incremented for every encipherment. The initial value of the counter replaces r as part of the key. This method enables one to generate the ith bit of the key without generating the bits 0…i – 1. If the initial counter value is i0, set the register to i + i0. In output feedback mode, one must generate all the preceding key bits.

11.2.1.2 Self-Synchronous Stream Ciphers

Self-synchronous ciphers obtain the key from the message itself. The simplest self-synchronous cipher is called an autokey cipher and uses the message itself for the key.

The problem with this cipher is the selection of the key. Unlike a one-time pad, any statistical regularities in the plaintext show up in the key. For example, the last two letters of the ciphertext associated with the plaintext word THE are always AL, because H is enciphered with the key letter T and E is enciphered with the key letter H. Furthermore, if the analyst can guess any letter of the plaintext, she can determine all successive plaintext letters.

An alternative is to use the ciphertext as the key stream.
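The difference between output feedback mode and the counter method described above shows up clearly in code. In this sketch a keyed hash merely stands in for the encipherment Ek (a real system would use a block cipher), and the key and counter values are made up: OFB must perform every preceding encipherment to reach bit i, while the counter method enciphers i0 + i once.

```python
import hashlib

def toy_E(key: bytes, r: int) -> int:
    """Stand-in for the block encipherment E_k(r); a real system
    would use an actual block cipher here, not a hash."""
    h = hashlib.sha256(key + r.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:8], "big")

def ofb_bit(key: bytes, r0: int, i: int) -> int:
    """Output feedback mode: the register is never shifted, only
    repeatedly enciphered, so bit i requires i+1 encipherments."""
    r = r0
    for _ in range(i + 1):
        r = toy_E(key, r)      # put E_k(r) back into the register
    return r & 1               # the rightmost bit is the key bit

def ctr_bit(key: bytes, i0: int, i: int) -> int:
    """Counter method: set the register to i0 + i and encipher once;
    bit i is generated without generating bits 0 ... i-1."""
    return toy_E(key, i0 + i) & 1

key = b"demo-key"
# One encipherment suffices for the counter method's 1000th key bit;
# reaching the same bit in OFB would take 1001 encipherments.
print(ctr_bit(key, 42, 1000))
```

This random-access property is what the text means by generating the ith bit without generating bits 0…i – 1.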
A good cipher will produce pseudorandom ciphertext, which approximates a random one-time pad better than a message with nonrandom characteristics (such as a meaningful English sentence). This type of autokey cipher is weak, because plaintext can be deduced from the ciphertext. For example, consider the first two characters of the ciphertext, QX. The X is the ciphertext resulting from enciphering some letter with the key Q. Deciphering, the unknown letter is H. Continuing in this fashion, the analyst can reconstruct all of the plaintext except for the first letter.

A variant of the autokey method, cipher feedback mode, uses a shift register. Let E be an encipherment function. Define k as a cryptographic key and r as a register. To obtain a bit for the key, compute Ek(r). The rightmost bit of the result is exclusive-or'ed with one bit of the message, and the other bits of the result are discarded. The resulting ciphertext is fed back into the leftmost bit of the register, which is right shifted one bit. (See Figure 11-1.)

Figure 11-1. Diagram of cipher feedback mode. The register r is enciphered with key k and algorithm E. The rightmost bit of the result is exclusive-or'ed with one bit of the plaintext mi to produce the ciphertext bit ci. The register r is right shifted one bit, and ci is fed back into the leftmost bit of r.

Cipher feedback mode has a self-healing property. If a bit is corrupted in transmission of the ciphertext, the next n bits will be deciphered incorrectly. But after n uncorrupted bits have been received, the shift register will be reinitialized to the value used for encipherment and the ciphertext will decipher properly from that point on.

As in the counter method, one can decipher parts of messages enciphered in cipher feedback mode without deciphering the entire message. Let the shift register contain n bits. The analyst obtains the previous n bits of ciphertext.
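The self-healing property of cipher feedback mode can be checked with a small bit-level simulation. As before, a keyed hash stands in for Ek, and the key, initialization vector, and message bits are made-up values: flipping one ciphertext bit garbles the corresponding plaintext bit and at most the next n deciphered bits, after which decipherment recovers.

```python
import hashlib

N = 8                                      # register size n, in bits

def toy_E(key: bytes, r: int) -> int:
    """Stand-in for E_k(r) on an N-bit register (a real system would
    use a block cipher)."""
    h = hashlib.sha256(key + r.to_bytes(4, "big")).digest()
    return h[-1]

def cfb(key: bytes, iv: int, bits, decrypt=False):
    """Bit-level cipher feedback mode: XOR one message bit with the
    rightmost bit of E_k(r), then shift the ciphertext bit into the
    leftmost position of the register."""
    r, out = iv, []
    for b in bits:
        k = toy_E(key, r) & 1              # rightmost bit of E_k(r)
        out.append(b ^ k)
        fb = b if decrypt else (b ^ k)     # the ciphertext bit is fed back
        r = (fb << (N - 1)) | (r >> 1)     # right shift, insert at left
    return out

key, iv = b"cfb-key", 0b10110001
msg = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1]
ct = cfb(key, iv, msg)
ct[3] ^= 1                                 # corrupt one ciphertext bit in transit
pt = cfb(key, iv, ct, decrypt=True)
print(pt[3] == msg[3])                     # False: the corrupted bit is wrong
print(pt[3 + N + 1:] == msg[3 + N + 1:])   # True: after n clean bits, it recovers
```

Once the corrupted bit has shifted out of the n-bit register, both sides hold identical register contents again, which is exactly the reinitialization the text describes.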
This is the value in the shift register before the bit under consideration was enciphered. The decipherment can then continue from that bit on.

11.2.2 Block Ciphers

Block ciphers encipher and decipher multiple bits at once, rather than one bit at a time. For this reason, software implementations of block ciphers run faster than software implementations of stream ciphers. Errors in transmitting one block generally do not affect other blocks, but as each block is enciphered independently, using the same key, identical plaintext blocks produce identical ciphertext blocks. This allows the analyst to search for data by determining what the encipherment of a specific plaintext block is. For example, if the word INCOME is enciphered as one block, all occurrences of the word produce the same ciphertext.

To prevent this type of attack, some information related to the block's position is inserted into the plaintext block before it is enciphered. The information can be bits from the preceding ciphertext block [343] or a sequence number [561]. The disadvantage is that the effective block size is reduced, because fewer message bits are present in a block.

Cipher block chaining does not require the extra information to occupy bit spaces, so every bit in the block is part of the message. Before a plaintext block is enciphered, that block is exclusive-or'ed with the preceding ciphertext block. In addition to the key, this technique requires an initialization vector with which to exclusive-or the initial plaintext block. Taking Ek to be the encipherment algorithm with key k, and I to be the initialization vector, the cipher block chaining technique is

c0 = Ek(m0 ⊕ I)
ci = Ek(mi ⊕ ci–1) for i > 0

11.2.2.1 Multiple Encryption

Other approaches involve multiple encryption. Using two keys k and k' to encipher a message as c = Ek'(Ek(m)) looks attractive because it has an effective key length of 2n, whereas the keys to E are of length n.
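The cipher block chaining equations above translate directly into code. This is a sketch only: the encipherment Ek is a stand-in (a truncated keyed hash, which is not invertible, so no decipherment is shown; a real implementation would use a block cipher), and the key, initialization vector, and blocks are made-up values. The point is that chaining makes the repeated INCOME block encipher differently each time.

```python
import hashlib

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise exclusive-or of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def toy_E(key: bytes, block: bytes) -> bytes:
    """Stand-in for the block encipherment E_k (a keyed hash truncated
    to the block size); unlike a real block cipher, not invertible."""
    return hashlib.sha256(key + block).digest()[:len(block)]

def cbc_encipher(key: bytes, iv: bytes, blocks):
    """c0 = E_k(m0 XOR I);  ci = E_k(mi XOR c(i-1)) for i > 0."""
    cts, prev = [], iv
    for m in blocks:
        c = toy_E(key, xor(m, prev))
        cts.append(c)
        prev = c                  # the ciphertext block is chained forward
    return cts

key, iv = b"cbc-key!", b"INITVEC!"
cts = cbc_encipher(key, iv, [b"INCOME  ", b"PAYROLL ", b"INCOME  "])
# The repeated plaintext block no longer yields a repeated ciphertext
# block, because each block is XORed with a different predecessor.
print(cts[0] != cts[2])   # True
```

Enciphering the same three blocks independently (code book mode) would instead make the first and third ciphertext blocks identical, which is the leak chaining is designed to remove.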
However, Merkle and Hellman [700] have shown that this encryption technique can be broken using 2^(n+1) encryptions, rather than the expected 2^(2n) (see Exercise 3).

Using three encipherments improves the strength of the cipher. There are several ways to do this. Tuchman [1006] suggested using two keys k and k':

c = Ek(Dk'(Ek(m)))

This mode, called Encrypt-Decrypt-Encrypt (EDE) mode, collapses to a single encipherment when k = k'. The DES in EDE mode is widely used in the financial community and is a standard (ANSI X9.17 and ISO 8732). It is not vulnerable to the attack outlined earlier. However, it is vulnerable to a chosen plaintext and a known plaintext attack. If b is the block size in bits, and n is the key length, the chosen plaintext attack takes O(2^n) time, O(2^n) space, and requires 2^n chosen plaintexts. The known plaintext attack requires p known plaintexts, and takes O(2^(n+b)/p) time and O(p) memory.

A second version of triple encipherment is the triple encryption mode [700]. In this mode, three keys are used in a chain of encipherments.

c = Ek(Ek'(Ek''(m)))

The best attack against this scheme is similar to the attack on double encipherment, but requires O(2^(2n)) time and O(2^n) memory. If the key length is 56 bits, this attack is computationally infeasible.

11.3 Networks and Cryptography

Before we discuss Internet protocols, a review of the relevant properties of networks is in order. The ISO/OSI model [990] provides an abstract representation of networks suitable for our purposes. Recall that the ISO/OSI model is composed of a series of layers (see Figure 11-2). Each host, conceptually, has a principal at each layer that communicates with a peer on other hosts. These principals communicate with principals at the same layer on other hosts. Layer 1, 2, and 3 principals interact only with similar principals at neighboring (directly connected) hosts. Principals at layers 4, 5, 6, and 7 interact only with similar principals at the other end of the communication.
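Returning for a moment to double encipherment: the Merkle and Hellman result is the meet-in-the-middle attack, which is small enough to demonstrate. The sketch below uses a made-up invertible toy cipher with 8-bit blocks and 8-bit keys, so the whole key space can be enumerated; the attack tabulates Ek(m) for every k, then deciphers c under every k', matching the middle values at a cost of about 2^(n+1) cipher operations instead of 2^(2n).

```python
INV7 = pow(7, -1, 256)   # 183, the multiplicative inverse of 7 mod 256

def toy_E(key: int, m: int) -> int:
    """Made-up invertible toy cipher on 8-bit blocks with 8-bit keys."""
    return ((m ^ key) * 7 + key) % 256

def toy_D(key: int, c: int) -> int:
    """Inverse of toy_E."""
    return (((c - key) * INV7) % 256) ^ key

def meet_in_the_middle(m: int, c: int):
    """Given one known plaintext/ciphertext pair for c = E_k'(E_k(m)),
    return all candidate key pairs (k, k') using 2 * 256 cipher
    operations instead of 256 * 256."""
    table = {}
    for k in range(256):               # forward half: E_k(m) for every k
        table.setdefault(toy_E(k, m), []).append(k)
    candidates = []
    for k2 in range(256):              # backward half: D_k'(c) for every k'
        for k in table.get(toy_D(k2, c), []):
            candidates.append((k, k2))
    return candidates

k, k2, m = 0x3A, 0xC5, 0x41
c = toy_E(k2, toy_E(k, m))
found = meet_in_the_middle(m, c)
print((k, k2) in found)                # True: the real key pair is recovered
```

A single known pair typically leaves many candidate pairs; in practice a second known plaintext/ciphertext pair is used to filter out the false matches.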
(For convenience, "host" refers to the appropriate principal in the following discussion.)

Figure 11-2. The ISO/OSI model. The dashed arrows indicate peer-to-peer communication. For example, the transport layers are communicating with each other. The solid arrows indicate the actual flow of bits. For example, the transport layer invokes network layer routines on the local host, which invoke data link layer routines, which put the bits onto the network. The physical layer passes the bits to the next "hop," or host, on the path. When the message reaches the destination, it is passed up to the appropriate level.

Each host in the network is connected to some set of other hosts. They exchange messages with those hosts. If host nob wants to send a message to host windsor, nob determines which of its immediate neighbors is closest to windsor (using an appropriate routing protocol) and forwards the message to it. That host, baton, determines which of its neighbors is closest to windsor and forwards the message to it. This process continues until a host, sunapee, receives the message and determines that windsor is an immediate neighbor. The message is forwarded to windsor, its endpoint.

Definition 11–5. Let hosts C0, …, Cn be such that Ci and Ci+1 are directly connected, for 0 ≤ i < n. A communications protocol that has C0 and Cn as its endpoints is called an end-to-end protocol. A communications protocol that has Cj and Cj+1 as its endpoints is called a link protocol.

The difference between an end-to-end protocol and a link protocol is that the intermediate hosts play no part in an end-to-end protocol other than forwarding messages. On the other hand, a link protocol describes how each pair of intermediate hosts processes each message.

The protocols involved can be cryptographic protocols. If the cryptographic processing is done only at the source and at the destination, the protocol is an end-to-end protocol.
If cryptographic processing occurs at each host along the path from source to destination, the protocol is a link protocol. When encryption is used with either protocol, we use the terms end-to-end encryption and link encryption, respectively.

In link encryption, each host shares a cryptographic key with its neighbor. (If public key cryptography is used, each host has its neighbor's public key. Link encryption based on public keys is rare.) The keys may be set on a per-host basis or a per-host-pair basis. Consider a network with four hosts called windsor, stripe, facer, and seaview. Each host is directly connected to the other three. With keys distributed on a per-host basis, each host has its own key, making four keys in all. Each host has the keys for the other three neighbors, as well as its own. All hosts use the same key to communicate with windsor. With keys distributed on a per-host-pair basis, each host has one key per possible connection, making six keys in all. Unlike the per-host situation, in the per-host-pair case, each host uses a different key to communicate with windsor. The message is deciphered at each intermediate host, reenciphered for the next hop, and forwarded. Attackers monitoring the network medium will not be able to read the messages, but attackers at the intermediate hosts will be able to do so.

In end-to-end encryption, each host shares a cryptographic key with each destination. (Again, if the encryption is based on public key cryptography, each host has—or can obtain—the public key of each destination.) As with link encryption, the keys may be selected on a per-host or per-host-pair basis. The sending host enciphers the message and forwards it to the first intermediate host. The intermediate host forwards it to the next host, and the process continues until the message reaches its destination. The destination host then deciphers it. The message is enciphered throughout its journey.
Neither attackers monitoring the network nor attackers on the intermediate hosts can read the message. However, attackers can read the routing information used to forward the message.

These differences affect a form of cryptanalysis known as traffic analysis. A cryptanalyst can sometimes deduce information not from the content of the message but from the sender and recipient. For example, during the Allied invasion of Normandy in World War II, the Germans deduced which vessels were the command ships by observing which ships were sending and receiving the most signals. The content of the signals was not relevant; their source and destination were. Similar deductions can reveal information in the electronic world.

Chapter 11 Cipher Techniques

11.1 Problems

If a cipher is used without considering the environment in which it will run, it may fail to provide the security its users expect.
Foreign Literature Translation: Original Text

CHANGING ROLES OF THE CLIENTS, ARCHITECTS AND CONTRACTORS THROUGH BIM

Abstract

Purpose – This paper aims to present a general review of the practical implications of building information modelling (BIM) based on literature and case studies. It seeks to address the necessity for applying BIM and re-organising the processes and roles in hospital building projects. This type of project is complex due to complicated functional and technical requirements, decision making involving a large number of stakeholders, and long-term development processes.

Design/methodology/approach – Through desk research and referring to the ongoing European research project InPro, the framework for integrated collaboration and the use of BIM are analysed.

Findings – One of the main findings is the identification of the main factors for a successful collaboration using BIM, which can be recognised as "POWER": product information sharing (P), organisational roles synergy (O), work processes coordination (W), environment for teamwork (E), and reference data consolidation (R).

Originality/value – This paper contributes to the actual discussion in science and practice on the changing roles and processes that are required to develop and operate sustainable buildings with the support of integrated ICT frameworks and tools. It presents the state-of-the-art of European research projects and some of the first real cases of BIM application in hospital building projects.

Keywords: Europe, Hospitals, The Netherlands, Construction works, Response flexibility, Project planning

Paper type: General review

1. Introduction

Hospital building projects are of key importance, involve significant investment, and usually take a long-term development period. Hospital building projects are also very complex due to the complicated requirements regarding hygiene, safety, special equipment, and handling of a large amount of data. The building process is very dynamic and comprises iterative phases and intermediate changes.
Many actors with shifting agendas, roles and responsibilities are actively involved, such as: the healthcare institutions, national and local governments, project developers, financial institutions, architects, contractors, advisors, facility managers, and equipment manufacturers and suppliers. Such building projects are very much influenced by healthcare policy, which changes rapidly in response to medical, societal and technological developments, and varies greatly between countries (World Health Organization, 2000). In The Netherlands, for example, the way a building project in the healthcare sector is organised is undergoing a major reform due to a fundamental change in the Dutch health policy that was introduced in 2008.

The rapidly changing context poses a need for a building with flexibility over its lifecycle. In order to incorporate life-cycle considerations in the building design, construction technique, and facility management strategy, a multidisciplinary collaboration is required. Despite the attempts to establish integrated collaboration, healthcare building projects still face serious problems in practice, such as: budget overruns, delays, and sub-optimal quality in terms of flexibility, end-users' dissatisfaction, and energy inefficiency. It is evident that the lack of communication and coordination between the actors involved in the different phases of a building project is among the most important reasons behind these problems. The communication between different stakeholders becomes critical, as each stakeholder possesses a different set of skills. As a result, the processes for extraction, interpretation, and communication of complex design information from drawings and documents are often time-consuming and difficult. Advanced visualisation technologies, like 4D planning, have tremendous potential to increase the communication efficiency and interpretation ability of the project team members.
However, their use as an effective communication tool is still limited and not fully explored. There are also other barriers to information transfer and integration, for instance: many existing ICT systems do not support the openness of data and structure that is a prerequisite for effective collaboration between different building actors or disciplines.

Building information modelling (BIM) offers an integrated solution to the previously mentioned problems. Therefore, BIM is increasingly used as ICT support in complex building projects. An effective multidisciplinary collaboration supported by an optimal use of BIM requires changing roles of the clients, architects, and contractors; new contractual relationships; and re-organised collaborative processes. Unfortunately, there are still gaps in the practical knowledge on how to manage the building actors so that they collaborate effectively in their changing roles, and how to develop and utilise BIM as an optimal ICT support for the collaboration.

This paper presents a general review of the practical implications of building information modelling (BIM) based on literature review and case studies. In the next sections, based on literature and recent findings from the European research project InPro, the framework for integrated collaboration and the use of BIM are analysed. Subsequently, through the observation of two ongoing pilot projects in The Netherlands, the changing roles of clients, architects, and contractors through BIM application are investigated. In conclusion, the critical success factors as well as the main barriers to a successful integrated collaboration using BIM are identified.

2. Changing roles through integrated collaboration and life-cycle design approaches

A hospital building project involves various actors, roles, and knowledge domains. In The Netherlands, the changing roles of clients, architects, and contractors in hospital building projects are inevitable due to the new healthcare policy.
Previously, under the Healthcare Institutions Act (WTZi), healthcare institutions were required to obtain both a license and a building permit for new construction projects and major renovations. The permit was issued by the Dutch Ministry of Health. The healthcare institutions were then eligible to receive financial support from the government. Since 2008, new legislation on the management of hospital building projects and real estate has come into force. Under this new legislation, a permit for a hospital building project under the WTZi is no longer obligatory, nor obtainable (Dutch Ministry of Health, Welfare and Sport, 2008). This change allows more freedom from the state-directed policy, and correspondingly allocates more responsibility to the healthcare organisations to deal with the financing and management of their real estate. The new policy implies that the healthcare institutions are fully responsible for managing and financing their building projects and real estate. The government's support for the costs of healthcare facilities will no longer be given separately, but will be included in the fee for healthcare services. This means that healthcare institutions must earn back their investment in real estate through their services. This new policy intends to stimulate sustainable innovations in the design, procurement and management of healthcare buildings, which will contribute to effective and efficient primary healthcare services.

The new strategy for building projects and real estate management endorses an integrated collaboration approach. In order to assure sustainability during construction, use, and maintenance, the end-users, facility managers, contractors and specialist contractors need to be involved in the planning and design processes.
The implications of the new strategy are reflected in the changing roles of the building actors and in the new procurement method.

In the traditional procurement method, the design and its details are developed by the architect and design engineers. Then, the client (the healthcare institution) sends an application to the Ministry of Health to obtain approval of the building permit and the financial support from the government. Following this, a contractor is selected through a tender process that emphasises the search for the lowest-price bidder. During the construction period, changes often take place due to constructability problems of the design and new requirements from the client. Because of the high level of technical complexity, and moreover, decision-making complexities, the whole process from initiation until delivery of a hospital building project can take up to ten years. After the delivery, the healthcare institution is fully in charge of the operation of the facilities. Redesigns and changes also take place in the use phase to cope with new functions and developments in the medical world.

The integrated procurement method pictures a new contractual relationship between the parties involved in a building project. Instead of a relationship between the client and architect for design, and the client and contractor for construction, in an integrated procurement the client only holds a contractual relationship with the main party that is responsible for both design and construction. The traditional borders between tasks and occupational groups become blurred, since architects, consulting firms, contractors, subcontractors, and suppliers all stand on the supply side of the building process while the client stands on the demand side.
Such a configuration puts the architect, engineer and contractor in a very different position that influences not only their roles, but also their responsibilities, tasks and communication with the client, the users, the team and other stakeholders.

The transition from the traditional to the integrated procurement method requires a shift of mindset of the parties on both the demand and supply sides. It is essential for the client and contractor to have a fair and open collaboration in which both can optimally use their competencies. The effectiveness of integrated collaboration is also determined by the client's capacity and strategy to organise innovative tendering procedures.

A new challenge emerges when an architect is positioned in a partnership with the contractor instead of with the client. If the architect enters a partnership with the contractor, an important issue is how to ensure the realisation of the architectural values as well as innovative engineering through an efficient construction process. In another case, the architect can stand at the client's side in a strategic advisory role instead of being the designer. In this case, the architect's responsibility is translating the client's requirements and wishes into the architectural values to be included in the design specification, and evaluating the contractor's proposal against these. In any of these new roles, the architect holds the responsibilities of stakeholder interest facilitator, custodian of customer value, and custodian of design models.

The transition from the traditional to the integrated procurement method also brings consequences for the payment schemes. In the traditional building process, the honorarium for the architect is usually based on a percentage of the project costs; this may simply mean that the more expensive the building is, the higher the honorarium will be. The engineer receives an honorarium based on the complexity of the design and the intensity of the assignment.
A highly complex building, which takes a number of redesigns, is usually favourable for the engineers in terms of honorarium. A traditional contractor usually receives the commission based on the tender to construct the building at the lowest price while meeting the minimum specifications given by the client. Extra work due to modifications is charged separately to the client. After the delivery, the contractor is no longer responsible for the long-term use of the building. In the traditional procurement method, all risks are placed with the client.

In the integrated procurement method, the payment is based on the achieved building performance; thus, the payment is non-adversarial. Since the architect, engineer and contractor have a wider responsibility for the quality of the design and the building, the payment is linked to a measurement system of the functional and technical performance of the building over a certain period of time. The honorarium becomes an incentive to achieve the optimal quality. If the building actors succeed in delivering a higher added value that exceeds the minimum client's requirements, they will receive a bonus in accordance with the client's extra gain. The level of transparency is also improved. Open book accounting is an excellent instrument, provided that the stakeholders agree on the information to be shared and on its level of detail (InPro, 2009).

Next to the adoption of the integrated procurement method, the new real estate strategy for hospital building projects addresses innovative product development and life-cycle design approaches. A sustainable business case for the investment and exploitation of hospital buildings relies on dynamic life-cycle management that includes considerations and analysis of the market development over time, next to the building life-cycle costs (investment/initial cost, operational cost, and logistic cost).
Compared to the conventional life-cycle costing method, dynamic life-cycle management encompasses a shift from focusing only on minimizing the costs to focusing on maximizing the total benefit that can be gained. One of the determining factors for a successful implementation of dynamic life-cycle management is the sustainable design of the building and building components, which means that the design carries sufficient flexibility to accommodate possible changes in the long term (Prins, 1992).

Designing based on the principles of life-cycle management affects the role of the architect, as he needs to be well informed about the usage scenarios and related financial arrangements, the changing social and physical environments, and new technologies. Design needs to integrate people's activities and business strategies over time. In this context, the architect is required to align the design strategies with the organisational, local and global policies on finance, business operations, health and safety, environment, etc.

The combination of process and product innovation, and the changing roles of the building actors, can be accommodated by integrated project delivery or IPD (AIA California Council, 2007). IPD is an approach that integrates people, systems, business structures and practices into a process that collaboratively harnesses the talents and insights of all participants to reduce waste and optimize efficiency through all phases of design, fabrication and construction. IPD principles can be applied to a variety of contractual arrangements. IPD teams will usually include members well beyond the basic triad of client, architect, and contractor. At a minimum, though, an integrated project should include a tight collaboration between the client, the architect, and the main contractor ultimately responsible for construction of the project, from the early design until the project handover.
The key to a successful IPD is assembling a team that is committed to collaborative processes and is capable of working together effectively. IPD is built on collaboration. As a result, it can only be successful if the participants share and apply common values and goals.

3. Changing roles through BIM application

Building information model (BIM) comprises ICT frameworks and tools that can support integrated collaboration based on a life-cycle design approach. BIM is a digital representation of the physical and functional characteristics of a facility. As such, it serves as a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its lifecycle from inception onward (National Institute of Building Sciences, NIBS, 2007). BIM facilitates time- and place-independent collaborative working. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of that stakeholder. BIM in its ultimate form, as a shared digital representation founded on open standards for interoperability, can become a virtual information model to be handed from the design team to the contractor and subcontractors and then to the client.

BIM is not the same as the earlier known computer aided design (CAD). BIM goes further than an application to generate digital (2D or 3D) drawings. BIM is an integrated model in which all process and product information is combined, stored, elaborated, and interactively distributed to all relevant building actors. As a central model for all involved actors throughout the project lifecycle, BIM develops and evolves as the project progresses. Using BIM, the proposed design and engineering solutions can be measured against the client's requirements and expected building performance.
The functionalities of BIM to support the design process extend to multidimensional (nD), including: three-dimensional visualisation and detailing, clash detection, material schedules, planning, cost estimates, production and logistic information, and as-built documents. During the construction process, BIM can support the communication between the building site, the factory and the design office, which is crucial for effective and efficient prefabrication and assembly processes as well as to prevent or solve problems related to unforeseen errors or modifications. When the building is in use, BIM can be used in combination with intelligent building systems to provide and maintain up-to-date information on the building performance, including the life-cycle cost.

To unleash the full potential of more efficient information exchange in the AEC/FM industry in collaborative working using BIM, both high quality open international standards and high quality implementations of these standards must be in place. The IFC open standard is generally agreed to be of high quality and is widely implemented in software. Unfortunately, the certification process allows poor quality implementations to be certified and essentially renders the certified software useless for any practical usage with IFC. IFC compliant BIM is actually used less than manual drafting by architects and contractors, and shows about the same usage among engineers. A recent survey shows that CAD (as a closed system) is still the major form of technique used in design work (over 60 per cent), while BIM is used in around 20 per cent of projects by architects and in around 10 per cent of projects by engineers and contractors.

The application of BIM to support an optimal cross-disciplinary and cross-phase collaboration opens a new dimension in the roles and relationships between the building actors.
Several of the most relevant issues are: the new role of a model manager; the agreement on access rights and intellectual property rights (IPR); the liability and payment arrangement according to the type of contract and in relation to the integrated procurement; and the use of open international standards.

Collaborative working using BIM demands a new expert role of a model manager who possesses ICT as well as construction process know-how (InPro, 2009). The model manager deals with the system as well as with the actors. He provides and maintains the technological solutions required for BIM functionalities, manages the information flow, and improves the ICT skills of the stakeholders. The model manager does not take decisions on design and engineering solutions, nor on the organisational processes, but his roles in the chain of decision making are focused on:

- the development of BIM, the definition of the structure and detail level of the model, and the deployment of relevant BIM tools, such as for model checking, merging, and clash detection;
- the contribution to collaboration methods, especially decision-making and communication protocols, task planning, and risk management; and
- the management of information, in terms of data flow and storage, identification of communication errors, and decision or process (re-)tracking.

Regarding the legal and organisational issues, one of the actual questions is: "In what way does the intellectual property right (IPR) in collaborative working using BIM differ from the IPR in traditional teamwork?" In terms of combined work, the IPR of each element is attached to its creator. Although it seems to be a fully integrated design, BIM actually results from a combination of works/elements; for instance: the outline of the building design is created by the architect, the design for the electrical system is created by the electrical contractor, etc. Thus, in the case of BIM as a combined work, the IPR is similar to that of traditional teamwork.
Working with BIM with authorship-registration functionalities may actually make it easier to keep track of the IPR.

How does collaborative working using BIM affect the contractual relationship? On the one hand, collaborative working using BIM does not necessarily change the liability position in the contract, nor does it obligate an alliance contract. The General Principles of BIM Addendum confirm: ‘This does not effectuate or require a restructuring of contractual relationships or shifting of risks between or among the Project Participants other than as specifically required per the Protocol Addendum and its Attachments’ (ConsensusDOCS, 2008). On the other hand, changes in payment schemes can be anticipated. Collaborative processes using BIM will lead to a shift of activities towards the early design phase. Many, if not all, activities in the detailed engineering and specification phase will be done in the earlier phases. This means that the significant payment for the engineering phase, which may account for up to 40 per cent of the design cost, can no longer be expected. As the engineering work is done concurrently with the design, a new proportioning of the payment towards the early design phase is necessary.

4. Review of ongoing hospital building projects using BIM

In The Netherlands, the changing roles in hospital building projects are part of a strategy that aims at achieving sustainable real estate in response to changing healthcare policy. Referring to the literature and previous research, the main factors that influence the success of the changing roles can be summarised as: the implementation of an integrated procurement method and a life-cycle design approach for a sustainable collaborative process; the agreement on the BIM structure and the intellectual property rights; and the integration of the role of a model manager. The preceding sections have discussed the conceptual thinking on how to deal with these factors effectively.
This section examines two actual projects and compares the actual practice with the conceptual view. The main issues observed in the case studies are:
- the selected procurement method and the roles of the involved parties within this method;
- the implementation of the life-cycle design approach;
- the type, structure, and functionalities of BIM used in the project;
- the openness in data sharing and transfer of the model, and the intended use of BIM in the future; and
- the roles and tasks of the model manager.

The pilot experience of hospital building projects using BIM in the Netherlands can be observed at the University Medical Centre St Radboud (hereafter referred to as UMC) and the Maxima Medical Centre (hereafter referred to as MMC). At UMC, the new building project for the Faculty of Dentistry in the city of Nijmegen has been designated as a BIM pilot project. At MMC, BIM is used in designing the new buildings for the Medical Simulation and Mother-and-Child Centre in the city of Veldhoven.

The first case is a project at the University Medical Centre (UMC) St Radboud. UMC is more than just a hospital: it combines medical services, education and research. More than 8,500 staff and 3,000 students work at UMC. As part of its innovative real estate strategy, UMC has considered using BIM for its building projects.
The new development of the Faculty of Dentistry and the surrounding buildings on the Kapittelweg in Nijmegen has been chosen as a pilot project to gather practical knowledge and experience of collaborative processes with BIM support. The main ambitions to be achieved through the use of BIM in the building projects at UMC can be summarised as follows:
- using 3D visualisation to enhance the coordination and communication among the building actors, and the user participation in design;
- integrating the architectural design with structural analysis, energy analysis, cost estimation, and planning;
- interactively evaluating the design solutions against the programme of requirements and specifications;
- reducing redesign/remake costs through clash detection during the design process; and
- optimising the management of the facility through the registration of medical installations and equipment, fixed and flexible furniture, product and output specifications, and operational data.

The second case is a project at the Maxima Medical Centre (MMC). MMC is a large hospital resulting from a merger between the Diaconessenhuis in Eindhoven and the St Joseph Hospital in Veldhoven. Annually, the 3,400 staff of MMC provide medical services to more than 450,000 visitors and patients. A large-scale extension project of the hospital in Veldhoven is part of its real estate strategy. A medical simulation centre and a women-and-children medical centre are among the most important new facilities within this extension project. The design has been developed using 3D modelling with several functionalities of BIM.

The findings from both cases and their analysis are as follows. Both UMC and MMC opted for a traditional procurement method in which the client directly contracted an architect, a structural engineer, and a mechanical, electrical and plumbing (MEP) consultant in the design team. Once the design and detailed specifications are finished, a tender procedure will follow to select a contractor.
Despite the choice of this traditional method, many attempts have been made at a closer and more effective multidisciplinary collaboration. UMC dedicated a relatively long preparation phase with the architect, structural engineer and MEP consultant before the design commenced. This preparation phase was aimed at creating a common vision on the optimal way of collaborating using BIM as ICT support. Some results of this preparation phase are: a document that defines the common ambition for the project and the collaborative working process, and a semi-formal agreement that states the commitment of the building actors to collaboration. Unlike UMC, MMC selected an architecture firm with an in-house engineering department. Thus, the collaboration between the architect and the structural engineer can take place within the same firm, using the same software application.

Regarding the life-cycle design approach, the main attention is given to life-cycle costs, maintenance needs, and facility management. Using BIM, both hospitals intend to get a much better insight into these aspects over the life-cycle period. The life-cycle sustainability criteria are included in the assignments for the design teams. Multidisciplinary designers and engineers are asked to collaborate more closely and to interact with the end-users to address life-cycle requirements. However, ensuring that the building actors engage in an integrated collaboration to generate sustainable design solutions that meet the life-cycle performance expectations is still difficult. These actors are contracted through a traditional procurement method: their tasks are specific, their involvement is rather short-term and limited to a certain project phase, their responsibilities and liabilities are limited, and there is no tangible incentive for integrated collaboration.

From the current progress of both projects, it can be observed that the type and structure of BIM rely heavily on the choice of BIM software applications.
Revit Architecture and Revit Structure by Autodesk were selected on the grounds that they have been widely used internationally and are compatible with AutoCAD, a widely known product of the same software manufacturer. The compatibility with AutoCAD was a key consideration at MMC, since the drawings of the existing buildings had been created with this application. These 2D drawings were then used as the basis for generating a 3D model with the BIM software application. The architectural model generated with Revit Architecture and the structural model generated with Revit Structure can be linked directly. In the case of a change in the architectural model, a message is sent to the structural engineer. He can then adjust the structural model, or propose a change in return to the architect, so that the structural model is always consistent with the architectural one.

Despite the attempt of the design team to agree on using the same software application, the MEP consultant is still not capable of using Revit; therefore, a conversion of the model from and to Revit is still required. Another weakness of this “closed approach”, which depends on the use of the same software applications, may appear in the near future when the project progresses further into the construction phase. If the contractor uses another software application, considerable extra work will be needed to make the model created during the design phase compatible for use in the construction phase.
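The change-notification workflow described above, where a modification in the architectural model triggers a message to the structural engineer, is essentially a publish-subscribe pattern. The sketch below illustrates that idea only; it is not the Revit mechanism itself, and all class and discipline names are hypothetical.

```python
from collections import defaultdict

class ModelHub:
    """Minimal publish-subscribe hub: parties register callbacks for
    change notifications from another discipline's model."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, discipline, callback):
        """Register interest in changes to a discipline's model."""
        self.subscribers[discipline].append(callback)

    def publish(self, discipline, change):
        """Notify every subscriber watching this discipline's model."""
        for callback in self.subscribers[discipline]:
            callback(change)

log = []
hub = ModelHub()
# The structural engineer watches the architectural model.
hub.subscribe("architecture", lambda c: log.append(f"structural review: {c}"))

# The architect moves a wall; the structural side is notified automatically.
hub.publish("architecture", "wall W-12 moved 300 mm east")
print(log)  # ['structural review: wall W-12 moved 300 mm east']
```

Keeping both disciplines consistent through such notifications, rather than through periodic manual drawing exchange, is what makes the directly linked models valuable; the weakness noted in the text is that this link only works while everyone uses the same vendor's applications.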