Graduation Project Foreign Literature Translations


Graduation Project (Thesis) Foreign Material Translation
Department:    Major:    Class:    Name:    Student ID:
Source of the foreign text:
Attachments: 1. Original text; 2. Translation
March 2013

Attachment 1:

A Rapidly Deployable Manipulator System

Christiaan J.J. Paredis, H. Benjamin Brown, Pradeep K. Khosla

Abstract: A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly create a manipulator which is custom-tailored for a given task. This article describes two main aspects of such a system, namely, the Reconfigurable Modular Manipulator System (RMMS) hardware and the corresponding control software.

1 Introduction

Robot manipulators can be easily reprogrammed to perform different tasks, yet the range of tasks that can be performed by a manipulator is limited by its mechanical structure. For example, a manipulator well-suited for precise movement across the top of a table would probably not be capable of lifting heavy objects in the vertical direction. Therefore, to perform a given task, one needs to choose a manipulator with an appropriate mechanical structure.

We propose the concept of a rapidly deployable manipulator system to address the above-mentioned shortcomings of fixed-configuration manipulators. As is illustrated in Figure 1, a rapidly deployable manipulator system consists of software and hardware that allow the user to rapidly build and program a manipulator which is custom-tailored for a given task.

The central building block of a rapidly deployable system is a Reconfigurable Modular Manipulator System (RMMS). The RMMS utilizes a stock of interchangeable link and joint modules of various sizes and performance specifications. One such module is shown in Figure 2. By combining these general-purpose modules, a wide range of special-purpose manipulators can be assembled. Recently, there has been considerable interest in the idea of modular manipulators [2, 4, 5, 7, 9, 10, 14], for research applications as well as for industrial applications. However, most of these systems lack the property of reconfigurability, which is key to the concept of rapidly deployable systems. The RMMS is particularly easy to reconfigure thanks to its integrated quick-coupling connectors described in Section 3.

Effective use of the RMMS requires Task Based Design software. This software takes as input descriptions of the task and of the available manipulator modules; it generates as output a modular assembly configuration optimally suited to perform the given task. Several different approaches have been used successfully to solve simplified instances of this complicated problem.

A third important building block of a rapidly deployable manipulator system is a framework for the generation of control software. To reduce the complexity of software generation for real-time sensor-based control systems, a software paradigm called software assembly has been proposed in the Advanced Manipulators Laboratory at CMU. This paradigm combines the concept of reusable and reconfigurable software components, as is supported by the Chimera real-time operating system [15], with a graphical user interface and a visual programming language, implemented in Onika. Although the software assembly paradigm provides the software infrastructure for rapidly programming manipulator systems, it does not solve the programming problem itself. Explicit programming of sensor-based manipulator systems is cumbersome due to the extensive amount of detail which must be specified for the robot to perform the task.
The software synthesis problem for sensor-based robots can be simplified dramatically by providing robust robotic skills, that is, encapsulated strategies for accomplishing common tasks in the robot's task domain [11]. Such robotic skills can then be used at the task-level planning stage without having to consider any of the low-level details.

As an example of the use of a rapidly deployable system, consider a manipulator in a nuclear environment where it must inspect material and space for radioactive contamination, or assemble and repair equipment. In such an environment, widely varied kinematic (e.g., workspace) and dynamic (e.g., speed, payload) performance is required, and these requirements may not be known a priori. Instead of preparing a large set of different manipulators to accomplish these tasks—an expensive solution—one can use a rapidly deployable manipulator system. Consider the following scenario: as soon as a specific task is identified, the task based design software determines an optimal manipulator configuration for the task. This optimal configuration is then assembled from the RMMS modules by a human or, in the future, possibly by another manipulator. The resulting manipulator is rapidly programmed by using the software assembly paradigm and our library of robotic skills. Finally, the manipulator is deployed to perform its task.

Although such a scenario is still futuristic, the development of the reconfigurable modular manipulator system described in this paper is a major step towards our goal of a rapidly deployable manipulator system. Our approach could form the basis for the next generation of autonomous manipulators, in which the traditional notion of sensor-based autonomy is extended to configuration-based autonomy. Indeed, although a deployed system can have all the sensory and planning information it needs, it may still not be able to accomplish its task because the task is beyond the system's physical capabilities. A rapidly deployable system, on the other hand, could adapt its physical capabilities based on task specifications and, with advanced sensing, control, and planning strategies, accomplish the task autonomously.

2 Design of self-contained hardware modules

In most industrial manipulators, the controller is a separate unit housing the sensor interfaces, power amplifiers, and control processors for all the joints of the manipulator. A large number of wires are necessary to connect this control unit with the sensors, actuators and brakes located in each of the joints of the manipulator. The large number of electrical connections and the non-extensible nature of such a system layout make it infeasible for modular manipulators. The solution we propose is to distribute the control hardware to each individual module of the manipulator. These modules then become self-contained units which include sensors, an actuator, a brake, a transmission, a sensor interface, a motor amplifier, and a communication interface, as is illustrated in Figure 3. As a result, only six wires are required for power distribution and data communication.

2.1 Mechanical design

The goal of the RMMS project is to have a wide variety of hardware modules available. So far, we have built four kinds of modules: the manipulator base, a link module, three pivot joint modules (one of which is shown in Figure 2), and one rotate joint module. The base module and the link module have no degrees-of-freedom; the joint modules have one degree-of-freedom each.
The mechanical design of the joint modules compactly fits a DC motor, a fail-safe brake, a tachometer, a harmonic drive and a resolver. The pivot and rotate joint modules use different outside housings to provide the right-angle or in-line configuration respectively, but are identical internally. Figure 4 shows in cross-section the internal structure of a pivot joint. Each joint module includes a DC torque motor and a 100:1 harmonic-drive speed reducer, and is rated at a maximum speed of 1.5 rad/s and a maximum torque of 270 Nm. Each module has a mass of approximately 10.7 kg. A single, compact, X-type bearing connects the two joint halves and provides the needed overturning rigidity. A hollow motor shaft passes through all the rotary components and provides a channel for passage of cabling with minimal flexing.

2.2 Electronic design

The custom-designed on-board electronics are also designed according to the principle of modularity. Each RMMS module contains a motherboard which provides the basic functionality and onto which daughtercards can be stacked to add module-specific functionality.

The motherboard consists of a Siemens 80C166 microcontroller, 64K of ROM, 64K of RAM, an SMC COM20020 universal local area network controller with an RS-485 driver, and an RS-232 driver. The function of the motherboard is to establish communication with the host interface via an RS-485 bus and to perform the low-level control of the module, as is explained in more detail in Section 4. The RS-232 serial bus driver allows for simple diagnostics and software prototyping.

A stacking connector permits the addition of an indefinite number of daughtercards with various functions, such as sensor interfaces, motor controllers, RAM expansion, etc. In our current implementation, only modules with actuators include a daughtercard. This card contains a 16-bit resolver-to-digital converter, a 12-bit A/D converter to interface with the tachometer, and a 12-bit D/A converter to control the motor amplifier; we have used an off-the-shelf motor amplifier (Galil Motion Control model SSA-8/80) to drive the DC motor. For modules with more than one degree-of-freedom, for instance a wrist module, more than one such daughtercard can be stacked onto the same motherboard.

3 Integrated quick-coupling connectors

To make a modular manipulator reconfigurable, it is necessary that the modules can be easily connected with each other. We have developed a quick-coupling mechanism with which a secure mechanical connection between modules can be achieved by simply turning a ring hand-tight; no tools are required. As shown in Figure 5, keyed flanges provide precise registration of the two modules. Turning the locking collar on the male end produces two distinct motions: first, the fingers of the locking ring rotate (with the collar) about 22.5 degrees and capture the fingers on the flanges; second, the collar rotates relative to the locking ring, while a cam mechanism forces the fingers inward to securely grip the mating flanges. A ball-transfer mechanism between the collar and locking ring automatically produces this sequence of motions.

At the same time the mechanical connection is made, pneumatic and electronic connections are also established. Inside the locking ring is a modular connector that has 30 male electrical pins plus a pneumatic coupler in the middle. These correspond to matching female components on the mating connector. Sets of pins are wired in parallel to carry the 72 V, 25 A power for motors and brakes, and 48 V, 6 A power for the electronics.
Additional pins carry signals for two RS-485 serial communication buses and four video buses. A plastic guide collar plus six alignment pins prevent damage to the connector pins and assure proper alignment. The plastic block holding the female pins can rotate in the housing to accommodate the eight different possible connection orientations (eight positions at 45-degree intervals). The relative orientation is automatically registered by means of an infrared LED in the female connector and eight photodetectors in the male connector.

4 ARMbus communication system

Each of the modules of the RMMS communicates with a VME-based host interface over a local area network called the ARMbus; each module is a node of the network. The communication is done in a serial fashion over an RS-485 bus which runs through the length of the manipulator. We use the ARCNET protocol [1] implemented on a dedicated IC (SMC COM20020). ARCNET is a deterministic token-passing network scheme which avoids network collisions and guarantees each node its time to access the network. Blocks of information called packets may be sent from any node on the network to any one of the other nodes, or to all nodes simultaneously (broadcast). Each node may send one packet each time it gets the token. The maximum network throughput is 5 Mb/s.

The first node of the network resides on the host interface card, as is depicted in Figure 6. In addition to a VME address decoder, this card contains essentially the same hardware one can find on a module motherboard. The communication between the VME side of the card and the ARCNET side occurs through dual-port RAM.

There are two kinds of data passed over the local area network. During the manipulator initialization phase, the modules connect to the network one by one, starting at the base and ending at the end-effector. On joining the network, each module sends a data packet to the host interface containing its serial number and its relative orientation with respect to the previous module. This information allows us to automatically determine the current manipulator configuration.

During the operation phase, the host interface communicates with each of the nodes at 400 Hz. The data that is exchanged depends on the control mode—centralized or distributed. In centralized control mode, the torques for all the joints are computed on the VME-based real-time processing unit (RTPU), assembled into a data packet by the microcontroller on the host interface card and broadcast over the ARMbus to all the nodes of the network. Each node extracts its torque value from the packet and replies by sending a data packet containing the resolver and tachometer readings. In distributed control mode, on the other hand, the host computer broadcasts the desired joint values and feed-forward torques. Locally, in each module, the control loop can then be closed at a frequency much higher than 400 Hz. The modules still send sensor readings back to the host interface to be used in the computation of the subsequent feed-forward torque.

5 Modular and reconfigurable control software

The control software for the RMMS has been developed using the Chimera real-time operating system, which supports reconfigurable and reusable software components [15]. The software components used to control the RMMS are listed in Table 1. The trjjline, dls, and grav_comp components require knowledge of certain configuration-dependent parameters of the RMMS, such as the number of degrees-of-freedom, the Denavit-Hartenberg parameters, etc.
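The dls component named above implements the damped least-squares inverse kinematics described later in this section. As an illustration only (not the authors' code; the array shapes and damping value are assumptions), a single damped least-squares update in Python/numpy looks like this:

    import numpy as np

    def dls_step(J, dx, damping=0.05):
        """One damped least-squares step: dq = J^T (J J^T + lambda^2 I)^(-1) dx.

        J  : (m, n) manipulator Jacobian at the current configuration
        dx : (m,)   desired end-effector correction
        """
        m = J.shape[0]
        JJt = J @ J.T + (damping ** 2) * np.eye(m)   # damped Gram matrix
        return J.T @ np.linalg.solve(JJt, dx)        # joint-space update dq

The damping term keeps the joint update bounded near singular configurations, which is what makes a numerical scheme of this kind usable for arbitrary RMMS assemblies that have no closed-form inverse kinematics.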
During the initialization phase, the RMMS interface establishes contact with each of the hardware modules to determine automatically which modules are being used and in which order and orientation they have been assembled. For each module, a data file with a parametric model is read. By combining this information for all the modules, kinematic and dynamic models of the entire manipulator are built.

After the initialization, the rmms software component operates in a distributed control mode in which the microcontrollers of each of the RMMS modules perform PID control locally at 1900 Hz. The communication between the modules and the host interface is at 400 Hz, which can differ from the cycle frequency of the rmms software component. Since we use a triple-buffer mechanism [16] for the communication through the dual-port RAM on the ARMbus host interface, no synchronization or handshaking is necessary.

Because closed-form inverse kinematics do not exist for all possible RMMS configurations, we use a damped least-squares kinematic controller to do the inverse kinematics computation numerically.

6 Seamless integration of simulation

To assist the user in evaluating whether an RMMS configuration can successfully complete a given task, we have built a simulator. The simulator is based on the TeleGrip robot simulation software from Deneb Inc., and runs on an SGI Crimson which is connected with the real-time processing unit through a Bit3 VME-to-VME adaptor, as is shown in Figure 6.

A graphical user interface allows the user to assemble simulated RMMS configurations very much like assembling the real hardware. Completed configurations can be tested and programmed using the TeleGrip functions for robot devices. The configurations can also be interfaced with the Chimera real-time software running on the same RTPUs used to control the actual hardware. As a result, it is possible to evaluate not only the movements of the manipulator but also the real-time CPU usage and load balancing. Figure 7 shows an RMMS simulation compared with the actual task execution.

7 Summary

We have developed a Reconfigurable Modular Manipulator System which currently consists of six hardware modules, with a total of four degrees-of-freedom. These modules can be assembled in a large number of different configurations to tailor the kinematic and dynamic properties of the manipulator to the task at hand. The control software for the RMMS automatically adapts to the assembly configuration by building kinematic and dynamic models of the manipulator; this is totally transparent to the user. To assist the user in evaluating whether a manipulator configuration is well suited for a given task, we have also built a simulator.

Acknowledgment

This research was funded in part by DOE under grant DE-F902-89ER14042, by Sandia National Laboratories under contract AL-3020, by the Department of Electrical and Computer Engineering, and by The Robotics Institute, Carnegie Mellon University. The authors would also like to thank Randy Casciola, Mark DeLouis, Eric Hoffman, and Jim Moody for their valuable contributions to the design of the RMMS system.

Attachment 2: A Rapidly Deployable Manipulator System

Authors: Christiaan J.J. Paredis, H. Benjamin Brown, Pradeep K. Khosla

Abstract: A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly build a manipulator that is custom-tailored for a given task.

Graduation Project: Foreign Original Text and Translation


Beijing Union University Graduation Project (Thesis) Assignment
Title: Design and Simulation Implementation of OFDM Modulation and Demodulation Technology
Major: Communication Engineering    Supervisor: Zhang Xuefen    School: College of Information
Student ID: 2011080331132    Class: 1101B    Name: Xu Jiaming

I. Foreign Original Text

Evolution Towards 5G Multi-tier Cellular Wireless Networks: An Interference Management Perspective

Ekram Hossain, Mehdi Rasti, Hina Tabassum, and Amr Abdelnasser

Abstract—The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, e.g., higher data rates, excellent end-to-end performance and user-coverage in hot-spots and crowded areas with lower latency, energy consumption and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum- and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g., power control, cell association) in these networks with shared spectrum access (i.e., when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context, a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.

Index Terms—5G cellular wireless, multi-tier networks, interference management, cell association, power control.

I. INTRODUCTION

To satisfy the ever-increasing demand for mobile broadband communications, the IMT-Advanced (IMT-A) standards were ratified by the International Telecommunications Union (ITU) in November 2010, and the fourth generation (4G) wireless communication systems are currently being deployed worldwide. The standardization of LTE Rel-12, also known as LTE-B, is also ongoing and expected to be finalized in 2014. Nonetheless, existing wireless systems will not be able to deal with the thousand-fold increase in total mobile broadband data [1] contributed by new applications and services such as pervasive 3D multimedia, HDTV, VoIP, gaming, e-Health, and Car2x communication. In this context, the fifth generation (5G) wireless communication technologies are expected to attain 1000 times higher mobile data volume per unit area, 10-100 times higher number of connected devices and user data rate, 10 times longer battery life and 5 times reduced latency [2]. While for 4G networks the single-user average data rate is expected to be 1 Gbps, it is postulated that a cell data rate of the order of 10 Gbps will be a key attribute of 5G networks.

5G wireless networks are expected to be a mixture of network tiers of different sizes, transmit powers, backhaul connections, and different radio access technologies (RATs) that are accessed by unprecedented numbers of smart and heterogeneous wireless devices.
This architectural enhancement, along with advanced physical communications technology such as high-order spatial multiplexing multiple-input multiple-output (MIMO) communications, will provide higher aggregate capacity for more simultaneous users, or a higher level of spectral efficiency, when compared to the 4G networks. Radio resource and interference management will be a key research challenge in multi-tier and heterogeneous 5G cellular networks. The traditional methods for radio resource and interference management (e.g., channel allocation, power control, cell association or load balancing) in single-tier networks (even some of those developed for two-tier networks) may not be efficient in this environment, and a new look into the interference management problem will be required.

First, the article outlines the visions and requirements of 5G cellular wireless systems. Major research challenges are then highlighted from the perspective of interference management when the different network tiers share the same radio spectrum. A comparative analysis of the existing approaches for distributed cell association and power control (CAPC) is then provided, followed by a discussion on their limitations for 5G multi-tier cellular networks. Finally, a number of suggestions are provided to modify the existing CAPC schemes to overcome these limitations.

II. VISIONS AND REQUIREMENTS FOR 5G MULTI-TIER CELLULAR NETWORKS

5G mobile and wireless communication systems will require a mix of new system concepts to boost the spectral and energy efficiency. The visions and requirements for 5G wireless systems are outlined below.

·Data rate and latency: For dense urban areas, 5G networks are envisioned to enable an experienced data rate of 300 Mbps and 60 Mbps in downlink and uplink, respectively, in 95% of locations and time [2]. The end-to-end latencies are expected to be in the order of 2 to 5 milliseconds. The detailed requirements for different scenarios are listed in [2].

·Machine-type Communication (MTC) devices: The number of traditional human-centric wireless devices with Internet connectivity (e.g., smart phones, super-phones, tablets) may be outnumbered by MTC devices, which can be used in vehicles, home appliances, surveillance devices, and sensors.

·Millimeter-wave communication: To satisfy the exponential increase in traffic and the addition of different devices and services, additional spectrum beyond what was previously allocated to the 4G standard is sought for. The use of millimeter-wave frequency bands (e.g., the 28 GHz and 38 GHz bands) is a potential candidate to overcome the problem of scarce spectrum resources, since it allows transmission at wider bandwidths than the conventional 20 MHz channels of 4G systems.

·Multiple RATs: 5G is not about replacing the existing technologies, but about enhancing and supporting them with new technologies [1]. In 5G systems, the existing RATs, including GSM (Global System for Mobile Communications), HSPA+ (Evolved High-Speed Packet Access), and LTE, will continue to evolve to provide a superior system performance. They will also be accompanied by some new technologies (e.g., beyond LTE-Advanced).

·Base station (BS) densification: BS densification is an effective methodology to meet the requirements of 5G wireless networks. Specifically, in 5G networks, there will be deployments of a large number of low-power nodes, relays, and device-to-device (D2D) communication links with much higher density than today's macrocell networks.
Fig. 1 shows such a multi-tier network with a macrocell overlaid by relays, picocells, femtocells, and D2D links. The adoption of multiple tiers in the cellular network architecture will result in better performance in terms of capacity, coverage, spectral efficiency, and total power consumption, provided that the inter-tier and intra-tier interferences are well managed.

·Prioritized spectrum access: The notions of both traffic-based and tier-based priorities will exist in 5G networks. Traffic-based priority arises from the different requirements of the users (e.g., reliability and latency requirements, energy constraints), whereas the tier-based priority is for users belonging to different network tiers. For example, with shared spectrum access among macrocells and femtocells in a two-tier network, femtocells create "dead zones" around them in the downlink for macro users. Protection should, thus, be guaranteed for the macro users. Consequently, the macro and femto users play the role of high-priority users (HPUEs) and low-priority users (LPUEs), respectively. In the uplink direction, the macrocell users at the cell edge typically transmit with high powers, which generates high uplink interference to nearby femtocells. Therefore, in this case, the user priorities should get reversed. Another example is a D2D transmission where different devices may opportunistically access the spectrum to establish a communication link between them, provided that the interference introduced to the cellular users remains below a given threshold. In this case, the D2D users play the role of LPUEs whereas the cellular users play the role of HPUEs.

·Network-assisted D2D communication: In LTE Rel-12 and beyond, the focus will be on network-controlled D2D communications, where the macrocell BS performs control signaling in terms of synchronization, beacon signal configuration and providing identity and security management [3]. This feature will extend in 5G networks to allow other nodes, rather than the macrocell BS, to have the control. For example, consider a D2D link at the cell edge where the direct link between the D2D transmitter UE and the macrocell is in deep fade; then the relay node can be responsible for the control signaling of the D2D link (i.e., relay-aided D2D communication).

·Energy harvesting for energy-efficient communication: One of the main challenges in 5G wireless networks is to improve the energy efficiency of the battery-constrained wireless devices. To prolong the battery lifetime as well as to improve the energy efficiency, an appealing solution is to harvest energy from environmental energy sources (e.g., solar and wind energy). Also, energy can be harvested from ambient radio signals (i.e., RF energy harvesting) with reasonable efficiency over small distances. The harvested energy could be used for D2D communication or communication within a small cell. In this context, simultaneous wireless information and power transfer (SWIPT) is a promising technology for 5G wireless networks. However, practical circuits for harvesting energy are not yet available since the conventional receiver architecture is designed for information transfer only and, thus, may not be optimal for SWIPT. This is due to the fact that information and power transfer operate with different power sensitivities at the receiver (e.g., -10 dBm and -60 dBm for energy and information receivers, respectively) [4].
Also, due to the potentially low efficiency of energy harvesting from ambient radio signals, a combination of different energy harvesting technologies may be required for macrocell communication.

III. INTERFERENCE MANAGEMENT CHALLENGES IN 5G MULTI-TIER NETWORKS

The key challenges for interference management in 5G multi-tier networks will arise due to the following reasons, which affect the interference dynamics in the uplink and downlink of the network: (i) heterogeneity and dense deployment of wireless devices, (ii) coverage and traffic load imbalance due to varying transmit powers of different BSs in the downlink, (iii) public or private access restrictions in different tiers that lead to diverse interference levels, and (iv) the priorities in accessing channels of different frequencies and resource allocation strategies. Moreover, the introduction of carrier aggregation, cooperation among BSs (e.g., by using coordinated multi-point transmission (CoMP)) as well as direct communication among users (e.g., D2D communication) may further complicate the dynamics of the interference. The above factors translate into the following key challenges.

Fig. 1. A multi-tier network composed of macrocells, picocells, femtocells, relays, and D2D links. Arrows indicate wireless links, whereas the dashed lines denote the backhaul connections.

·Designing optimized cell association and power control (CAPC) methods for multi-tier networks: Optimizing the cell associations and transmit powers of users in the uplink, or the transmit powers of BSs in the downlink, are classical techniques to simultaneously enhance the system performance in various aspects such as interference mitigation, throughput maximization, and reduction in power consumption. Typically, the former is needed to maximize spectral efficiency, whereas the latter is required to minimize the power (and hence minimize the interference to other links) while keeping the desired link quality. Since it is not efficient to connect to a congested BS despite its high achieved signal-to-interference ratio (SIR), cell association should also consider the status (load) of each BS and the channel state of each UE. The increase in the number of available BSs, along with multi-point transmissions and carrier aggregation, provides multiple degrees of freedom for resource allocation and cell-selection strategies. For power control, the priority of different tiers also needs to be maintained by incorporating the quality constraints of HPUEs. Unlike the downlink, the transmission power in the uplink depends on the user's battery power, irrespective of the type of BS with which the user is connected. The battery power does not vary significantly from user to user; therefore, the problems of coverage and traffic load imbalance may not exist in the uplink. This leads to considerable asymmetries between the uplink and downlink user association policies. Consequently, the optimal solutions for downlink CAPC problems may not be optimal for the uplink. It is therefore necessary to develop joint optimization frameworks that can provide near-optimal, if not optimal, solutions for both uplink and downlink.
Moreover, to deal with this issue of asymmetry, separate uplink and downlink optimal solutions are also useful, as long as mobile users can connect to two different BSs for uplink and downlink transmissions, which is expected to be the case in 5G multi-tier cellular networks [3].

·Designing efficient methods to support simultaneous association to multiple BSs: Compared to existing CAPC schemes in which each user can associate to a single BS, simultaneous connectivity to several BSs could be possible in a 5G multi-tier network. This would enhance the system throughput and reduce the outage ratio by effectively utilizing the available resources, particularly for cell-edge users. Thus the existing CAPC schemes should be extended to efficiently support simultaneous association of a user to multiple BSs and to determine under which conditions a given UE is associated to which BSs in the uplink and/or downlink.

·Designing efficient methods for cooperation and coordination among multiple tiers: Cooperation and coordination among different tiers will be a key requirement to mitigate interference in 5G networks. Cooperation between the macrocell and small cells was proposed for LTE Rel-12 in the context of soft cell, where the UEs are allowed to have dual connectivity by simultaneously connecting to the macrocell and the small cell for uplink and downlink communications or vice versa [3]. As has been mentioned before in the context of the asymmetry of transmission power in uplink and downlink, a UE may experience the highest downlink power transmission from the macrocell, whereas the highest uplink path gain may be from a nearby small cell. In this case, the UE can associate to the macrocell in the downlink and to the small cell in the uplink. CoMP schemes based on cooperation among BSs in different tiers (e.g., cooperation between macrocells and small cells) can be developed to mitigate interference in the network. Such schemes need to be adaptive and consider user locations as well as channel conditions to maximize the spectral and energy efficiency of the network. This cooperation, however, requires tight integration of low-power nodes into the network through the use of reliable, fast and low-latency backhaul connections, which will be a major technical issue for upcoming multi-tier 5G networks.

In the remainder of this article, we will focus on reviewing existing power control and cell association strategies to demonstrate their limitations for interference management in 5G multi-tier prioritized cellular networks (i.e., where users in different tiers have different priorities depending on location, application requirements and so on). Design guidelines will then be provided to overcome these limitations. Note that issues such as channel scheduling in the frequency domain, time-domain interference coordination techniques (e.g., based on almost blank subframes), coordinated multi-point transmission, and spatial-domain techniques (e.g., based on smart antenna techniques) are not considered in this article.

IV. DISTRIBUTED CELL ASSOCIATION AND POWER CONTROL SCHEMES: CURRENT STATE OF THE ART

A. Distributed Cell Association Schemes

The state-of-the-art cell association schemes that are currently under investigation for multi-tier cellular networks are reviewed and their limitations are explained below.

·Reference Signal Received Power (RSRP)-based scheme [5]: A user is associated with the BS whose signal is received with the largest average strength.
A variant of RSRP, i.e., Reference Signal Received Quality (RSRQ), is also used for cell selection in LTE single-tier networks; it is similar to signal-to-interference ratio (SIR)-based cell selection, where a user selects the BS that gives it the highest SIR. In single-tier networks with uniform traffic, such a criterion may maximize the network throughput. However, due to the varying transmit powers of different BSs in the downlink of multi-tier networks, such cell association policies can create a huge traffic load imbalance. This phenomenon leads to overloading of high-power tiers while leaving low-power tiers underutilized.

·Bias-based Cell Range Expansion (CRE) [6]: The idea of CRE has emerged as a remedy to the problem of load imbalance in the downlink. It aims to increase the downlink coverage footprint of low-power BSs by adding a positive bias to their signal strengths (i.e., RSRP or RSRQ). Such BSs are referred to as biased BSs. This biasing allows more users to associate with low-power or biased BSs and thereby achieve a better cell load balancing. Nevertheless, such off-loaded users may experience an unfavorable channel from the biased BSs and strong interference from the unbiased high-power BSs. The trade-off between cell load balancing and system throughput therefore strictly depends on the selected bias values, which need to be optimized in order to maximize the system utility. In this context, a baseline approach in LTE-Advanced is to "orthogonalize" the transmissions of the biased and unbiased BSs in the time/frequency domain such that an interference-free zone is created.

·Association based on Almost Blank Sub-frame (ABS) ratio [7]: The ABS technique uses time-domain orthogonalization in which specific sub-frames are left blank by the unbiased BS and off-loaded users are scheduled within these sub-frames to avoid inter-tier interference. This improves the overall throughput of the off-loaded users by sacrificing the time sub-frames and throughput of the unbiased BS. Larger bias values result in a higher degree of offloading and thus require more blank sub-frames to protect the offloaded users. Given a specific number of ABSs, or the ratio of blank to total number of sub-frames (i.e., the ABS ratio), that ensures the minimum throughput of the unbiased BSs, this criterion allows a user to select the cell with the maximum ABS ratio and may even associate with the unbiased BS if the ABS ratio decreases significantly.

A qualitative comparison among these cell association schemes is given in Table I. The specific key terms used in Table I are defined as follows: channel-aware schemes depend on the knowledge of the instantaneous channel and transmit power at the receiver. The interference-aware schemes depend on the knowledge of the instantaneous interference at the receiver. The load-aware schemes depend on the traffic load information (e.g., number of users). The resource-aware schemes require the resource allocation information (i.e., the chance of getting a channel or the proportion of resources available in a cell). The priority-aware schemes require the information regarding the priority of different tiers and allow a protection to HPUEs. All of the above-mentioned schemes are independent, distributed, and can be incorporated with any type of power control scheme.
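To make the difference between plain RSRP selection and bias-based CRE concrete, here is a minimal, illustrative Python sketch (not from the article; the dB values and the simple additive bias handling are assumptions):

    def serving_bs(rsrp_dbm, bias_db=None):
        """Return the index of the serving BS.

        rsrp_dbm : average received signal strengths (dBm), one per candidate BS
        bias_db  : optional per-BS cell range expansion bias (dB); zero for
                   unbiased (e.g., macro) BSs, positive for biased small cells
        """
        if bias_db is None:                       # plain RSRP-based association
            bias_db = [0.0] * len(rsrp_dbm)
        scores = [p + b for p, b in zip(rsrp_dbm, bias_db)]
        return max(range(len(scores)), key=scores.__getitem__)

    # With zero bias the macro BS (index 0) wins; a 9 dB bias offloads the user
    # to the picocell (index 1).
    assert serving_bs([-80.0, -86.0]) == 0
    assert serving_bs([-80.0, -86.0], bias_db=[0.0, 9.0]) == 1

As the article notes, the bias values themselves must be optimized jointly with resource partitioning (e.g., ABS ratios) so that load balancing is not bought at the cost of the off-loaded users' throughput.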
Although simple and tractable, the standard cell association schemes, i.e., RSRP, RSRQ, and CRE, are unable to guarantee optimum performance in multi-tier networks unless critical parameters, such as bias values, transmit powers of the users in the uplink and of the BSs in the downlink, resource partitioning, etc., are optimized.

B. Distributed Power Control Schemes

From a user's point of view, the objective of power control is to support a user with its minimum acceptable throughput, whereas from a system's point of view it is to maximize the aggregate throughput. In the former case, it is required to compensate for the near-far effect by allocating higher power levels to users with poor channels as compared to UEs with good channels. In the latter case, high power levels are allocated to users with the best channels and very low (even zero) power levels are allocated to the others. The aggregate transmit power, the outage ratio, and the aggregate throughput (i.e., the sum of achievable rates by the UEs) are the most important measures to compare the performance of different power control schemes. The outage ratio of a particular tier can be expressed as the ratio of the number of UEs supported by the tier with their minimum target-SIRs to the total number of UEs in that tier. Numerous power control schemes have been proposed in the literature for single-tier cellular wireless networks. According to their objective functions and assumptions, the schemes can be classified into the following four types.

TABLE I. QUALITATIVE COMPARISON OF EXISTING CELL ASSOCIATION SCHEMES FOR MULTI-TIER NETWORKS

·Target-SIR-tracking power control (TPC) [8]: In the TPC, each UE tracks its own predefined fixed target-SIR. The TPC enables the UEs to achieve their fixed target-SIRs at minimal aggregate transmit power, assuming that the target-SIRs are feasible. However, when the system is infeasible, all non-supported UEs (those who cannot obtain their target-SIRs) transmit at their maximum power, which causes unnecessary power consumption and interference to other users, and therefore increases the number of non-supported UEs.

·TPC with gradual removal (TPC-GR) [9], [10], [11]: To decrease the outage ratio of the TPC in an infeasible system, a number of TPC-GR algorithms were proposed in which non-supported users reduce their transmit power [10] or are gradually removed [9], [11].

·Opportunistic power control (OPC) [12]: From the system's point of view, OPC allocates high power levels to users with good channels (experiencing high path-gains and low interference levels) and very low power to users with poor channels. In this algorithm, a small difference in path-gains between two users may lead to a large difference in their actual throughputs [12]. OPC improves the system performance at the cost of reduced fairness among users.

·Dynamic-SIR-tracking power control (DTPC) [13]: When the target-SIR requirements for users are feasible, TPC causes users to exactly hit their fixed target-SIRs even if additional resources are still available that could otherwise be used to achieve higher SIRs (and thus better throughputs). Besides, the fixed-target-SIR assignment is suitable only for voice service, for which reaching a SIR value higher than the given target does not affect the service quality significantly. In contrast, for data services, a higher SIR results in a better throughput, which is desirable.
The DTPC algorithm was proposed in [13] to address the problem of system throughput maximization subject to a given feasible lower bound for the achieved SIRs of all users in cellular networks. In DTPC, each user dynamically sets its target-SIR by using TPC and OPC in a selective manner. It was shown that when the minimum acceptable target-SIRs are feasible, the actual SIRs received by some users can be dynamically increased (to a value higher than their minimum acceptable target-SIRs) in a distributed manner, so long as the required resources are available and the system remains feasible (meaning that reaching the minimum target-SIRs for the remaining users is guaranteed). This enhances the system throughput (at the cost of higher power consumption) as compared to TPC.

The aforementioned state-of-the-art distributed power control schemes for satisfying various objectives in single-tier wireless cellular networks are unable to address the interference management problem in prioritized 5G multi-tier networks. This is due to the fact that they do not guarantee that the total interference caused by the LPUEs to the HPUEs remains within tolerable limits, which can lead to SIR outage of some HPUEs. Thus there is a need to modify the existing schemes such that LPUEs track their objectives while limiting their transmit power to maintain a given interference threshold at the HPUEs. A qualitative comparison among various state-of-the-art power control problems with different objectives and constraints, and their corresponding existing distributed solutions, is shown in Table II. This table also shows how these schemes can be modified and generalized for designing CAPC schemes for prioritized 5G multi-tier networks.

C. Joint Cell Association and Power Control Schemes

Very few works in the literature have considered the problem of distributed CAPC jointly (e.g., [14]) with guaranteed convergence. For single-tier networks, a distributed framework for the uplink was developed in [14], which performs cell selection based on the effective interference (the ratio of instantaneous interference to channel gain) at the BSs and minimizes the aggregate uplink transmit power while attaining the users' desired SIR targets. Following this approach, a unified distributed algorithm was designed in [15] for two-tier networks. The cell association is based on the effective-interference metric and is integrated with a hybrid power control (HPC) scheme, which is a combination of the TPC and OPC power control algorithms.

Although the above frameworks are distributed and optimal/suboptimal with guaranteed convergence in conventional networks, they may not be directly compatible with 5G multi-tier networks. The interference dynamics in multi-tier networks depend significantly on the channel access protocols (or scheduling), QoS requirements and priorities at different tiers. Thus, the existing CAPC optimization problems should be modified to include various types of cell selection methods (some examples are provided in Table I) and power control methods with different objectives and interference constraints (e.g., interference constraints for macrocell UEs, picocell UEs, or D2D receiver UEs). A qualitative comparison among the existing CAPC schemes along with the open research areas is highlighted in Table II. A discussion on how these open problems can be addressed is provided in the next section.
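For reference, the fixed-target-SIR tracking behavior attributed to TPC above can be written as a one-line distributed update. The following Python sketch is illustrative only (the notation and the explicit p_max cap are assumptions, not taken from [8]):

    def tpc_update(p, sir, sir_target, p_max):
        """One TPC iteration for a single user:
        p(t+1) = min(p_max, (sir_target / sir(t)) * p(t)).
        If the targets are jointly feasible, running this at every user drives
        each SIR to its target with minimal aggregate power; if infeasible,
        non-supported users end up transmitting at p_max, as discussed above."""
        return min(p_max, (sir_target / sir) * p)

DTPC, for instance, replaces the fixed sir_target with a dynamically chosen one, switching between TPC-like and OPC-like behavior as described in the bullet above.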
V. DESIGN GUIDELINES FOR DISTRIBUTED CAPC SCHEMES IN 5G MULTI-TIER NETWORKS

Interference management in 5G networks requires efficient distributed CAPC schemes such that each user can possibly connect simultaneously to multiple BSs (which can be different for uplink and downlink), while achieving load balancing in different cells and guaranteeing interference protection for the HPUEs. In what follows, we provide a number of suggestions to modify the existing schemes.

A. Prioritized Power Control

To guarantee interference protection for HPUEs, a possible strategy is to modify the existing power control schemes listed in the first column of Table II such that the LPUEs limit their transmit power to keep the interference caused to the HPUEs below a predefined threshold, while tracking their own objectives. In other words, as long as the HPUEs are protected against the existence of LPUEs, the LPUEs could employ an existing distributed power control algorithm to satisfy a predefined goal. This offers some fruitful directions for future research and investigation, as stated in Table II. To address these open problems in a distributed manner, the existing schemes should be modified so that the LPUEs, in addition to setting their transmit power for tracking their objectives, limit their transmit power to keep their interference on the receivers of HPUEs below a given threshold. This could be implemented by sending a command from the HPUEs to nearby LPUEs (like a closed-loop power control command used to address the near-far problem) when the interference caused by the LPUEs to the HPUEs exceeds a given threshold. We refer to this type of power control as prioritized power control. Note that the notion of priority, and thus the need for prioritized power control, exists implicitly in different scenarios of 5G networks, as briefly discussed in Section II. Along this line, some modified power control optimization problems are formulated for 5G multi-tier networks in the second column of Table II.

To compare the performance of existing distributed power control algorithms, let us consider a prioritized multi-tier cellular wireless network where a high-priority tier consisting of 3×3 macrocells, each of which covers an area of 1000 m × 1000 m, coexists with a low-priority tier consisting of n small cells per high-priority macrocell, each ...
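One illustrative reading of the prioritized power control idea above, in Python (the variable names, the single-HPUE interference model, and the TPC-style tracking step are assumptions, not the article's formulation):

    def prioritized_power_update(p, sir, sir_target, p_max, gain_to_hpue, i_threshold):
        """LPUE power update: track the target SIR as in TPC, but additionally
        cap the transmit power so that the interference this LPUE causes at a
        nearby HPUE receiver (gain_to_hpue * p) stays below i_threshold."""
        p_tracking = min(p_max, (sir_target / sir) * p)   # objective-tracking step
        p_cap = i_threshold / gain_to_hpue                # interference-protection cap
        return min(p_tracking, p_cap)

In a deployed system the cap would be triggered by the closed-loop command sent from the HPUE described above, rather than by an LPUE-side estimate of the cross-link gain.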

Graduation Project Foreign Literature Translation


Encoding the Java Virtual Machine's Instruction Set

1 Introduction

The development of programs that parse and analyze Java Bytecode [9] has a long history, and new programs are still being developed [2,3,4,7,13]. When developing such tools, however, a lot of effort is spent on developing a parser for the bytecode and on (re-)developing standard control- and data-flow analyses which calculate, e.g., the control-flow graph or the data-dependency graph.

To reduce these efforts, we have developed a specification language (OPAL SPL) for encoding the instructions of stack-based intermediate languages. The idea is that—once the instruction set is completely specified using OPAL SPL—generating both bytecode parsers and standard analyses is much easier than their manual development. To support this goal, OPAL SPL supports the specification of both the format of bytecode instructions and the effect on the stack and registers these instructions have when executed. An alternative use of an OPAL SPL specification is as input to a generic parser or to generic analyses, as illustrated by Fig. 1.

Though the language was designed with Java Bytecode specifically in mind and is used to encode the complete instruction set of the Java Virtual Machine (JVM), we have striven for a Java-independent specification language. In particular, OPAL SPL focuses on specifying the instruction set rather than the complete class file format, not only because the former's structure is much more regular than the latter's, but also because a specification of the instruction set promises to be most beneficial. Given the primary focus of OPAL SPL—generating parsers and facilitating basic analyses—we explicitly designed the language such that it is possible to group related instructions. This makes specifications more concise and allows analyses to treat similar instructions in nearly the same way. For example, the JVM's iload_5 instruction, which loads the integer value stored in register #5, is a special case of the generic iload instruction where the instruction's operand is 5. We also designed OPAL SPL in such a way that specifications do not prescribe how a framework represents or processes information; i.e., OPAL SPL is representation agnostic.

The next section describes the specification language. In Section 3 we reason about the language's design by discussing the specification of selected JVM instructions. In Section 4 the validation of specifications is discussed. The evaluation of the approach is presented in Section 5. The paper ends with a discussion of related work and a conclusion.

2 Specifying Bytecode Instructions

The language for specifying bytecode instructions (OPAL SPL) was primarily designed to enable a concise specification of the JVM's instruction set. OPAL SPL supports the specification of both an instruction's format and its effect on the stack and local variables (registers) when the instruction is executed. It is thus possible to specify which kinds of values are popped from and pushed onto the stack as well as which local variables are read or written. Given a specification of the complete instruction set, the information required by standard control- and data-flow analyses is then available. However, OPAL SPL is not particularly tied to Java, as it abstracts from the particularities of the JVM Specification.
For example, the JVM's type system is part of an OPAL SPL specification rather than an integral part of the OPAL SPL language itself. Next, we first give an overview of the language before we discuss its semantics.

2.1 Syntax

The OPAL Specification Language (OPAL SPL) is an XML-based language. Its grammar is depicted in Fig. 2 using an EBNF-like format. Non-terminals are written in capital letters (INSTRUCTIONS, TYPES, etc.), the names of XML elements are written in small letters (types, stack, etc.) and the names of XML attributes start with "@" (@type, @var, etc.). We refer to the content of an XML element using symbols that start with "/" (/VALUEEXPRESSION, /EXPECTEDVALUE, etc.). "<>" is used to specify nesting of elements. "( ), ?, +, *, {}, |" have the usual semantics. For example, exceptions<(exception @type)+> specifies that the XML element exceptions has one or more exception child elements that always have the attribute type.

2.2 Semantics

Format Specification

Each specification written in OPAL SPL consists of four major parts (line 1 in Fig. 2). The first part (types, lines 2–3) specifies the type system that is used by the underlying virtual machine. The second part (exceptions, line 4) declares the exceptions that may be thrown when instructions are executed. The third part (functions, line 5) declares the functions that are used in instruction specifications. The fourth part is the specification of the instructions themselves (lines 6–12), each of which may resort to the declared functions to access information not simply stored along with the instruction. For example, invoke instructions do not store the signature and declaring class of the called methods. Instead, a reference to an entry in the so-called constant pool is stored. Only this constant pool entry has all information about the method. To obtain, e.g., the return type of the called method, an abstract function TYPE methodref return type(methodref) is declared that takes a reference to the entry as input and returns the method's return type. Using abstract function declarations, we abstract—in the specification of the instructions—from the concrete representation of such information by the enclosing bytecode toolkit.

The specification of an instruction consists of up to four parts: the instruction's format (lines 7–8), a description of the effect the instruction has on the stack when executed (lines 9–10), a description of the registers it affects upon execution (lines 11–12), and information about the exceptions that may be thrown during execution (end of line 6). An instruction's format is specified by sequences which describe how an instruction is stored. The u1, u2 and u4 elements (line 8) of each format sequence specify that the current value is an unsigned integer value of 1, 2 or 4 bytes, respectively. Similarly, the i1, i2 and i4 elements (line 8) are used to specify that the current value is a (1-, 2- or 4-byte) signed integer value. The values can be bound to variables using the var attribute and can be given a second semantics using the type attribute. For example, <i2 type="short" var="value"/> is a two-byte signed integer value that is bound to the variable value and has type short with respect to the instruction set's type system. Additionally, it is possible to specify expected values (line 8). This enables the selection of the format sequence to be used for reading in the instruction. E.g., <sequence><u1 var="opcode">171</u1>... specifies that this sequence matches if the value of the first byte is 171.
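This expected-value mechanism is what lets a generated parser pick the correct format sequence from the leading bytes alone (the unique prefix rule discussed below). A simplified Python sketch, assuming for illustration that a single opcode byte suffices to identify each sequence:

    def build_dispatch_table(format_sequences):
        """format_sequences: iterable of (expected_opcode, parse_rest) pairs,
        where parse_rest reads the remaining bytes of the instruction.
        Building the table also checks the unique prefix rule for this
        simplified one-byte case: no two sequences may share an opcode."""
        table = {}
        for opcode, parse_rest in format_sequences:
            if opcode in table:
                raise ValueError(f"unique prefix rule violated for opcode {opcode}")
            table[opcode] = parse_rest
        return table

    def parse_instruction(table, code, pc):
        opcode = code[pc]          # read the prefix; no lookahead is needed
        return table[opcode](code, pc + 1)

A generated parser would extend this to multi-byte prefixes by nesting such tables, which is the nested-switch structure mentioned below under the unique prefix rule.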
A sequence's list element is used to specify that a variable number of values needs to be read. The concrete number of elements is determined by the count attribute. The attribute's value is an expression that can use values that were previously assigned to a variable. The sequence elements implicit and implicit type are used to bind implicit value and type information to variables that can later on be used in type or value expressions (lines 7, 10 and 11). To make it possible to aggregate related bytecode instructions into one logical instruction, several format sequences can be defined. The effect on the stack is determined by the number and type of stack operands that are popped (line 9) and pushed (line 10). If multiple stack layouts are specified, the effect on the stack is determined by the first before-execution stack layout that matches; i.e., to determine the effect on the stack, a data-flow analysis is necessary.

Unique Prefix Rule

One constraint placed upon specifications written in OPAL SPL is that a format sequence can be identified unambiguously by only parsing a prefix of the instruction; no lookahead is necessary. In other words, if each format sequence is considered a production and each u1, u2, etc. is considered a terminal, then OPAL SPL requires the format sequences to constitute an LR(0) grammar. This unique prefix rule is checked automatically (cf. Sec. 4); furthermore, this rule facilitates generating fast parsers from the specification, e.g., using nested switch statements.

Type System

OPAL SPL does not have a hard-coded type hierarchy. Instead, each specification written in SPL contains a description of the type system used by the bytecode language being described. The only restriction is that all types have to be arranged in a single, strict hierarchy. The Java Virtual Machine Specification [9]'s type hierarchy is shown in Fig. 3 (1). It captures all runtime types known to the Java virtual machine, as well as those types that are used only at link- or compile-time, e.g., branchoffset, fieldref and methodref. The hierarchy is a result of the peculiarities of the JVM's instruction set. The byte-or-boolean type, e.g., is required to model the baload and bastore instructions, which operate on arrays of byte or boolean alike.

OPAL SPL's type system implicitly defines a second type hierarchy ((2) in Fig. 3). The declared hierarchy of types (1) is mirrored by a hierarchy of kinds (2); for every (lower-case) type there automatically exists an (upper-case) kind. This convention ensures their consistency and keeps the specification itself brief. The values of kind INT LIKE are int, short, etc., just as the values of type int like are 1, 2, etc. Kinds enable parameterizing logical instructions like areturn with types, thus making a concise specification of related instructions (e.g., freturn, ireturn, and areturn) possible (cf. Sec. 3.12).

Information Flow

In OPAL SPL, the flow of information (values, types, register IDs) is modeled by means of named variables and expressions using the variables. In general, the flow of information is subject to the constraints illustrated by Fig. 4. For example, variables defined within a specific format sequence can only be referred to by later elements within the same format sequence; a variable cannot be referred to across format sequences. If the same variable is bound by all format sequences, i.e., it is common to all format sequences, then the variable can be used to identify register IDs, the values pushed onto the stack, etc.
3 Design Discussion
The design of the OPAL specification language (OPAL SPL) is influenced by the peculiarities of the JVM's instruction set [9, Chapter 6]. In the following, we discuss those instructions that had a major influence on the design.

3.1 Modeling the Stack Bottom (athrow)
All JVM instructions, with the exception of athrow, specify only the number and types of operands popped from and pushed onto the stack; they do not determine the layout of the complete stack. In case of the athrow instruction, however, the stack layout after its execution is completely determined (Fig. 5, line 6); the single element on the stack is the thrown exception. This necessitates explicit modeling of the stack's contents beyond the operands that are pushed and popped by a particular instruction. The explicit modeling of the rest of the stack (line 5) hereby allows for the (implicit) modeling of stacks of a fixed size (line 6).

3.2 Pure Register Instructions (iinc)
The flow of information for instructions that do not affect the stack, e.g., the JVM's iinc instruction, is depicted in Fig. 7 and adheres to the general scheme of information flow (cf. Fig. 4). After parsing the instruction according to the format sequence (Fig. 6, lines 3–5), the two variables lvIndex and increment are initialized.

3.3 Interpretation of Arithmetic Instructions (iinc, add, sub, etc.)
The specification of iinc (Fig. 6) also illustrates OPAL SPL's ability to model computed values, e.g., add(value, increment). This information can subsequently be used, e.g., by static analyses to determine data dependencies or to perform abstract interpretations.

3.4 Constant Pool Handling (ldc)
The Java class file format achieves its compactness in part through the use of a constant pool. Hereby, immediate operands of an instruction are replaced by an index into the (global) pool. For example, in case of the load constant instruction ldc, the operand needs to be programmatically retrieved from the constant pool (Fig. 8, line 5). To obtain the value's type, one uses the reflective type_of function that the enclosing toolkit has to provide (line 14).

3.5 Multiple Format Sequences, Single Logical Instruction
An instruction such as ldc, which may refer to an integer value in the constant pool, is conceptually similar to instructions such as iconst_0 or sipush; all of them push a constant value onto the operand stack. The primary difference between the format sequences of ldc (Fig. 8, lines 3–5) and iconst_0 (lines 6–7) is that the former's operand resides in the constant pool. In contrast, sipush encodes its operand explicitly in the bytecode stream as an immediate value (line 9). To facilitate standard control- and data-flow analyses, OPAL SPL abstracts away from such details, so that similar instructions can be subsumed by more generic instructions using explicit or implicit type and value bindings. A generic push instruction (Fig. 8), e.g., subsumes all JVM instructions that just push a constant value onto the stack. In this case the pushed value is either a computed value (line 5), an implicit value (line 7), or an immediate operand (line 9).
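Functions such as type_of, and the methodref functions used in the next subsection, are only declared in a specification; their implementations are supplied by the enclosing toolkit. A minimal Java sketch of what such a toolkit-side interface might look like is given below; the interface name and method signatures are assumptions for illustration, not an API defined by the paper.

```java
// Hypothetical toolkit-side interface backing the abstract functions that an
// OPAL SPL specification merely declares (all names and signatures are illustrative).
public interface ConstantPoolOracle {

    /** Type of the constant stored at the given constant-pool index (used, e.g., for ldc). */
    String typeOf(int constantPoolIndex);

    /** Number of arguments of the method referenced by the given methodref entry. */
    int methodRefArgCount(int methodRefIndex);

    /** Type of the i-th argument of the referenced method. */
    String methodRefArgType(int methodRefIndex, int i);

    /** Return type of the referenced method. */
    String methodRefReturnType(int methodRefIndex);
}
```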
3.6 Variable Operand Counts (invokevirtual, invokespecial, etc.)
Some instructions pop a variable number of operands, e.g., the four invoke instructions invokevirtual, invokespecial, invokeinterface, and invokestatic. In their case the number of popped operands directly depends on the number of arguments of the method. To support instructions that pop a variable number of operands, OPAL SPL provides the list element (Fig. 9, line 8). Using the list element's count attribute, it is possible to specify a function that determines the number of operands actually popped from the stack. It is furthermore possible, by using the loop_var attribute, to specify a variable iterating over these operands. The loop variable (i) can then be used inside the list element to specify the expected operands (line 10). This enables specification of both the expected number and type of operands, i.e., of the method arguments (lines 8–10). Using functions (methodrefargcount, methodrefargtype, ...) offloads the intricate handling of the constant pool to externally supplied code, the enclosing toolkit (cf. Sec. 3.4); the OPAL specification language itself remains independent of how the framework or toolkit under development stores such information.

3.7 Exceptions
The specification of invokevirtual (Fig. 9) also makes explicit which exceptions the instruction may throw (line 16). This information is required by control-flow analyses and thus needs to be present in specifications. To identify the instructions which may handle the exception, the function caughtby needs to be defined by the toolkit. This function computes, given both the instruction's address and the type of the exception, the addresses of all instructions in the same method that handle the exception. Similar to the handling of the constant pool, OPAL SPL thus offloads the handling of the exceptions attribute.

3.8 Variable-length Instructions (tableswitch, lookupswitch)
The support for variable-length instructions (tableswitch, lookupswitch) is similar to the support for instructions with a variable stack size (cf. Sec. 3.6). In this case, an elements element can be used to specify how many times (Fig. 10, line 7) which kind of values (lines 8–9) need to be read. Hereby, the elements construct can accommodate multiple sequence elements (lines 7–10). The variable number of cases is, however, just one reason why tableswitch and lookupswitch are classified as variable-length instructions; the JVM Specification mandates that up to three padding bytes are inserted to align the following format elements on a four-byte boundary (line 4).
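The padding rule can be made concrete with a few lines of Java. The sketch below computes the number of padding bytes from the index of the tableswitch or lookupswitch opcode within the method's code array, following the JVM rule that the subsequent four-byte default offset must begin at an index that is a multiple of four; the class and method names are illustrative only.

```java
// Padding for tableswitch/lookupswitch: after the 1-byte opcode at index pc,
// 0-3 zero bytes are inserted so that the following 4-byte default offset
// starts at an index that is a multiple of 4, counted from the start of the
// method's code array. Class and method names are illustrative.
public final class SwitchPadding {

    static int paddingBytes(int pc) {
        int afterOpcode = pc + 1;
        return (4 - (afterOpcode % 4)) % 4;   // equivalently: 3 - (pc % 4)
    }

    public static void main(String[] args) {
        for (int pc = 0; pc < 8; pc++) {
            System.out.println("opcode at " + pc + " -> " + paddingBytes(pc) + " padding bytes");
        }
    }
}
```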
3.9 Single Instruction, Multiple Operand Stacks (dup2)
The JVM specification defines several instructions that operate on the stack independently of their operands' types or, if we change the perspective, that behave differently depending on the type of the operands present on the stack prior to their execution. For example, the dup2 instruction (Fig. 11) duplicates the contents of two one-word stack slots. Instructions such as dup2 and dup2_x1 distinguish their operands by their computational type (category 1 or 2) rather than by their actual type (int, reference, etc.). This makes it possible to compactly encode instructions such as dup2 and motivates the corresponding level in the type hierarchy (cf. Sec. 2.2). Additionally, this requires that OPAL SPL supports multiple stack layouts. In OPAL SPL, the stack is modeled as a list of operands, not as a list of slots as discussed in the JVM specification. While the effect of an instruction such as dup2 is more easily expressed in terms of stack slots, the vast majority of instructions naturally refers to operands. In particular, the decision to base the stack model on operands rather than slots avoids explicit modeling of the higher and lower halves of category-2 values, e.g., the high and low word of a 64-bit long operand.
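The operand-based stack model and the category distinction can be illustrated with a small Java sketch that represents the stack as a list of operands, each tagged with its computational category, and implements the two forms of dup2 accordingly. This is an illustration of the modeling idea, not code from the OPAL toolkit.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Operand-based stack model: each entry is one operand tagged with its
// computational category (1 or 2); long/double are single category-2 operands,
// so no high/low halves need to be modeled. Illustrative sketch only.
public class OperandStack {

    record Operand(String type, int category) { }

    private final Deque<Operand> stack = new ArrayDeque<>();

    void push(Operand op) { stack.push(op); }

    /** dup2: duplicate either the top two category-1 operands or one category-2 operand. */
    void dup2() {
        Operand top = stack.peek();
        if (top.category() == 2) {            // form 2: ..., value -> ..., value, value
            stack.push(top);
        } else {                              // form 1: ..., v2, v1 -> ..., v2, v1, v2, v1
            Operand v1 = stack.pop();
            Operand v2 = stack.pop();
            stack.push(v2); stack.push(v1);
            stack.push(v2); stack.push(v1);
        }
    }

    public static void main(String[] args) {
        OperandStack s = new OperandStack();
        s.push(new Operand("long", 2));
        s.dup2();                             // duplicates the single long operand
        System.out.println(s.stack.size());   // prints 2
    }
}
```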
3.10 (Conditional) Control Transfer Instructions (if, goto, jsr, ret)
To perform control-flow analyses it is necessary to identify those instructions that may transfer control, either by directly manipulating the program counter or by terminating the current method. This information is specified using the instruction element's optional transfers_control attribute (Fig. 12, line 1). It specifies whether control is transferred conditionally or always. The target instruction to which control is transferred is identified by the values of type branchoffset or absoluteaddress. For these two types the type system contains the meta-information (cf. Fig. 3) that the values have to be interpreted either as relative or absolute program counters.

3.11 Multibyte Opcodes and Modifiers (wide instructions, newarray)
The JVM instruction set consists mostly of instructions whose opcode is a single byte, although a few instructions have longer opcode sequences. In most cases this is due to the wide modifier, a single-byte prefix to the instruction. In case of the newarray instruction, however, a suffix is used to determine its precise effect. As can be seen in Fig. 13, the parser needs to examine two bytes to determine the correct format sequence.

3.12 Implicit Types and Type Constructors
The specification of newarray (Fig. 13) also illustrates the specification of implied types and type constructors. As the JVM instruction set is a typed assembly language, many instructions exist in a variety of formats, e.g., as iadd, ladd, fadd, and dadd. The implicit_type construct is designed to eliminate this kind of redundancy in the specification, resulting in a single, logical instruction: add. Similarly, newarray makes use of type bindings (lines 5, 8). But, to precisely model the effect of newarray on the operand stack, an additional function that constructs a type is needed. Given a type and an integer, the function array constructs a new type; here, a one-dimensional array of the base type (line 14).

3.13 Extension Mechanism
OPAL SPL has been designed with extensibility in mind. The extension point for additional information is the instruction element's appinfo child, whose content can consist of arbitrary elements with a namespace other than OPAL SPL's own. To illustrate the mechanism, suppose that we want to create a Prolog representation for Java Bytecode, in which information about operators is explicit, i.e., in which the ifgt instruction is an if instruction which compares two values using the greater-than operator, as illustrated by Fig. 14.

4 Validating Specifications
To validate an OPAL SPL specification, we have defined an XML Schema which ensures syntactic correctness of the specification and performs basic identity checking. It checks, for example, that each declared type and each instruction's mnemonic is unique. Additionally, we have developed a program which analyzes a specification and detects the following errors: (a) a format sequence does not have a unique prefix path, (b) multiple format sequences of a single instruction do not agree in the variables bound by them, (c) the number or type of a function's arguments is wrong or its result is of the wrong type.

5 Evaluation
We have used the specification of the JVM's instruction set [9] for the implementation of a highly flexible bytecode toolkit. The toolkit supports four representations of Java bytecode: a native representation, which is a one-to-one representation of the Java bytecode; a higher-level representation, which abstracts away some details of Java bytecode, in particular the constant pool; an XML representation, which uses the higher-level representation; and a Prolog-based representation of Java bytecode, which is also based on the higher-level representation.

6 Related Work
Applying XML technologies to Java bytecode is not a new idea [5]. The XML serialization of class files, e.g., allows for their declarative transformation using XSLT. The XMLVM [11] project aims to support not only the JVM instruction set [9], but also the CLR instruction set [8]. This requires that at least the CLR's operand stack is transformed [12], as the JVM requires. The description of the effect that individual CLR instructions have on the operand stack is, however, not specified in an easily accessible format like OPAL SPL, but rather embedded within the XSL transformations.

7 Conclusion and Future Work
In future work, we will investigate the use of OPAL SPL for the encoding of other bytecode languages, such as the Common Intermediate Language. This would make it possible to develop (control- and data-flow) analyses with respect to OPAL SPL and to use the same analysis to analyze the bytecode of different languages.

From: Encoding the Java Virtual Machine's Instruction Set. 1 Introduction: Tools for interpreting and analyzing Java bytecode programs have a long history of development, and new approaches are still being studied.

Graduation Design (Thesis) Foreign Literature Translation


Graduation design (thesis) foreign literature translation
Department: School of Finance and Accounting
Grade and major: Financial Management, class of 201*
Name:
Student ID: 132148***
Attachment: Financial Risk Management
[Abstract] Although financial risk has increased significantly in recent years, risk and risk management are not contemporary issues.

The result of increasingly global markets is that risk may originate with events thousands of miles away that have nothing to do with the domestic market.

Information is available instantaneously, which means that change and subsequent market reactions occur very quickly.

The economic climate and markets can be affected very quickly by changes in exchange rates, interest rates and commodity prices.

Counterparties can rapidly become problematic.

As a result, it is important to ensure financial risks are identified and managed appropriately. Preparation is a key component of risk management.

[Key Words] Financial risk, Risk management, Yields
I. Financial risks arising
1.1 What Is Risk
1.1.1 The concept of risk
Risk provides the basis for opportunity. The terms risk and exposure have subtle differences in their meaning. Risk refers to the probability of loss, while exposure is the possibility of loss, although they are often used interchangeably.
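A hedged illustration of how the two notions combine, not a formula from the text: if p is the probability that a loss occurs and E is the exposure, i.e., the amount that could be lost, then the expected loss is their product. The symbols p and E are assumptions for this sketch.

```latex
% Illustrative sketch only; p and E are assumed symbols, not notation from the source.
\text{Expected loss} = p \times E,
\qquad p = \text{probability of loss}, \quad E = \text{exposure (amount at risk)} .
```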

Graduation Design Foreign Literature + Translation 1


毕设外文文献+翻译1外文翻译外文原文CHANGING ROLES OF THE CLIENTS、ARCHITECTSAND CONTRACTORS THROUGH BIMAbstract:Purpose –This paper aims to present a general review of the practical implications of building information modelling (BIM) based on literature and case studies. It seeks to address the necessity for applying BIM and re-organising the processes and roles in hospital building projects. This type of project is complex due to complicated functional and technical requirements, decision making involving a large number of stakeholders, and long-term development processes.Design/methodology/approach–Through desk research and referring to the ongoing European research project InPro, the framework for integrated collaboration and the use of BIM are analysed.Findings –One of the main findings is the identification of the main factors for a successful collaboration using BIM, which can be recognised as “POWER”: product information sharing (P),organisational roles synergy (O), work processes coordination (W), environment for teamwork (E), and reference data consolidation (R).Originality/value –This paper contributes to the actual discussion in science and practice on the changing roles and processes that are required to develop and operate sustainable buildings with the support of integrated ICT frameworks and tools. It presents the state-of-the-art of European research projects and some of the first real cases of BIM application inhospital building projects.Keywords:Europe, Hospitals, The Netherlands, Construction works, Response flexibility, Project planningPaper type :General review1. IntroductionHospital building projects, are of key importance, and involve significant investment, and usually take a long-term development period. Hospital building projects are also very complex due to the complicated requirements regarding hygiene, safety, special equipments, and handling of a large amount of data. The building process is very dynamic and comprises iterative phases and intermediate changes. Many actors with shifting agendas, roles and responsibilities are actively involved, such as: the healthcare institutions, national and local governments, project developers, financial institutions, architects, contractors, advisors, facility managers, and equipment manufacturers and suppliers. Such building projects are very much influenced, by the healthcare policy, which changes rapidly in response to the medical, societal and technological developments, and varies greatly between countries (World Health Organization, 2000). In The Netherlands, for example, the way a building project in the healthcare sector is organised is undergoing a major reform due to a fundamental change in the Dutch health policy that was introduced in 2008.The rapidly changing context posts a need for a building with flexibility over its lifecycle. In order to incorporate life-cycle considerations in the building design, construction technique, and facility management strategy, a multidisciplinary collaboration is required. Despite the attempt for establishing integrated collaboration, healthcare building projects still facesserious problems in practice, such as: budget overrun, delay, and sub-optimal quality in terms of flexibility, end-user?s dissatisfaction, and energy inefficiency. It is evident that the lack of communication and coordination between the actors involved in the different phases of a building project is among the most important reasons behind these problems. 
The communication between different stakeholders becomes critical, as each stakeholder possesses different setof skills. As a result, the processes for extraction, interpretation, and communication of complex design information from drawings and documents are often time-consuming and difficult. Advanced visualisation technologies, like 4D planning have tremendous potential to increase the communication efficiency and interpretation ability of the project team members. However, their use as an effective communication tool is still limited and not fully explored. There are also other barriers in the information transfer and integration, for instance: many existing ICT systems do not support the openness of the data and structure that is prerequisite for an effective collaboration between different building actors or disciplines.Building information modelling (BIM) offers an integrated solution to the previously mentioned problems. Therefore, BIM is increasingly used as an ICT support in complex building projects. An effective multidisciplinary collaboration supported by an optimal use of BIM require changing roles of the clients, architects, and contractors; new contractual relationships; and re-organised collaborative processes. Unfortunately, there are still gaps in the practical knowledge on how to manage the building actors to collaborate effectively in their changing roles, and todevelop and utilise BIM as an optimal ICT support of the collaboration.This paper presents a general review of the practical implications of building information modelling (BIM) based on literature review and case studies. In the next sections, based on literature and recent findings from European research project InPro, the framework for integrated collaboration and the use of BIM are analysed. Subsequently, through the observation of two ongoing pilot projects in The Netherlands, the changing roles of clients, architects, and contractors through BIM application are investigated. In conclusion, the critical success factors as well as the main barriers of a successful integrated collaboration using BIM are identified.2. Changing roles through integrated collaboration and life-cycle design approachesA hospital building project involves various actors, roles, and knowledge domains. In The Netherlands, the changing roles of clients, architects, and contractors in hospital building projects are inevitable due the new healthcare policy. Previously under the Healthcare Institutions Act (WTZi), healthcare institutions were required to obtain both a license and a building permit for new construction projects and major renovations. The permit was issued by the Dutch Ministry of Health. The healthcare institutions were then eligible to receive financial support from the government. Since 2008, new legislation on the management of hospital building projects and real estate has come into force. In this new legislation, a permit for hospital building project under the WTZi is no longer obligatory, nor obtainable (Dutch Ministry of Health, Welfare and Sport, 2008). This change allows more freedom from the state-directed policy, and respectively,allocates more responsibilities to the healthcare organisations to deal with the financing and management of their real estate. The new policy implies that the healthcare institutions are fully responsible to man age and finance their building projects and real estate. 
The government?s support for the costs of healthcare facilities will no longer be given separately, but will be included in the fee for healthcare services. This means that healthcare institutions must earn back their investment on real estate through their services. This new policy intends to stimulate sustainable innovations in the design, procurement and management of healthcare buildings, which will contribute to effective and efficient primary healthcare services.The new strategy for building projects and real estate management endorses an integrated collaboration approach. In order to assure the sustainability during construction, use, and maintenance, the end-users, facility managers, contractors and specialist contractors need to be involved in the planning and design processes. The implications of the new strategy are reflected in the changing roles of the building actors and in the new procurement method.In the traditional procurement method, the design, and its details, are developed by the architect, and design engineers. Then, the client (the healthcare institution) sends an application to the Ministry of Healthto obtain an approval on the building permit and the financial support from the government. Following this, a contractor is selected through a tender process that emphasises the search for the lowest-price bidder. During the construction period, changes often take place due to constructability problems of the design and new requirements from the client.Because of the high level of technical complexity, and moreover, decision-making complexities, the whole process from initiation until delivery of a hospital building project can take up to ten years time. After the delivery, the healthcare institution is fully in charge of the operation of the facilities. Redesigns and changes also take place in the use phase to cope with new functions and developments in the medical world.The integrated procurement pictures a new contractual relationship between the parties involved in a building project. Instead of a relationship between the client and architect for design, and the client and contractor for construction, in an integrated procurement the client only holds a contractual relationship with the main party that is responsible for both design and construction. The traditional borders between tasks and occupational groups become blurred since architects, consulting firms, contractors, subcontractors, and suppliers all stand on the supply side in the building process while the client on the demand side. Such configuration puts the architect, engineer and contractor in a very different position that influences not only their roles, but also their responsibilities, tasks and communication with the client, the users, the team and other stakeholders.The transition from traditional to integrated procurement method requires a shift of mindset of the parties on both the demand and supply sides. It is essential for the client and contractor to have a fair and open collaboration in which both can optimally use their competencies. The effectiveness of integrated collaboration is also determined by the client?s capacity and strategy to organize innovative tendering procedures.A new challenge emerges in case of positioning an architect in a partnership with the contractor instead of with the client. 
In case of the architect enters a partnership with the contractor, an important issues is how to ensure the realisation of the architectural values as well as innovative engineering through an efficient construction process. In another case, the architect can stand at the client?s side in a strategic advisory role instead of being the designer. In this case, the architect?s responsibility is translating client?s requirements and wishes into the architectural values to be included in the design specification, and evaluating the contractor?s proposal against this. In any of this new role, the architect holds the responsibilities as stakeholder interest facilitator, custodian of customer value and custodian of design models.The transition from traditional to integrated procurement method also brings consequences in the payment schemes. In the traditional building process, the honorarium for the architect is usually based on a percentage of the project costs; this may simply mean that the more expensive the building is, the higher the honorarium will be. The engineer receives the honorarium based on the complexity of the design and the intensity of the assignment. A highly complex building, which takes a number of redesigns, is usually favourable for the engineers in terms of honorarium. A traditional contractor usually receives the commission based on the tender to construct the building at the lowest price by meeting the minimum specifications given by the client. Extra work due to modifications is charged separately to the client. After the delivery, the contractor is no longer responsible for the long-term use of the building. In the traditional procurement method, all risks are placed with theclient.In integrated procurement method, the payment is based on the achieved building performance; thus, the payment is non-adversarial. Since the architect, engineer and contractor have a wider responsibility on the quality of the design and the building, the payment is linked to a measurement system of the functional and technical performance of the building over a certain period of time. The honorarium becomes an incentive to achieve the optimal quality. If the building actors succeed to deliver a higher added-value thatexceed the minimum client?s requirements, they will receive a bonus in accordance to the client?s extra gain. The level of transparency is also improved. Open book accounting is an excellent instrument provided that the stakeholders agree on the information to be shared and to its level of detail (InPro, 2009).Next to the adoption of integrated procurement method, the new real estate strategy for hospital building projects addresses an innovative product development and life-cycle design approaches. A sustainable business case for the investment and exploitation of hospital buildings relies on dynamic life-cycle management that includes considerations and analysis of the market development over time next to the building life-cycle costs (investment/initial cost, operational cost, and logistic cost). Compared to the conventional life-cycle costing method, the dynamic life-cycle management encompasses a shift from focusing only on minimizing the costs to focusing on maximizing the total benefit that can be gained. 
One of the determining factors for a successful implementation of dynamic life-cycle management is the sustainable design of the building and building components, which means that the design carriessufficient flexibility to accommodate possible changes in the long term (Prins, 1992).Designing based on the principles of life-cycle management affects the role of the architect, as he needs to be well informed about the usage scenarios and related financial arrangements, the changing social and physical environments, and new technologies. Design needs to integrate people activities and business strategies over time. In this context, the architect is required to align the design strategies with the organisational, local and global policies on finance, business operations, health and safety, environment, etc.The combination of process and product innovation, and the changing roles of the building actors can be accommodated by integrated project delivery or IPD (AIA California Council, 2007). IPD is an approach that integrates people, systems, business structures and practices into a process that collaboratively harnesses the talents and insights of all participants to reduce waste and optimize efficiency through all phases of design, fabrication and construction. IPD principles can be applied to a variety of contractual arrangements. IPD teams will usually include members well beyond the basic triad of client, architect, and contractor. At a minimum, though, an Integrated Project should include a tight collaboration between the client, the architect, and the main contractor ultimately responsible for construction of the project, from the early design until the project handover. The key to a successful IPD is assembling a team that is committed to collaborative processes and is capable of working together effectively. IPD is built on collaboration. As a result, it can only be successful if the participants share and apply common values and goals.3. Changing roles through BIM applicationBuilding information model (BIM) comprises ICT frameworks and tools that can support the integrated collaboration based on life-cycle design approach. BIM is a digital representation of physical and functional characteristics of a facility. As such it serves as a shared knowledge resource for information about a facility forming a reliable basis for decisions during its lifecycle from inception onward (National Institute of Building Sciences NIBS, 2007). BIM facilitates time and place independent collaborative working. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of that stakeholder. BIM in its ultimate form, as a shared digital representation founded on open standards for interoperability, can become a virtual information model to be handed from the design team to the contractor and subcontractors and then to the client.BIM is not the same as the earlier known computer aided design (CAD). BIM goes further than an application to generate digital (2D or 3D) drawings. BIM is an integrated model in which all process and product information is combined, stored, elaborated, and interactively distributed to all relevant building actors. As a central model for all involved actors throughout the project lifecycle, BIM develops andevolves as the project progresses. 
Using BIM, the proposed design and engineering solutions can be measured against the client?s requirements and expected building performance. The functionalities of BIM to support the design process extend to multidimensional (nD), including: three-dimensional visualisation and detailing, clash detection, material schedule, planning, costestimate, production and logistic information, and as-built documents. During the construction process, BIM can support the communication between the building site, the factory and the design office– which is crucial for an effective and efficient prefabrication and assembly processes as well as to prevent or solve problems related to unforeseen errors or modifications. When the building is in use, BIM can be used in combination with the intelligent building systems to provide and maintain up-to-date information of the building performance, including the life-cycle cost.To unleash the full potential of more efficient information exchange in the AEC/FM industry in collaborative working using BIM, both high quality open international standards and high quality implementations of these standards must be in place. The IFC open standard is generally agreed to be of high quality and is widely implemented in software. Unfortunately, the certification process allows poor quality implementations to be certified and essentially renders the certified software useless for any practical usage with IFC. IFC compliant BIM is actually used less than manual drafting for architects and contractors, and show about the same usage for engineers. A recent survey shows that CAD (as a closed-system) is still the major form of technique used in design work (over 60 per cent) while BIM is used in around 20 percent of projects for architects and in around 10 per cent of projects for engineers and contractors.The application of BIM to support an optimal cross-disciplinary and cross-phase collaboration opens a new dimension in the roles and relationships between the building actors. Several most relevant issues are: the new role of a model manager; the agreement on the access right and IntellectualProperty Right (IPR); the liability and payment arrangement according to the type of contract and in relation to the integrated procurement; and the use of open international standards.Collaborative working using BIM demands a new expert role of a model manager who possesses ICT as well as construction process know-how (InPro, 2009). The model manager deals with the system as well as with the actors. He provides and maintains technological solutions required for BIM functionalities, manages the information flow, and improves the ICT skills of the stakeholders. 
The model manager does not take decisions on design and engineering solutions, nor the organisational processes, but his roles in the chain of decision making are focused on:the development of BIM, the definition of the structure and detail level of the model, and the deployment of relevant BIM tools, such as for models checking, merging, and clash detections;the contribution to collaboration methods, especially decision making and communication protocols, task planning, and risk management;and the management of information, in terms of data flow and storage, identification of communication errors, and decision or process (re-)tracking.Regarding the legal and organisational issues, one of the actual questions is: “In what way does the intellectual property right (IPR) in collaborative working using BIM differ from the IPR in a traditional teamwork?”. In terms of combine d work, the IPR of each element is at tached to its creator. Although it seems to be a fully integrated design, BIM actually resulted from a combination of works/elements; for instance: the outline of the building design, is created by the architect, the design for theelectrical system, is created by the electrical contractor, etc. Thus, in case of BIM as a combined work, the IPR is similar to traditional teamwork. Working with BIM with authorship registration functionalities may actually make it easier to keep track of the IPR.How does collaborative working, using BIM, effect the contractual relationship? On the one hand,collaborative working using BIM does not necessarily change the liability position in the contract nor does it obligate an alliance contract. The General Principles of BIM A ddendum confirms: …This does not effectuate or require a restructuring of contractual relationships or shifting of risks between or among the Project Participants other than as specifically required per the Protocol Addendum and its Attachments? (ConsensusDOCS, 2008). On the other hand, changes in terms of payment schemes can be anticipated. Collaborative processes using BIM will lead to the shifting of activities from to the early design phase. Much, if not all, activities in the detailed engineering and specification phase will be done in the earlier phases. It means that significant payment for the engineering phase, which may count up to 40 per cent of the design cost, can no longer be expected. As engineering work is done concurrently with the design, a new proportion of the payment in the early design phase is necessary.4. Review of ongoing hospital building projects using BIMIn The Netherlands, the changing roles in hospital building projects are part of the strategy, which aims at achieving a sustainable real estate in response to the changing healthcare policy. Referring to literature and previous research, the main factors that influence the success of the changing roles can be concluded as: the implementation of an integrated procurementmethod and a life-cycle design approach for a sustainable collaborative process; the agreement on the BIM structure and the intellectual rights; and the integration of the role of a model manager. The preceding sections have discussed the conceptual thinking on how to deal with these factors effectively. 
This current section observes two actual projects and compares the actual practice with the conceptual view respectively.The main issues, which are observed in the case studies, are: the selected procurement method and the roles of the involved parties within this method;the implementation of the life-cycle design approach;the type, structure, and functionalities of BIM used in the project;the openness in data sharing and transfer of the model, and the intended use of BIM in the future; and the roles and tasks of the model manager.The pilot experience of hospital building projects using BIM in the Netherlands can be observed at University Medical Centre St Radboud (further referred as UMC) and Maxima Medical Centre (further referred as MMC). At UMC, the new building project for the Faculty of Dentistry in the city of Nijmegen has been dedicated as a BIM pilot project. At MMC, BIM is used in designing new buildings for Medical Simulation and Mother-and-Child Centre in the city of Veldhoven.The first case is a project at the University Medical Centre (UMC) St Radboud. UMC is more than just a hospital. UMC combines medical services, education and research. More than 8500 staff and 3000 students work at UMC. As a part of the innovative real estate strategy, UMC has considered to use BIM for its building projects. The new development of the Faculty ofDentistry and the surrounding buildings on the Kapittelweg in Nijmegen has been chosen as a pilot project to gather practical knowledge and experience on collaborative processes with BIM support.The main ambition to be achieved through the use of BIM in the building projects at UMC can be summarised as follows: using 3D visualisation to enhance the coordination and communication among the building actors, and the user participation in design;integrating the architectural design with structural analysis, energy analysis, cost estimation, and planning;interactively evaluating the design solutions against the programme of requirements and specifications;reducing redesign/remake costs through clash detection during the design process; andoptimising the management of the facility through the registration of medical installations andequipments, fixed and flexible furniture, product and output specifications, and operational data.The second case is a project at the Maxima Medical Centre (MMC). MMC is a large hospital resulted from a merger between the Diaconessenhuis in Eindhoven and St Joseph Hospital in Veldhoven. Annually the 3,400 staff of MMC provides medical services to more than 450,000 visitors and patients. A large-scaled extension project of the hospital in Veldhoven is a part of its real estate strategy. A medical simulation centre and a women-and-children medical centre are among the most important new facilities within this extension project. The design has been developed using 3D modelling with several functionalities of BIM.The findings from both cases and the analysis are as follows.Both UMC and MMC opted for a traditional procurement method in which the client directly contracted an architect, a structural engineer, and a mechanical, electrical and plumbing (MEP) consultant in the design team. Once the design and detailed specifications are finished, a tender procedure will follow to select a contractor. Despite the choice for this traditional method, many attempts have been made for a closer and more effective multidisciplinary collaboration. 
UMC dedicated a relatively long preparation phase with the architect, structural engineer and MEP consultant before the design commenced. This preparation phase was aimed at creating a common vision on the optimal way for collaboration using BIM as an ICT support. Some results of this preparation phase are: a document that defines the common ambition for the project and the collaborative working process and a semi-formal agreement that states the commitment of the building actors for collaboration. Other than UMC, MMC selected an architecture firm with an in-house engineering department. Thus, the collaboration between the architect and structural engineer can take place within the same firm using the same software application.Regarding the life-cycle design approach, the main attention is given on life-cycle costs, maintenance needs, and facility management. Using BIM, both hospitals intend to get a much better insight in these aspects over the life-cycle period. The life-cycle sustainability criteria are included in the assignments for the design teams. Multidisciplinary designers and engineers are asked to collaborate more closely and to interact with the end-users to address life-cycle requirements. However, ensuring the building actors to engage in an integrated collaboration to generate sustainable design solutions that meet the life-cycle。

Graduation Design Foreign Literature Translation (Original + Translation)


Environmental problems caused by Istanbul subway excavation and suggestions for remediation (伊斯坦布尔地铁开挖引起的环境问题及补救建议)
Ibrahim Ocak
Abstract: Many environmental problems caused by subway excavations have inevitably become an important point in city life. These problems can be categorized as transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust and mud, and lack of supplies. Although these problems cause many difficulties, the most pressing for a big city like Istanbul is excavation, since the other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway and disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length; over 200 km are to be constructed in the near future. The amount of material excavated from ongoing construction projects covers approximately 12 million m3. In this study, the problems caused by subway excavation, primarily the problem of excavation waste (EW), are analyzed and suggestions for remediation are offered.

Graduation Design (Thesis) Foreign Literature Translation [Template]


Guangzhou College of South China University of Technology, undergraduate graduation design (thesis) translation
English title: Review of Vibration Analysis Methods for Gearbox Diagnostics and Prognostics
Chinese title: 对变速箱振动分析的诊断和预测方法综述 (A Review of Vibration Analysis Methods for Gearbox Diagnostics and Prognostics)
School: School of Automotive Engineering
Major and class: Vehicle Engineering, Class 7
Student name: Liu Jiaxian
Student ID: 201130085184
Supervisor: Li Liping
Date: March 15, 2015
Source of the English original: Proceedings of the 54th Meeting of the Society for Machinery Failure Prevention Technology, Virginia Beach, VA, May 1-4, 2000, pp. 623-634
Translation grade:
Supervisor (group leader) signature:
Translation:
Introduction
Feature extraction techniques are described in the literature; however, most papers seem to gloss over the specific preprocessing functions that are required.

Some papers do not provide enough detail to reproduce their results, and there is no comprehensive comparison of traditional features on transitional gearbox data.

Commonly used terms, such as "residual signal", refer to different techniques in different papers. An attempt is made to define the terms commonly used in the condition-based maintenance community and to establish the specific preprocessing required for each feature.

The focus of this paper is on the features used for gear fault detection.

The features are divided into five different groups based on the preprocessing they require.

The first part of the paper provides an overview of the preprocessing flow and the processing scheme in which each feature is computed.

In the next section, on feature extraction techniques, each feature is discussed in more detail.

The final section gives a brief overview of the Pennsylvania State University Army Research Laboratory CBM toolbox used for gear fault diagnosis.

Overview of Feature Extraction
Many types of defects or damage increase machinery vibration levels.

These vibration levels are then converted by accelerometers into electrical signals for data measurement.

In principle, information about the health of the monitored machine is contained in this vibration signature.

Therefore, a new or current vibration signature can be compared with previous signatures to determine whether the component is behaving normally or showing signs of failure.

In practice, such a comparison does not work well.

Because of large variations, direct comparison of signatures is difficult.

Instead, a more useful technique involving the extraction of features from the vibration signature data can be used.
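As a concrete, hedged illustration of such feature extraction (a generic sketch, not code from the CBM toolbox mentioned above), the fragment below computes two widely used condition-indicator features, RMS and kurtosis, from a sampled acceleration signal:

```java
// Generic vibration-feature sketch: RMS and kurtosis of a sampled signal.
// Purely illustrative; not taken from the Penn State/ARL CBM toolbox.
public final class VibrationFeatures {

    static double rms(double[] x) {
        double sumSq = 0.0;
        for (double v : x) sumSq += v * v;
        return Math.sqrt(sumSq / x.length);
    }

    static double kurtosis(double[] x) {
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= x.length;
        double m2 = 0.0, m4 = 0.0;
        for (double v : x) {
            double d = v - mean;
            m2 += d * d;
            m4 += d * d * d * d;
        }
        m2 /= x.length;
        m4 /= x.length;
        return m4 / (m2 * m2);                // normalized 4th moment; about 3.0 for a Gaussian signal
    }

    public static void main(String[] args) {
        double[] signal = {0.1, -0.2, 0.15, -0.05, 1.2, -0.1}; // toy samples with one spike
        System.out.printf("RMS = %.3f, kurtosis = %.3f%n", rms(signal), kurtosis(signal));
    }
}
```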

Software Engineering Graduation Design Foreign Literature Translation


Software engineering graduation design foreign literature translation (about 1000 words). This document translates foreign literature for the software engineering graduation design and can serve as a reference for students concerned.

外文文献1: Software Engineering Practices in Industry: A Case StudyAbstractThis paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company’s software development process, practices, and techniques that lead to the production of quality software. The software engineering practices were identified through a survey questionnaire and a series of interviews with the company’s software development managers, software engineers, and testers. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.IntroductionSoftware engineering is the discipline of designing, developing, testing, and maintaining software products. There are a number of software engineering practices that are used in industry to ensure that software products are of high quality, reliable, and maintainable. These practices include software development processes, software configuration management, software testing, requirements engineering, and project management. Software engineeringpractices have evolved over the years as a result of the growth of the software industry and the increasing demands for high-quality software products. The software industry has developed a number of software development models, such as the Capability Maturity Model Integration (CMMI), which provides a framework for software development organizations to improve their software development processes and practices.This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The objective of the study was to identify the software engineering practices used by the company and to investigate how these practices contribute to the production of quality software.Research MethodologyThe case study was conducted with a large US software development company that produces software for aerospace and medical applications. The study was conducted over a period of six months, during which a survey questionnaire was administered to the company’s software development managers, software engineers, and testers. In addition, a series of interviews were conducted with the company’s software development managers, software engineers, and testers to gain a deeper understanding of the software engineering practices used by the company. The survey questionnaire and the interview questions were designed to investigate the software engineering practices used by the company in relation to software development processes, software configuration management, software testing, requirements engineering, and project management.FindingsThe research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). 
The company’s software development process consists of five levels of maturity, starting with an ad hoc process (Level 1) and progressing to a fully defined and optimized process (Level 5). The company has achieved Level 3 maturity in its software development process. The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The software engineering practices used by the company include:Software Configuration Management (SCM): The company uses SCM tools to manage software code, documentation, and other artifacts. The company follows a branching and merging strategy to manage changes to the software code.Software Testing: The company has adopted a formal testing approach that includes unit testing, integration testing, system testing, and acceptance testing. The testing process is automated where possible, and the company uses a range of testing tools.Requirements Engineering: The company has a well-defined requirements engineering process, which includes requirements capture, analysis, specification, and validation. The company uses a range of tools, including use case modeling, to capture and analyze requirements.Project Management: The company has a well-defined project management process that includes project planning, scheduling, monitoring, and control. The company uses a range of tools to support project management, including project management software, which is used to track project progress.ConclusionThis paper has reported a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company’s software development process,practices, and techniques that lead to the production of quality software. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company uses a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.外文文献2: Agile Software Development: Principles, Patterns, and PracticesAbstractAgile software development is a set of values, principles, and practices for developing software. The Agile Manifesto represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. This paper presents an overview of agile software development, including its principles, patterns, and practices. The paper also discusses the benefits and challenges of agile software development.IntroductionAgile software development is a set of values, principles, and practices for developing software. Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. 
Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases.Agile Software Development PrinciplesAgile software development is based on a set of principles. These principles are:Customer satisfaction through early and continuous delivery of useful software.Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.Deliver working software frequently, with a preference for the shorter timescale.Collaboration between the business stakeholders and developers throughout the project.Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.Working software is the primary measure of progress.Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.Continuous attention to technical excellence and good design enhances agility.Simplicity – the art of maximizing the amount of work not done – is essential.The best architectures, requirements, and designs emerge from self-organizing teams.Agile Software Development PatternsAgile software development patterns are reusable solutions to common software development problems. The following are some typical agile software development patterns:The Single Responsibility Principle (SRP)The Open/Closed Principle (OCP)The Liskov Substitution Principle (LSP)The Dependency Inversion Principle (DIP)The Interface Segregation Principle (ISP)The Model-View-Controller (MVC) PatternThe Observer PatternThe Strategy PatternThe Factory Method PatternAgile Software Development PracticesAgile software development practices are a set ofactivities and techniques used in agile software development. The following are some typical agile software development practices:Iterative DevelopmentTest-Driven Development (TDD)Continuous IntegrationRefactoringPair ProgrammingAgile Software Development Benefits and ChallengesAgile software development has many benefits, including:Increased customer satisfactionIncreased qualityIncreased productivityIncreased flexibilityIncreased visibilityReduced riskAgile software development also has some challenges, including:Requires discipline and trainingRequires an experienced teamRequires good communicationRequires a supportive management cultureConclusionAgile software development is a set of values, principles, and practices for developing software. Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. Agile software development has many benefits, including increased customer satisfaction, increased quality, increased productivity, increased flexibility, increased visibility, and reduced risk. Agile software development also has some challenges, including the requirement for discipline and training, the requirement for an experienced team, the requirement for good communication, and the requirement for a supportive management culture.。
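To ground one of the patterns listed above, here is a minimal Java sketch of the Strategy pattern; the interface and class names are invented for illustration and are not taken from the source text.

```java
import java.util.List;

// Minimal Strategy pattern sketch: the pricing algorithm varies independently
// of the code that uses it. All names are illustrative.
public class StrategyExample {

    interface DiscountStrategy {
        double apply(double price);
    }

    static class NoDiscount implements DiscountStrategy {
        public double apply(double price) { return price; }
    }

    static class PercentageDiscount implements DiscountStrategy {
        private final double percent;
        PercentageDiscount(double percent) { this.percent = percent; }
        public double apply(double price) { return price * (1.0 - percent / 100.0); }
    }

    static double total(List<Double> prices, DiscountStrategy strategy) {
        return prices.stream().mapToDouble(strategy::apply).sum();
    }

    public static void main(String[] args) {
        List<Double> cart = List.of(10.0, 25.0);
        System.out.println(total(cart, new NoDiscount()));           // 35.0
        System.out.println(total(cart, new PercentageDiscount(10))); // 31.5
    }
}
```

The caller depends only on the DiscountStrategy interface, so new pricing rules can be added without changing the code that computes totals, which is the point the Open/Closed Principle in the list above makes.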

Undergraduate Graduation Design Foreign Literature and Translation


Undergraduate graduation design foreign literature and translation
Title of the literature: Structural Design of Reinforced Concrete Sloping Roof
Source: the Internet
Date of publication: January 2007
School (department): xxx
Major: xxx
Class: xxx
Name: xxx
Student ID: 2007011287
Supervisor: xxx
Date of translation: xxx

Foreign literature: Structural Design of Reinforced Concrete Sloping Roof
Abstract: This paper points out common mistakes and problems in the actual engineering design of cast-in-place reinforced concrete sloping roofs, especially in common residential structures. It presents layout and design concepts that use folded-plate and arch/shell action in order to reduce or eliminate roof beams and columns, so as to lower costs and expand the usable attic space for the user. The paper also discusses designs that require openings and windows in the roof and other complex forms. The corresponding simple approximate calculation methods and the structural detailing are also described.
Keywords: sloping roof; folded plate; along-plane load; vertical-plane load

1. Introduction
In recent years, reinforced concrete sloping roofs have become very common, and a correct design method urgently needs to be established. The aim is to eliminate or reduce roof beams and columns, to obtain large rooms and to keep the roof plate "clean". This not only benefits the structural discipline itself but also opens a new field for the architectural design profession, and it ultimately benefits users and property developers, so it is of far-reaching significance. In common engineering practice, designers often treat the sloping roof in the calculation model as beams on the vertical projection plane of the roof, or take the horizontal ridges and sloping hips as a frame and add unnecessary beams and inclined columns. In fact, for a generally rectangular plan, the stress state of double-slope and multi-slope roofs is similar to that of arches and shells. The horizontal and inclined ridges form folded plates shaped like the letter "A"; whether or not beams and columns are provided, the deformation pattern of the ridge lines is fundamentally different from that of a frame. All these methods therefore produce calculation results that differ from the real internal forces of the structure. During construction, the formwork at the ridges and at the plate intersections has complex shapes and bars overlap at many angles, so installation and casting are very difficult. Such details are common in construction and are typically superfluous. Some scholars have used elastic shell theory to analyze the internal forces and deformation of folded-plate roofs, showing that under vertical load the supports around the perimeter undergo neither horizontal nor vertical displacement, which to some extent reflects the arch and shell character of the roof. However, the assumed boundary conditions differ greatly from the actual situation in general engineering, in which the eaves can settle vertically and the bottom edge is in tension, so those results are not directly applicable to general engineering design.

2. Outlines of Methods
For the most frequently used spans, the ridge beam is cancelled and haunches are usually not added, but grid beams or beams over the windows need to be provided in the frame around the perimeter under the eaves. For a long rectangular plan with multiple rooms and columns, the architect arranges partition walls between each pair of columns in the transverse direction, and tie beams of the same thickness and width are concealed in these walls across the depth of the building. Above the tie beams, sloping beams attached to the two-slope roof plate are provided where smaller spans are expected.
For a residence, if the architect has no special requirements, it is then possible to have no beams exposed under the ceiling inside the dwelling, as shown in Figure 1. Like truss (lattice) theory, this approach emphasizes making use of axial force components, but it differs from a truss in that the load is distributed not along discrete bars alone but along the whole plane of each plate. In general each plate has the load-bearing character of a folded plate: for the components of the roof gravity, wind and earthquake loads acting in the plane of a plate, the plate is equivalent to a thin-walled beam with strengthened flanges. For vertical load bearing, the horizontal components of the edge reactions of these thin-walled beams balance the thrust produced by the arch-shell action. For the load components perpendicular to its plane, each plate is equivalent to a slab supported, with partial fixity, on several edges. The key design feature of the method is to establish and complete the arch and folded-plate system of the sloping roof consciously, and to balance the horizontal thrust of the slopes with a minimum number of horizontal tie beams. The calculation can be done by hand or by computer; this paper concentrates on the hand method.
The hand method takes each single-pitch plate of the roof as a free body. Through an approximate overall analysis it simplifies and determines the boundary conditions of each plate, solves the load effects acting in the plane of the plate and perpendicular to it, superposes the various internal forces linearly under the assumption that plane sections remain plane, checks stability, and finally designs the combined reinforcement. The method aims to be practical, using calculation steps familiar to most engineers to handle a relatively complex problem. It is intended for frame structures; with small modifications it also applies to masonry or frame-shear wall structures. Arch structures generally have good seismic performance and, if designed properly, so does the sloping roof. In this paper the pseudo-static method is used to analyse earthquake effects.

3. Analysis and Design for the In-Plane Effects of Loads
Consider first the cross profile of Figure 1 and analyse the equal-width rectangular parts of the long trapezoidal panels 1 and 2. For an approximate calculation the in-plane load is taken as constant along the plane, just as a rectangular plate supported on four edges can be simplified to a one-way slab; a narrow strip of unit width running along the slope is taken as the analysis object and modelled as the hinged arch shown in Figure 2. Figures 2(a)–(c) show this strip model for the gravity, wind and earthquake cases, and Figures 3(a)–(c) show the corresponding models for the end triangles discussed below.
In Figure 2 the vertical link at the right support represents the supporting role of the roof beam, while the link along the slope represents the reaction of the plate itself acting as a thin-walled beam; this link is virtual and only approximately equivalent. The two support reactions are what we wish to calculate. Because in the real structure the total load is transferred through the two plates and the roof beams to the end columns, the two reaction values can be interpreted as the in-plane load carried by the plates and the vertical load carried by the roof beams. Expressions for the two reactions under the various load cases are given below; since the model is of unit width, the results are line load distributions, except where concentrated masses occur in the house. Both reactions are denoted N; the subscripts s and b indicate the in-plane action on the roof plate and the vertical action on the roof beam respectively, while g, w and e indicate gravity, wind pressure and horizontal earthquake respectively.
The subscripts d and c indicate distributed and concentrated loads or effects respectively. In the formulas h is the thickness of each plate, g is the gravitational acceleration, a is the horizontal seismic acceleration of the roof, and Wk is the standard value of the wind pressure; m with a numerical subscript is the mass per unit area of the numbered slope, and m with a letter subscript is the mass at a particular location. For two symmetrical slopes the formulas become more concise.
Figure 2(a) represents the vertical gravity load case and gives Eqs. (1)–(4) for the two reactions.
Figure 2(b) represents the wind load case and gives Eqs. (5) and (6).
Figure 2(c) represents the horizontal earthquake case and gives Eqs. (7)–(10).
When a vertical seismic calculation is required by the seismic design parameters, the formulas are generally similar to Eqs. (1)–(4); one only needs to replace the gravitational acceleration g by the vertical seismic acceleration a. The formulas above apply to the right-hand support in Figure 2, and also to the left-hand support when the data of the two plates are exchanged.
For the end triangles of a multi-pitch roof, and for the sake of a simplified approximate calculation, the two line load distributions are assumed to be produced only by the several load effects on the roof plates. Section II-II in the figure is taken to analyse the end triangle of the long trapezoidal plate 2; assuming the structure to be approximately symmetric, half of it is taken to establish the model (Figure 3). Because the plane of the adjoining end triangular plate 3 has a large lateral stiffness, the left support of the model is assumed not to move in the direction along the central member. The out-of-plane stiffness of the central plate is small, and under the roughly symmetric gravity load only a vertical movement of the midpoint is possible, so the model uses a connection of two parallel links there. Wind loading and the earthquake action on the two slopes are roughly antisymmetric, so for these cases the central support of the plate model is a fixed hinge, which allows rotation and transfers the lateral force to the edge beam of plate 3.
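Before continuing with the end triangles, note that the in-plane reactions of the Figure 2 strip ultimately show up as an outward horizontal thrust at the eaves, which the horizontal tie beams of Sections 2 and 4 must resist. The sketch below is only a rough, hypothetical equilibrium check using the classical three-hinged arch result H = w·L²/(8·f); both the formula choice and the numbers are assumptions for illustration, not the paper's Eqs. (1)–(10).

```python
import math

def arch_thrust(w_kN_per_m: float, span_m: float, rise_m: float) -> float:
    """Horizontal thrust of a three-hinged arch/gable under uniform vertical load.

    H = M_simple(crown) / f = (w * L**2 / 8) / f
    """
    return w_kN_per_m * span_m**2 / (8.0 * rise_m)

# Illustrative numbers (assumed): 12 m span, 35 deg pitch, 20 kN/m line load.
span = 12.0
pitch = math.radians(35.0)
rise = span / 2.0 * math.tan(pitch)   # rise of the ridge above the eaves
w = 20.0                              # vertical line load on the roof strip

H = arch_thrust(w, span, rise)
print(f"rise f = {rise:.2f} m, horizontal thrust H = {H:.1f} kN per unit-width strip")
# The tie beam at eaves level must carry roughly this tension per strip,
# which is why the method balances the slope thrust with horizontal tie beams.
```

The real roof plate is a folded plate rather than a bare arch, so this is only the order of magnitude of the equilibrium the tie has to satisfy.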
Over the triangular area of plate 2, the loads carried by the vertical edge beam at the eaves and by the plate itself in its plane are functions of the variable x shown in Figure 1. Let the distance from section II to the end be x0; the horizontal length of the slope there is then y0 = x0·L2/L3. Eqs. (11)–(14) give, for an arbitrary position x, the two load distributions of the end triangle under vertical gravity load, where h3 is the thickness of the vertical slice of plate 3.
For wind load and earthquake action the approximate models of Figures 3(b) and 3(c) could be solved by the methods of structural mechanics, but the process is cumbersome and its accuracy limited; the wind and earthquake effects are not important compared with the gravity load effect, and the triangular area is small. As an approximate calculation it is therefore more convenient, and not noticeably wasteful, to use the results of the rectangular part of the slope directly. The two load distributions of plate 3 are solved in the same way as for the long trapezoidal plate, simply interchanging x with y and L2 with L3 in Figure 1; the relevant profile is section III-III in Figure 1.
Figure 4 (panels (a) and (b)) shows the developed elevation of the roof slopes of Figure 1 and the combined load values used to analyse the inclined plates and the reactions on the supporting columns. In the figure the inclined edges of the roof act as strengthened chords; together with the slope, which acts like the web members of a truss, and the bottom edge, they form a hidden truss system, while the long rectangular plate can be regarded as part of a thin-walled beam, which can in turn also be regarded as a truss. The system formed in the plane of the roof boarding is therefore called a "thin-walled beam-truss" system; in concrete theory there is no sharp divide between a truss and a beam. There is no need to compute exact internal forces and support reactions for such a combined system: on the one hand, a multi-span structure with large bending stiffness is sensitive to uneven support settlement and must keep a safety reserve in any case; on the other hand, the section is deep, so increasing the reinforcement to increase the capacity has little effect on cost. The specific procedure is as follows: a single-span slope is calculated as a simply supported beam; for a multi-span slope the bending moments, shears and edge support reactions are taken as the envelope of the possible maxima — the span moment is calculated as for a simply supported beam, while the shear, the negative moment and the reaction at an intermediate support are calculated by treating the member as continuous or two-hinged over the two adjacent spans and taking the larger value, and the shear at a simply supported end is calculated as for a simply supported beam. In this method the safety margin is not uniform from one internal-force location to another, so appropriate adjustments should be made in the later stages of the calculation. Whether for the triangular or the rectangular part of a plate, the in-plane bending reinforcement can be obtained by taking moments of the plate forces about a point at the bottom of the plate and placing the resulting tension reinforcement at the eaves or the ridge. The author believes it is not necessary to control this reinforcement by the minimum reinforcement ratio for beams. The edge members of the triangular plates, which are equivalent to struts along the slope, can carry the shear as a whole.
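The "take the larger of the simple-beam and continuous-beam values" rule just described is easy to script. The sketch below is a minimal illustration for a two-span member under uniform load, using only standard beam formulas (simply supported: M = wL²/8, V = wL/2; two equal continuous spans: support moment wL²/8, interior reaction 1.25wL, shear beside the interior support 0.625wL); the numbers are assumptions, not values from the paper.

```python
def envelope_two_span(w: float, L: float) -> dict:
    """Design envelope for one ramp strip over two equal spans, per the rule in the text:
    span moment from the simply supported case, support values from the continuous case,
    taking the larger where both apply."""
    # simply supported, one span
    M_span_simple = w * L**2 / 8.0
    V_end_simple = w * L / 2.0
    # two equal spans, continuous over the middle support
    M_support_cont = w * L**2 / 8.0      # hogging moment magnitude at the middle support
    V_int_cont = 0.625 * w * L           # shear just beside the middle support
    R_int_cont = 1.25 * w * L            # middle support reaction
    return {
        "M_span": M_span_simple,                    # sagging design moment
        "M_support": M_support_cont,                # hogging design moment
        "V_design": max(V_end_simple, V_int_cont),  # design shear
        "R_interior": R_int_cont,                   # interior support reaction
        "V_simple_end": V_end_simple,               # shear at a simply supported end
    }

# Illustrative numbers only (assumed): w = 18.6 kN/m, L = 5.5 m
print(envelope_two_span(18.6, 5.5))
```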
Where the end region is considered weak, the reinforcement of the roof beam below it may be increased appropriately. If the rectangular part of the thin-walled member requires shear stirrups, they should be superposed on those of the beam; in general there is no need to add strengthening reinforcement deliberately at the positions of the imaginary web members.

4. Calculation and Design of the Tie Beams and Roof Beams
The calculated support reactions and their horizontal and vertical components are marked at the columns in Figure 1; the horizontal component is the total force multiplied by the cosine of the slope angle. Taking column A as an example, in R_A2 the first subscript is the column label and the second indicates that the force is generated by panel 2. The horizontal components at the ends are balanced by the eaves beam under triangle 3, while the horizontal components of the intermediate support reactions are balanced by the horizontal tie beams running in the depth direction; the tie beam and the sloping beams above it then form a tied arch. Because antisymmetric loads exist, the horizontal reaction components on the two sides may differ, and the tie beam tension should be taken as their average; considering the effect of uneven support settlement, the design of the horizontal tie beam should adopt the larger value.
A roof beam is generally subjected to four internal forces: first, the horizontal tension mentioned above; second, the axial force it receives as the flange when the inclined roof plate bends in its own plane; third, the bending moment and shear it carries as the edge beam of the roof slab under vertical load — since the slab is actually supported on several edges, the real force is smaller than the value N_b calculated for a one-way slab; and fourth, the internal forces from frame action under lateral load. These should be superposed linearly and the reinforcement designed for the combination. Where the load is heavy, the span large and the pitch small, the tie beam should be checked for cracking under tension, its section enlarged appropriately, and small-diameter bars used. The ends of the tie-beam bars, including those of the edge beams, should be anchored with a double bend shaped like the letter L, with a bend length of about 10d and a bend angle of 135°, the hooks being placed at the intersection of the tie beam with the column so that they enclose the vertical column bars towards the re-entrant corner.
This paper takes the model in Figure 1 as an example, ignoring the dormer windows; the four roof slopes are at an angle of 35°, and the dimensions of the roof slabs are shown in Figure 4.
The mass of the plate per unit area is 350 kg/m², the maintenance live load is 0.50 kN/m², the standard wind pressure is 0.21 kN/m² on the windward side and -0.45 kN/m² on the leeward side, and the design value of the horizontal seismic acceleration of the roof is 0.1g. The ultimate bearing capacity is calculated according to the code, considering the basic combination design values with and without earthquake action separately; comparison shows that the combination without earthquake action governs. The load calculations and analysis results for each position are given in Table 1.

Table 1. Load calculation and analysis results
Roof triangular plate 3 (D~A, B = 8.00 m), values listed as D end – span – A end, computed with Eq. (11); long trapezoidal plate 2 (A~B, L = 11.00 m and B~C, L = 12.00 m), values listed at A – span – B and B – span – C, computed with Eq. (2) (wind: Eq. (6); earthquake: Eq. (8)).
Line loads N (kN/m):
- Permanent load, standard value (no earthquake): plate 3: 0–18.62, 18.62, 18.62–0; plate 2: 0–18.62, 18.62, 18.62, 18.62, 18.62, 18.62–0
- Live load, standard value (no earthquake): plate 3: 0–2.66, 2.66, 2.66–0; plate 2: 0–2.66, 2.66, 2.66, 2.66, 2.66, 2.66–0
- Seismic gravity load representative value: plate 3: 0–19.95, 19.95, 19.95–0; plate 2: 0–19.95, 19.95, 19.95, 19.95, 19.95, 19.95–0
- Wind load: plate 3: wind parallel to the plate, not considered; plate 2: 0–0.76, 0.76, 0.76, 0.76, 0.76, 0.76–0
- Horizontal earthquake action: plate 3: earthquake direction parallel to the plate, not considered; plate 2: 0–2.09, 2.09, 2.09, 2.09, 2.09, 2.09–0
Design internal forces and reactions:
- Moment design value M (kN·m): 151.36, 429.25, -510.84, -510.84, 510.84
- Axial force N (kN): 34.44, 97.67, 116.24, 116.24, 116.24
- Shear V (kN): 56.76, 56.76, 156.09, 212.85, 212.85, 170.28
- Vertical edge reaction R (kN): 56.76, 56.76, 156.09, 212.85, 425.70, 212.85, 170.28
- Horizontal reaction R (kN): 46.50, 46.50, 127.86, 174.35, 348.71, 174.35, 139.49
- Tie-beam tensions (kN): A~B: 46.50; A~D: 127.86; 348.71; column C and the beam between the columns in the depth direction: 139.49

5. Analysis and Design of the Sloping Roof Slab under Vertical Loads as a Plate Supported on Several Edges
A folded-plate structure has the character of "plate and frame in one": in general each pair of intersecting slopes supports the other, and the plates on the two sides of a fold line can be assumed to undergo only small rotations there while transmitting and distributing moments. Under the governing gravity load, where the geometry and loading of the two slopes are roughly symmetrical, there is no rotation at the symmetric fold line, which can therefore be treated approximately as a fixed edge of the plate. Similarly, at some distance from the eaves, negative moments also develop at interior ridges, and where long roof slabs are connected to the sloping beams and to neighbouring plates these edges can likewise be treated approximately as fixed. For antisymmetric loads such as horizontal earthquake, the flat ridge should be treated as transferring shear, but this is usually not the governing load case. The final design moments of the plate are the linear superposition of the various unfavourable combinations, and the plate is reinforced accordingly in each direction. With reference to the detailing requirements for concrete deep beams, the negative (top) reinforcement of the plate should be carried through over the whole width or over an entire strip, since these bars also serve as the distribution bars or stirrups of the deep beam. The bottom reinforcement of the plate runs perpendicular to the eaves; the negative reinforcement is provided according to its own calculated requirement, which after superposition differs from the stirrup requirement. In this situation the "stirrups" on the two faces cannot be joined into a U shape at the eaves edge; they can be bent into L shapes upwards and downwards, with the bent length equal to the plate thickness. The intersection nodes of the slopes should be strengthened appropriately.
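The line-load values in Table 1 above can be traced back to the area loads given for the worked example. The sketch below only reproduces that arithmetic as a consistency check (taking g ≈ 10 m/s², which is what the tabulated ratio 19.95/18.62 implies, and assuming a combination coefficient of 0.5 on the live load for the seismic gravity representative value); it is not part of the paper's own derivation.

```python
# Area loads from the worked example
mass_per_m2 = 350.0          # kg/m^2
live = 0.50                  # kN/m^2, maintenance live load
g = 10.0                     # m/s^2 (assumed; 9.8 gives almost the same ratio)

dead = mass_per_m2 * g / 1000.0          # kN/m^2  -> 3.50
grav_rep = dead + 0.5 * live             # kN/m^2  -> 3.75 (assumed 0.5 coefficient on live load)

print(f"dead load              = {dead:.2f} kN/m^2")
print(f"gravity representative = {grav_rep:.2f} kN/m^2")
print(f"ratio rep/dead         = {grav_rep / dead:.4f}")   # 1.0714
print(f"Table 1 ratio          = {19.95 / 18.62:.4f}")     # 1.0714 -> consistent
```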
At re-entrant corners that are not in tension, it is recommended, for simplicity of construction, to use the detailing shown in Figure 5 without haunches. To ensure that all the reinforcement is installed accurately, a few diamond-shaped stirrups with supporting bars can be added at the strengthened zones shown in the figure to form a positioning skeleton, to which the inclined plate bars of the two slopes, installed later, are tied; the designer should use three-dimensional geometry to calculate the leg lengths of the diamond stirrups accurately and to prepare the corresponding detailing drawings.

6. Calculation and Detailing of Openings and Windows in the Sloping Roof
Assume that the plate in Figure 6 has a large hole of width b and height h0, and that the in-plane bending moment and shear at the hole centre obtained from the overall analysis are M and V. A Vierendeel-type calculation is used for the region around the hole: the global moment and shear at the hole centre are shared between the limbs above and below the opening according to their section properties, the axial forces in the two limbs form a couple that resists the remaining part of the global moment, and the hole-edge moments of each limb follow from its share of the shear acting over the hole width b. Here I1, I2 and I are the moments of inertia of the upper limb, the lower limb and the combined two-limb section respectively. When the hole is not very large and lies close to the neutral axis, in most cases the reinforcement designed for the plate without the hole will, on checking, still satisfy the in-plane requirements after the hole is opened.
A typical dormer window projects from the roof surface: one face is an opening cut in the roof while the other faces are closed by concrete slabs. When the roof slab is analysed for loads normal to its surface, the dormer adds load compared with a roof slab without the window and hole. The folded-plate form of the window reduces the bending stiffness of the section compared with an unperforated roof slab, but the vertical plates parallel to the hole edge locally increase the bending stiffness. Where there is no vertical plate below the window, an upstand beam should be provided to increase the stiffness around the opening. In this way the variation of plate stiffness can be ignored for the time being, and the positive and negative moments can be calculated for a solid plate with the actual loads, dimensions and boundary conditions, the nodes being dealt with afterwards. It should be pointed out that the ideal location for the hole edge in the roof slope is near the line of contraflexure of the plate, especially on the open side of the window, because the path by which moment is transmitted across that line is cut by the opening. If the roof slab does not cantilever beyond the roof beam, the actual line of contraflexure of the plate lies close to the roof beam, and vice versa; architects should therefore take appropriate care of this when fixing the position of the dormer. When the hole is far from the line of contraflexure, the intersection of the window wall and the roof must act as a folded plate and transmit moment; compared with an unperforated plate its capacity is certainly weakened, and the node becomes a weak part. To cover errors of judgement and calculation, the two panels there can be doubly reinforced. When the hole does not extend beyond the contraflexure region, the negative reinforcement around it should be increased to maintain the overall bearing capacity of the plate. To ensure that the plate bars are placed accurately, positioning stirrups and longitudinal bars should again be used to form a skeleton similar to that of Figure 5.
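The Vierendeel-type check described above can be sketched as follows. The exact expressions of the original are not legible in this copy, so the sketch uses the standard textbook version of the method (chord axial forces as a couple M/h0, shear split in proportion to the chord moments of inertia, chord end moments equal to the chord shear times half the hole width b); treat it as an assumed, generic formulation rather than the paper's own equations, and the numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class HoleCheck:
    M: float    # global in-plane moment at the hole centre (kN*m)
    V: float    # global in-plane shear at the hole centre (kN)
    b: float    # hole width (m)
    h0: float   # distance between centroids of the chords above and below the hole (m)
    I1: float   # moment of inertia of the upper chord (m^4)
    I2: float   # moment of inertia of the lower chord (m^4)

    def chord_forces(self) -> dict:
        # Axial couple carries the global moment
        N = self.M / self.h0                       # tension in one chord, compression in the other
        # Shear split in proportion to chord stiffness (standard Vierendeel assumption)
        V1 = self.V * self.I1 / (self.I1 + self.I2)
        V2 = self.V - V1
        # Chord end moments, contraflexure assumed at the hole centre
        M1 = V1 * self.b / 2.0
        M2 = V2 * self.b / 2.0
        return {"N_chord": N, "V_upper": V1, "V_lower": V2, "M_upper": M1, "M_lower": M2}

# Illustrative (assumed) numbers only
print(HoleCheck(M=150.0, V=60.0, b=1.2, h0=1.8, I1=2.0e-3, I2=3.5e-3).chord_forces())
```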
As for the detailing around the opening, strengthening bars should be placed inside the ends of the hoops, and the bars at the corners of the opening should be given an additional anchorage length, to cover the tensile stress concentrations at the four corners of the hole.

7. Stability of the Roof Slope
In China's design code for V-shaped folded-plate structures, local instability of the flanges on the two sides is prevented by limiting their width-to-thickness ratio; this requirement comes from an analysis using the buckling theory of isotropic plates. In studying the critical state of out-of-plane instability of the flange, the boundary conditions of the outstanding leg are taken as free at the outer edge, fixed at the inner edge and hinged at the front and back edges, and the width-to-thickness ratio corresponding to the critical compressive stress is solved for a plate subjected to bending stress. When the concrete grade is C30 the limiting width-to-thickness ratio b/t is 47; the code adopts 35 to allow for stresses outside the nominal values. The elastic modulus of concrete is not linearly related to its strength grade, so if high-strength concrete is used a separate study should be made. In an actual sloping roof only the long plate spanning to the intermediate support may be subjected to out-of-plane compression at its bearing, and it is precisely there that the cast-in-place plate is cast monolithically with the attached sloping beams and the horizontal tie beam, so overturning and outward uplift displacements are not possible. The code limits the span of folded plates to 21 m, whereas the column spacing below the roof is generally much smaller; moreover, the plate cast integrally with the roof beams has changed boundary conditions, which also greatly increases its resistance to instability. At other locations, appropriate edge beams may also be provided along compressed edges perpendicular to the slope; all of these measures provide safety margins beyond the code requirements. Taking into account that the plate carries in-plane shear while the load component perpendicular to its plane causes out-of-plane effects, safety should nevertheless be judged with caution. This paper proposes that the slope thickness be not less than 1/35 of the short span, which is also consistent with design experience for ordinary restrained slabs; the concrete should be of grade C25 to C35 and the reinforcement of class I or class II.

8. Computer Calculation Method for the Local Sloping Roof Structure and the Overall ICC Analysis of the Whole Structure
Any analysis software that has inclined shell elements and bar (frame) finite elements is capable of calculating the sloping roof. Each node of a shell element has three membrane degrees of freedom and three plate-bending degrees of freedom, so both the in-plane and the out-of-plane internal forces of the plate can be analysed. However, some widely used spatial-structure finite element programs, although they provide shell models, either cannot handle inclined plates or cannot correctly handle plates that do not lie in the same plane, and their treatment of the stress state and of the combined reinforcement output is imperfect. As structures become more diverse and complex, spatial problems with inclined plates are encountered more and more often. Such software should extend its pre- and post-processing functions so as to transform the shell-element stiffness matrices and load vectors into the appropriate directions of freedom, to carry out the spatial analysis of the inclined plates, and to perform the combined reinforcement design of concrete under spatial stress states. In a fundamental sense the manual method and the finite element method are interchangeable, but their results may differ considerably.
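Before returning to the overall calculation, the stability and thickness limits quoted in Section 7 are simple enough to check by script. The sketch below is a minimal helper built only on the limits stated above (b/t ≤ 47 for C30, thickness not less than the short span divided by 35, concrete C25–C35); the example numbers are assumptions.

```python
def check_slope_plate(short_span_mm: float, thickness_mm: float,
                      flange_width_mm: float, concrete_grade: str = "C30") -> list:
    """Check the sloping-plate limits quoted in Section 7 of the text."""
    problems = []
    if concrete_grade not in ("C25", "C30", "C35"):
        problems.append(f"concrete {concrete_grade} outside the suggested C25-C35 range")
    if thickness_mm < short_span_mm / 35.0:
        problems.append(
            f"thickness {thickness_mm:.0f} mm < short span/35 = {short_span_mm / 35.0:.0f} mm")
    # Width-to-thickness limit for the compressed flange, quoted for C30 concrete
    if concrete_grade == "C30" and flange_width_mm / thickness_mm > 47.0:
        problems.append(
            f"b/t = {flange_width_mm / thickness_mm:.1f} exceeds the quoted limit of 47")
    return problems or ["all quoted limits satisfied"]

# Illustrative (assumed) numbers: 4.2 m short span, 120 mm plate, 1.5 m compressed flange
print(check_slope_plate(4200.0, 120.0, 1500.0, "C30"))
```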
As long as the roof members are laid out according to the concept of this paper, the software calculation can then be fast and precise, achieving the goal set out here. Over the height from the eaves to the ridge, the lateral stiffness of the roof as a whole drops abruptly compared with the storeys below and its mass is smaller, which is not easy to simulate in the calculation of the whole building. At the top of the building the seismic response involves higher modes — the whiplash effect — so the lateral earthquake forces may be abnormal and may also affect the storeys below. Therefore, when the roof is partly calculated by hand and the overall structure is analysed with ICC, it is proposed that the roof storey be modelled with inclined bars supporting the slopes, so as to reduce the distortion of the overall results. If software capable of handling inclined plates in space is used and the sloping roof is modelled with shell elements, the whole building can be modelled from top to bottom; the results for the top can then be used directly and the distortion of the overall results no longer arises.

10. Conclusion
1) The superposition of internal forces in the concrete slopes and in the edge beams in their different directions, the reinforcement and stability of the slopes, and the limits on openings all deserve further in-depth study related to this work. Similar typical problems occur in structural transfer storeys at the top of buildings and in the side walls of box foundations, and the research results obtained for them can be adopted here. Observation of real projects is an important method, and finite element ICC analysis will become more economical, practical and popular. As for the sloping roofs that have already been completed, whatever assumptions their designers subjectively adopted, they work according to their actual structural behaviour.

毕业设计文献翻译篇一: 毕业设计外文文献翻译
专业 学生姓名 班级 学号 指导教师 优集学院
外文资料名称: Knowledge-Based Engineering Design Methodology
外文资料出处: Int. J. Engng Ed. Vol. 16, No. 1
附件: 1. 外文资料翻译译文
基于知识工程(KBE)设计方法
D. E. CALKINS
1. 背景
复杂系统的发展需要很多工程和管理方面的知识、决策，它要满足很多竞争性的要求。

设计被认为是决定产品最终形态、成本、可靠性、市场接受程度的首要因素。

高级别的工程设计和分析过程(概念设计阶段)特别重要,因为大多数的生命周期成本和整体系统的质量都在这个阶段。

产品成本的压缩最可能发生在产品设计的最初阶段。

整个生命周期阶段大约百分之七十的成本花费在概念设计阶段结束时,缩短设计周期的关键是缩短概念设计阶段,这样同时也减少了工程的重新设计工作量。

工程权衡过程中采用良好的估计和非正式的启发进行概念设计。

传统CAD工具对概念设计阶段的支持非常有限。

有必要,进行涉及多个学科的交流合作来快速进行设计分析(包括性能,成本,可靠性等)。

最后,必须能够管理大量的特定领域的知识。

解决方案是在概念设计阶段包含进更过资源,通过消除重新设计来缩短整个产品的时间。

所有这些因素都主张采取综合设计工具和环境,以在早期的综合设计阶段提供帮助。

这种集成设计工具能够使由不同学科的工程师、设计者在面对复杂的需求和约束时能够对设计意图达成共识。

那个设计工具可以让设计团队研究在更高级别上的更多配置细节。

问题就是架构一个设计工具,以满足所有这些要求。

2. 虚拟（数字）原型模型
现在需要一种表达产品设计的方式，即建立一个真实的虚拟样机，使产品能够在早期得到开发和评价。

虚拟样机将取代传统的物理样机,并允许设计工程师,研究“假设”的情况,同时反复更新他们的设计。

真正的虚拟原型,不仅代表形状和形式,即几何形状,它也代表如重量,材料,性能和制造工艺的非几何属性。

毕业设计外文资料翻译(二)外文出处:Jules Houde 《Sustainable development slowed down by bad construction practices and natural and technological disasters》2、外文资料翻译译文混凝土结构的耐久性即使是工程师认为的最耐久和最合理的混凝土材料,在一定的条件下,混凝土也会由于开裂、钢筋锈蚀、化学侵蚀等一系列不利因素的影响而易受伤害。

近年来报道了各种关于混凝土结构耐久性不合格的例子。

尤其令人震惊的是混凝土的结构过早恶化的迹象越来越多。

每年为了维护混凝土的耐久性,其成本不断增加。

根据最近在国内和国际中的调查揭示,这些成本在八十年代间翻了一番,并将会在九十年代变成三倍。

越来越多的混凝土结构耐久性不合格的案例使从事混凝土行业的商家措手不及。

混凝土结构不仅代表了社会的巨大投资,也代表了如果耐久性问题不及时解决可能遇到的成本,更代表着,混凝土作为主要建筑材料,其耐久性问题可能导致的全球不公平竞争以及行业信誉等等问题。

因此,国际混凝土行业受到了强烈要求制定和实施合理的措施以解决当前耐久性问题的双重的挑战,即:找到有效措施来解决现有结构剩余寿命过早恶化的威胁。

纳入新的结构知识、经验和新的研究结果,以便监测结构耐久性,从而确保未来混凝土结构所需的服务性能。

所有参与规划、设计和施工过程的人,应该具有获得对可能恶化的过程和决定性影响参数的最低理解的可能性。

这种基本知识能力是要在正确的时间做出正确的决定,以确保混凝土结构耐久性要求的前提。

加固保护混凝土中的钢筋受到碱性的钝化层(pH值大于12.5)保护而阻止了锈蚀。

这种钝化层阻碍钢溶解。

因此,即使所有其它条件都满足(主要是氧气和水分),钢筋受到锈蚀也都是不可能的。

混凝土的碳化作用或是氯离子的活动可以降低局部面积或更大面积的pH值。

当加固层的pH值低于9或是氯化物含量超过一个临界值时,钝化层和防腐保护层就会失效,钢筋受腐蚀是可能的。

大连科技学院毕业设计(论文)外文翻译学生姓名专业班级指导教师职称所在单位教研室主任完成日期 2016年4月15日Translation EquivalenceDespite the fact that the world is becoming a global village, translation remains a major way for languages and cultures to interact and influence each other. And name translation, especially government name translation, occupies a quite significant place in international exchange.Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text. While interpreting—the facilitating of oral or sign-language communication between users of different languages—antedates writing, translation began only after the appearance of written literature. There exist partial translations of the Sumerian Epic of Gilgamesh (ca. 2000 BCE) into Southwest Asian languages of the second millennium BCE. Translators always risk inappropriate spill-over of source-language idiom and usage into the target-language translation. On the other hand, spill-overs have imported useful source-language calques and loanwords that have enriched the target languages. Indeed, translators have helped substantially to shape the languages into which they have translated. Due to the demands of business documentation consequent to the Industrial Revolution that began in the mid-18th century, some translation specialties have become formalized, with dedicated schools and professional associations. Because of the laboriousness of translation, since the 1940s engineers have sought to automate translation (machine translation) or to mechanically aid the human translator (computer-assisted translation). The rise of the Internet has fostered a world-wide market for translation services and has facilitated language localizationIt is generally accepted that translation, not as a separate entity, blooms into flower under such circumstances like culture, societal functions, politics and power relations. Nowadays, the field of translation studies is immersed with abundantly diversified translation standards, with no exception that some of them are presented by renowned figures and are rather authoritative. In the translation practice, however, how should we select the so-called translation standards to serve as our guidelines in the translation process and how should we adopt the translation standards to evaluate a translation product?In the macro - context of flourish of linguistic theories, theorists in the translation circle, keep to the golden law of the principle of equivalence. The theory of Translation Equivalence is the central issue in western translation theories. And the presentation of this theory gives great impetus to the development and improvement of translation theory. It‟s not diffi cult for us to discover that it is the theory of Translation Equivalence that serves as guidelines in government name translation in China. Name translation, as defined, is the replacement of thename in the source language by an equivalent name or other words in the target language. Translating Chinese government names into English, similarly, is replacing the Chinese government name with an equivalent in English.Metaphorically speaking, translation is often described as a moving trajectory going from A to B along a path or a container to carry something across from A to B. This view is commonly held by both translation practitioners and theorists in the West. In this view, they do not expect that this trajectory or something will change its identity as it moves or as it is carried. 
In China, to translate is also understood by many people normally as “to translate the whole text sentence by sentence and paragraph by paragraph, without any omission, addition, or other changes. In both views, the source text and the target text must be “the same”. This helps explain the etymological source for the term “translation equivalence”. It is in essence a word which describes the relationship between the ST and the TT.Equivalence means the state or fact or property of being equivalent. It is widely used in several scientific fields such as chemistry and mathematics. Therefore, it comes to have a strong scientific meaning that is rather absolute and concise. Influenced by this, translation equivalence also comes to have an absolute denotation though it was first applied in translation study as a general word. From a linguistic point of view, it can be divided into three sub-types, i.e., formal equivalence, semantic equivalence, and pragmatic equivalence. In actual translation, it frequently happens that they cannot be obtained at the same time, thus forming a kind of relative translation equivalence in terms of quality. In terms of quantity, sometimes the ST and TT are not equivalent too. Absolute translation equivalence both in quality and quantity, even though obtainable, is limited to a few cases.The following is a brief discussion of translation equivalence study conducted by three influential western scholars, Eugene Nida, Andrew Chesterman and Peter Newmark. It‟s expected that their studies can instruct GNT study in China and provide translators with insightful methods.Nida‟s definition of translation is: “Translation consists in reproducing in the receptor language the closest natural equivalent of the source language message, first in terms of meaning and secondly in terms of style.” It i s a replacement of textual material in one language〔SL〕by equivalent textual material in another language(TL). The translator must strive for equivalence rather than identity. In a sense, this is just another way of emphasizing the reproducing of the message rather than the conservation of the form of the utterance. The message in the receptor language should match as closely as possible the different elements in the source language to reproduce as literally and meaningfully as possible the form and content of the original. Translation equivalence is an empirical phenomenon discovered bycomparing SL and TL texts and it‟s a useful operational concept like the term “unit of translati on”.Nida argues that there are two different types of equivalence, namely formal equivalence and dynamic equivalence. Formal correspondence focuses attention on the message itself, in both form and content, whereas dynamic equivalence is based upon “the principle of equivalent effect”.Formal correspondence consists of a TL item which represents the closest equivalent of a ST word or phrase. Nida and Taber make it clear that there are not always formal equivalents between language pairs. Therefore, formal equivalents should be used wherever possible if the translation aims at achieving formal rather than dynamic equivalence. The use of formal equivalents might at times have serious implications in the TT since the translation will not be easily understood by the target readership. 
According to Nida and Taber, formal correspondence distorts the grammatical and stylistic patterns of the receptor language, and hence distorts the message, so as to cause the receptor to misunderstand or to labor unduly hard.Dyn amic equivalence is based on what Nida calls “the principle of equivalent effect” where the relationship between receptor and message should be substantially the same as that which existed between the original receptors and the message. The message has to be modified to the receptor‟s linguistic needs and cultural expectation and aims at complete naturalness of expression. Naturalness is a key requirement for Nida. He defines the goal of dynamic equivalence as seeking the closest natural equivalent to the SL message. This receptor-oriented approach considers adaptations of grammar, of lexicon and of cultural references to be essential in order to achieve naturalness; the TL should not show interference from the SL, and the …foreignness …of the ST setting is minimized.Nida is in favor of the application of dynamic equivalence, as a more effective translation procedure. Thus, the product of the translation process, that is the text in the TL, must have the same impact on the different readers it was addressing. Only in Nida and Taber's edition is it clearly stated that dynamic equivalence in translation is far more than mere correct communication of information.As Andrew Chesterman points out in his recent book Memes of Translation, equivalence is one of the five element of translation theory, standing shoulder to shoulder with source-target, untranslatability, free-vs-literal, All-writing-is-translating in importance. Pragmatically speaking, observed Chesterman, “the only true examples of equivalence (i.e., absolute equivalence) are those in which an ST item X is invariably translated into a given TL as Y, and vice versa. Typical examples would be words denoting numbers (with the exceptionof contexts in which they have culture-bound connotations, such as “magic” or “unlucky”), certain technical terms (oxygen, molecule) and the like. From this point of view, the only true test of equivalence would be invariable back-translation. This, of course, is unlikely to occur except in the case of a small set of lexical items, or perhaps simple isolated syntactic structure”.Peter Newmark. Departing from Nida‟s receptor-oriented line, Newmark argues that the success of equivalent effect is “illusory “and that the conflict of loyalties and the gap between emphasis on source and target language will always remain as the overriding problem in translation theory and practice. He suggests narrowing the gap by replacing the old terms with those of semantic and communicative translation. The former attempts to render, as closely as the semantic and syntactic structures of the second language allow, the exact contextual meaning of the original, while the latter “attempts to produce on its readers an effect as close as possible to that obtained on the readers of the original.” Newmark‟s description of communicative translation resembles Nida‟s dynamic equivalence in the effect it is trying to create on the TT reader, while semantic translation has similarities to Nida‟s formal equivalence.Meanwhile, Newmark points out that only by combining both semantic and communicative translation can we achieve the goal of keeping the …spirit‟ of the original. 
Semantic translation requires the translator retain the aesthetic value of the original, trying his best to keep the linguistic feature and characteristic style of the author. According to semantic translation, the translator should always retain the semantic and syntactic structures of the original. Deletion and abridgement lead to distortion of the author‟s intention and his writing style.翻译对等尽管全世界正在渐渐成为一个地球村,但翻译仍然是语言和和文化之间的交流互动和相互影响的主要方式之一。

_毕业设计外文文献及翻译_Graduation Thesis Foreign Literature Review and Chinese Translation1. Title: "The Impact of Artificial Intelligence on Society"Abstract:人工智能对社会的影响摘要:人工智能技术的快速发展引发了关于其对社会影响的讨论。

本文探讨了人工智能正在重塑不同行业(包括医疗保健、交通运输和教育)的各种方式。

还讨论了AI实施的潜在益处和挑战,以及伦理考量。

总体而言,本文旨在提供对人工智能对社会影响的全面概述。

2. Title: "The Future of Work: Automation and Job Displacement"Abstract:With the rise of automation technologies, there is growing concern about the potential displacement of workers in various industries. This paper examines the trends in automation and its impact on jobs, as well as the implications for workforce development and retraining programs. The ethical and social implications of automation are also discussed, along with potential strategies for mitigating job displacement effects.工作的未来:自动化和失业摘要:随着自动化技术的兴起,人们越来越担心各行业工人可能被替代的问题。

毕业设计(论文)外文文献翻译要求
根据《普通高等学校本科毕业设计(论文)指导》的内容,特对外文文献翻译提出以下要求:
一、翻译的外文文献一般为1~2篇,外文字符要求不少于1.5万(或翻译成中文后至少在3000字以上)。

二、翻译的外文文献应主要选自学术期刊、学术会议的文章、有关著作及其他相关材料,应与毕业论文(设计)主题相关,并作为外文参考文献列入毕业论文(设计)的参考文献。

并在每篇中文译文首页用“脚注”形式注明原文作者及出处,中文译文后应附外文原文。

三、中文译文的基本撰写格式为题目采用小三号黑体字居中打印,正文采用宋体小四号字,行间距一般为固定值20磅,标准字符间距。

页边距为左3cm,右2.5cm,上下各2.5cm,页面统一采用A4纸。

四、封面格式由学校统一制作(注:封面上的“翻译题目”指中文译文的题目,附件1为一篇外文翻译的封面格式,附件二为两篇外文翻译的封面格式),若有两篇外文文献,请按“封面、译文一、外文原文一、译文二、外文原文二”的顺序统一装订。

教务处
20XX年2月27日
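下面给出一个按上述排版要求生成译文文档的示例脚本，仅供参考：它基于 python-docx 库的常见用法（假设性示例，并非学校统一提供的工具），其中字号换算（小三号约 15 磅、小四号约 12 磅）和中文字体的设置方式均为笔者的假设，使用前请自行核对。

```python
# -*- coding: utf-8 -*-
# 按“题目小三号黑体居中、正文宋体小四、固定行距20磅、左3cm其余2.5cm、A4”的要求建立文档
from docx import Document
from docx.shared import Pt, Cm
from docx.enum.text import WD_ALIGN_PARAGRAPH, WD_LINE_SPACING
from docx.oxml.ns import qn

doc = Document()

# 页面设置：A4，左 3cm，右 2.5cm，上下各 2.5cm
sec = doc.sections[0]
sec.page_width, sec.page_height = Cm(21.0), Cm(29.7)
sec.left_margin, sec.right_margin = Cm(3.0), Cm(2.5)
sec.top_margin, sec.bottom_margin = Cm(2.5), Cm(2.5)

# 题目：小三号（约 15 磅）黑体，居中
title = doc.add_paragraph()
title.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = title.add_run("中文译文题目")
run.font.size = Pt(15)
run.font.name = "SimHei"
run._element.rPr.rFonts.set(qn("w:eastAsia"), "黑体")

# 正文：宋体小四（约 12 磅），固定行距 20 磅
body = doc.add_paragraph()
body.paragraph_format.line_spacing_rule = WD_LINE_SPACING.EXACTLY
body.paragraph_format.line_spacing = Pt(20)
run = body.add_run("此处为译文正文……")
run.font.size = Pt(12)
run.font.name = "SimSun"
run._element.rPr.rFonts.set(qn("w:eastAsia"), "宋体")

doc.save("translation.docx")
```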
杭州电子科技大学
毕业设计(论文)外文文献翻译
毕业设计(论文)题

翻译题目
学院
专业
姓名
班级
学号
指导教师
杭州电子科技大学
毕业设计(论文)外文文献翻译
毕业设计(论文)题

翻译(1)题目
翻译(2)题目
学院
专业
姓名
班级
学号
指导教师

毕业设计外文文献翻译Graduation Design Foreign Literature Translation (700 words) Title: The Impact of Artificial Intelligence on the Job Market Introduction:Artificial Intelligence (AI) is a rapidly growing field that has the potential to revolutionize various industries and job markets. With advancements in technologies such as machine learning and natural language processing, AI has become capable of performing tasks traditionally done by humans. This has raised concerns about the future of jobs and the impact AI will have on the job market. This literature review aims to explore the implications of AI on employment and job opportunities.AI in the Workplace:AI technologies are increasingly being integrated into the workplace, with the aim of automating routine and repetitive tasks. For example, automated chatbots are being used to handle customer service queries, while machine learning algorithms are being employed to analyze large data sets. This has resulted in increased efficiency and productivity in many industries. However, it has also led to concerns about job displacement and unemployment.Job Displacement:The rise of AI has raised concerns about job displacement, as AI technologies are becoming increasingly capable of performing tasks previously done by humans. For example, automated machines can now perform complex surgeries with greaterprecision than human surgeons. This has led to fears that certain jobs will become obsolete, leading to unemployment for those who were previously employed in these industries.New Job Opportunities:While AI might potentially replace certain jobs, it also creates new job opportunities. As AI technologies continue to evolve, there will be a greater demand for individuals with technical skills in AI development and programming. Additionally, jobs that require human interaction and emotional intelligence, such as social work or counseling, may become even more in demand, as they cannot be easily automated.Job Transformation:Another potential impact of AI on the job market is job transformation. AI technologies can augment human abilities rather than replacing them entirely. For example, AI-powered tools can assist professionals in making decisions, augmenting their expertise and productivity. This may result in changes in job roles and the need for individuals to adapt their skills to work alongside AI technologies.Conclusion:The impact of AI on the job market is still being studied and debated. While AI has the potential to automate certain tasks and potentially lead to job displacement, it also presents opportunities for new jobs and job transformation. It is essential for individuals and organizations to adapt and acquire the necessary skills to navigate these changes in order to stay competitive in the evolvingjob market. Further research is needed to fully understand the implications of AI on employment and job opportunities.。

本科毕业设计外文文献及译文文献、资料题目:Transit Route Network Design Problem:Review文献、资料来源:网络文献、资料发表(出版)日期:2007.1院(部):xxx专业:xxx班级:xxx姓名:xxx学号:xxx指导教师:xxx翻译日期:xxx外文文献:Transit Route Network Design Problem:Review Abstract:Efficient design of public transportation networks has attracted much interest in the transport literature and practice,with manymodels and approaches for formulating the associated transit route network design problem _TRNDP_having been developed.The presentpaper systematically presents and reviews research on the TRNDP based on the three distinctive parts of the TRNDP setup:designobjectives,operating environment parameters and solution approach.IntroductionPublic transportation is largely considered as a viable option for sustainable transportation in urban areas,offering advantages such as mobility enhancement,traffic congestion and air pollution reduction,and energy conservation while still preserving social equity considerations. Nevertheless,in the past decades,factors such as socioeconomic growth,the need for personalized mobility,the increase in private vehicle ownership and urban sprawl have led to a shift towards private vehicles and a decrease in public transportation’s share in daily commuting (Sinha2003;TRB2001;EMTA2004;ECMT2002;Pucher et al.2007).Efforts for encouraging public transportation use focuses on improving provided services such as line capacity,service frequency,coverage,reliability,comfort and service quality which are among the most important parameters for an efficient public transportation system(Sinha2003;Vuchic2004.) In this context,planning and designing a cost and service efficientpublic transportation network is necessary for improving its competitiveness and market share. The problem that formally describes the design of such a public transportation network is referred to as the transit route network design problem(TRNDP);it focuses on the optimization of a number of objectives representing the efficiency of public transportation networks under operational and resource constraints such as the number and length of public transportation routes, allowable service frequencies,and number of available buses(Chakroborty2003;Fan and Machemehl2006a,b).The practical importance of designing public transportation networks has attractedconsiderable interest in the research community which has developed a variety of approaches and modelsfor the TRNDP including different levels of design detail and complexity as well as interesting algorithmic innovations.In thispaper we offer a structured review of approaches for the TRNDP;researchers will obtain a basis for evaluating existing research and identifying future research paths for further improving TRNDP models.Moreover,practitioners will acquire a detailed presentation of both the process and potential tools for automating the design of public transportation networks,their characteristics,capabilities,and strengths.Design of Public Transportation NetworksNetwork design is an important part of the public transportation operational planning process_Ceder2001_.It includes the design of route layouts and the determination of associated operational characteristics such as frequencies,rolling stock types,and so on As noted by Ceder and Wilson_1986_,network design elements are part of the overall operational planning process for public transportation networks;the process includes five steps:_1_design of routes;_2_ setting frequencies;_3_developing timetables;_4_scheduling buses;and_5_scheduling drivers. 
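The frequency-setting step described above (expected peak passenger volume, desired load factor, and minimum/maximum allowable frequencies) reduces to a small calculation. The sketch below is a generic illustration of that step, not a method taken from any of the cited papers; the numbers and the policy limits are assumptions.

```python
def route_frequency(peak_pass_per_h: float, seats_per_bus: float,
                    load_factor: float = 0.85,
                    f_min: float = 2.0, f_max: float = 20.0) -> dict:
    """Set a route frequency (buses/hour) from the peak-point passenger volume,
    a desired load factor, and policy minimum/maximum frequencies."""
    f_demand = peak_pass_per_h / (load_factor * seats_per_bus)  # frequency needed by demand
    f = min(max(f_demand, f_min), f_max)                        # clip to the allowed range
    return {
        "frequency_bus_per_h": round(f, 2),
        "headway_min": round(60.0 / f, 1),
        "offered_capacity": round(f * seats_per_bus),
    }

# Illustrative (assumed) inputs: 480 passengers/h at the peak point, 60-seat buses
print(route_frequency(480.0, 60.0))
```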
Route layout design is guided by passenger flows:routes are established to provide direct or indirect connection between locations and areas that generate and attract demand for transit travel, such as residential and activity related centers_Levinson1992_.For example,passenger flows between a central business district_CBD_and suburbs dictate the design of radial routes while demand for trips between different neighborhoods may lead to the selection of a circular route connecting them.Anticipated service coverage,transfers,desirable route shapes,and available resources usually determine the structure of the route network.Route shapes areusually constrained by their length and directness_route directness implies that route shapes are as straight as possible between connected points_,the usage of given roads,and the overlapping with other transit routes.The desirable outcome is a set of routesconnecting locations within a service area,conforming to given design criteria.For each route, frequencies and bus types are the operational characteristics typically determined through design. Calculations are based on expected passenger volumes along routes that are estimated empirically or by applying transit assignmenttechniques,under frequency requirement constraints_minimum and maximum allowedfrequencies guaranteeing safety and tolerable waiting times,respectively_,desired load factors, fleet size,and availability.These steps as well as the overall design.process have been largely based upon practical guidelines,the expert judgment of transit planners,and operators experience_Baaj and Mahmassani1991_.Two handbooks by Black _1995_and Vuchic_2004_outline frameworks to be followed by planners when designing a public transportation network that include:_1_establishing the objectives for the network;_2_ defining the operational environment of the network_road structure,demand patterns,and characteristics_;_3_developing;and_4_evaluating alternative public transportation networks.Despite the extensive use of practical guidelines and experience for designing transit networks,researchers have argued that empirical rules may not be sufficient for designing an efficient transit network and improvements may lead to better quality and more efficient services. 
For example,Fan and Machemehl_2004_noted that researchers and practitioners have been realizing that systematic and integrated approaches are essential for designing economically and operationally efficient transit networks.A systematic design process implies clear and consistent steps and associated techniques for designing a public transportation network,which is the scope of the TRNDP.TRNDP:OverviewResearch has extensively examined the TRNDP since the late1960s.In1979,Newell discussed previous research on the optimal design of bus routes and Hasselström_1981_ analyzed relevant studies and identified the major features of the TRNDP as demand characteristics,objective functions,constraints,passengerbehavior,solution techniques,and computational time for solving the problem.An extensive review of existing work on transit network design was provided by Chua_1984_who reported five types of transit system planning:_1_manual;_2_marketanalysis;_3_systems analysis;_4_systems analysis with interactive graphics;and_5_ mathematical optimization approach.Axhausemm and Smith_1984_analyzed existing heuristic algorithms for formulating the TRNDP in Europe,tested them,anddiscussed their potential implementation in the United States.Ceder and Wilson_1986_reportedprior work on the TRNDP and distinguished studies into those that deal with idealized networks and to those that focus on actual routes,suggesting that the main features of the TRNDP include demand characteristics,objectivesand constraints,and solution methods.At the same period,Van Nes et al._1988_grouped TRNDP models into six categories:_1_ analytical models for relating parameters of the public transportation system;_2_models determining the links to be used for public transportation route construction;_3_models determining routes only;_4_models assigning frequencies to a set of routes;_5_two-stage models for constructing routes and then assigning frequencies;and_6_models for simultaneously determining routes and frequencies.Spacovic et al._1994_and Spacovic and Schonfeld_1994_proposed a matrix organization and classified each study according to design parameters examined,objectives anticipated,network geometry,and demand characteristics. 
Ceder and Israeli_1997_suggested broad categorizations for TRNDP models into passenger flow simulation and mathematical programming models.Russo_1998_adopted the same categorization and noted that mathematical programming models guarantee optimal transit network design but sacrifice the level of detail in passenger representation and design parameters, while simulation models address passenger behavior but use heuristic procedures obtaining a TRNDP solution.Ceder_2001_enhanced his earlier categorization by classifying TRNDP models into simulation,ideal network,and mathematical programming models.Finally,in a recent series of studies,Fan and Machemehl_2004,2006a,b_divided TRNDP approaches into practical approaches,analytical optimization models for idealized conditions,and metaheuristic procedures for practical problems.The TRNDP is an optimization problem where objectives are defined,its constraints are determined,and a methodology is selected and validated for obtaining an optimal solution.The TRNDP is described by the objectives of the public transportation network service to be achieved, the operational characteristics and environment under which the network will operate,and the methodological approach for obtaining the optimal network design.Based on this description of the TRNDP,we propose a three-layer structure for organizing TRNDP approaches_Objectives, Parameters,and Methodology_.Each layer includes one or more items that characterize each study.The“Objectives”layer incorporates the goals set when designing a public transportation system such as the minimization of the costs of the system or the maximization of the quality of services provided.The“Parameters”layer describes the operating environment and includes both the design variables expected to be derived for the transit network_route layouts,frequencies_as well as environmental and operational parameters affecting and constraining that network_for example,allowable frequencies,desired load factors,fleet availability,demand characteristics and patterns,and so on_.Finally,the“Methodology”layer covers the logical–mathematical framework and algorithmic tools necessary to formulate and solve the TRNDP.The proposed structure follows the basic concepts toward setting up a TRNDP:deciding upon the objectives, selecting the transit network items and characteristics to be designed,setting the necessary constraints for the operating environment,and formulating and solving the problem. TRNDP:ObjectivesPublic transportation serves a very important social role while attempting to do this at the lowest possible operating cost.Objectives for designing daily operations of a public transportation system should encompass both angles.The literature suggests that most studies actually focus on both the service and economic efficiency when designing such a system. Practical goals for the TRNDP can be briefly summarized as follows_Fielding1987;van Oudheudsen et al.1987;Black1995_:_1_user benefit maximization;_2_operator cost minimization;_3_total welfare maximization;_4_capacity maximization;_5_energy conservation—protection of the environment;and_6_individual parameter optimization.Mandl_1980_indicated that public transportation systems have different objectives to meet. 
He commented,“even a single objective problem is difficult to attack”_p.401_.Often,these objectives are controversial since cutbacks in operating costs may require reductions in the quality of services.Van Nes and Bovy_2000_pointed out that selected objectives influence the attractiveness and performance of a public transportation network.According to Ceder and Wilson_1986_,minimization of generalized cost or time or maximization of consumer surplus were the most common objectives selected when developing transit network design models. Berechman_1993_agreed that maximization of total welfare is the most suitable objective for designing a public transportation system while Van Nes and Bovy_2000_argued that the minimization of total user and system costs seem the most suit able and less complicatedobjective_compared to total welfare_,while profit maximization leads to nonattractive public transportation networks.As can be seen in Table1,most studies seek to optimize total welfare,which incorporates benefits to the user and to the er benefits may include travel,access and waiting cost minimization,minimization of transfers,and maximization of coverage,while benefits for the system are maximum utilization and quality of service,minimization of operating costs, maximization of profits,and minimization of the fleet size used.Most commonly,total welfare is represented by the minimization of user and system costs.Some studies address specific objectives from the user,theoperator,or the environmental perspective.Passenger convenience,the number of transfers, profit and capacity maximization,travel time minimization,and fuel consumption minimization are such objectives.These studies either attempt to simplify the complex objective functions needed to setup the TRNDP_Newell1979;Baaj and Mahmassani1991;Chakroborty and Dwivedi2002_,or investigate specific aspects of the problem,such as objectives_Delle Site and Fillipi2001_,and the solution methodology_Zhao and Zeng2006;Yu and Yang2006_.Total welfare is,in a sense,a compromise between objectives.Moreover,as reported by some researchers such as Baaj and Mahmassani_1991_,Bielli et al._2002_,Chackroborty and Dwivedi_2002_,and Chakroborty_2003_,transit network design is inherently a multiobjective problem.Multiobjective models for solving the TRNDP have been based on the calculation of indicators representing different objectives for the problem at hand,both from the user and operator perspectives,such as travel and waiting times_user_,and capacity and operating costs _operator_.In their multiobjective model for the TRNDP,Baaj and Majmassani_1991_relied on the planner’s judgment and experience for selecting the optimal public transportation network,based on a set of indicators.In contrast,Bielli et al._2002_and Chakroborty and Dwivedi_2002_,combined indicators into an overall,weighted sum value, which served as the criterion for determining the optimaltransit network.TRNDP:ParametersThere are multiple characteristics and design attributes to consider for a realistic representation of a public transportation network.These form the parameters for the TRNDP.Part of these parameters is the problem set of decision variables that define its layout and operational characteristics_frequencies,vehicle size,etc._.Another set of design parameters represent the operating environment_network structure,demand characters,and patterns_, operational strategies and rules,and available resources for the public transportation network. 
These form the constraints needed to formulate the TRNDP and are a priori fixed, decided upon, or assumed.

Decision Variables

The most common decision variables for the TRNDP are the routes and frequencies of the public transportation network (Table 1). Simplified early studies derived optimal route spacing between predetermined parallel or radial routes, along with optimal frequencies per route (Holroyd 1967; Byrne and Vuchic 1972; Byrne 1975, 1976; Kocur and Hendrickson 1982; Vaughan 1986), while later models dealt with the development of optimal route layouts and frequency determination. Other studies additionally considered fares (Kocur and Hendrickson 1982; Morlok and Viton 1984; Chang and Schonfeld 1991; Chien and Spacovic 2001), zones (Tsao and Schonfeld 1983; Chang and Schonfeld 1993a), stop locations (Black 1979; Spacovic and Schonfeld 1994; Spacovic et al. 1994; Van Nes 2003; Yu and Yang 2006), and bus types (Delle Site and Filippi 2001).

Network Structure

Some early studies focused on the design of systems in simplified radial (Byrne 1975; Black 1979; Vaughan 1986) or rectangular grid road networks (Hurdle 1973; Byrne and Vuchic 1972; Tsao and Schonfeld 1984). However, most approaches since the 1980s were either applied to realistic, irregular grid networks, or the network structure was of no importance for the proposed model and therefore not specified at all.

Demand Patterns

Demand patterns describe the nature of the flows of passengers expected to be accommodated by the public transportation network and therefore dictate its structure. For example, transit trips from a number of origins (for example, stops in a neighborhood) to a single destination (such as a bus terminal in the CBD of a city), and vice versa, are characterized as many-to-one (or one-to-many) transit demand patterns. These patterns are typically encountered in public transportation systems connecting CBDs with suburbs and imply a structure of radial or parallel routes ending at a single point; models for patterns of that type have been proposed by Byrne and Vuchic (1972), Salzborn (1972), Byrne (1975, 1976), Kocur and Hendrickson (1982), Morlok and Viton (1984), Chang and Schonfeld (1991, 1993a), Spacovic and Schonfeld (1994), Spacovic et al. (1994), Van Nes (2003), and Chien et al. (2003). On the other hand, many-to-many demand patterns correspond to flows between multiple origins and destinations within an urban area, suggesting that the public transportation network is expected to connect various points in an area.

Demand Characteristics

Demand can be characterized either as "fixed" (or "inelastic") or as "elastic"; the latter meaning that demand is affected by the performance and services provided by the public transportation network. Lee and Vuchic (2005) distinguished between two types of elastic demand: (1) demand per mode affected by transportation services, with total demand for travel kept constant; and (2) total demand for travel varying as a result of the performance of the transportation system and its modes. Fan and Machemehl (2006b) noted that the complexity of the TRNDP has led researchers into assuming fixed demand, despite its inherent elastic nature. However, since the early 1980s, studies have included aspects of elastic demand in modeling the TRNDP (Hasselstrom 1981; Kocur and Hendrickson 1982). Van Nes et al. (1988) applied a simultaneous distribution-modal split model based on transit deterrence for estimating demand for public transportation. In a series of studies, Chang and Schonfeld (1991, 1993a, b) and Spacovic et al. (1994) estimated demand as a direct function of travel times and fares with respect to their elasticities, while
Chien and Spacovic (2001) followed the same approach, assuming that demand is additionally affected by headways, route spacing, and fares. Finally, studies by Leblanc (1988), Imam (1998), Cipriani et al. (2005), Lee and Vuchic (2005), and Fan and Machemehl (2006a) based demand estimation on mode choice models for estimating transit demand as a function of total demand for travel.

中文译文:

公交路线网络设计问题:回顾

摘要:公共交通网络的有效设计是交通理论与实践共同关注的焦点,随之发展出了许多针对公交路线网络设计问题(TRNDP)的模型与方法。
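To make the weighted-sum idea referred to above concrete, the sketch below scores two hypothetical candidate networks by normalizing a few user- and operator-side indicators and combining them with weights. The indicator names, weights, and candidate values are illustrative assumptions only, not data or methods taken from the cited studies.

```python
# Illustrative weighted-sum scoring of candidate transit networks.
# Indicator names, weights, and candidate values are hypothetical.

CANDIDATES = {
    "network_A": {"in_vehicle_time": 21.5, "waiting_time": 6.2,
                  "transfers_per_trip": 0.45, "operating_cost": 130.0},
    "network_B": {"in_vehicle_time": 19.8, "waiting_time": 7.9,
                  "transfers_per_trip": 0.30, "operating_cost": 155.0},
}

# All indicators are "lower is better", so the combined score is a cost to minimize.
WEIGHTS = {"in_vehicle_time": 0.35, "waiting_time": 0.30,
           "transfers_per_trip": 0.10, "operating_cost": 0.25}

def normalize(candidates):
    """Rescale each indicator to [0, 1] across candidates so units do not dominate."""
    names = WEIGHTS.keys()
    spans = {n: (min(c[n] for c in candidates.values()),
                 max(c[n] for c in candidates.values())) for n in names}
    return {cand: {n: 0.0 if spans[n][0] == spans[n][1]
                   else (vals[n] - spans[n][0]) / (spans[n][1] - spans[n][0])
                   for n in names}
            for cand, vals in candidates.items()}

def weighted_score(indicators):
    """Combine normalized user- and operator-side indicators into one scalar cost."""
    return sum(WEIGHTS[n] * v for n, v in indicators.items())

scores = {cand: weighted_score(ind) for cand, ind in normalize(CANDIDATES).items()}
print(scores)
print("selected:", min(scores, key=scores.get))
```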
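Likewise, the elastic-demand formulations mentioned under Demand Characteristics can be illustrated with a generic constant-elasticity demand function. The functional form is a common textbook convention, and the elasticity and input values below are placeholders, not estimates from Chang and Schonfeld or the other cited studies.

```python
def elastic_demand(base_demand, travel_time, base_time, fare, base_fare,
                   time_elasticity=-0.5, fare_elasticity=-0.3):
    """Constant-elasticity demand: ridership falls as travel time or fare rises.

    Elasticity values are illustrative placeholders only.
    """
    return (base_demand
            * (travel_time / base_time) ** time_elasticity
            * (fare / base_fare) ** fare_elasticity)

# Example: a 10% increase in travel time and a 20% fare increase
print(round(elastic_demand(1000, 33.0, 30.0, 2.4, 2.0), 1))
```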

毕业设计(论文)外文资料翻译
附件:1、外文原文(复印件);2、外文资料翻译译文

节能智能照明控制系统
Sherif Matta and Syed Masud Mahmud, Senior Member, IEEE
Wayne State University, Detroit, Michigan 48202
Sherif.Matta@, smahmud@

摘要:节约能源已成为当今最具挑战性的问题之一。

能源浪费最严重的来源,是人工光源设备(灯具或灯泡)对电能的低效利用。

本文提出了一种通过将人工照明强度控制在令人满意的水平来节约电能的系统,并给出了详细的设计步骤。

在白天使用照明设备时,尽可能地节约。

当检测到的光照超出预设的照明方案时,系统引入调光控制,以改善日光采集与控制。

设计原理是:只要有可能,就通过控制百叶窗或窗帘来利用日光。

否则,才使用建筑内部的人工光源。

光通量通过控制百叶窗帘的开启角度来调节;人工光源的强度则通过脉宽调制(PWM)调节直流灯的供电功率,或对交流灯泡的交流波形进行斩波来控制(本节末尾附有一段示意性代码)。

该系统采用控制器局域网络(CAN)作为传感器和执行器之间通信的介质。

该系统是模块化的,可以扩展覆盖大型建筑物。

该设计的优点是为用户提供了单点设定:用户只需给出其期望的光照亮度。

控制器的功能是确定一种以最小能量消耗满足所需光量的方法。

所考虑的主要问题之一是系统组件的安装简便和低成本。

该系统显示出显著的节能效果,以及在实际中实施的可行性。

关键词:智能光控系统,节能,光通量,百叶帘控制,控制器局域网络(CAN),光强度控制

一、简介

多年来,随着建筑物数量以及建筑物内房间数量的急剧增加,能源浪费、低效的灯光控制和照明分布变得难以管理。

此外,依靠用户手动控制灯光来节省能源是不切实际的。

最近,许多技术和传感器已被用于管理过多的能量消耗,例如采用运动检测来探测一定区域内的人员活动。

当有人进入房间时,灯光自动开启,为他们提供了便利。

它们通过在最后一名人员离开房间后不久自动关灯来减少照明能源的使用。
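下面给出一段示意性的 Python 代码草图(非原文内容),用于说明上文所述"优先利用日光、不足部分由人工光源经 PWM 调光补足、无人时关灯"的控制思路;其中的目标照度、百叶窗角度、CAN 报文 ID 等参数均为假设值,并非原系统的实际实现。

```python
# 示意性草图(参数与阈值均为假设,并非原系统实现):
# 日光优先 + 人工光源 PWM 调光补足 + 无人时关灯

TARGET_LUX = 500        # 用户单点设定的期望照度(假设值)
MAX_BLIND_ANGLE = 90    # 百叶窗最大开启角度,单位:度(假设值)

def control_step(measured_lux, daylight_lux, occupied):
    """根据当前照度、可用日光和人员占用情况,返回 (百叶窗角度, 人工光源PWM占空比)。"""
    if not occupied:
        # 无人:关闭人工光源(此处将百叶窗角度简单置零,实际策略可能不同)
        return 0, 0.0
    # 优先利用日光:按可用日光相对目标照度的比例开启百叶窗
    blind_angle = MAX_BLIND_ANGLE * min(1.0, daylight_lux / TARGET_LUX)
    # 日光不足的部分由人工光源补足,用占空比近似表示其输出比例
    shortfall = max(0.0, TARGET_LUX - measured_lux)
    duty = min(1.0, shortfall / TARGET_LUX)
    return blind_angle, duty

angle, duty = control_step(measured_lux=320, daylight_lux=400, occupied=True)
print(angle, round(duty, 2))

# 可选:通过 CAN 总线把占空比发送给灯具节点(需要 python-can 库;报文 ID 与数据格式均为假设)
# import can
# bus = can.interface.Bus(channel="can0", interface="socketcan")
# bus.send(can.Message(arbitration_id=0x101, data=[int(duty * 255)], is_extended_id=False))
```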

本科毕业设计外文文献翻译
学校代码:10128
学 号:
题 目:Shear wall structural design of high-level framework
学生姓名:
学 院:土木工程学院
系 别:建筑工程系
专 业:土木工程专业(建筑工程方向)
班 级:土木08-(5)班
指导教师: (副教授)

Shear wall structural design of high-level framework
Wu Jicheng

Abstract: Starting from the basic concepts of the frame shear wall structure, this paper analyzes the content of the structural design of the frame shear wall, including the design of the seismic wall and the shear span ratio, and gives the points to note in the design of the frame shear wall structure, the most commonly used type of concrete structure.

Keywords: concrete; frame shear wall structure; high-rise buildings

The shear wall is an important structural element of modern high-rise buildings, and the dimensions of the frame shear wall must comply with the building regulations. The principle is that its in-plane dimensions are large while its thickness is comparatively small, so that geometrically it presents as a plate, and its behavior under force is close to cylindrical. The shear wall in a frame shear wall structure is a planar component. It is subjected to in-plane horizontal shear and bending moment, and the vertical compression must also be taken into account. Under the combined action of bending moment, axial force, and shear force, it behaves under horizontal loads like a cantilever deep beam fixed at its base to the foundation. In actual projects, shear walls are divided into solid (whole) walls and coupled shear walls. Examples of solid walls are the gable walls of ordinary housing construction, the wall panels of fish-bone structural schemes, and walls with small openings. Coupled shear walls are wall limbs connected by coupling beams. However, because the stiffness of the coupling beams is generally smaller than the stiffness of the wall limbs, the individual behavior of each wall limb is pronounced. Attention should be paid to the inflection point at mid-span of the coupling beam and to the axial compression ratio limits of the wall limbs. When the openings of a shear wall are too large, short wide beams and wide-column wall limbs are formed; the components at both ends act like variable cross-section members, and under load many inflection points appear in the wall limbs. Therefore, the calculation and detailing should be treated approximately as for a frame structure. The design of shear walls should be based on the characteristics of each kind of wall and its different mechanical behavior and requirements; the internal force distribution and failure modes of the wall should be considered specifically and comprehensively when designing the reinforcement and the detailing measures. The design of a frame shear wall structure considers the overall analysis of the structure for horizontal and vertical actions in both directions. The internal forces obtained are then used in normal-section calculations for eccentric compression or eccentric tension.

On the content of the structural design of the frame shear wall in high-rise buildings: in actual projects, the seismic walls used should be of sufficient quantity to meet the limits on storey displacement, and their location is relatively flexible. Seismic walls should be laid out continuously and run through the full height of the building. The design should avoid abrupt changes in wall limb length and openings that are not aligned vertically. At the same time, the margin column at the inside of an opening should not be less than 300 mm, in order to guarantee the length of the column as an edge component and as a constrained edge component. The bi-directional lateral force resisting structural form connects the vertical and horizontal walls, which act as flanges for each other.
For frame shear walls of seismic grade one or two, the span-to-height ratio of the coupling beams should not be greater than 5, and the beam height should be not less than 400 mm. The offset between the centerlines of columns and beams and the centerline of the wall should not be greater than 1/4 of the column width, in order to reduce the torsional effect of the seismic action on the column; otherwise, the stirrup ratio in the column can be increased to compensate. If the shear span ratio of the shear wall is greater than 2 and the span-to-height ratio of the coupling beam is greater than 2.5, then the design shear compression ratio should not be greater than 0.2. However, if the shear span ratio of the shear wall is less than 2 and the span-to-height ratio of the coupling beam is less than 2.5, then the shear compression ratio should not be greater than 0.15. In addition, the wall thickness in the bottom strengthened region of a frame shear wall structure should not be less than 200 mm nor less than 1/16 of the storey height; in other parts it should not be less than 160 mm nor less than 1/20 of the storey height. Around the walls of a frame shear wall structure, beams or concealed beams and side columns should be provided to form a boundary frame. The horizontal distribution reinforcement of the shear wall mainly resists the shear effect; when the building is taller or longer, or for the frame part of the structure, this reinforcement should be appropriately increased, especially at sensitive locations such as beam positions or where temperature and stiffness changes occur. Consideration should then also be given to the vertical reinforcement of the wall, which mainly resists the bending effect; in some multi-storey shear wall structures, attention should be paid to the reinforcement ratio of less-constrained edge components and of the boundary (edge) components.

References:
[1] Hayashi, He Yaming. On the short shear wall high-rise building design [J]. Keyuan, 2008, (02).

高层框架剪力墙结构设计
吴继成

摘要:本文从框架剪力墙结构设计的基本概念入手,分析了框架剪力墙的构造设计内容,包括抗震墙、剪跨比等的设计,并给出了混凝土结构中最常用的框架剪力墙结构设计的注意要点。
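The numerical detailing limits quoted in the passage above (the coupling beam span-to-height ratio, the shear compression ratio limits of 0.2 and 0.15, and the minimum wall thicknesses) can be gathered into a small checking routine. The sketch below only restates those quoted figures for illustration and is not a substitute for the governing design code.

```python
# Simplified checks of the detailing limits quoted in the passage above.
# Illustrative sketch only; not a substitute for the governing design code.

def check_wall_thickness(thickness_mm, storey_height_mm, bottom_strengthened):
    """Bottom strengthened zone: >= 200 mm and >= storey/16; elsewhere >= 160 mm and >= storey/20."""
    if bottom_strengthened:
        required = max(200.0, storey_height_mm / 16.0)
    else:
        required = max(160.0, storey_height_mm / 20.0)
    return thickness_mm >= required, required

def shear_compression_limit(shear_span_ratio, coupling_beam_span_height_ratio):
    """Limit on the design shear compression ratio as described in the text:
    0.2 when the shear span ratio > 2 and the beam span-to-height ratio > 2.5, else 0.15."""
    if shear_span_ratio > 2.0 and coupling_beam_span_height_ratio > 2.5:
        return 0.20
    return 0.15

if __name__ == "__main__":
    ok, req = check_wall_thickness(220, 3000, bottom_strengthened=True)
    print("thickness ok:", ok, "required >=", req, "mm")
    print("shear compression limit:", shear_compression_limit(2.4, 3.0))
```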

毕业设计(论文)外文文献原文及译文

Chapter 11. Cipher Techniques

11.1 Problems

The use of a cipher without consideration of the environment in which it is to be used may not provide the security that the user expects. Three examples will make this point clear.

11.1.1 Precomputing the Possible Messages

Simmons discusses the use of a "forward search" to decipher messages enciphered for confidentiality using a public key cryptosystem [923]. His approach is to focus on the entropy (uncertainty) in the message. To use an example from Section 10.1 (page 246), Cathy knows that Alice will send one of two messages, BUY or SELL, to Bob. The uncertainty is which one Alice will send. So Cathy enciphers both messages with Bob's public key. When Alice sends the message, Cathy intercepts it and compares the ciphertext with the two she computed. From this, she knows which message Alice sent.

Simmons' point is that if the plaintext corresponding to intercepted ciphertext is drawn from a (relatively) small set of possible plaintexts, the cryptanalyst can encipher the set of possible plaintexts and simply search that set for the intercepted ciphertext. Simmons demonstrates that the size of the set of possible plaintexts may not be obvious. As an example, he uses digitized sound. The initial calculations suggest that the number of possible plaintexts for each block is 2^32. Using forward search on such a set is clearly impractical, but after some analysis of the redundancy in human speech, Simmons reduces the number of potential plaintexts to about 100,000. This number is small enough so that forward searches become a threat.

This attack is similar to attacks to derive the cryptographic key of symmetric ciphers based on chosen plaintext (see, for example, Hellman's time-memory tradeoff attack [465]). However, Simmons' attack is for public key cryptosystems and does not reveal the private key. It only reveals the plaintext message.

11.1.2 Misordered Blocks

Denning [269] points out that in certain cases, parts of a ciphertext message can be deleted, replayed, or reordered.

11.1.3 Statistical Regularities

The independence of parts of ciphertext can give information relating to the structure of the enciphered message, even if the message itself is unintelligible. The regularity arises because each part is enciphered separately, so the same plaintext always produces the same ciphertext. This type of encipherment is called code book mode, because each part is effectively looked up in a list of plaintext-ciphertext pairs.

11.1.4 Summary

Despite the use of sophisticated cryptosystems and random keys, cipher systems may provide inadequate security if not used carefully. The protocols directing how these cipher systems are used, and the ancillary information that the protocols add to messages and sessions, overcome these problems. This emphasizes that ciphers and codes are not enough. The methods, or protocols, for their use also affect the security of systems.

11.2 Stream and Block Ciphers

Some ciphers divide a message into a sequence of parts, or blocks, and encipher each block with the same key.

Definition 11-1. Let E be an encipherment algorithm, and let E_k(b) be the encipherment of message b with key k. Let a message m = b_1 b_2 ..., where each b_i is of a fixed length. Then a block cipher is a cipher for which E_k(m) = E_k(b_1) E_k(b_2) ....

Other ciphers use a nonrepeating stream of key elements to encipher characters of a message.

Definition 11-2. Let E be an encipherment algorithm, and let E_k(b) be the encipherment of message b with key k.
Let a message m = b_1 b_2 ..., where each b_i is of a fixed length, and let k = k_1 k_2 .... Then a stream cipher is a cipher for which E_k(m) = E_k1(b_1) E_k2(b_2) ....

If the key stream k of a stream cipher repeats itself, it is a periodic cipher.

11.2.1 Stream Ciphers

The one-time pad is a cipher that can be proven secure (see Section 9.2.2.2, "One-Time Pad"). Bit-oriented ciphers implement the one-time pad by exclusive-oring each bit of the key with one bit of the message. For example, if the message is 00101 and the key is 10010, the ciphertext is 0⊕1 || 0⊕0 || 1⊕0 || 0⊕1 || 1⊕0 = 10111. But how can one generate a random, infinitely long key?

11.2.1.1 Synchronous Stream Ciphers

To simulate a random, infinitely long key, synchronous stream ciphers generate bits from a source other than the message itself. The simplest such cipher extracts bits from a register to use as the key. The contents of the register change on the basis of the current contents of the register.

Definition 11-3. An n-stage linear feedback shift register (LFSR) consists of an n-bit register r = r_0 ... r_{n-1} and an n-bit tap sequence t = t_0 ... t_{n-1}. To obtain a key bit, r_{n-1} is used, the register is shifted one bit to the right, and the new bit r_0 t_0 ⊕ ... ⊕ r_{n-1} t_{n-1} is inserted. (A small illustrative code sketch of this register appears at the end of this excerpt.)

The LFSR method is an attempt to simulate a one-time pad by generating a long key sequence from a little information. As with any such attempt, if the key is shorter than the message, breaking part of the ciphertext gives the cryptanalyst information about other parts of the ciphertext. For an LFSR, a known plaintext attack can reveal parts of the key sequence. If the known plaintext is of length 2n, the tap sequence for an n-stage LFSR can be determined completely.

Nonlinear feedback shift registers do not use tap sequences; instead, the new bit is any function of the current register bits.

Definition 11-4. An n-stage nonlinear feedback shift register (NLFSR) consists of an n-bit register r = r_0 ... r_{n-1}. Whenever a key bit is required, r_{n-1} is used, the register is shifted one bit to the right, and the new bit is set to f(r_0 ... r_{n-1}), where f is any function of n inputs.

NLFSRs are not common because there is no body of theory about how to build NLFSRs with long periods. By contrast, it is known how to design n-stage LFSRs with a period of 2^n - 1, and that period is maximal.

A second technique for eliminating linearity is called output feedback mode. Let E be an encipherment function. Define k as a cryptographic key, and define r as a register. To obtain a bit for the key, compute E_k(r) and put that value into the register. The rightmost bit of the result is exclusive-or'ed with one bit of the message. The process is repeated until the message is enciphered. The key k and the initial value in r are the keys for this method. This method differs from the NLFSR in that the register is never shifted. It is repeatedly enciphered.

A variant of output feedback mode is called the counter method. Instead of using a register r, simply use a counter that is incremented for every encipherment. The initial value of the counter replaces r as part of the key. This method enables one to generate the ith bit of the key without generating the bits 0 ... i - 1. If the initial counter value is i_0, set the register to i + i_0. In output feedback mode, one must generate all the preceding key bits.

11.2.1.2 Self-Synchronous Stream Ciphers

Self-synchronous ciphers obtain the key from the message itself.
The simplest self-synchronous cipher is called an autokey cipher and uses the message itself for the key.

The problem with this cipher is the selection of the key. Unlike a one-time pad, any statistical regularities in the plaintext show up in the key. For example, the last two letters of the ciphertext associated with the plaintext word THE are always AL, because H is enciphered with the key letter T and E is enciphered with the key letter H. Furthermore, if the analyst can guess any letter of the plaintext, she can determine all successive plaintext letters.

An alternative is to use the ciphertext as the key stream. A good cipher will produce pseudorandom ciphertext, which approximates a random one-time pad better than a message with nonrandom characteristics (such as a meaningful English sentence).

This type of autokey cipher is weak, because plaintext can be deduced from the ciphertext. For example, consider the first two characters of the ciphertext, QX. The X is the ciphertext resulting from enciphering some letter with the key Q. Deciphering, the unknown letter is H. Continuing in this fashion, the analyst can reconstruct all of the plaintext except for the first letter.

A variant of the autokey method, cipher feedback mode, uses a shift register. Let E be an encipherment function. Define k as a cryptographic key and r as a register. To obtain a bit for the key, compute E_k(r). The rightmost bit of the result is exclusive-or'ed with one bit of the message, and the other bits of the result are discarded. The resulting ciphertext is fed back into the leftmost bit of the register, which is right shifted one bit. (See Figure 11-1.)

Figure 11-1. Diagram of cipher feedback mode. The register r is enciphered with key k and algorithm E. The rightmost bit of the result is exclusive-or'ed with one bit of the plaintext m_i to produce the ciphertext bit c_i. The register r is right shifted one bit, and c_i is fed back into the leftmost bit of r.

Cipher feedback mode has a self-healing property. If a bit is corrupted in transmission of the ciphertext, the next n bits will be deciphered incorrectly. But after n uncorrupted bits have been received, the shift register will be reinitialized to the value used for encipherment and the ciphertext will decipher properly from that point on.

As in the counter method, one can decipher parts of messages enciphered in cipher feedback mode without deciphering the entire message. Let the shift register contain n bits. The analyst obtains the previous n bits of ciphertext. This is the value in the shift register before the bit under consideration was enciphered. The decipherment can then continue from that bit on.

11.2.2 Block Ciphers

Block ciphers encipher and decipher multiple bits at once, rather than one bit at a time. For this reason, software implementations of block ciphers run faster than software implementations of stream ciphers. Errors in transmitting one block generally do not affect other blocks, but as each block is enciphered independently, using the same key, identical plaintext blocks produce identical ciphertext blocks. This allows the analyst to search for data by determining what the encipherment of a specific plaintext block is. For example, if the word INCOME is enciphered as one block, all occurrences of the word produce the same ciphertext.

To prevent this type of attack, some information related to the block's position is inserted into the plaintext block before it is enciphered.
The information can be bits from the preceding ciphertext block [343] or a sequence number [561]. The disadvantage is that the effective block size is reduced, because fewer message bits are present in a block.

Cipher block chaining does not require the extra information to occupy bit spaces, so every bit in the block is part of the message. Before a plaintext block is enciphered, that block is exclusive-or'ed with the preceding ciphertext block. In addition to the key, this technique requires an initialization vector with which to exclusive-or the initial plaintext block. Taking E_k to be the encipherment algorithm with key k, and I to be the initialization vector, the cipher block chaining technique is

c_0 = E_k(m_0 ⊕ I)
c_i = E_k(m_i ⊕ c_{i-1}) for i > 0

(A small illustrative code sketch of this chaining recurrence appears at the end of this excerpt.)

11.2.2.1 Multiple Encryption

Other approaches involve multiple encryption. Using two keys k and k' to encipher a message as c = E_k'(E_k(m)) looks attractive because it has an effective key length of 2n, whereas the keys to E are of length n. However, Merkle and Hellman [700] have shown that this encryption technique can be broken using 2^(n+1) encryptions, rather than the expected 2^(2n) (see Exercise 3).

Using three encipherments improves the strength of the cipher. There are several ways to do this. Tuchman [1006] suggested using two keys k and k':

c = E_k(D_k'(E_k(m)))

This mode, called Encrypt-Decrypt-Encrypt (EDE) mode, collapses to a single encipherment when k = k'. The DES in EDE mode is widely used in the financial community and is a standard (ANSI X9.17 and ISO 8732). It is not vulnerable to the attack outlined earlier. However, it is vulnerable to a chosen plaintext and a known plaintext attack. If b is the block size in bits, and n is the key length, the chosen plaintext attack takes O(2^n) time, O(2^n) space, and requires 2^n chosen plaintexts. The known plaintext attack requires p known plaintexts, and takes O(2^(n+b)/p) time and O(p) memory.

A second version of triple encipherment is the triple encryption mode [700]. In this mode, three keys are used in a chain of encipherments.

c = E_k(E_k'(E_k''(m)))

The best attack against this scheme is similar to the attack on double encipherment, but requires O(2^(2n)) time and O(2^n) memory. If the key length is 56 bits, this attack is computationally infeasible.

11.3 Networks and Cryptography

Before we discuss Internet protocols, a review of the relevant properties of networks is in order. The ISO/OSI model [990] provides an abstract representation of networks suitable for our purposes. Recall that the ISO/OSI model is composed of a series of layers (see Figure 11-2). Each host, conceptually, has a principal at each layer that communicates with a peer on other hosts. These principals communicate with principals at the same layer on other hosts. Layer 1, 2, and 3 principals interact only with similar principals at neighboring (directly connected) hosts. Principals at layers 4, 5, 6, and 7 interact only with similar principals at the other end of the communication. (For convenience, "host" refers to the appropriate principal in the following discussion.)

Figure 11-2. The ISO/OSI model. The dashed arrows indicate peer-to-peer communication. For example, the transport layers are communicating with each other. The solid arrows indicate the actual flow of bits. For example, the transport layer invokes network layer routines on the local host, which invoke data link layer routines, which put the bits onto the network. The physical layer passes the bits to the next "hop," or host, on the path.
When the message reaches the destination, it is passed up to the appropriate level.

Each host in the network is connected to some set of other hosts. They exchange messages with those hosts. If host nob wants to send a message to host windsor, nob determines which of its immediate neighbors is closest to windsor (using an appropriate routing protocol) and forwards the message to it. That host, baton, determines which of its neighbors is closest to windsor and forwards the message to it. This process continues until a host, sunapee, receives the message and determines that windsor is an immediate neighbor. The message is forwarded to windsor, its endpoint.

Definition 11-5. Let hosts C_0, ..., C_n be such that C_i and C_{i+1} are directly connected, for 0 ≤ i < n. A communications protocol that has C_0 and C_n as its endpoints is called an end-to-end protocol. A communications protocol that has C_j and C_{j+1} as its endpoints is called a link protocol.

The difference between an end-to-end protocol and a link protocol is that the intermediate hosts play no part in an end-to-end protocol other than forwarding messages. On the other hand, a link protocol describes how each pair of intermediate hosts processes each message.

The protocols involved can be cryptographic protocols. If the cryptographic processing is done only at the source and at the destination, the protocol is an end-to-end protocol. If cryptographic processing occurs at each host along the path from source to destination, the protocol is a link protocol. When encryption is used with either protocol, we use the terms end-to-end encryption and link encryption, respectively.

In link encryption, each host shares a cryptographic key with its neighbor. (If public key cryptography is used, each host has its neighbor's public key. Link encryption based on public keys is rare.) The keys may be set on a per-host basis or a per-host-pair basis. Consider a network with four hosts called windsor, stripe, facer, and seaview. Each host is directly connected to the other three. With keys distributed on a per-host basis, each host has its own key, making four keys in all. Each host has the keys for the other three neighbors, as well as its own. All hosts use the same key to communicate with windsor. With keys distributed on a per-host-pair basis, each host has one key per possible connection, making six keys in all. Unlike the per-host situation, in the per-host-pair case, each host uses a different key to communicate with windsor. The message is deciphered at each intermediate host, reenciphered for the next hop, and forwarded. Attackers monitoring the network medium will not be able to read the messages, but attackers at the intermediate hosts will be able to do so.

In end-to-end encryption, each host shares a cryptographic key with each destination. (Again, if the encryption is based on public key cryptography, each host has, or can obtain, the public key of each destination.) As with link encryption, the keys may be selected on a per-host or per-host-pair basis. The sending host enciphers the message and forwards it to the first intermediate host. The intermediate host forwards it to the next host, and the process continues until the message reaches its destination. The destination host then deciphers it. The message is enciphered throughout its journey. Neither attackers monitoring the network nor attackers on the intermediate hosts can read the message.
However, attackers can read the routing information used to forward the message.

These differences affect a form of cryptanalysis known as traffic analysis. A cryptanalyst can sometimes deduce information not from the content of the message but from the sender and recipient. For example, during the Allied invasion of Normandy in World War II, the Germans deduced which vessels were the command ships by observing which ships were sending and receiving the most signals. The content of the signals was not relevant; their source and destination were. Similar deductions can reveal information in the electronic world.

第十一章 密码技术

11.1 问题

在没有考虑加密所要运行的环境时,加密的使用可能不能提供用户所期待的安全。
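To make Definition 11-3 above concrete, here is a minimal Python sketch of an n-stage LFSR keystream generator written directly from that definition. The register fill and tap sequence in the example are arbitrary illustrative values, not parameters taken from the chapter.

```python
def lfsr_keystream(register, taps, nbits):
    """Generate nbits key bits from an n-stage LFSR (per Definition 11-3).

    register, taps: lists of 0/1 bits of equal length n (example values below are arbitrary).
    Each step outputs r_{n-1}, shifts the register one bit to the right, and inserts
    the feedback bit r_0*t_0 XOR ... XOR r_{n-1}*t_{n-1} at the left.
    """
    r = list(register)
    out = []
    for _ in range(nbits):
        out.append(r[-1])                  # key bit is r_{n-1}
        feedback = 0
        for ri, ti in zip(r, taps):
            feedback ^= ri & ti            # r_i * t_i, summed modulo 2
        r = [feedback] + r[:-1]            # shift right, insert the new bit at the left
    return out

# Example: a 4-stage LFSR with an arbitrary initial fill and tap sequence
print(lfsr_keystream([1, 0, 0, 1], [1, 0, 0, 1], 10))
```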
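Similarly, the cipher block chaining recurrence c_0 = E_k(m_0 ⊕ I), c_i = E_k(m_i ⊕ c_{i-1}) can be sketched as follows. To keep the sketch self-contained, E_k is replaced here by a keyed-XOR placeholder; that stand-in is not a real block cipher such as DES or AES, and the key, initialization vector, and message values are made up for illustration.

```python
BLOCK = 8  # block size in bytes, arbitrary for this sketch

def toy_encipher(block, key):
    """Placeholder for E_k: keyed XOR only. NOT a real block cipher."""
    return bytes(b ^ k for b, k in zip(block, key))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encipher(message, key, iv):
    """c_0 = E_k(m_0 XOR I); c_i = E_k(m_i XOR c_{i-1}) for i > 0."""
    assert len(message) % BLOCK == 0 and len(key) == BLOCK and len(iv) == BLOCK
    blocks = [message[i:i + BLOCK] for i in range(0, len(message), BLOCK)]
    out, prev = [], iv
    for m in blocks:
        c = toy_encipher(xor(m, prev), key)
        out.append(c)
        prev = c
    return b"".join(out)

msg = b"INCOME..INCOME.."            # two identical plaintext blocks...
ct = cbc_encipher(msg, b"K" * BLOCK, b"I" * BLOCK)
print(ct[:BLOCK] != ct[BLOCK:])      # ...no longer produce identical ciphertext blocks
```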


xxxxxxxxx毕业设计(论文)外文文献翻译(本科学生用)
题 目:Product Line Engineering: The State of the Practice(产品线工程:实践现状)
学生姓名:
学 号:
学部(系):
专业年级:
指导教师:
职称或学位:
2011年3月10日

外文文献翻译(译成中文1000字左右):
【主要阅读文献不少于5篇,译文后附注文献信息,包括:作者、书名(或论文题目)、出版社(或刊物名称)、出版时间(或刊号)、页码。

提供所译外文资料附件(印刷类含封面、封底、目录、翻译部分的复印件等,网站类的请附网址及原文)】

Requirements engineering practices

A precise requirements engineering process, a main driver for successful software development, is even more important for product line engineering. Usually, the product line's scope addresses various domains simultaneously. This makes requirements engineering more complex. Furthermore, SPL development involves more tasks than single-product development. Many product line requirements are complex, interlinked, and divided into common and product-specific requirements. So, several requirements engineering practices are important specifically in SPL development:

- Domain identification and modeling, as well as commonalities and variations across product instances
- Separate specification and verification for platform and product requirements
- Management of integrating future requirements into the platform and products
- Identification, modeling, and management of requirement dependencies

The first two practices are specific to SPL engineering. The latter two are common to software development but have much higher importance for SPLs. Issues with performing these additional activities can severely affect the product line's long-term success. During the investigation, we found that most organizations today apply organizational and procedural measures to master these challenges. The applicability of more formal requirements engineering techniques and tools appeared rather limited, partly because such techniques are not yet designed to cope with product line development's inherent complexities. The investigation determined that the following three SPL requirements engineering practices were most important to SPL success.

Domain analysis and domain description. Before starting SPL development, organizations should perform a thorough domain analysis. A well-understood domain is a prerequisite for defining a suitable scope for the product line. It's the foundation for efficiently identifying and distinguishing platform and product requirements. Among the five participants in our investigation, three explicitly modeled the product line requirements. The others used experienced architects and domain experts to develop the SPL core assets without extensive requirements elicitation. Two organizations from the first group established a continuous requirements management that maintained links between product line and product instance requirements. The three other organizations managed their core assets' evolution using change management procedures and versioning concepts. Their business did not force them to maintain more detailed links between the requirements on core assets and product instances.

The impact of architectural decisions on requirements negotiations. A stable but flexible architecture is important for SPL development. However, focusing SPL evolution too much on architectural issues will lead to shallow or even incorrect specifications. It can cause core assets to ignore important SPL requirements so that the core assets lose relevance for SPL development. Organizations can avoid this problem by establishing clear responsibilities for requirements management in addition to architectural roles.

The work group participants reported that a suitable organizational tool for balancing requirements and architecture is roundtable meetings in which requirements engineers, lead architects, and marketing and sales personnel discuss SPL implementation.
Also, integrating the architects into customer negotiations will solve many problems that can arise from conflicting requirements. Another measure is to effectively document requirements and architectural vision so that product marketing and SPL architects can understand each other and agree on implementation.

Effective tool support. We often discussed tool support for SPL requirements engineering during the investigation. Because requirements engineering for SPL can become highly complex, effective tool support is important. Existing tools don't satisfactorily support aspects such as variability management, version management for requirements collections, management of different views on requirements, or dependency modeling and evolution. So, an SPL organization must design custom solutions for these issues. Specifically, the two participants in the investigation that had established continuous requirements management had to maintain expensive customization and support infrastructures for their tool environment. The other organizations tried to avoid these costs by mitigating insufficient tool support through organizational measures such as strict staging of the requirements specification.

需求工程实践:精确的需求工程过程是成功软件开发的主要驱动力,对产品线工程而言更为重要。
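As an illustration of the variability and dependency management needs described above, the sketch below shows one possible, simplified way to keep platform (common) and product-specific requirements linked with explicit dependency edges and to check for missing links. The data model and field names are assumptions for illustration, not a description of any surveyed organization's tooling.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    rid: str
    text: str
    scope: str                                   # "platform" (common) or a product name
    depends_on: List[str] = field(default_factory=list)

def unresolved_dependencies(reqs):
    """Return (requirement id, missing dependency id) pairs - the kind of simple
    consistency check that dependency-management tooling would automate."""
    known = {r.rid for r in reqs}
    return [(r.rid, d) for r in reqs for d in r.depends_on if d not in known]

reqs = [
    Requirement("R1", "Platform shall log all sensor events", "platform"),
    Requirement("R2", "Product A exports logs as CSV", "ProductA", depends_on=["R1"]),
    Requirement("R3", "Product B streams logs to the cloud", "ProductB", depends_on=["R1", "R9"]),
]
print(unresolved_dependencies(reqs))             # [('R3', 'R9')]
```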
