毕设外文资料翻译.

合集下载

毕业设计外文文献翻译

毕业设计外文文献翻译

毕业设计(论文)外文资料翻译系别:专业:班级:姓名:学号:外文出处:附件: 1. 原文; 2. 译文2013年03月附件一:A Rapidly Deployable Manipulator SystemChristiaan J.J. Paredis, H. Benjamin Brown, Pradeep K. KhoslaAbstract:A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly create a manipulator which is custom-tailored for a given task. This article describes two main aspects of such a system, namely, the Reconfigurable Modular Manipulator System (RMMS)hardware and the corresponding control software.1 IntroductionRobot manipulators can be easily reprogrammed to perform different tasks, yet the range of tasks that can be performed by a manipulator is limited by mechanicalstructure.Forexample, a manipulator well-suited for precise movement across the top of a table would probably no be capable of lifting heavy objects in the vertical direction. Therefore, to perform a given task,one needs to choose a manipulator with an appropriate mechanical structure.We propose the concept of a rapidly deployable manipulator system to address the above mentioned shortcomings of fixed configuration manipulators. As is illustrated in Figure 1, a rapidly deployable manipulator system consists of software and hardware that allow the user to rapidly build and program a manipulator which is customtailored for a given task.The central building block of a rapidly deployable system is a Reconfigurable Modular Manipulator System (RMMS). The RMMS utilizes a stock of interchangeable link and joint modules of various sizes and performance specifications. One such module is shown in Figure 2. By combining these general purpose modules, a wide range of special purpose manipulators can be assembled. Recently, there has been considerable interest in the idea of modular manipulators [2, 4, 5, 7, 9, 10, 14], for research applications as well as for industrial applications. However, most of these systems lack the property of reconfigurability, which is key to the concept of rapidly deployable systems. The RMMS is particularly easy toreconfigure thanks to its integrated quick-coupling connectors described in Section 3.Effective use of the RMMS requires, Task Based Design software. This software takes as input descriptions of the task and of the available manipulator modules; it generates as output a modular assembly configuration optimally suited to perform the given task. Several different approaches have been used successfully to solve simpli-fied instances of this complicated problem.A third important building block of a rapidly deployable manipulator system is a framework for the generation of control software. To reduce the complexity of softwaregeneration for real-time sensor-based control systems, a software paradigm called software assembly has been proposed in the Advanced Manipulators Laboratory at CMU.This paradigm combines the concept of reusable and reconfigurable software components, as is supported by the Chimera real-time operating system [15], with a graphical user interface and a visual programming language, implemented in OnikaA lthough the software assembly paradigm provides thesoftware infrastructure for rapidly programming manipulator systems, it does not solve the programming problem itself. Explicit programming of sensor-based manipulator systems is cumbersome due to the extensive amount of detail which must be specified for the robot to perform the task. The software synthesis problem for sensor-based robots can be simplified dramatically, by providing robust robotic skills, that is, encapsulated strategies for accomplishing common tasks in the robots task domain [11]. Such robotic skills can then be used at the task level planning stage without having to consider any of the low-level detailsAs an example of the use of a rapidly deployable system,consider a manipulator in a nuclear environment where it must inspect material and space for radioactive contamination, or assemble and repair equipment. In such an environment, widely varied kinematic (e.g., workspace) and dynamic (e.g., speed, payload) performance is required, and these requirements may not be known a priori. Instead of preparing a large set of different manipulators to accomplish these tasks—an expensive solution—one can use a rapidly deployable manipulator system. Consider the following scenario: as soon as a specific task is identified, the task based design software determinesthe task. This optimal configuration is thenassembled from the RMMS modules by a human or, in the future, possibly by anothermanipulator. The resulting manipulator is rapidly programmed by using the software assembly paradigm and our library of robotic skills. Finally,the manipulator is deployed to perform its task.Although such a scenario is still futuristic, the development of the reconfigurable modular manipulator system, described in this paper, is a major step forward towards our goal of a rapidly deployable manipulator system.Our approach could form the basis for the next generation of autonomous manipulators, in which the traditional notion of sensor-based autonomy is extended to configuration-based autonomy. Indeed, although a deployed system can have all the sensory and planning information it needs, it may still not be able to accomplish its task because the task is beyond the system’s physical capabilities. A rapidly deployable system, on the other hand, could adapt its physical capabilities based on task specifications and, with advanced sensing, control, and planning strategies, accomplish the task autonomously.2 Design of self-contained hardware modulesIn most industrial manipulators, the controller is a separate unit housing the sensor interfaces, power amplifiers, and control processors for all the joints of the manipulator.A large number of wires is necessary to connect this control unit with the sensors, actuators and brakes located in each of the joints of the manipulator. The large number of electrical connections and the non-extensible nature of such a system layout make it infeasible for modular manipulators. The solution we propose is to distribute the control hardware to each individual module of the manipulator. These modules then become self-contained units which include sensors, an actuator, a brake, a transmission, a sensor interface, a motor amplifier, and a communication interface, as is illustrated in Figure 3. As a result, only six wires are requiredfor power distribution and data communication.2.1 Mechanical designThe goal of the RMMS project is to have a wide variety of hardware modules available. So far, we have built four kinds of modules: the manipulator base, a link module, three pivot joint modules (one of which is shown in Figure 2), and one rotate joint module. The base module and the link module have no degrees-of-freedom; the joint modules have onedegree-of-freedom each. The mechanical design of the joint modules compactly fits aDC-motor, a fail-safe brake, a tachometer, a harmonic drive and a resolver.The pivot and rotate joint modules use different outside housings to provide the right-angle or in-line configuration respectively, but are identical internally. Figure 4 shows in cross-section the internal structure of a pivot joint. Each joint module includes a DC torque motor and 100:1 harmonic-drive speed reducer, and is rated at a maximum speed of 1.5rad/s and maximum torque of 270Nm. Each module has a mass of approximately 10.7kg. A single, compact, X-type bearing connects the two joint halves and provides the needed overturning rigidity. A hollow motor shaft passes through all the rotary components, and provides achannel for passage of cabling with minimal flexing.2.2 Electronic designThe custom-designed on-board electronics are also designed according to the principle of modularity. Each RMMS module contains a motherboard which provides the basic functionality and onto which daughtercards can be stacked to add module specific functionality.The motherboard consists of a Siemens 80C166 microcontroller, 64K of ROM, 64K of RAM, an SMC COM20020 universal local area network controller with an RS-485 driver, and an RS-232 driver. The function of the motherboard is to establish communication with the host interface via an RS-485 bus and to perform the lowlevel control of the module, as is explained in more detail in Section 4. The RS-232 serial bus driver allows for simple diagnostics and software prototyping.A stacking connector permits the addition of an indefinite number of daughtercards with various functions, such as sensor interfaces, motor controllers, RAM expansion etc. In our current implementation, only modules with actuators include a daughtercard. This card contains a 16 bit resolver to digital converter, a 12 bit A/D converter to interface with the tachometer, and a 12 bit D/A converter to control the motor amplifier; we have used an ofthe-shelf motor amplifier (Galil Motion Control model SSA-8/80) to drive the DC-motor. For modules with more than one degree-of-freedom, for instance a wrist module, more than one such daughtercard can be stacked onto the same motherboard.3 Integrated quick-coupling connectorsTo make a modular manipulator be reconfigurable, it is necessary that the modules can be easily connected with each other. We have developed a quick-coupling mechanism with which a secure mechanical connection between modules can be achieved by simply turning a ring handtight; no tools are required. As shown in Figure 5, keyed flanges provide precise registration of the two modules. Turning of the locking collar on the male end produces two distinct motions: first the fingers of the locking ring rotate (with the collar) about 22.5 degrees and capture the fingers on the flanges; second, the collar rotates relative to the locking ring, while a cam mechanism forces the fingers inward to securely grip the mating flanges. A ball- transfer mechanism between the collar and locking ring automatically produces this sequence of motions.At the same time the mechanical connection is made,pneumatic and electronic connections are also established. Inside the locking ring is a modular connector that has 30 male electrical pins plus a pneumatic coupler in the middle. These correspond to matching female components on the mating connector. Sets of pins are wired in parallel to carry the 72V-25A power for motors and brakes, and 48V–6A power for the electronics. Additional pins carry signals for two RS-485 serial communication busses and four video busses. A plastic guide collar plus six alignment pins prevent damage to the connector pins and assure proper alignment. The plastic block holding the female pins can rotate in the housing to accommodate the eight different possible connection orientations (8@45 degrees). The relative orientation is automatically registered by means of an infrared LED in the female connector and eight photodetectors in the male connector.4 ARMbus communication systemEach of the modules of the RMMS communicates with a VME-based host interface over a local area network called the ARMbus; each module is a node of the network. The communication is done in a serial fashion over an RS-485 bus which runs through the length of the manipulator. We use the ARCNET protocol [1] implemented on a dedicated IC (SMC COM20020). ARCNET is a deterministic token-passing network scheme which avoids network collisions and guarantees each node its time to access the network. Blocks ofinformation called packets may be sent from any node on the network to any one of the other nodes, or to all nodes simultaneously (broadcast). Each node may send one packet each time it gets the token. The maximum network throughput is 5Mb/s.The first node of the network resides on the host interface card, as is depicted in Figure 6. In addition to a VME address decoder, this card contains essentially the same hardware one can find on a module motherboard. The communication between the VME side of the card and the ARCNET side occurs through dual-port RAM.There are two kinds of data passed over the local area network. During the manipulator initialization phase, the modules connect to the network one by one, starting at the base and ending at the end-effector. On joining the network, each module sends a data-packet to the host interface containing its serial number and its relative orientation with respect to the previous module. This information allows us to automatically determine the current manipulator configuration.During the operation phase, the host interface communicates with each of the nodes at 400Hz. The data that is exchanged depends on the control mode—centralized or distributed. In centralized control mode, the torques for all the joints are computed on the VME-based real-time processing unit (RTPU), assembled into a data-packet by the microcontroller on the host interface card and broadcast over the ARMbus to all the nodes of the network. Each node extracts its torque value from the packet and replies by sending a data-packet containing the resolver and tachometer readings. In distributed control mode, on the other hand, the host computer broadcasts the desired joint values and feed-forward torques. Locally, in each module, the control loop can then be closed at a frequency much higher than 400Hz. The modules still send sensor readings back to the host interface to be used in the computation of the subsequent feed-forward torque.5 Modular and reconfigurable control softwareThe control software for the RMMS has been developed using the Chimera real-time operating system, which supports reconfigurable and reusable software components [15]. The software components used to control the RMMS are listed in Table 1. The trjjline, dls, and grav_comp components require the knowledge of certain configuration dependent parametersof the RMMS, such as the number of degrees-of-freedom, the Denavit-Hartenberg parameters etc. During the initialization phase, the RMMS interface establishes contact with each of the hardware modules to determine automatically which modules are being used and in which order and orientation they have been assembled. For each module, a data file with a parametric model is read. By combining this information for all the modules, kinematic and dynamic models of the entire manipulator are built.After the initialization, the rmms software component operates in a distributed control mode in which the microcontrollers of each of the RMMS modules perform PID control locally at 1900Hz. The communication between the modules and the host interface is at 400Hz, which can differ from the cycle frequency of the rmms software component. Since we use a triple buffer mechanism [16] for the communication through the dual-port RAM on the ARMbus host interface, no synchronization or handshaking is necessary.Because closed form inverse kinematics do not exist for all possible RMMS configurations, we use a damped least-squares kinematic controller to do the inverse kinematics computation numerically..6 Seamless integration of simulationTo assist the user in evaluating whether an RMMS con- figuration can successfully complete a given task, we have built a simulator. The simulator is based on the TeleGrip robot simulation software from Deneb Inc., and runs on an SGI Crimson which is connected with the real-time processing unit through a Bit3 VME-to-VME adaptor, as is shown in Figure 6.A graphical user interface allows the user to assemble simulated RMMS configurations very much like assembling the real hardware. Completed configurations can be tested and programmed using the TeleGrip functions for robot devices. The configurations can also be interfaced with the Chimera real-time softwarerunning on the same RTPUs used to control the actual hardware. As a result, it is possible to evaluate not only the movements of the manipulator but also the realtime CPU usage and load balancing. Figure 7 shows an RMMS simulation compared with the actual task execution.7 SummaryWe have developed a Reconfigurable Modular Manipulator System which currently consists of six hardware modules, with a total of four degrees-of-freedom. These modules can be assembled in a large number of different configurations to tailor the kinematic and dynamic properties of the manipulator to the task at hand. The control software for the RMMS automatically adapts to the assembly configuration by building kinematic and dynamic models of the manipulator; this is totally transparent to the user. To assist the user in evaluating whether a manipulator configuration is well suited for a given task, we have also built a simulator.AcknowledgmentThis research was funded in part by DOE under grant DE-F902-89ER14042, by Sandia National Laboratories under contract AL-3020, by the Department of Electrical and Computer Engineering, and by The Robotics Institute, Carnegie Mellon University.The authors would also like to thank Randy Casciola, Mark DeLouis, Eric Hoffman, and Jim Moody for their valuable contributions to the design of the RMMS system.附件二:可迅速布置的机械手系统作者:Christiaan J.J. Paredis, H. Benjamin Brown, Pradeep K. Khosla摘要:一个迅速可部署的机械手系统,可以使再组合的标准化的硬件的灵活性用标准化的编程工具结合,允许用户迅速建立为一项规定的任务来通常地控制机械手。

毕设外文原文及译文

毕设外文原文及译文

北京联合大学毕业设计(论文)任务书题目:OFDM调制解调技术的设计与仿真实现专业:通信工程指导教师:张雪芬学院:信息学院学号:2011080331132班级:1101B姓名:徐嘉明一、外文原文Evolution Towards 5G Multi-tier Cellular WirelessNetworks:An Interference ManagementPerspectiveEkram Hossain, Mehdi Rasti, Hina Tabassum, and Amr AbdelnasserAbstract—The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, e.g., higher data rates, excellent end-to-end performance and user-coverage in hot-spots and crowded areas with lower latency, energy consumption and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g., power control, cell association) in these networks with shared spectrum access (i.e., when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multitier networks where users in different tiers have different priorities for channel access. In this context, a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.Index Terms—5G cellular wireless, multi-tier networks, interference management, cell association, power control.I. INTRODUCTIONTo satisfy the ever-increasing demand for mobile broadband communications, the IMT-Advanced (IMT-A) standards have been ratified by the International Telecommunications Union (ITU) in November 2010 and the fourth generation (4G) wireless communication systems are currently being deployed worldwide. The standardization for LTE Rel-12, also known as LTE-B, is also ongoing and expected to be finalized in 2014. Nonetheless, existing wireless systems will not be able to deal with the thousand-fold increase in total mobile broadband data [1] contributed by new applications and services such as pervasive 3D multimedia, HDTV, VoIP, gaming, e-Health, and Car2x communication. In this context, the fifth generation (5G) wireless communication technologies are expected to attain 1000 times higher mobile data volume per unit area,10-100 times higher number of connecting devices and user data rate, 10 times longer battery life and 5 times reduced latency [2]. While for 4G networks the single-user average data rate is expected to be 1 Gbps, it is postulated that cell data rate of theorder of 10 Gbps will be a key attribute of 5G networks.5G wireless networks are expected to be a mixture of network tiers of different sizes, transmit powers, backhaul connections, different radio access technologies (RATs) that are accessed by an unprecedented numbers of smart and heterogeneous wireless devices. This architectural enhancement along with the advanced physical communications technology such as high-order spatial multiplexing multiple-input multiple-output (MIMO) communications will provide higher aggregate capacity for more simultaneous users, or higher level spectral efficiency, when compared to the 4G networks. Radio resource and interference management will be a key research challenge in multi-tier and heterogeneous 5G cellular networks. The traditional methods for radio resource and interference management (e.g., channel allocation, power control, cell association or load balancing) in single-tier networks (even some of those developed for two-tier networks) may not be efficient in this environment and a new look into the interference management problem will be required.First, the article outlines the visions and requirements of 5G cellular wireless systems. Major research challenges are then highlighted from the perspective of interference management when the different network tiers share the same radio spectrum. A comparative analysis of the existing approaches for distributed cell association and power control (CAPC) is then provided followed by a discussion on their limitations for5G multi-tier cellular networks. Finally, a number of suggestions are provided to modifythe existing CAPC schemes to overcome these limitations.II. VISIONS AND REQUIREMENTS FOR 5G MULTI-TIERCELLULAR NETWORKS5G mobile and wireless communication systems will require a mix of new system concepts to boost the spectral and energy efficiency. The visions and requirements for 5G wireless systems are outlined below.·Data rate and latency: For dense urban areas, 5G networks are envisioned to enable an experienced data rate of 300 Mbps and 60 Mbps in downlink and uplink, respectively, in 95% of locations and time [2]. The end-to- end latencies are expected to be in the order of 2 to 5 milliseconds. The detailed requirements for different scenarios are listed in [2].·Machine-type Communication (MTC) devices: The number of traditional human-centric wireless devices with Internet connectivity (e.g., smart phones, super-phones, tablets) may be outnumbered by MTC devices which can be used in vehicles, home appliances, surveillance devices, and sensors.·Millimeter-wave communication: To satisfy the exponential increase in traffic and the addition of different devices and services, additional spectrum beyond what was previously allocated to 4G standard is sought for. The use of millimeter-wave frequency bands (e.g., 28 GHz and 38 GHz bands) is a potential candidate to overcome the problem of scarce spectrum resources since it allows transmission at wider bandwidths than conventional 20 MHz channels for 4G systems.·Multiple RATs: 5G is not about replacing the existing technologies, but it is about enhancing and supporting them with new technologies [1]. In 5G systems, the existing RATs, including GSM (Global System for Mobile Communications), HSPA+ (Evolved High-Speed Packet Access), and LTE, will continue to evolve to provide a superior system performance. They will also be accompanied by some new technologies (e.g., beyondLTE-Advanced).·Base station (BS) densification: BS densification is an effective methodology to meet the requirements of 5G wireless networks. Specifically, in 5G networks, there will be deployments of a large number of low power nodes, relays, and device-to-device (D2D) communication links with much higher density than today’s macrocell networks.Fig. 1 shows such a multi-tier network with a macrocell overlaid by relays, picocells, femtocells, and D2D links. The adoption of multiple tiers in the cellular networkarchitecture will result in better performance in terms of capacity, coverage, spectral efficiency, and total power consumption, provided that the inter-tier and intratier interferences are well managed.·Prioritized spectrum access: The notions of both trafficbased and tier-based Prioriti -es will exist in 5G networks. Traffic-based priority arises from the different requirements of the users (e.g., reliability and latency requirements, energy constraints), whereas the tier-based priority is for users belonging to different network tiers. For example, with shared spectrum access among macrocells and femtocells in a two-tier network, femtocells create ―dead zones‖ around them in the downlink for macro users. Protection should, thus, be guaranteed for the macro users. Consequently, the macro and femtousers play the role of high-priority users (HPUEs) and lowpriority users (LPUEs), respectively. In the uplink direction, the macrocell users at the cell edge typically transmit with high powers which generates high uplink interference to nearby femtocells. Therefore, in this case, the user priorities should get reversed. Another example is a D2D transmission where different devices may opportunistically access the spectrum to establish a communication link between them provided that the interference introduced to the cellular users remains below a given threshold. In this case, the D2D users play the role of LPUEs whereas the cellular users play the role of HPUEs.·Network-assisted D2D communication: In the LTE Rel- 12 and beyond, focus will be on network controlled D2D communications, where the macrocell BS performs control signaling in terms of synchronization, beacon signal configuration and providing identity and security management [3]. This feature will extend in 5G networks to allow other nodes, rather than the macrocell BS, to have the control. For example, consider a D2D link at the cell edge and the direct link between the D2D transmitter UE to the macrocell is in deep fade, then the relay node can be responsible for the control signaling of the D2Dlink (i.e., relay-aided D2D communication).·Energy harvesting for energy-efficient communication: One of the main challenges in 5G wireless networks is to improve the energy efficiency of the battery-constrained wireless devices. To prolong the battery lifetime as well as to improve the energy efficiency, an appealing solution is to harvest energy from environmental energy sources (e.g., solar and wind energy). Also, energy can be harvested from ambient radio signals (i.e., RF energy harvesting) with reasonable efficiency over small distances. The havested energy could be used for D2D communication or communication within a small cell. Inthis context, simultaneous wireless information and power transfer (SWIPT) is a promising technology for 5G wireless networks. However, practical circuits for harvesting energy are not yet available since the conventional receiver architecture is designed for information transfer only and, thus, may not be optimal for SWIPT. This is due to the fact that both information and power transfer operate with different power sensitivities at the receiver (e.g., -10dBm and -60dBm for energy and information receivers, respectively) [4]. Also, due to the potentially low efficiency of energy harvesting from ambient radio signals, a combination of different energy harvesting technologies may be required for macrocell communication.III. INTERFERENCE MANAGEMENT CHALLENGES IN 5GMULTI-TIER NETWORKSThe key challenges for interference management in 5G multi-tier networks will arise due to the following reasons which affect the interference dynamics in the uplink and downlink of the network: (i) heterogeneity and dense deployment of wireless devices, (ii) coverage and traffic load imbalance due to varying transmit powers of different BSs in the downlink, (iii) public or private access restrictions in different tiers that lead to diverse interference levels, and (iv) the priorities in accessing channels of different frequencies and resource allocation strategies. Moreover, the introduction of carrier aggregation, cooperation among BSs (e.g., by using coordinated multi-point transmission (CoMP)) as well as direct communication among users (e.g., D2D communication) may further complicate the dynamics of the interference. The above factors translate into the following key challenges.·Designing optimized cell association and power control (CAPC) methods for multi-tier networks: Optimizing the cell associations and transmit powers of users in the uplink or the transmit powers of BSs in the downlink are classical techniques to simultaneously enhance the system performance in various aspects such as interference mitigation, throughput maximization, and reduction in power consumption. Typically, the former is needed to maximize spectral efficiency, whereas the latter is required to minimize the power (and hence minimize the interference to other links) while keeping theFig. 1. A multi-tier network composed of macrocells, picocells, femtocells, relays, and D2D links.Arrows indicate wireless links, whereas the dashed lines denote the backhaul connections. desired link quality. Since it is not efficient to connect to a congested BS despite its high achieved signal-to-interference ratio (SIR), cell association should also consider the status of each BS (load) and the channel state of each UE. The increase in the number of available BSs along with multi-point transmissions and carrier aggregation provide multiple degrees of freedom for resource allocation and cell-selection strategies. For power control, the priority of different tiers need also be maintained by incorporating the quality constraints of HPUEs. Unlike downlink, the transmission power in the uplink depends on the user’s batt ery power irrespective of the type of BS with which users are connected. The battery power does not vary significantly from user to user; therefore, the problems of coverage and traffic load imbalance may not exist in the uplink. This leads to considerable asymmetries between the uplink and downlink user association policies. Consequently, the optimal solutions for downlink CAPC problems may not be optimal for the uplink. It is therefore necessary to develop joint optimization frameworks that can provide near-optimal, if not optimal, solutions for both uplink and downlink. Moreover, to deal with this issue of asymmetry, separate uplink and downlink optimal solutions are also useful as far as mobile users can connect with two different BSs for uplink and downlink transmissions which is expected to be the case in 5G multi-tier cellular networks [3].·Designing efficient methods to support simultaneous association to multiple BSs: Compared to existing CAPC schemes in which each user can associate to a singleBS, simultaneous connectivity to several BSs could be possible in 5G multi-tier network. This would enhance the system throughput and reduce the outage ratio by effectively utilizing the available resources, particularly for cell edge users. Thus the existing CAPCschemes should be extended to efficiently support simultaneous association of a user to multiple BSs and determine under which conditions a given UE is associated to which BSs in the uplink and/or downlink.·Designing efficient methods for cooperation and coordination among multiple tiers: Cooperation and coordination among different tiers will be a key requirement to mitigate interference in 5G networks. Cooperation between the macrocell and small cells was proposed for LTE Rel-12 in the context of soft cell, where the UEs are allowed to have dual connectivity by simultaneously connecting to the macrocell and the small cell for uplink and downlink communications or vice versa [3]. As has been mentioned before in the context of asymmetry of transmission power in uplink and downlink, a UE may experience the highest downlink power transmission from the macrocell, whereas the highest uplink path gain may be from a nearby small cell. In this case, the UE can associate to the macrocell in the downlink and to the small cell in the uplink. CoMP schemes based on cooperation among BSs in different tiers (e.g., cooperation between macrocells and small cells) can be developed to mitigate interference in the network. Such schemes need to be adaptive and consider user locations as well as channel conditions to maximize the spectral and energy efficiency of the network. This cooperation however, requires tight integration of low power nodes into the network through the use of reliable, fast andlow latency backhaul connections which will be a major technical issue for upcoming multi-tier 5G networks. In the remaining of this article, we will focus on the review of existing power control and cell association strategies to demonstrate their limitations for interference management in 5G multi-tier prioritized cellular networks (i.e., where users in different tiers have different priorities depending on the location, application requirements and so on). Design guidelines will then be provided to overcome these limitations. Note that issues such as channel scheduling in frequency domain, timedomain interference coordination techniques (e.g., based on almost blank subframes), coordinated multi-point transmission, and spatial domain techniques (e.g., based on smart antenna techniques) are not considered in this article.IV. DISTRIBUTED CELL ASSOCIATION AND POWERCONTROL SCHEMES: CURRENT STATE OF THE ARTA. Distributed Cell Association SchemesThe state-of-the-art cell association schemes that are currently under investigation formulti-tier cellular networks are reviewed and their limitations are explained below.·Reference Signal Received Power (RSRP)-based scheme [5]: A user is associated with the BS whose signal is received with the largest average strength. A variant of RSRP, i.e., Reference Signal Received Quality (RSRQ) is also used for cell selection in LTE single-tier networks which is similar to the signal-to-interference (SIR)-based cell selection where a user selects a BS communicating with which gives the highest SIR. In single-tier networks with uniform traffic, such a criterion may maximize the network throughput. However, due to varying transmit powers of different BSs in the downlink of multi-tier networks, such cell association policies can create a huge traffic load imbalance. This phenomenon leads to overloading of high power tiers while leaving low power tiers underutilized.·Bias-based Cell Range Expansion (CRE) [6]: The idea of CRE has been emerged as a remedy to the problem of load imbalance in the downlink. It aims to increase the downlink coverage footprint of low power BSs by adding a positive bias to their signal strengths (i.e., RSRP or RSRQ). Such BSs are referred to as biased BSs. This biasing allows more users to associate with low power or biased BSs and thereby achieve a better cell load balancing. Nevertheless, such off-loaded users may experience unfavorable channel from the biased BSs and strong interference from the unbiased high-power BSs. The trade-off between cell load balancing and system throughput therefore strictly depends on the selected bias values which need to be optimized in order to maximize the system utility. In this context, a baseline approach in LTE-Advanced is to ―orthogonalize‖ the transmissions of the biased and unbiased BSs in time/frequency domain such that an interference-free zone is created.·Association based on Almost Blank Sub-frame (ABS) ratio [7]: The ABS technique uses time domain orthogonalization in which specific sub-frames are left blank by the unbiased BS and off-loaded users are scheduled within these sub-frames to avoid inter-tier interference. This improves the overall throughput of the off-loaded users by sacrificing the time sub-frames and throughput of the unbiased BS. The larger bias values result in higher degree of offloading and thus require more blank subframes to protect the offloaded users. Given a specific number of ABSs or the ratio of blank over total number of sub-frames (i.e., ABS ratio) that ensures the minimum throughput of the unbiased BSs, this criterion allows a user to select a cell with maximum ABS ratio and may even associate with the unbiased BS if ABS ratio decreases significantly. A qualitative comparison amongthese cell association schemes is given in Table I. The specific key terms used in Table I are defined as follows: channel-aware schemes depend on the knowledge of instantaneous channel and transmit power at the receiver. The interference-aware schemes depend on the knowledge of instantaneous interference at the receiver. The load-aware schemes depend on the traffic load information (e.g., number of users). The resource-aware schemes require the resource allocation information (i.e., the chance of getting a channel or the proportion of resources available in a cell). The priority-aware schemes require the information regarding the priority of different tiers and allow a protection to HPUEs. All of the above mentioned schemes are independent, distributed, and can be incorporated with any type of power control scheme. Although simple and tractable, the standard cell association schemes, i.e., RSRP, RSRQ, and CRE are unable to guarantee the optimum performance in multi-tier networks unless critical parameters, such as bias values, transmit power of the users in the uplink and BSs in the downlink, resource partitioning, etc. are optimized.B. Distributed Power Control SchemesFrom a user’s point of view, the objective of power control is to support a user with its minimum acceptable throughput, whereas from a system’s point of view it is t o maximize the aggregate throughput. In the former case, it is required to compensate for the near-far effect by allocating higher power levels to users with poor channels as compared to UEs with good channels. In the latter case, high power levels are allocated to users with best channels and very low (even zero) power levels are allocated to others. The aggregate transmit power, the outage ratio, and the aggregate throughput (i.e., the sum of achievable rates by the UEs) are the most important measures to compare the performance of different power control schemes. The outage ratio of a particular tier can be expressed as the ratio of the number of UEs supported by a tier with their minimum target SIRs and the total number of UEs in that tier. Numerous power control schemes have been proposed in the literature for single-tier cellular wireless networks. According to the corresponding objective functions and assumptions, the schemes can be classified into the following four types.·Target-SIR-tracking power control (TPC) [8]: In the TPC, each UE tracks its own predefined fixed target-SIR. The TPC enables the UEs to achieve their fixed target-TABLE IQUALITATIVE COMPARISON OF EXISTING CELL ASSOCIATION SCHEMESFOR MULTI-TIER NETWORKSSIRs at minimal aggregate transmit power, assuming thatthe target-SIRs are feasible. However, when the system is infeasible, all non-supported UEs (those who cannot obtain their target-SIRs) transmit at their maximum power, which causes unnecessary power consumption and interference to other users, and therefore, increases the number of non-supported UEs.·TPC with gradual removal (TPC-GR) [9], [10], and [11]:To decrease the outage ra -tio of the TPC in an infeasiblesystem, a number of TPC-GR algorithms were proposedin which non-supported users reduce their transmit power[10] or are gradually removed [9], [11].·Opportunistic power control (OPC) [12]: From the system’s point of view, OPC allocates high power levels to users with good channels (experiencing high path-gains and low interference levels) and very low power to users with poor channels. In this algorithm, a small difference in path-gains between two users may lead to a large difference in their actual throughputs [12]. OPC improves the system performance at the cost of reduced fairness among users.·Dynamic-SIR tracking power control (DTPC) [13]: When the target-SIR requirements for users are feasible, TPC causes users to exactly hit their fixed target-SIRs even if additional resources are still available that can otherwise be used to achieve higher SIRs (and thus better throughputs). Besides, the fixed-target-SIR assignment is suitable only for voice service for which reaching a SIR value higher than the given target value does not affect the service quality significantly. In contrast, for data services, a higher SIR results in a better throughput, which is desirable. The DTPC algorithm was proposed in [13] to address the problem of system throughput maximization subject to a given feasible lower bound for the achieved SIRs of all users in cellular networks. In DTPC, each user dynamically sets its target-SIR by using TPC and OPC in a selective manner. It was shown that when the minimum acceptable target-SIRs are feasible, the actual SIRs received by some users can be dynamically increased (to a value higher than their minimum acceptabletarget-SIRs) in a distributed manner so far as the required resources are available and the system remains feasible (meaning that reaching the minimum target-SIRs for the remaining users are guaranteed). This enhances the system throughput (at the cost of higher power consumption) as compared to TPC. The aforementioned state-of-the-art distributed power control schemes for satisfying various objectives in single-tier wireless cellular networks are unable to address the interference management problem in prioritized 5G multi-tier networks. This is due to the fact that they do not guarantee that the total interference caused by the LPUEs to the HPUEs remain within tolerable limits, which can lead to the SIR outage of some HPUEs. Thus there is a need to modify the existing schemes such that LPUEs track their objectives while limiting their transmit power to maintain a given interference threshold at HPUEs. A qualitative comparison among various state-of-the-art power control problems with different objectives and constraints and their corresponding existing distributed solutions are shown in Table II. This table also shows how these schemes can be modified and generalized for designing CAPC schemes for prioritized 5G multi-tier networks.C. Joint Cell Association and Power Control SchemesA very few work in the literature have considered the problem of distributed CAPC jointly (e.g., [14]) with guaranteed convergence. For single-tier networks, a distributed framework for uplink was developed [14], which performs cell selection based on the effective-interference (ratio of instantaneous interference to channel gain) at the BSs and minimizes the aggregate uplink transmit power while attaining users’ desire d SIR targets. Following this approach, a unified distributed algorithm was designed in [15] for two-tier networks. The cell association is based on the effective-interference metric and is integrated with a hybrid power control (HPC) scheme which is a combination of TPC and OPC power control algorithms.Although the above frameworks are distributed and optimal/ suboptimal with guaranteed convergence in conventional networks, they may not be directly compatible to the 5G multi-tier networks. The interference dynamics in multi-tier networks depends significantly on the channel access protocols (or scheduling), QoS requirements and priorities at different tiers. Thus, the existing CAPC optimization problems should be modified to include various types of cell selection methods (some examples are provided in Table I) and power control methods with different objectives and interference constraints (e.g., interference constraints for macro cell UEs, picocell UEs, or D2Dreceiver UEs). A qualitative comparison among the existing CAPC schemes along with the open research areas are highlighted in Table II. A discussion on how these open problems can be addressed is provided in the next section.V. DESIGN GUIDELINES FOR DISTRIBUTED CAPCSCHEMES IN 5G MULTI-TIER NETWORKSInterference management in 5G networks requires efficient distributed CAPC schemes such that each user can possibly connect simultaneously to multiple BSs (can be different for uplink and downlink), while achieving load balancing in different cells and guaranteeing interference protection for the HPUEs. In what follows, we provide a number of suggestions to modify the existing schemes.A. Prioritized Power ControlTo guarantee interference protection for HPUEs, a possible strategy is to modify the existing power control schemes listed in the first column of Table II such that the LPUEs limit their transmit power to keep the interference caused to the HPUEs below a predefined threshold, while tracking their own objectives. In other words, as long as the HPUEs are protected against existence of LPUEs, the LPUEs could employ an existing distributed power control algorithm to satisfy a predefined goal. This offers some fruitful direction for future research and investigation as stated in Table II. To address these open problems in a distributed manner, the existing schemes should be modified so that the LPUEs in addition to setting their transmit power for tracking their objectives, limit their transmit power to keep their interference on receivers of HPUEs below a given threshold. This could be implemented by sending a command from HPUEs to its nearby LPUEs (like a closed-loop power control command used to address the near-far problem), when the interference caused by the LPUEs to the HPUEs exceeds a given threshold. We refer to this type of power control as prioritized power control. Note that the notion of priority and thus the need of prioritized power control exists implicitly in different scenarios of 5G networks, as briefly discussed in Section II. Along this line, some modified power control optimization problems are formulated for 5G multi-tier networks in second column of Table II.To compare the performance of existing distributed power control algorithms, let us consider a prioritized multi-tier cellular wireless network where a high-priority tier consisting of 3×3 macro cells, each of which covers an area of 1000 m×1000 m, coexists with a low-priority tier consisting of n small-cells per each high-priority macro cell, each。

毕业设计论文外文文献翻译

毕业设计论文外文文献翻译

毕业设计(论文)外文文献翻译院系:财务与会计学院年级专业:201*级财务管理姓名:学号:132148***附件: 财务风险管理【Abstract】Although financial risk has increased significantly in recent years risk and risk management are not contemporary issues。

The result of increasingly global markets is that risk may originate with events thousands of miles away that have nothing to do with the domestic market。

Information is available instantaneously which means that change and subsequent market reactions occur very quickly。

The economic climate and markets can be affected very quickly by changes in exchange rates interest rates and commodity prices。

Counterparties can rapidly become problematic。

As a result it is important to ensure financial risks are identified and managed appropriately. Preparation is a key component of risk management。

【Key Words】Financial risk,Risk management,YieldsI. Financial risks arising1.1What Is Risk1.1.1The concept of riskRisk provides the basis for opportunity. The terms risk and exposure have subtle differences in their meaning. Risk refers to the probability of loss while exposure is the possibility of loss although they are often used interchangeably。

毕业设计外文参考资料及译文

毕业设计外文参考资料及译文

Fundamental information, including the effects of porosity, water-to-cement ratio, cement paste characteristic, volume fraction of coarse aggregates, size of coarse aggregates on pervious concrete strength, had been studied [3, 9−12]. However, for the reason that the porosity played a key role in the functional and structural performances of pervious concretes [13 − 14], there was still a need to understand more about the mechanical responses of pervious concretes proportioned for desired levels of porosities. Although it was possible to have widely different pore structure features for a given porosity, or similar pore structure features for varied porosities in pervious concrete, it was imperative to focus on the mechanical responses of pervious concrete at different designed porosities. However, compared with the related research on conventional concrete, very limited study had been conducted on the fracture and fatigue behaviors of pervious concrete, which were especially important for pavement concrete subjected to heavy traffic and to severe seasonal temperature change. The presented work outlined the raw materials and mixing proportions to produce high-strength supplementary cementitious material (SCM) modified pervious concrete (SPC) and polymer-intensified pervious concrete (PPC) at different porosities within the range of 15%−25%. Then, the mechanical properties of pervious concrete, including the compressive and flexural strengths, fracture energy, as well as fatigue property, were investigated in details.

毕设外文文献+翻译1

毕设外文文献+翻译1

毕设外文文献+翻译1外文翻译外文原文CHANGING ROLES OF THE CLIENTS、ARCHITECTSAND CONTRACTORS THROUGH BIMAbstract:Purpose –This paper aims to present a general review of the practical implications of building information modelling (BIM) based on literature and case studies. It seeks to address the necessity for applying BIM and re-organising the processes and roles in hospital building projects. This type of project is complex due to complicated functional and technical requirements, decision making involving a large number of stakeholders, and long-term development processes.Design/methodology/approach–Through desk research and referring to the ongoing European research project InPro, the framework for integrated collaboration and the use of BIM are analysed.Findings –One of the main findings is the identification of the main factors for a successful collaboration using BIM, which can be recognised as “POWER”: product information sharing (P),organisational roles synergy (O), work processes coordination (W), environment for teamwork (E), and reference data consolidation (R).Originality/value –This paper contributes to the actual discussion in science and practice on the changing roles and processes that are required to develop and operate sustainable buildings with the support of integrated ICT frameworks and tools. It presents the state-of-the-art of European research projects and some of the first real cases of BIM application inhospital building projects.Keywords:Europe, Hospitals, The Netherlands, Construction works, Response flexibility, Project planningPaper type :General review1. IntroductionHospital building projects, are of key importance, and involve significant investment, and usually take a long-term development period. Hospital building projects are also very complex due to the complicated requirements regarding hygiene, safety, special equipments, and handling of a large amount of data. The building process is very dynamic and comprises iterative phases and intermediate changes. Many actors with shifting agendas, roles and responsibilities are actively involved, such as: the healthcare institutions, national and local governments, project developers, financial institutions, architects, contractors, advisors, facility managers, and equipment manufacturers and suppliers. Such building projects are very much influenced, by the healthcare policy, which changes rapidly in response to the medical, societal and technological developments, and varies greatly between countries (World Health Organization, 2000). In The Netherlands, for example, the way a building project in the healthcare sector is organised is undergoing a major reform due to a fundamental change in the Dutch health policy that was introduced in 2008.The rapidly changing context posts a need for a building with flexibility over its lifecycle. In order to incorporate life-cycle considerations in the building design, construction technique, and facility management strategy, a multidisciplinary collaboration is required. Despite the attempt for establishing integrated collaboration, healthcare building projects still facesserious problems in practice, such as: budget overrun, delay, and sub-optimal quality in terms of flexibility, end-user?s dissatisfaction, and energy inefficiency. It is evident that the lack of communication and coordination between the actors involved in the different phases of a building project is among the most important reasons behind these problems. The communication between different stakeholders becomes critical, as each stakeholder possesses different setof skills. As a result, the processes for extraction, interpretation, and communication of complex design information from drawings and documents are often time-consuming and difficult. Advanced visualisation technologies, like 4D planning have tremendous potential to increase the communication efficiency and interpretation ability of the project team members. However, their use as an effective communication tool is still limited and not fully explored. There are also other barriers in the information transfer and integration, for instance: many existing ICT systems do not support the openness of the data and structure that is prerequisite for an effective collaboration between different building actors or disciplines.Building information modelling (BIM) offers an integrated solution to the previously mentioned problems. Therefore, BIM is increasingly used as an ICT support in complex building projects. An effective multidisciplinary collaboration supported by an optimal use of BIM require changing roles of the clients, architects, and contractors; new contractual relationships; and re-organised collaborative processes. Unfortunately, there are still gaps in the practical knowledge on how to manage the building actors to collaborate effectively in their changing roles, and todevelop and utilise BIM as an optimal ICT support of the collaboration.This paper presents a general review of the practical implications of building information modelling (BIM) based on literature review and case studies. In the next sections, based on literature and recent findings from European research project InPro, the framework for integrated collaboration and the use of BIM are analysed. Subsequently, through the observation of two ongoing pilot projects in The Netherlands, the changing roles of clients, architects, and contractors through BIM application are investigated. In conclusion, the critical success factors as well as the main barriers of a successful integrated collaboration using BIM are identified.2. Changing roles through integrated collaboration and life-cycle design approachesA hospital building project involves various actors, roles, and knowledge domains. In The Netherlands, the changing roles of clients, architects, and contractors in hospital building projects are inevitable due the new healthcare policy. Previously under the Healthcare Institutions Act (WTZi), healthcare institutions were required to obtain both a license and a building permit for new construction projects and major renovations. The permit was issued by the Dutch Ministry of Health. The healthcare institutions were then eligible to receive financial support from the government. Since 2008, new legislation on the management of hospital building projects and real estate has come into force. In this new legislation, a permit for hospital building project under the WTZi is no longer obligatory, nor obtainable (Dutch Ministry of Health, Welfare and Sport, 2008). This change allows more freedom from the state-directed policy, and respectively,allocates more responsibilities to the healthcare organisations to deal with the financing and management of their real estate. The new policy implies that the healthcare institutions are fully responsible to man age and finance their building projects and real estate. The government?s support for the costs of healthcare facilities will no longer be given separately, but will be included in the fee for healthcare services. This means that healthcare institutions must earn back their investment on real estate through their services. This new policy intends to stimulate sustainable innovations in the design, procurement and management of healthcare buildings, which will contribute to effective and efficient primary healthcare services.The new strategy for building projects and real estate management endorses an integrated collaboration approach. In order to assure the sustainability during construction, use, and maintenance, the end-users, facility managers, contractors and specialist contractors need to be involved in the planning and design processes. The implications of the new strategy are reflected in the changing roles of the building actors and in the new procurement method.In the traditional procurement method, the design, and its details, are developed by the architect, and design engineers. Then, the client (the healthcare institution) sends an application to the Ministry of Healthto obtain an approval on the building permit and the financial support from the government. Following this, a contractor is selected through a tender process that emphasises the search for the lowest-price bidder. During the construction period, changes often take place due to constructability problems of the design and new requirements from the client.Because of the high level of technical complexity, and moreover, decision-making complexities, the whole process from initiation until delivery of a hospital building project can take up to ten years time. After the delivery, the healthcare institution is fully in charge of the operation of the facilities. Redesigns and changes also take place in the use phase to cope with new functions and developments in the medical world.The integrated procurement pictures a new contractual relationship between the parties involved in a building project. Instead of a relationship between the client and architect for design, and the client and contractor for construction, in an integrated procurement the client only holds a contractual relationship with the main party that is responsible for both design and construction. The traditional borders between tasks and occupational groups become blurred since architects, consulting firms, contractors, subcontractors, and suppliers all stand on the supply side in the building process while the client on the demand side. Such configuration puts the architect, engineer and contractor in a very different position that influences not only their roles, but also their responsibilities, tasks and communication with the client, the users, the team and other stakeholders.The transition from traditional to integrated procurement method requires a shift of mindset of the parties on both the demand and supply sides. It is essential for the client and contractor to have a fair and open collaboration in which both can optimally use their competencies. The effectiveness of integrated collaboration is also determined by the client?s capacity and strategy to organize innovative tendering procedures.A new challenge emerges in case of positioning an architect in a partnership with the contractor instead of with the client. In case of the architect enters a partnership with the contractor, an important issues is how to ensure the realisation of the architectural values as well as innovative engineering through an efficient construction process. In another case, the architect can stand at the client?s side in a strategic advisory role instead of being the designer. In this case, the architect?s responsibility is translating client?s requirements and wishes into the architectural values to be included in the design specification, and evaluating the contractor?s proposal against this. In any of this new role, the architect holds the responsibilities as stakeholder interest facilitator, custodian of customer value and custodian of design models.The transition from traditional to integrated procurement method also brings consequences in the payment schemes. In the traditional building process, the honorarium for the architect is usually based on a percentage of the project costs; this may simply mean that the more expensive the building is, the higher the honorarium will be. The engineer receives the honorarium based on the complexity of the design and the intensity of the assignment. A highly complex building, which takes a number of redesigns, is usually favourable for the engineers in terms of honorarium. A traditional contractor usually receives the commission based on the tender to construct the building at the lowest price by meeting the minimum specifications given by the client. Extra work due to modifications is charged separately to the client. After the delivery, the contractor is no longer responsible for the long-term use of the building. In the traditional procurement method, all risks are placed with theclient.In integrated procurement method, the payment is based on the achieved building performance; thus, the payment is non-adversarial. Since the architect, engineer and contractor have a wider responsibility on the quality of the design and the building, the payment is linked to a measurement system of the functional and technical performance of the building over a certain period of time. The honorarium becomes an incentive to achieve the optimal quality. If the building actors succeed to deliver a higher added-value thatexceed the minimum client?s requirements, they will receive a bonus in accordance to the client?s extra gain. The level of transparency is also improved. Open book accounting is an excellent instrument provided that the stakeholders agree on the information to be shared and to its level of detail (InPro, 2009).Next to the adoption of integrated procurement method, the new real estate strategy for hospital building projects addresses an innovative product development and life-cycle design approaches. A sustainable business case for the investment and exploitation of hospital buildings relies on dynamic life-cycle management that includes considerations and analysis of the market development over time next to the building life-cycle costs (investment/initial cost, operational cost, and logistic cost). Compared to the conventional life-cycle costing method, the dynamic life-cycle management encompasses a shift from focusing only on minimizing the costs to focusing on maximizing the total benefit that can be gained. One of the determining factors for a successful implementation of dynamic life-cycle management is the sustainable design of the building and building components, which means that the design carriessufficient flexibility to accommodate possible changes in the long term (Prins, 1992).Designing based on the principles of life-cycle management affects the role of the architect, as he needs to be well informed about the usage scenarios and related financial arrangements, the changing social and physical environments, and new technologies. Design needs to integrate people activities and business strategies over time. In this context, the architect is required to align the design strategies with the organisational, local and global policies on finance, business operations, health and safety, environment, etc.The combination of process and product innovation, and the changing roles of the building actors can be accommodated by integrated project delivery or IPD (AIA California Council, 2007). IPD is an approach that integrates people, systems, business structures and practices into a process that collaboratively harnesses the talents and insights of all participants to reduce waste and optimize efficiency through all phases of design, fabrication and construction. IPD principles can be applied to a variety of contractual arrangements. IPD teams will usually include members well beyond the basic triad of client, architect, and contractor. At a minimum, though, an Integrated Project should include a tight collaboration between the client, the architect, and the main contractor ultimately responsible for construction of the project, from the early design until the project handover. The key to a successful IPD is assembling a team that is committed to collaborative processes and is capable of working together effectively. IPD is built on collaboration. As a result, it can only be successful if the participants share and apply common values and goals.3. Changing roles through BIM applicationBuilding information model (BIM) comprises ICT frameworks and tools that can support the integrated collaboration based on life-cycle design approach. BIM is a digital representation of physical and functional characteristics of a facility. As such it serves as a shared knowledge resource for information about a facility forming a reliable basis for decisions during its lifecycle from inception onward (National Institute of Building Sciences NIBS, 2007). BIM facilitates time and place independent collaborative working. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of that stakeholder. BIM in its ultimate form, as a shared digital representation founded on open standards for interoperability, can become a virtual information model to be handed from the design team to the contractor and subcontractors and then to the client.BIM is not the same as the earlier known computer aided design (CAD). BIM goes further than an application to generate digital (2D or 3D) drawings. BIM is an integrated model in which all process and product information is combined, stored, elaborated, and interactively distributed to all relevant building actors. As a central model for all involved actors throughout the project lifecycle, BIM develops andevolves as the project progresses. Using BIM, the proposed design and engineering solutions can be measured against the client?s requirements and expected building performance. The functionalities of BIM to support the design process extend to multidimensional (nD), including: three-dimensional visualisation and detailing, clash detection, material schedule, planning, costestimate, production and logistic information, and as-built documents. During the construction process, BIM can support the communication between the building site, the factory and the design office– which is crucial for an effective and efficient prefabrication and assembly processes as well as to prevent or solve problems related to unforeseen errors or modifications. When the building is in use, BIM can be used in combination with the intelligent building systems to provide and maintain up-to-date information of the building performance, including the life-cycle cost.To unleash the full potential of more efficient information exchange in the AEC/FM industry in collaborative working using BIM, both high quality open international standards and high quality implementations of these standards must be in place. The IFC open standard is generally agreed to be of high quality and is widely implemented in software. Unfortunately, the certification process allows poor quality implementations to be certified and essentially renders the certified software useless for any practical usage with IFC. IFC compliant BIM is actually used less than manual drafting for architects and contractors, and show about the same usage for engineers. A recent survey shows that CAD (as a closed-system) is still the major form of technique used in design work (over 60 per cent) while BIM is used in around 20 percent of projects for architects and in around 10 per cent of projects for engineers and contractors.The application of BIM to support an optimal cross-disciplinary and cross-phase collaboration opens a new dimension in the roles and relationships between the building actors. Several most relevant issues are: the new role of a model manager; the agreement on the access right and IntellectualProperty Right (IPR); the liability and payment arrangement according to the type of contract and in relation to the integrated procurement; and the use of open international standards.Collaborative working using BIM demands a new expert role of a model manager who possesses ICT as well as construction process know-how (InPro, 2009). The model manager deals with the system as well as with the actors. He provides and maintains technological solutions required for BIM functionalities, manages the information flow, and improves the ICT skills of the stakeholders. The model manager does not take decisions on design and engineering solutions, nor the organisational processes, but his roles in the chain of decision making are focused on:the development of BIM, the definition of the structure and detail level of the model, and the deployment of relevant BIM tools, such as for models checking, merging, and clash detections;the contribution to collaboration methods, especially decision making and communication protocols, task planning, and risk management;and the management of information, in terms of data flow and storage, identification of communication errors, and decision or process (re-)tracking.Regarding the legal and organisational issues, one of the actual questions is: “In what way does the intellectual property right (IPR) in collaborative working using BIM differ from the IPR in a traditional teamwork?”. In terms of combine d work, the IPR of each element is at tached to its creator. Although it seems to be a fully integrated design, BIM actually resulted from a combination of works/elements; for instance: the outline of the building design, is created by the architect, the design for theelectrical system, is created by the electrical contractor, etc. Thus, in case of BIM as a combined work, the IPR is similar to traditional teamwork. Working with BIM with authorship registration functionalities may actually make it easier to keep track of the IPR.How does collaborative working, using BIM, effect the contractual relationship? On the one hand,collaborative working using BIM does not necessarily change the liability position in the contract nor does it obligate an alliance contract. The General Principles of BIM A ddendum confirms: …This does not effectuate or require a restructuring of contractual relationships or shifting of risks between or among the Project Participants other than as specifically required per the Protocol Addendum and its Attachments? (ConsensusDOCS, 2008). On the other hand, changes in terms of payment schemes can be anticipated. Collaborative processes using BIM will lead to the shifting of activities from to the early design phase. Much, if not all, activities in the detailed engineering and specification phase will be done in the earlier phases. It means that significant payment for the engineering phase, which may count up to 40 per cent of the design cost, can no longer be expected. As engineering work is done concurrently with the design, a new proportion of the payment in the early design phase is necessary.4. Review of ongoing hospital building projects using BIMIn The Netherlands, the changing roles in hospital building projects are part of the strategy, which aims at achieving a sustainable real estate in response to the changing healthcare policy. Referring to literature and previous research, the main factors that influence the success of the changing roles can be concluded as: the implementation of an integrated procurementmethod and a life-cycle design approach for a sustainable collaborative process; the agreement on the BIM structure and the intellectual rights; and the integration of the role of a model manager. The preceding sections have discussed the conceptual thinking on how to deal with these factors effectively. This current section observes two actual projects and compares the actual practice with the conceptual view respectively.The main issues, which are observed in the case studies, are: the selected procurement method and the roles of the involved parties within this method;the implementation of the life-cycle design approach;the type, structure, and functionalities of BIM used in the project;the openness in data sharing and transfer of the model, and the intended use of BIM in the future; and the roles and tasks of the model manager.The pilot experience of hospital building projects using BIM in the Netherlands can be observed at University Medical Centre St Radboud (further referred as UMC) and Maxima Medical Centre (further referred as MMC). At UMC, the new building project for the Faculty of Dentistry in the city of Nijmegen has been dedicated as a BIM pilot project. At MMC, BIM is used in designing new buildings for Medical Simulation and Mother-and-Child Centre in the city of Veldhoven.The first case is a project at the University Medical Centre (UMC) St Radboud. UMC is more than just a hospital. UMC combines medical services, education and research. More than 8500 staff and 3000 students work at UMC. As a part of the innovative real estate strategy, UMC has considered to use BIM for its building projects. The new development of the Faculty ofDentistry and the surrounding buildings on the Kapittelweg in Nijmegen has been chosen as a pilot project to gather practical knowledge and experience on collaborative processes with BIM support.The main ambition to be achieved through the use of BIM in the building projects at UMC can be summarised as follows: using 3D visualisation to enhance the coordination and communication among the building actors, and the user participation in design;integrating the architectural design with structural analysis, energy analysis, cost estimation, and planning;interactively evaluating the design solutions against the programme of requirements and specifications;reducing redesign/remake costs through clash detection during the design process; andoptimising the management of the facility through the registration of medical installations andequipments, fixed and flexible furniture, product and output specifications, and operational data.The second case is a project at the Maxima Medical Centre (MMC). MMC is a large hospital resulted from a merger between the Diaconessenhuis in Eindhoven and St Joseph Hospital in Veldhoven. Annually the 3,400 staff of MMC provides medical services to more than 450,000 visitors and patients. A large-scaled extension project of the hospital in Veldhoven is a part of its real estate strategy. A medical simulation centre and a women-and-children medical centre are among the most important new facilities within this extension project. The design has been developed using 3D modelling with several functionalities of BIM.The findings from both cases and the analysis are as follows.Both UMC and MMC opted for a traditional procurement method in which the client directly contracted an architect, a structural engineer, and a mechanical, electrical and plumbing (MEP) consultant in the design team. Once the design and detailed specifications are finished, a tender procedure will follow to select a contractor. Despite the choice for this traditional method, many attempts have been made for a closer and more effective multidisciplinary collaboration. UMC dedicated a relatively long preparation phase with the architect, structural engineer and MEP consultant before the design commenced. This preparation phase was aimed at creating a common vision on the optimal way for collaboration using BIM as an ICT support. Some results of this preparation phase are: a document that defines the common ambition for the project and the collaborative working process and a semi-formal agreement that states the commitment of the building actors for collaboration. Other than UMC, MMC selected an architecture firm with an in-house engineering department. Thus, the collaboration between the architect and structural engineer can take place within the same firm using the same software application.Regarding the life-cycle design approach, the main attention is given on life-cycle costs, maintenance needs, and facility management. Using BIM, both hospitals intend to get a much better insight in these aspects over the life-cycle period. The life-cycle sustainability criteria are included in the assignments for the design teams. Multidisciplinary designers and engineers are asked to collaborate more closely and to interact with the end-users to address life-cycle requirements. However, ensuring the building actors to engage in an integrated collaboration to generate sustainable design solutions that meet the life-cycle。

毕业设计外文文献翻译(原文+译文)

毕业设计外文文献翻译(原文+译文)

Environmental problems caused by Istanbul subway excavation and suggestionsfor remediation伊斯坦布尔地铁开挖引起的环境问题及补救建议Ibrahim Ocak Abstract:Many environmental problems caused by subway excavations have inevitably become an important point in city life. These problems can be categorized as transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust mud and lack of supplies. Although these problems cause many difficulties,the most pressing for a big city like Istanbul is excava tion,since other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway and disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length; over 200 km to be constructed in the near future. The amount of material excavated from ongoing construction projects covers approximately 12 million m3. In this study, problems—primarily, the problem with excavation waste(EW)—caused by subway excavation are analyzed and suggestions for remediation are offered.摘要:许多地铁开挖引起的环境问题不可避免地成为城市生活的重要部分。

毕业设计(论文)外文参考资料及译文

毕业设计(论文)外文参考资料及译文

英文原文:Java is a simple, object-oriented, distributed, interpreted, robust security, structure-neutral, portable, high performance, multithreaded dynamic language. The main advantage of Java language, Java applications across hardware platforms and operating systems for transplant - this is because the JVM is installed on each platform can understand the same byte code. Java language and platform scalability is very strong. At the low end, Java language is the first open standards technology support enterprise one, support the use of XML and Web service can not stride business lines to share information and applications Cheng Xu.There are three versions of Java platform, which makes software developers, service providers and equipment manufacturers can target specific market development:1. Java SE form applications. Java SE includes support for Java Web services development classes, and for the Java Platform, Enterprise Edition (Java EE) to provide a basis. Most Java developers use Java SE 5, also known as Java 5.0 or "Tiger".2. Java EE formerly known as J2EE. Enterprise Edition to help develop and deploy portable, robust, scalable and secure server-side Java applications. Java SE Java EE is built on the foundation, which provides Web services, component model, management and communication API, can be used to achieve enterprise-class service-oriented architecture and Web 2.0 applications.3. Java ME formerly known as J2ME. Java ME devices in mobile and embedded applications running on a robust and flexible environment. Java ME includes flexible user interfaces, robust security model, and many built-in network protocols and networking that can be dynamically downloaded and extensive support for offline applications. Java ME-based application specification only write once and can be used in many devices and can use the native features of each device.Java language is simple. Java language syntax and the C language and C ++ language is very close, Java discarded the C++, rarely used, hard to understand the characteristics, such as operator overloading, multiple inheritance, the mandatory automatic type conversion. Java language does not use pointers, and provides automated waste collection. Java is an object-oriented language. Java language provides classes, interfaces and inheritance of the original language, for simplicity, only supports single inheritance between classes, but support multiple inheritance between interfaces and support classes and interfaces to achieve between the mechanism (keyword implements) . Java language fully supports dynamic binding, and C ++ language used only for dynamic binding of virtual functions. In short, Java language is a pure object-oriented programming language. Java language is distributed. Java language support for Internet application development, Java's RMI (remote method activation) mechanism is also an important means of developing distributed applications. Java language is robust. Java's strong type system, exception handling, automated waste collection is an important guarantee robust Java programs. Java language is safe. Java is often used in network environment, this, Java provides a security mechanism to prevent malicious code attacks.Java language is portable. This portability comes from the architecture neutrality. Java system itself is highly portable. Java language is multi-threaded. In the Java language, the thread is a special object, it must Thread class or the son (Sun) class to create. Java language support simultaneous execution of multiple threads, and provide synchronization mechanisms between threads (keyword synchronized).Java language features make Java an excellent application of unparalleled robustness and reliability, which also reduced application maintenance costs. Java on the full support of object technology and Java Platform API embedded applications to reduce development time and reduce costs. Java's compile once, run everywhere feature can make it anywhere available to provide an open architecture and multi-platform, low-cost way of transmitting information between. Hibernate Hibernate is a lightweight JDBC object package. It is an independent object persistence framework, and the App Server, and EJB is no necessary link. Hibernate can use JDBC can be used in any occasion, such as Java application, database access code, DAO interface implementation class, or even access the database inside a BMP code. In this sense, Hibernate, and EB is not a category of things that did not exist either-or relationship.Hibernate and JDBC is a closely related framework, the Hibernate and JDBC driver compatibility, and databases have some relationship, but the Java program and use it, and the App Server does not have any relationship, there was no compatibility issues. 1614Hibernate provides two Cache, first-level cache is a Session-level cache, which cache belongs to the scope of services. This level of cache by the hibernate managed without the need for intervention under normal circumstances; second-level cache is SessionFactory-level cache, it belongs to the process of range or scope of the cache cluster. This level of cache can be configured and changed, and can be dynamically loaded and unloaded. Hibernate query results also provide a query cache, it depends on the second level cache.When an application called Session's save (), update (), saveOrUpdate (), get () or load (), and the query interface call list (), iterate () or filter () method, if the Session cache does not exist a corresponding object, Hibernate will put the object to the first level cache. When cleaning the cache, Hibernate objects according to the state of the cache changes to synchronize update the database. Session for the application provides two methods of managing the cache: evict (Object obj): removed from the cache parameters of the specified persistent object. clear (): Empty the cache of all persistent objects.Hibernate second-level cache strategy general process is as follows:1) The condition when a query is always issued a select * from table_name where .... (Select all fields) such as SQL statement to query the database, an access to all of the data object.2) all the data objects to be placed under the ID to the second level cache.3) When the Hibernate object-based ID to access the data, the first check from the Session a cache; finding out, if the configuration of the secondary cache, then the secondary cache from the investigation; finding out, and then query the database, the results in accordance with the ID into the cache.4) remove, update and increase the time data, while updating the cache. Hibernate second against the conditions of the Query Cache.Hibernate object-relational mapping for the delay and non-delay object initialization. Non-lazy when reading an object and the object will be all read out together with other objects. This sometimes results in hundreds (if not thousands of words) select statement when reading the object implementation. This problem sometimes occurs when using the two-way relationship, often leading to the databases to be read during the initialization phase out. Of course, you can take the trouble to examine each object and other objects of Guanxi, and to the most expensive of the Shan Chu, but in the last, we may therefore lose Le ORM tool this Xiangzai obtained Bian Li.A cache and secondary cache of comparison: the first level cache second level cache data stored in the form of interrelated persistent objects the object of bulk data cache range of the scope of services, each transaction has a separate first-level cache process range or scope of the cluster, the cache is the same process or cluster to share on all matters within the concurrent access policies because each transaction has a separate first-level cache, concurrency problem does not occur without the need to provide concurrent access policy will be a number of matters simultaneous access to the same second-level cache data, it is necessary to provide appropriate concurrent access policies, to ensure that a particular transaction isolation level data expiration policies did not provide data expiration policies. Object in a cache will never expire, unless the application explicitly clear the cache or clear a specific object must provide data expiration policies, such as memory cache based on the maximum number of objects, allowing objects in the cache of the most a long time, and allowing the object in the cache the longest idle time of physical memory and hard disk memory storage medium. First of all bulk data objects stored in the memory-based cache, when the number of objects in memory to data expiration policy specified limit, the remaining objects will be written on the hard disk cache. Caching software implementation of the Hibernate Session is included in the realization of the cache provided by third parties, Hibernate provides only a cache adapter (CacheProvider). Used to plug into a particular cache in Hibernate. Way cache enabled applications by as long as the Session interface implementation save, update, delete, data loading and query the database operations, Hibernate will enable first-level cache, the data in the database in the form of an object copied to the cache For batch updates and bulk delete operations, if you do not want to enable first-level cache, you can bypass the Hibernate API, JDBC API directly to perform that operation. Users can type in a single class or a single set of second-level cache size on the configuration. If the instance of the class are frequently read but rarely modified, you can consider using a second-level cache. Only for a class or set of second-level cache is configured, Hibernate will run when an instance of it to the second-level cache. User management means the first level cache of physical media for the memory cache, because the memory capacity is limited, must pass the appropriate search strategies and retrieval methods to limit the number of objects loaded. Session of the evit () method can explicitly clear the cache a specific object, but this method is not recommended. Second-level cache memory andthe physical media can be a hard disk, so the second-level cache can store large amounts of data, data expiration policy maxElementsInMemory property values can control the number of objects in memory. Second-level cache management mainly includes two aspects: Select to use the second-level cache of persistent classes, set the appropriate concurrency strategy: Select the cache adapter, set the appropriate data expiration policies.One obvious solution is to use Hibernate's lazy loading mechanism provided. This initialization strategy is only invoked in an object-to-many or many to many relationship between its relationship only when read out of the object. This process is transparent to the developer, and only had a few requests for database operations, it will be more obvious performance have open. This will be by using the DAO pattern abstracts the persistence time of a major problem. Persistence mechanisms in order to completely abstract out all of the database logic, including open or closed session, can not appear in the application layer. The most common is the realization of the simple interface of some DAO implementation class to encapsulate the database logic completely. A fast but clumsy solution is to give up DAO mode, the database connection logic to add the application layer. This may be an effective small applications, but in large systems, this is a serious design flaw, preventing the system scalability.Struts2Struts2 is actually not a stranger to the Web frameworks, Struts2 is Webwork design ideas as the core, absorb Struts1 advantages, so that the Struts2 is the product of the integration Struts1 and Webwork.MVC Description: Struts2 WebWork is compatible with the MVC framework Struts1 and since, that the MVC framework on the MVC framework will have to make a brief, limited to a brief, if want to learn more about MVC can view the related knowledge document, or to find a Struts1 books, I believe the above is not rare on the length of MVC. Closer to home, in fact, Java the present situation of these frameworks, its ultimate goal is to contact coupling, whether Spring, Hibernate or the MVC framework, are designed to increase contact with coupling reuse. MVC contact with the coupling between View and Model. MVC consists of three basic parts: Model, View and Controller, these three parts work together to minimize the coupling to increase the scalability of the program and maintainability. Various parts of the implementation technology can be summarized as follows:1) Model: JavaBean, EJB's EntityBean2) View: JSP, Struts in TagLib3) Controller: Struts the ActionServlet, ActionTo sum up the advantages of MVC mainly about aspects:1) corresponds to multiple views can be a model. By MVC design pattern, a model that corresponds to multiple views, you can copy the code and the code to reduce the maintenance amount, if model changes, but also easy to maintain2) model the data returned and display logic separate. Model data can be applied to any display technology, for example, use the JSP page, Velocity templates, or directly from Excel documents, etc.3) The application is separated into three layers, reducing the coupling between the layers, providing application scalability4) The concept of layers is also very effective, because it put the different models and different views together, to complete a different request. Therefore, the control layer can be said to include the concept of user requests permission5) MVC more software engineering management. Perform their duties in different layers, each layer has the same characteristics of the components is beneficial tool by engineering and production management of program codeStruts2 Introduction: Struts2 Struts1 development appears to come from, but in fact Struts1 Struts2 and design ideas in the framework of the above is very different, Struts2 WebWork's design is based on the core, why not follow the Struts1 Struts2 design ideas After all, Struts1 in the current enterprise applications market is still very big in the, Struts1 some shortcomings:1) support the performance of a single layer2) coupled with the Servlet API serious, this could be the Execute method from the Action Statement which you can see them3) The code depends Struts1 API, there are invasive, this can be written when the Action class and look out FormBean, Action Struts in Action class must implement The reason for Struts2 WebWork's design for the core point is the recent upward trend of WebWork and play WebWork not Struts1 above those shortcomings, more MVC design ideas, and more conducive to reuse the code. Based on the above description can be read out, Struts2 architecture and architecture Struts1 very different, Struts1 is to use the ActionServlet as its central processor, Struts2 is using an interceptor (FilterDispatcher) as its central processor, so One benefit is to make Action class and Servlet API was isolated.Struts2 simple process flow is as follows:1) browser sends a request2) the processor to find the corresponding file under struts.xml the Action class to process the request3) WebWork interceptor chain applications automatically request common functions, such as: WorkFlow, Validation functions4) If Struts.xml Method configuration file parameters, then call the corresponding Action Method parameters in the Method class method, or call the Execute method to deal with common user request5) Action class method returns the results of the corresponding response to the browserStruts2 and Struts1 contrast:1) Action class impleme achieve the time to achieve any classes and interfaces, while providing a ActionSupport class Struts2, however, not required.2) Struts1 the Action class is the singleton pattern, must be designed into the thread-safe, Struts2 was generated for each request for an instance3) Struts1 the Action class dependence and the Servlet API, execute the method from its signature can be seen, execute method has two parameters Servlet HttpServletRequest and HttpServletResponse, Struts2 is not dependent on the ServletAPI4) Struts1 depends on the Servlet API the Web elements, therefore, of Action Struts1 when testing is difficult, it needs with other testing tools, Struts2 in Action can be as testing a number of other classes as Service Model layer test5) Struts1 of Action and the View through the ActionForm or its sub-class of data transmission, although there LazyValidationForm this ActionForm appearance, but still can not like the other levels as a simple POJO data transfer, and Struts2 would like expect change becomes a reality6) Struts1 binding of the JSTL, the preparation of convenience for the page, Struts2 integrates ONGL, you can use JSTL, Therefore, Struts2 is more powerful expression language underCompared with Struts2 WebWork: Struts2 actually WebWork2.3, however, Struts2 WebWork, or with a little difference:1) Struts2 IOC no longer support the built-in containers, use Spring's IOC container2) Struts2 Ajax for Webwork features some of the label to use Dojo to be replacedServletServlet is a server-side Java application, platform and protocol independent features that can generate dynamic Web pages. Customer requests to play it (Web browser or other HTTP client) and server response (HTTP server, database or application) of the middle layer. Servlet Web server is located inside the server-side Java applications started from the command line with the traditional application of different Java, Servlet loaded by the Web server, the Web server must include the Java Virtual Machine to support Servlet.HTTP Servlet using a HTML form to send and receive data. To create an HTTP Servlet, need to extend the HttpServlet class, the class is a special way to handle HTML forms GenericServlet a subclass. HTML form is <FORM> and </ FORM> tag definition. Form typically includes input fields (such as text input fields, check boxes, radio buttons and selection lists) and a button for submitting data. When submitting information, they also specify which server should implement the Servlet (or other program). HttpServlet class contains the init (), destroy (), service () and other methods. Where init () and destroy () method is inherited.init () method: In the Servlet life period, only run once init () method. It is executed when the server load Servlet. You can configure the server to start the server or the client's first visit to Servlet fashion into the Servlet. No matter how many clients to access Servlet, will not repeat the init (). The default init () method is usually to meet the requirements, but can also use custom init () method to overwrite it, typically the management server-side resources. For example, you may write a custom init () to be used only once a load GIF images, GIF images and improve the Servlet returns with the performance of multiple clients request. Another example is to initialize the database connection. The default init () method sets the Servlet initialization parameters, and use it's ServletConfig object parameter to start the configuration, all covered by init () method of the Servlet should call super.init () to ensure that stillperform these tasks. In the call to service () method before, make sure you have completed the init () method.service () method: service () method is the core of Servlet. Whenever a client requests a HttpServlet object, the object of the service () method must be called, and passed to this method a "request" (ServletRequest) objects and a "response" (ServletResponse) object as a parameter. Already exists in the HttpServlet service () method. The default service function is invoked with the HTTP request method to do the corresponding functions. For example, if the HTTP request method is GET, the default on the call to doGet (). Servlet Servlet support should do HTTP method override function. Because HttpServlet.service () method checks whether the request method calls the appropriate treatment, unnecessary coverage service () method. Just do cover the corresponding method on it.Servlet response to the following types: an output stream, the browser based on its content type (such as text / HTML) to explain; an HTTP error response, redirect to another URL, servlet, JSP.doGet () method: When a client through the HTML form to send a HTTP GET request or when a direct request for a URL, doGet () method is called. Parameters associated with the GET request to the URL of the back, and send together with this request. When the server does not modify the data, you should use doGet () method. doPost () method: When a client through the HTML form to send a HTTP POST request, doPost () method is called. Parameters associated with the POST request as a separate HTTP request from the browser to the server. When the need to modify the server-side data, you should use the doPost () method.destroy () method: destroy () method is only executed once, that is, stop and uninstall the server to execute the method of Servlet. Typically, the Servlet as part of the process server to shut down. The default destroy () method is usually to meet the requirements, but can also cover it, and typically manage server-side resources. For example, if the Servlet will be accumulated in the run-time statistics, you can write a destroy () method is used in Servlet will not load the statistics saved in the file. Another example is to close the database connection.When the server uninstall Servlet, it will in all service () method call is completed, or at a specified time interval after the call to destroy () method. Running a Servlet service () method may have other threads, so make sure the call destroy () method, the thread has terminated or completed.GetServletConfig () method: GetServletConfig () method returns a ServletConfig object, which used to return the initialization parameters and ServletContext. ServletContext interface provides information about servlet environment. GetServletInfo () method: GetServletInfo () method is an alternative method, which provides information on the servlet, such as author, version, copyright.When the server calls sevlet of Service (), doGet () and doPost () of these three methods are needed "request" and "response" object as a parameter. "Request" object to provide the requested information, and the "response" object to provide a response message will be returned to the browser as a communications channel.javax.servlet packages in the relevant classes for the ServletResponse andServletRequest, while the javax.servlet.http package of related classes for the HttpServletRequest and HttpServletResponse. Servlet communication with the server through these objects and ultimately communicate with the client. Servlet through call "request" object approach informed the client environment, server environment, information and all information provided by the client. Servlet can call the "response" object methods to send response, the response is ready to send back to clientJSPJavaServerPages (JSP) technology provides a simple and fast way to create a display content dynamically generated Web pages. Leading from the industry, Sun has developed technology related to JSP specification that defines how the server and the interaction between the JSP page, the page also describes the format and syntax.JSP pages use XML tags and scriptlets (a way to use script code written in Java), encapsulates the logic of generating page content. It labels in various formats (HTML or XML) to respond directly passed back to the page. In this way, JSP pages to achieve a logical page design and display their separation.JSP technology is part of the Java family of technologies. JSP pages are compiled into a servlet, and may call JavaBeans components (beans) or EnterpriseJavaBeans components (enterprise beans), so that server-side processing. Therefore, JSP technology in building scalable web-based applications play an important role.JSP page is not confined to any particular platform or web server. JSP specification in the industry with a wide range of adaptability.JSP technology is the result of collaboration with industry, its design is an open, industry standards, and support the vast majority of servers, browsers and related tools. The use of reusable components and tags replaced on the page itself relies heavily on scripting languages, JSP technology has greatly accelerated the pace of development. Support the realization of all the JSP to Java programming language-based scripting language, it has inherent adaptability to support complex operations.JqueryjQuery is the second prototype followed by a good Javascrīpt framework. Its purpose is: to write less code, do more.It is lightweight js library (compressed only 21k), which is less than the other js library which, it is compatible CSS3, is also compatible with all browsers (IE 6.0 +, FF 1.5 +, Safari 2.0 +, Opera 9.0 +).jQuery is a fast, simple javaScript library, allowing users to more easily dealwith HTML documents, events, to achieve animation effects, and provide easy AJAX for interactive web site.jQuery also has a larger advantage is that it is all documented, and various applications are very detailed, as well as many mature plug-ins available.jQuery's html page to allow users to maintain separate code and html content, that is, no need to insert in the html inside a pile of js to call the command, and you can just define id.jQuery is the second prototype followed by a good Javascrīpt framework. On theprototype I use small, simple and understood. However, after using the jquery immediately attracted by her elegance. Some people use such a metaphor to compare the prototype and jquery: prototype like Java, and jquery like a ruby. In fact I prefer java (less contact with Ruby Bale), but a simple jquery does have considerable practical appeal ah! I put the project in the framework jquery as its the only class package. Use the meantime there is also a little bit of experience, in fact, these ideas, in the jquery documentation above may also be speaking, but still it down to stop notes.译文:Java是一种简单的,面向对象的,分布式的,解释型的,健壮安全的,结构中立的,可移植的,性能优异、多线程的动态语言。

毕业设计(论文)外文资料翻译(学生用)

毕业设计(论文)外文资料翻译(学生用)

毕业设计外文资料翻译学院:信息科学与工程学院专业:软件工程姓名: XXXXX学号: XXXXXXXXX外文出处: Think In Java (用外文写)附件: 1.外文资料翻译译文;2.外文原文。

附件1:外文资料翻译译文网络编程历史上的网络编程都倾向于困难、复杂,而且极易出错。

程序员必须掌握与网络有关的大量细节,有时甚至要对硬件有深刻的认识。

一般地,我们需要理解连网协议中不同的“层”(Layer)。

而且对于每个连网库,一般都包含了数量众多的函数,分别涉及信息块的连接、打包和拆包;这些块的来回运输;以及握手等等。

这是一项令人痛苦的工作。

但是,连网本身的概念并不是很难。

我们想获得位于其他地方某台机器上的信息,并把它们移到这儿;或者相反。

这与读写文件非常相似,只是文件存在于远程机器上,而且远程机器有权决定如何处理我们请求或者发送的数据。

Java最出色的一个地方就是它的“无痛苦连网”概念。

有关连网的基层细节已被尽可能地提取出去,并隐藏在JVM以及Java的本机安装系统里进行控制。

我们使用的编程模型是一个文件的模型;事实上,网络连接(一个“套接字”)已被封装到系统对象里,所以可象对其他数据流那样采用同样的方法调用。

除此以外,在我们处理另一个连网问题——同时控制多个网络连接——的时候,Java内建的多线程机制也是十分方便的。

本章将用一系列易懂的例子解释Java的连网支持。

15.1 机器的标识当然,为了分辨来自别处的一台机器,以及为了保证自己连接的是希望的那台机器,必须有一种机制能独一无二地标识出网络内的每台机器。

早期网络只解决了如何在本地网络环境中为机器提供唯一的名字。

但Java面向的是整个因特网,这要求用一种机制对来自世界各地的机器进行标识。

为达到这个目的,我们采用了IP(互联网地址)的概念。

IP以两种形式存在着:(1) 大家最熟悉的DNS(域名服务)形式。

我自己的域名是。

所以假定我在自己的域内有一台名为Opus的计算机,它的域名就可以是。

毕设文档(外文翻译)

毕设文档(外文翻译)

外文原文Modern database managementThe database is become more and more huge, more and more complicated.Database already no longer only saving tradition up simple data type,such as character list, numeral, date etc., of structure, and return saving data type of the structure complications,such as audio frequency, video frequency, portrait, and mix with text file etc.;The need that increase increasingly that turn the processing to the business intelligence also just cause excesssive inflation of the data warehouse even lost control, the database therefore becomes to add more huge.The database of today is much more than only a saving data, return saving function in the process of the data.The saving process that the database management system control, trigger the machine, customer from the definition function etc. has become new contents within the database management.The database management system is was used for more situation, erupt to flick more functions.The database was place in compare the more diverse terrace before up, such as the large machine, medium-sized machine, work station, personal computer even handheld PC in.At the same time, expand continuously appliedly along with the electronic commerce, more and more databases and Internet connect with each other.The database management therefore became to add complications more.The all these put forward the new request towards being engaged in the modern database managing person of the database management.The work mode of the database management system(DBMS)Accept the data of the applied procedure claim and handle the claim.Become the dataclaim(deluxe instruction) conversion of the customer the complicated machine code(the low class instruction) realization to databaseOf operation, from to database of in operation accept the search result, logarithms according to as a result carry on the processing(format conversion), placeManage the result returns to the customer.Main function of the data management systemThe definition function of the database.The DBMS provides the mode DDL(the data of the description concept mode definition language) definition database Of x-rated structure, two classes reflect elephant, the integrity of the definition data control, keep secret the restrict etc. control.Therefore, in the DBMS Medium should include the DDL to edit and translate the procedure.The database manipulates the function.The DBMS provides the DML(the data manipulates the language) to carry out the logarithms according to of operation.Basic data The operation have two type:Inspect(search) and renewal.(include to put the person and delete, renew)Therefore, in the DBMS should Include the DML edits and translate the procedure or explains the procedure.According to the Class of the language, the DML can be divided into the process DML and non- processes again Sex DML is two kinds of.The protection function of the database.The protection of DBMS to database mainly passes four aspect realizations:1, the instauration of the database.. At the database was break or the data inaccuracy, the system contain a database of ability recover the right appearance.2, number Erupt the control according to the database.At several customer at the same time to same piece according to carry on the operation, the system should can take into the control Prevent from break data in the DB.3, complete sex control of data.In the assurance database the accuracy of the data and the language righteousness and have.Effect, prevent°from any logarithms according to result in the operation of the mistake.4, the data safety control.Prevent°from the customer of have not yet the authorization,Data within the access database, to avoid the data to reveal, change or break.The maintenance function of the database.This part includes the data of the database to carry the person and convert, turn to keep, the database reshuffle.And the function supervision etc. function.Data dictionary.It deposit the database of the x-rated structure definition to be called the number in the database system.According to the dictionary(DD).All wanting to pass to the operations of the database DD then can carry out.Still deposit the hour of the database movement in the DD of Statistics the information, for example record piece, interview number of times etc..The top is the function that the general DBMS have, usually at big, medium-sized compute the function of DBMS of the on board realization stronger and more whole, the function of DBMS carry out on the microcomputer is weaker.The DBMS the mold piece constitute . Observe from a structure of mold, the DBMS is constitute by the two greatest parts:Search the processor and saving management machines.Search the processor contain four main compositions:The DDL edits and translate the machine, the DML edits and translate the machine, person the type DML prepares to edit and translate the machine and searchesCirculate the core procedure.The saving management machine contain four main compositions:Legal power and complete sex management machine, the business management machine, document management machine and the buffer area management machine.A, Internet to the influence that database manage.At the contemporary, the application of the Internet is omnipresent, the influence that rarely have the business activity to be free from the Internet technique to fly to develop soon.Really, the electronic commerce and traditional business activities compare its property to have already taken place the very big change;The successful electronic commerce should always be on-line, all-weather, at any time wait for providing the service for the customer.The all these none is not the foundation that the establishment is managing the system with allied strong database of Internet on.The traditional database managing person is responsible for the movement, function of database only excellent to turn, and guarantee to carry out it as far as possible high can use bine withInternet, increased the job of the modern database managing person and the difficulty that it work doubtless.The first mission that they face shortens the database to stop using the period(downtime) with maximum limit.Stop using to expect to is divided into the plan to stop using outside with plan inside stop using.Plan the outside stops using to include hardware breakdown, the procedure mistake or virus etc.s;Plan inside stop using to then include the software upgrade, database modification and usually maintenance.The industry analyst calculate plan outside stop using to expect to be up to 80% is from the application software of the mistake of the breakdown and factitiousness is output, the breakdown of the hardware and the breakup of the operate systems combine to seldom see.This kind of stops using to expect to be take place in more multifariously to input the business by mistake, not the importation document or parameters that criticize the processing operation and use the mistake fittinglies circulate the procedure etc. situation next.These problems can pass to use the daily record and softwares to develop the relevant project that the company provide to be resolve;Adopt the high-speed business instauration project, may make break down at not affect the database and can use sex premise under can expel.Stop using to compare outside with the plan, plan inside stop using to take place more multifarious, can use sex influence to the database larger, as a result even need to get the database managing person's attention.The on-line database management should shorten to stop using the period as far as possible and had better avoid it take place.For example usually the database carries on the reorganization need the pause movement;But to on-line database, make use of the new technique and can reorganize the data to a mirror elephant to copy, exchange again to copy after complete, make database kept on-line always, avoided stop using the period occurrence.These new techniques still include the on-line backup of the database and add to carry.Sometimes still need the system parameter of modify the on-line database;Usually, must re- start the system after changing these parameters, but in the electronic commerce activity, this can't accept.Now applies the new technique, need not the heavy system of , reborn address space and can immediately complete the modification toward the system parameter.Another important method that shortens to stop using to expect is the automation that the database supports usually.For example, the change watch structure is not an easy work, usually causing the extension of stop using the period.Apply the database modification tool of the automation, can at not produce the big interference under carry on to the relation database to modify arbitrarily.This kind of tool can produce the script of oneself.This kind of automation avoided the occurrence of the mistake, shortenning to stop using the period, raised the on-line database and can use sex biggest.While stopping using to expect and can'ting avoid, should complete the mission to shorten by make an effort to stop using the period as soon as possible.The possibly applied speed is quickest and come amiss the minimal technique, for example, when come from the third square's instauration, add to carry and reorganize the technique if consume the traditionaldatabase to complete the same work only need time of 1/3-1/2, then can consider to transplant this technique.Stopping using to expect allly not can avoid, for example the breakdown of the saving chip or the disk actuator.The occurrence that hardware break down can't predict.Pass the saving or automatic backup of redundancy may avoid some breakdowns result in of lose, but hard remove this kind of occurrence of problem completely.At this time, the beard adopts the instauration tool possibly, shortenning to stop using the period against time.Stop using unique problem that expect not modern database managing person's face.They still request to control the Java.Java main advantage is it can transplant sex, can make the procedure that the developer plait write circulate but did not need the characteristic of consider the hardware or operate system on any terrace.The Java is suitable for to create the dynamic state with the alternant page of Web.Executive procedure of Java on the Web, be called the applets, can download to combine the au to carry out automatically.Java still a kind of not be depend on in the Web of, can used for all-purpose plait distance language that develops the general applied procedure.Along with the Internet applied increasingly extensive, the Java will spread more.In recent years the Java is on the application of increase to almost carry on synchronously with the growth of the electronic commerce e the Web of the Java development application procedure, all is related with the interview of the database generally.So be the database the managing person must control Java basic knowledge at least.Most database managing persons all participated the database to apply the design of the procedure or adjust to try.Develop the company successfully before push the software product toward market always insure all procedure codes to get the full test and valuation.The test valuation insured the efficiency, result and applicabilities of the procedure.Most experts think the database application procedure the problem take place in the movement contain 70-80% write because of the SQL not appropriate cause with the logic mistake.Converting into the merchandise to go forward to go the valuation in the procedure is wise.If a database managing person does not understand the Java, then can't participate the procedure to adjust to try with excellent turn the match of the applied procedure and the relation database.The Java is popular of another a reason is it can strengthen the applied procedure and can use sex.The plait write the good procedure of Java to need to be edit and translate, but edit and translate the machine output's is not pure and can carry out the code, but byte code of Java.The byte code of Java is explained the performance by the Virtual Machine of Java, the different terrace have each from of Virtual Machine of Java.The code of Java can be the application a part of the procedure to circulate, can't to apply the procedure creation influence because the variety of the code of Java.Exactly by this kind of, the Java strengthenned the applied procedure and can use sex.In addition, the Java simplified the complicated renewal process of applied procedure, and make applied procedure of customer/ server in the dynamic state chain connects thedocument(DLL) to release and manage to become to be easy to carry on.Because the procedure of Java needs the solid hour to explain and then can carry out, its speed want to be slow doubtless in edit and translate a procedure.The adoption edits and translate the machine technique(JIT) or the high performance Javas to edit and translate the performance speed that the machine(HPJ) can raise the procedure of Java immediately.Applied technique of JIT, the byte code can edit and translate for the machine code first in the terrace that make selection, this movement that can accelerate the procedure of Java.But the technique of JIT is still a kind of explain the process, so slow in edit and translate a speed of procedure.The applied HPJ edit and translate the machine and can convert byte code real machine code, explain the depletion of the byte code while avoiding circulate.But the JIT or HPJs didn't°yet get the complete support of Java.The modern database managing person returns the beard how the understanding use the interview database of Java.Have the JDBC and SQLJ two kinds of methods with the Java interview database.The JDBC is the applied plait distance of the Java interview relation database to connect, similar in ODBC, include a series of type and connect.Any persons who acquaint with the applied plait distance and ODBC can very quickly master the JDBC.The JDBC provided the SQL interview relation database of the dynamic e the JDBC for the applied procedure that a kind of terrace weave to write, can be apply in other terraces.The JDBC of the adoption exactitude drives the procedure, application the procedure should can transplant.The SQLJ provided for the Java inside the SQL of the static state that plant.Translate the machine must use the processing Java procedure of SQLJ.To DB2 governor, this like prepare to edit and translate the procedure of COBOL.All databases provide the company with same in general use translation machine of the plan adoption.Translate the machine to separate the SQL language sentence and make it excellent to turn into the mold piece that database request from the code of Java.Also can increase the code of Java to adjust to use to the SQL by act for in the procedure of Java.Now the whole procedure can edit and translate for the byte code, and pass to bind to settle for the born pack of SQL.Should basis concrete circumstance choice adoption JDBC or SQLJ.The SQLJ can pass the efficiency that the SQL of the usage static state strengthens the procedure performance, this may have the important meaning to Java.The procedure of SQL that SQLJ is similar to transplant, the JDBC is similar then in adjust to connect a people's procedure with the class.The developer the other party method acquaints with the degree is also a factor that need to be consider.The applied Java is before develop, database the managing person must control the Java, and comprehend the JDBD and SQLJ differentiations.Two, the process management of the database.In addition to develop save, the core functions such as management and the interview data etc., the modern database management system still provided the additional function, integrating theprocess logic.The process logic of the typical model includes the saving process and triggers the machine, customer from the definition function, they manage the system with database to combine very close.The managing person of the database needs to carry on the management, design to these new functions, operation.Sometimes database the managing person also undertakes the mission of code these objects, but this always is not the best solution.The saving process can be think is a database management system in live procedure.The saving process transfered applied procedure code from customer's work station database server.A process with saving SQL can include several SQL language sentences.Change the quantity and conditions through an usage, can use the saving process to build up the very huge but complicated search, with the very complicated way renewal database.The saving process performance speed is quick, and after carrying out for the first time, they passed by in the database excellent turn and edit and translate to keep to save slowly in the high speed keep both in, be the customer sends out the performance again saving process of claim, can direct from the high speed saves slowly in the performance, the province went to excellent turn and edit and translate the stage, save to carry out a great deal of time of the process, and reduced the network the correspondence quantity.Trigger the machine is saving to manage the process that affairs in the system drive in the database, belonging to the particular form.Trigger the machine can be see make is deluxe form of the rule or engagement that writes with the process logic.It can't be adjusted to use or carry out directly, but the result that be the behavior is managed the system by the database to carry out automatically of.Once some on triggering the machine to is establish, always at satisfy the renewal and insert, delete etc. the condition of the occurrence carry out.Customer from the definition function is the commutability standard of, inside place of the procedure of the SQL function performance.The customer provides from the definition function according to the result that a series input the value, can the elephant is any inside place of function of SQL is similar to be used for SQL language sentence in.Saving process, trigger the machine, customer are similar to be managed the system control by the database like other database objects of form, index...etc. from the definition function.But they are processes rather than avowal, so again different from the latter have.These object whether the physics ground halts to stay to manage in the database in the system, managing the dissimilarity but the dissimilarity of the system according to the database.But they were always register to manage the system in the database, and keep with it of connection.The main reason of the applied process logic replies for the sake of the exaltation with sex.Their manies of commutability apply the repetition in the procedure code, halting to stay in same position of the database server.This may make lately applied procedure of development of workload has the decrease, and kept coherent sex of the applied procedure.If all apply to many operations of database the process logicrather than several codes of repetition segment, then can insure to operate each time all at carry out the same code.Saving process, trigger the machine, customer from the definition function the function that etc. provide is important doubtless, but they alsoed give management bring the difficulty.The managing person need know when and how carry on the test to them.The management data object is the managing person's basic mission, but can't the expectation data and database the expert will adjust to try out the C, COBOL even the function and the procedure that the SQL plait write.Although many companies request the database, the managing person is the expert of SQL also, usually they are really not so.But the modern database managing person must adapt the management to the process logic, this request has the knowledge of the both side of the database and the plait distance, although don't request to participate to weave the distance, shoulds lead the evaluation to the code at least to combine the usage toward the database process logic to carry on the management.《Modern database management》Prentice Hall; 6 edition,2002中文翻译JSP相关技术Sun公司的Java Servlet平台直接解决了CGI程序的两个主要缺点。

毕业设计(论文)外文资料翻译【范本模板】

毕业设计(论文)外文资料翻译【范本模板】

南京理工大学紫金学院毕业设计(论文)外文资料翻译系:机械系专业:车辆工程专业姓名:宋磊春学号:070102234外文出处:EDU_E_CAT_VBA_FF_V5R9(用外文写)附件:1。

外文资料翻译译文;2.外文原文.附件1:外文资料翻译译文CATIA V5 的自动化CATIA V5的自动化和脚本:在NT 和Unix上:脚本允许你用宏指令以非常简单的方式计划CATIA。

CATIA 使用在MS –VBScript中(V5.x中在NT和UNIX3。

0 )的共用部分来使得在两个平台上运行相同的宏。

在NT 平台上:自动化允许CATIA像Word/Excel或者Visual Basic程序那样与其他外用分享目标。

ATIA 能使用Word/Excel对象就像Word/Excel能使用CATIA 对象。

在Unix 平台上:CATIA将来的版本将允许从Java分享它的对象。

这将提供在Unix 和NT 之间的一个完美兼容。

CATIA V5 自动化:介绍(仅限NT)自动化允许在几个进程之间的联系:CATIA V5 在NT 上:接口COM:Visual Basic 脚本(对宏来说),Visual Basic 为应用(适合前:Word/Excel ),Visual Basic。

COM(零部件目标模型)是“微软“标准于几个应用程序之间的共享对象。

Automation 是一种“微软“技术,它使用一种解释环境中的COM对象。

ActiveX 组成部分是“微软“标准于几个应用程序之间的共享对象,即使在解释环境里。

OLE(对象的链接与嵌入)意思是资料可以在一个其他应用OLE的资料里连结并且可以被编辑的方法(在适当的位置编辑).在VBScript,VBA和Visual Basic之间的差别:Visual Basic(VB)是全部的版本。

它能产生独立的计划,它也能建立ActiveX 和服务器。

它可以被编辑。

VB中提供了一个补充文件名为“在线丛书“(VB的5。

毕业设计(论文)外文资料翻译

毕业设计(论文)外文资料翻译

毕业设计(论文)外文资料翻译学院:艺术学院专业:环境设计姓名:学号:外文出处: The Swedish Country House附件: 1.外文资料翻译译文;2.外文原文附件1:外文资料翻译译文室内装饰简述一室内装饰设计要素1 空间要素空间的合理化并给人们以美的感受是设计基本的任务。

要勇于探索时代、技术赋于空间的新形象,不要拘泥于过去形成的空间形象。

2 色彩要求室内色彩除对视觉环境产生影响外,还直接影响人们的情绪、心理。

科学的用色有利于工作,有助于健康。

色彩处理得当既能符合功能要求又能取得美的效果。

室内色彩除了必须遵守一般的色彩规律外,还随着时代审美观的变化而有所不同。

3 光影要求人类喜爱大自然的美景,常常把阳光直接引入室内,以消除室内的黑暗感和封闭感,特别是顶光和柔和的散射光,使室内空间更为亲切自然。

光影的变换,使室内更加丰富多彩,给人以多种感受。

4 装饰要素室内整体空间中不可缺少的建筑构件、如柱子、墙面等,结合功能需要加以装饰,可共同构成完美的室内环境。

充分利用不同装饰材料的质地特征,可以获得千变完化和不同风格的室内艺术效果,同时还能体现地区的历史文化特征。

5 陈设要素室内家具、地毯、窗帘等,均为生活必需品,其造型往往具有陈设特征,大多数起着装饰作用。

实用和装饰二者应互相协调,求的功能和形式统一而有变化,使室内空间舒适得体,富有个性。

6 绿化要素室内设计中绿化以成为改善室内环境的重要手段。

室内移花栽木,利用绿化和小品以沟通室内外环境、扩大室内空间感及美化空间均起着积极作用。

二室内装饰设计的基本原则1 室内装饰设计要满足使用功能要求室内设计是以创造良好的室内空间环境为宗旨,使室内环境合理化、舒适化、科学化;要考虑人们的活动规律处理好空间关系,空间尺寸,空间比例;合理配置陈设与家具,妥善解决室内通风,采光与照明,注意室内色调的总体效果。

2 室内装饰设计要满足精神功能要求室内设计的精神就是要影响人们的情感,乃至影响人们的意志和行动,所以要研究人们的认识特征和规律;研究人的情感与意志;研究人和环境的相互作用。

毕业设计外文资料翻译——翻译译文

毕业设计外文资料翻译——翻译译文

毕业设计外文资料翻译(二)外文出处:Jules Houde 《Sustainable development slowed down by bad construction practices and natural and technological disasters》2、外文资料翻译译文混凝土结构的耐久性即使是工程师认为的最耐久和最合理的混凝土材料,在一定的条件下,混凝土也会由于开裂、钢筋锈蚀、化学侵蚀等一系列不利因素的影响而易受伤害。

近年来报道了各种关于混凝土结构耐久性不合格的例子。

尤其令人震惊的是混凝土的结构过早恶化的迹象越来越多。

每年为了维护混凝土的耐久性,其成本不断增加。

根据最近在国内和国际中的调查揭示,这些成本在八十年代间翻了一番,并将会在九十年代变成三倍。

越来越多的混凝土结构耐久性不合格的案例使从事混凝土行业的商家措手不及。

混凝土结构不仅代表了社会的巨大投资,也代表了如果耐久性问题不及时解决可能遇到的成本,更代表着,混凝土作为主要建筑材料,其耐久性问题可能导致的全球不公平竞争以及行业信誉等等问题。

因此,国际混凝土行业受到了强烈要求制定和实施合理的措施以解决当前耐久性问题的双重的挑战,即:找到有效措施来解决现有结构剩余寿命过早恶化的威胁。

纳入新的结构知识、经验和新的研究结果,以便监测结构耐久性,从而确保未来混凝土结构所需的服务性能。

所有参与规划、设计和施工过程的人,应该具有获得对可能恶化的过程和决定性影响参数的最低理解的可能性。

这种基本知识能力是要在正确的时间做出正确的决定,以确保混凝土结构耐久性要求的前提。

加固保护混凝土中的钢筋受到碱性的钝化层(pH值大于12.5)保护而阻止了锈蚀。

这种钝化层阻碍钢溶解。

因此,即使所有其它条件都满足(主要是氧气和水分),钢筋受到锈蚀也都是不可能的。

混凝土的碳化作用或是氯离子的活动可以降低局部面积或更大面积的pH值。

当加固层的pH值低于9或是氯化物含量超过一个临界值时,钝化层和防腐保护层就会失效,钢筋受腐蚀是可能的。

毕业设计外文翻译例文

毕业设计外文翻译例文

大连科技学院毕业设计(论文)外文翻译学生姓名专业班级指导教师职称所在单位教研室主任完成日期 2016年4月15日Translation EquivalenceDespite the fact that the world is becoming a global village, translation remains a major way for languages and cultures to interact and influence each other. And name translation, especially government name translation, occupies a quite significant place in international exchange.Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text. While interpreting—the facilitating of oral or sign-language communication between users of different languages—antedates writing, translation began only after the appearance of written literature. There exist partial translations of the Sumerian Epic of Gilgamesh (ca. 2000 BCE) into Southwest Asian languages of the second millennium BCE. Translators always risk inappropriate spill-over of source-language idiom and usage into the target-language translation. On the other hand, spill-overs have imported useful source-language calques and loanwords that have enriched the target languages. Indeed, translators have helped substantially to shape the languages into which they have translated. Due to the demands of business documentation consequent to the Industrial Revolution that began in the mid-18th century, some translation specialties have become formalized, with dedicated schools and professional associations. Because of the laboriousness of translation, since the 1940s engineers have sought to automate translation (machine translation) or to mechanically aid the human translator (computer-assisted translation). The rise of the Internet has fostered a world-wide market for translation services and has facilitated language localizationIt is generally accepted that translation, not as a separate entity, blooms into flower under such circumstances like culture, societal functions, politics and power relations. Nowadays, the field of translation studies is immersed with abundantly diversified translation standards, with no exception that some of them are presented by renowned figures and are rather authoritative. In the translation practice, however, how should we select the so-called translation standards to serve as our guidelines in the translation process and how should we adopt the translation standards to evaluate a translation product?In the macro - context of flourish of linguistic theories, theorists in the translation circle, keep to the golden law of the principle of equivalence. The theory of Translation Equivalence is the central issue in western translation theories. And the presentation of this theory gives great impetus to the development and improvement of translation theory. It‟s not diffi cult for us to discover that it is the theory of Translation Equivalence that serves as guidelines in government name translation in China. Name translation, as defined, is the replacement of thename in the source language by an equivalent name or other words in the target language. Translating Chinese government names into English, similarly, is replacing the Chinese government name with an equivalent in English.Metaphorically speaking, translation is often described as a moving trajectory going from A to B along a path or a container to carry something across from A to B. This view is commonly held by both translation practitioners and theorists in the West. In this view, they do not expect that this trajectory or something will change its identity as it moves or as it is carried. In China, to translate is also understood by many people normally as “to translate the whole text sentence by sentence and paragraph by paragraph, without any omission, addition, or other changes. In both views, the source text and the target text must be “the same”. This helps explain the etymological source for the term “translation equivalence”. It is in essence a word which describes the relationship between the ST and the TT.Equivalence means the state or fact or property of being equivalent. It is widely used in several scientific fields such as chemistry and mathematics. Therefore, it comes to have a strong scientific meaning that is rather absolute and concise. Influenced by this, translation equivalence also comes to have an absolute denotation though it was first applied in translation study as a general word. From a linguistic point of view, it can be divided into three sub-types, i.e., formal equivalence, semantic equivalence, and pragmatic equivalence. In actual translation, it frequently happens that they cannot be obtained at the same time, thus forming a kind of relative translation equivalence in terms of quality. In terms of quantity, sometimes the ST and TT are not equivalent too. Absolute translation equivalence both in quality and quantity, even though obtainable, is limited to a few cases.The following is a brief discussion of translation equivalence study conducted by three influential western scholars, Eugene Nida, Andrew Chesterman and Peter Newmark. It‟s expected that their studies can instruct GNT study in China and provide translators with insightful methods.Nida‟s definition of translation is: “Translation consists in reproducing in the receptor language the closest natural equivalent of the source language message, first in terms of meaning and secondly in terms of style.” It i s a replacement of textual material in one language〔SL〕by equivalent textual material in another language(TL). The translator must strive for equivalence rather than identity. In a sense, this is just another way of emphasizing the reproducing of the message rather than the conservation of the form of the utterance. The message in the receptor language should match as closely as possible the different elements in the source language to reproduce as literally and meaningfully as possible the form and content of the original. Translation equivalence is an empirical phenomenon discovered bycomparing SL and TL texts and it‟s a useful operational concept like the term “unit of translati on”.Nida argues that there are two different types of equivalence, namely formal equivalence and dynamic equivalence. Formal correspondence focuses attention on the message itself, in both form and content, whereas dynamic equivalence is based upon “the principle of equivalent effect”.Formal correspondence consists of a TL item which represents the closest equivalent of a ST word or phrase. Nida and Taber make it clear that there are not always formal equivalents between language pairs. Therefore, formal equivalents should be used wherever possible if the translation aims at achieving formal rather than dynamic equivalence. The use of formal equivalents might at times have serious implications in the TT since the translation will not be easily understood by the target readership. According to Nida and Taber, formal correspondence distorts the grammatical and stylistic patterns of the receptor language, and hence distorts the message, so as to cause the receptor to misunderstand or to labor unduly hard.Dyn amic equivalence is based on what Nida calls “the principle of equivalent effect” where the relationship between receptor and message should be substantially the same as that which existed between the original receptors and the message. The message has to be modified to the receptor‟s linguistic needs and cultural expectation and aims at complete naturalness of expression. Naturalness is a key requirement for Nida. He defines the goal of dynamic equivalence as seeking the closest natural equivalent to the SL message. This receptor-oriented approach considers adaptations of grammar, of lexicon and of cultural references to be essential in order to achieve naturalness; the TL should not show interference from the SL, and the …foreignness …of the ST setting is minimized.Nida is in favor of the application of dynamic equivalence, as a more effective translation procedure. Thus, the product of the translation process, that is the text in the TL, must have the same impact on the different readers it was addressing. Only in Nida and Taber's edition is it clearly stated that dynamic equivalence in translation is far more than mere correct communication of information.As Andrew Chesterman points out in his recent book Memes of Translation, equivalence is one of the five element of translation theory, standing shoulder to shoulder with source-target, untranslatability, free-vs-literal, All-writing-is-translating in importance. Pragmatically speaking, observed Chesterman, “the only true examples of equivalence (i.e., absolute equivalence) are those in which an ST item X is invariably translated into a given TL as Y, and vice versa. Typical examples would be words denoting numbers (with the exceptionof contexts in which they have culture-bound connotations, such as “magic” or “unlucky”), certain technical terms (oxygen, molecule) and the like. From this point of view, the only true test of equivalence would be invariable back-translation. This, of course, is unlikely to occur except in the case of a small set of lexical items, or perhaps simple isolated syntactic structure”.Peter Newmark. Departing from Nida‟s receptor-oriented line, Newmark argues that the success of equivalent effect is “illusory “and that the conflict of loyalties and the gap between emphasis on source and target language will always remain as the overriding problem in translation theory and practice. He suggests narrowing the gap by replacing the old terms with those of semantic and communicative translation. The former attempts to render, as closely as the semantic and syntactic structures of the second language allow, the exact contextual meaning of the original, while the latter “attempts to produce on its readers an effect as close as possible to that obtained on the readers of the original.” Newmark‟s description of communicative translation resembles Nida‟s dynamic equivalence in the effect it is trying to create on the TT reader, while semantic translation has similarities to Nida‟s formal equivalence.Meanwhile, Newmark points out that only by combining both semantic and communicative translation can we achieve the goal of keeping the …spirit‟ of the original. Semantic translation requires the translator retain the aesthetic value of the original, trying his best to keep the linguistic feature and characteristic style of the author. According to semantic translation, the translator should always retain the semantic and syntactic structures of the original. Deletion and abridgement lead to distortion of the author‟s intention and his writing style.翻译对等尽管全世界正在渐渐成为一个地球村,但翻译仍然是语言和和文化之间的交流互动和相互影响的主要方式之一。

毕业设计外文文献翻译

毕业设计外文文献翻译

毕业设计外文文献翻译Graduation design of foreign literature translation 700 words Title: The Impact of Artificial Intelligence on the Job Market Abstract:With the rapid development of artificial intelligence (AI), concerns arise about its impact on the job market. This paper explores the potential effects of AI on various industries, including healthcare, manufacturing, and transportation, and the implications for employment. The findings suggest that while AI has the potential to automate repetitive tasks and increase productivity, it may also lead to job displacement and a shift in job requirements. The paper concludes with a discussion on the importance of upskilling and retraining for workers to adapt to the changing job market.1. IntroductionArtificial intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence. AI has made significant advancements in recent years, with applications in various industries, such as healthcare, manufacturing, and transportation. As AI technology continues to evolve, concerns arise about its impact on the job market. This paper aims to explore the potential effects of AI on employment and discuss the implications for workers.2. Potential Effects of AI on the Job Market2.1 Automation of Repetitive TasksOne of the major impacts of AI on the job market is the automation of repetitive tasks. AI systems can perform tasks faster and moreaccurately than humans, particularly in industries that involve routine and predictable tasks, such as manufacturing and data entry. This automation has the potential to increase productivity and efficiency, but also poses a risk to jobs that can be easily replicated by AI.2.2 Job DisplacementAnother potential effect of AI on the job market is job displacement. As AI systems become more sophisticated and capable of performing complex tasks, there is a possibility that workers may be replaced by machines. This is particularly evident in industries such as transportation, where autonomous vehicles may replace human drivers, and customer service, where chatbots can handle customer inquiries. While job displacement may lead to short-term unemployment, it also creates opportunities for new jobs in industries related to AI.2.3 Shifting Job RequirementsWith the introduction of AI, job requirements are expected to shift. While AI may automate certain tasks, it also creates a demand for workers with the knowledge and skills to develop and maintain AI systems. This shift in job requirements may require workers to adapt and learn new skills to remain competitive in the job market.3. Implications for EmploymentThe impact of AI on employment is complex and multifaceted. On one hand, AI has the potential to increase productivity, create new jobs, and improve overall economic growth. On the other hand, it may lead to job displacement and a shift in job requirements. To mitigate the negative effects of AI on employment, it is essentialfor workers to upskill and retrain themselves to meet the changing demands of the job market.4. ConclusionIn conclusion, the rapid development of AI has significant implications for the job market. While AI has the potential to automate repetitive tasks and increase productivity, it may also lead to job displacement and a shift in job requirements. To adapt to the changing job market, workers should focus on upskilling and continuous learning to remain competitive. Overall, the impact of AI on employment will depend on how it is integrated into various industries and how workers and policymakers respond to these changes.。

(完整版)_毕业设计外文资料翻译_21913767

(完整版)_毕业设计外文资料翻译_21913767

毕业设计外文资料翻译学院:电气信息学院专业:电气工程及其自动化姓名:房哲学号:外文出处: 840- IEEE附件: 1.外文资料翻译译文;2.外文原文。

附件1:外文资料翻译译文Applications of DSP&ARM for Microprocessor Protection Device in DistanceProtectionNing Yang, Wanjian Zhao,Yaoliang Xu,Shaocheng Zhang,Yi Zhu Faculty of Electric and Automatic Engineering, Shanghai University of Electric Power, Shanghai, PR ChinaAbstract—By studying the development of microprocessor protection and features of the protection devices, the requirement of the device under the multi-mission of monitoring, protecting, controlling and communicating is discussed in this paper, and a microprocessor protection device of dual CPU structure with ARM & DSP is designed. DSP TMS320F2812 and ARM S3C2410 are used in this design. The print circuit boards of this device are finished and fast Fourier transform is chosen. According to the distance protection scheme in 110 kV transmission based on MATLAB, the tests of the device are done. The results show that the design of DSP & ARM in this paper is feasible in microprocessor protection, And it software and , ARM, DSP, Distance protectionI. INTRODUCTIONWith the rapid development of electric power system in China, the operating environment of microprocessor protection isgetting more and complexity, and need some devices with urgent affair that to research and design new devices based on DSP. The designer uses the TMS320F2812 chip as main control chip, so that the acquisition and the processing of microprocessor protection can be realized. Otherwise, the Shandong University developed a kind of microprocessor protection devices based on ARM processor. This device uses DEVICEARM2200 which made by Zhouligong Company as the protective CPU. However, both the forward two devices still exist defects whether in accuracy and speed. Therefore, in order to improve the above defects, in this article, the this design. The paper will be mainly concerned with the accuracy and speed of the system and organized as follows. In section I, the researching backgrounds are provided. In section II, a brief introduction to the structure of system is given. The next two sections specifically describe the implementation of section V, followed by concluding remarks in the final section. II. SYSMEM STRUCTUREA dual CPU is used in this system, which is undertaken to collect and transform the electric quantum, control logic operations, print output, communicate with is adopted in this design. The new protection functions are easy to be exploited in the device. Overall system structure diagram is shown in figure1. Figure1. System structure diagram In the system, fault data is sent to the DSP, and theanalog of failure data is converted to digital by AD conversion [1]. The protection of algorithm implementation is also done in the DSP. Output results of logic judgment part from DSP are sent to the ARM [2]. ARM chip is used to display results. Parameters can be modified and the key processing can be done in the ARM. Switch signal can be directly connected to the DSP chip through the photoelectric isolation, and the trip is achieved based on protection algorithm [3].III. SYSTEM HARDWARE IMPLEMENTATIONA. The function and design of DSP subsystem Data acquisition, AD conversion, calculation and the implementation of protection algorithm are completed in this part. Output results of logic judgment part are sent to the ARM.a) DSP core: TMS320F2812, which is chosen in this device, is a be directly connected to the DSP chip through the photoelectric isolation, thus the parallel IO interface chip is not necessary and the reliability of binary inputsoutputs is improved in the protection deviceIV. STEM SOFTWARE IMPLEMENTATIONDSP&ARM are required to complete the initializationafter powering-on.1) The programmable interface for initialized, and the port function is set, and the output port is given the initial value. All relay exports are not under action state.2) All equipments are in good condition thought a comprehensive self-test of system, when they are put into service. Otherwise it will be shutdown. All the digital input status need to read and saved, and flag word is cleared, while sampling unit is initialized, and the pointer position and the sampling time interval are set for the DSP. Figure7 is ARM subsystem operation and management program flow. It can receive and display the DSP’s data through the SPI communication interrupt program. It is determined by the needs whether the alarm is demanded and the in this device. If it is used, E2PROM must be expanded.Figure7. ARM subsystem operation and management program flow MATLAB power system toolbox (PSB) is used to establish a 110 KV line model in this paper [6]. Based on this simulation tool, fault phenomenon can be specified through the establishment of accurate and complete fault model [7]. The design of protection program is verified by more truthful and accurate data, and the effectiveness of microcomputer protection device is tested in this paper.Figure8. 110 KV transmission line schematic The most common transmission lines are the the power system and the 110 KV transmission line schematic is shown in figure8 [8]. According to the circuit network separation theory and the equivalent substitution theory, as theintermediate links of power transmission network, neutral grounded power system is separated from the entire network [9]. This article factors, and the line, and110KV transmission line model is built [11]. MATLAB model is shown in figure9, and the down time occurred at 0.04 seconds.Figure9. 110 kV power line MATLAB simulation graph The majority protection principle is based on fundamental component of fault signal in the power system[12]. The fault can be diagnosed according to fundamental voltagecurrent component or combination of both. One of the key issues of microcomputer is protection algorithmand the accuracy and speed are main considered. In addition, the influence of as the basis algorithm for the protection algorithm in this paper. Fourier algorithm is analyzed by the one phase fault current according to the figureure9. The analog of the sampled voltage andcurrent is not a sine wave, but a periodic function of time in practical situations. Fourier algorithm is the calculation of transforming from time domain to the frequency domain. The amplitude of sine (a ) and cosineb ) of fundamental component can eliminate the influence of DC component and process. So this algorithm formula of the effective value of fundamental component isX = a+b, andthe phase angle is obtained byarctan(). While the sampling frequency is N times than the fundamental frequency (N is an integer), in real microcomputer protection system, the Data acquisition module: First, in analog input module, the strong current signal of current transformer and voltage transformer is converted to weak electric signal which is used in digital protection and monitor device in the power system, and then weak electric signal is transformed into analog signal which is coincided AD converter, finally these analog signals are transformed into digital which can be identified by CPU. The input signal can be calculated and judged [4].c) Digital modules: Digital input circuit diagram is shown in figure2. It is a monitoring status of contact (closed or open), including the circuit breaker and the auxiliary contacts of disconnector. Outer device includes contact input circuit of blocking reclosing and position input circuit of splices. All these parts are isolated through the optcoupler before entering the DSP [5].Figure2. Digital input circuit diagram Digital output includes outlets of trip and the local central signals. The way of parallel output is generally adopted to control the relay contact and the optcoupler is adopted to achieve separation between computer system and circuit breakers in the outlet links. Digital output circuit diagram is shown as figure3.Figure3. Digital output circuit diagramB. The function and design of ARM subsystem Considering the factors of real-time operating system, embedded Ethernet and large amounts of data management, ARM microprocessor is used to assist DSP in this device. ARM subsystem is charged for the clock reference, communication, LCD display, keyboard in the design. The LPC2478 module: The communication device can be divided into internal communication and external communication device. Internal communication refers to the data exchange processes between the dual-processor ARM&DSP. External communications include RS485 and Ethernet. RS485 schematic is shown in figure4.Figure4. RS485 schematicc) Man-machine communication module: The keyboard circuit diagram is shown in figure5. The ways of keypad and menu work are used in the device and the key is connected to the ARM chip through ZLG7290. The chip contains register with batter processing, and it not only can distinguish between click and batter which can prevent some misoperations, but also can modify some features.Figure5. Keyboard circuit diagram Based on the above design, the printed circuit board of the system is finished. The PSB graph is shown in figure6.Figure6. PSB graph 228approximate calculationof integral with trapezoidal method x represents a sample value in the k sampling point within one frequency.) And the equation (1) is replaced toThe Amplitude and phase of Nth-degree be obtained, and the will be guaranteed. Of course, the figure11 is gained through analysis and treatment of figure10.Figure11. Outcome of Fast Fourier operation In figure10, Vertical axis is the percentage of each , and the effective value of current fundamental component is 17.9595.V. SYSTEM DEBUGGINGA system debugging platform is built based on the above work, and used to test the performance of the device. It is shownin figure12. Figure12. System debugging graph . In the figure, A is the Debugging computer, and it is used to debug DSP and ARM;B is the the substation and is used to receive the information from protection devices; C is the DSP emulator which is used to connect the debugging computer (A) with DSP-JTAG interface and take charge the DSP debugging; D is the ARM emulator that is used to connect the debugging computer (A) with DSP-JTAG interface and take charge the ARM debugging; E is the liquid crystal display and it is used to display the basic electrical parameters of voltage and current; F is the relay which is used to test the device whether its trip is correct; G is the RS485 interface which is connected with the the device and flow of power transmission lines, when the circuit comesinto the conclusion of "line faults" quickly according to the current and voltage data, and switch status can be obtained by calculation, analysis and sampling. Then, when the trip command is sent and the breaker is disconnected, the fault line will be cut off. Therefore, the security of power system is guaranteed. Generally speaking, the circuit breaker is in the substation of 110V system, and it is connected with corresponding nodes in the operation box. Operation loop isconnected with the tripping signal panel of the microcomputer protection. The relay on the tripping signal panel is normally open. The relay is closed immediately when it gets the tripping command, and the breaker is disconnected by the operation loop, and the trip action will be finished. The debugging data of the system comes from 110 kV simulation model, and it will be imported into the storage space of DSP through the CCS program. The result of the debugging shows that the relay can be closed when the fault data is analyzed. So it can be proved that the device is effective and the protection scheme is feasible. VI. CONCLUSION Base on the development of microprocessor protection, DSP TMS320F2812 of TI Company and ARM S3C2410 ofSamsung Company are chosen in this article, and a dual CPU structure with ARM & DSP is designed. Furthermore, the print circuit boards of this device are made. The fast Fourier algorithm is chosen as the basic algorithm for the protection algorithm. Thesimulation results show that the design is feasible and reliable in microprocessor protection. The are diverse and the algorithms are emerge in endlessly, so the research prospects are extensive. ACKNOWLEDGMENT This work was supported by the National Natural Science Foundation of China under grant and Supported by Innovation Program of Shanghai Municipal Education Commission under grant 11ZZ170, and Leading Academic Discipline Project of Shanghai Municipal Education Commission, Project Number: J51301. REFERENCES[1] Texas Instruments, “TMS320C28x DSP CPU and Instruction Set Reference Guide,” Texas Instruments, October 2003[2] McLaren P.G., Kuffel R, Wierckx R., Giesbrecht J., and Arendt L, “A real time digital simulator for testing relays,” Power Delivery, IEEE Transactions on,Volume:7Issue:1,Jan.2003:207~213[3] NARI-Relays Electric Co.Ltd, "Technical and Operation Manual of RCS-900 Series Protection Relay for Generator-Transformer Unit," Dec.2001.[4] Xinmin Yang, Junlin Yang, "Training Materials of Microprocessor-based Protection Relays in Power System," 3rd ed., Beijing: Electric Power Press of China[5] Fisher, A.G., Harpley R.M., 1987, "New Options for 110kV Urban Network Design." Power Engineering Journal..[6] Williams, A., Warren R.H.J., 1984, "Methods ofUsing Data from Computer Simulations to test protection equipment,” Proc. IEE 131, Pt.C., 7, 149 – 156[7] “The Method of Electromagnetic Disturbance Assessment and Its Influence on Electronics.” AmSu publishers. Herold of Amur State University[8] M.M. Saha, K. Wikstrom and S. Lindahl, “A new Approach to Fast Distance Protection with Adaptive Features”, Companion paper, Sixth International conference on Developments in power system protection, University of Nottingham,UK,25-27 March,1997.[9] B. Bachmann, D.G. Hart, Y. Hu, and M. Saha, “Algorithms for Locating Faults on Series Compensated Lines Using Neural Network and De terministic Methods,” IEEE Winter Meeting, 96 WM 021-6 PWRD, Baltimore, 1996. 3.[10] F. Anderson and W. Elmore, “Overview of Series-Compensated Line Protection Philosophies”, Western Relay Protective Conference, Washington State University, Spokane, Washington, October 1990.[11] G. Nimmersjo, M.M. Saha. “A New Approach to High Speed relaying based on Transient Phenomena”, IEEDPSP-89, Edinburgh, UK, April 1989.[12] M. Chamia, S. Liberman, “Ultra High Speed Relay for EHVKJHV Transmission Lines Development, Design and Application”, IEEE Transactions on PAS, Vol.PAS-97, No.6, Dec. 1998.附件2:外文原文(复印件)基于DSP&ARM微处理器的线路距离保护摘要通过研究微处理器的开发保护和保护装置的特点,要求设备的多任务下的监测、保护、控制和沟通探讨,和双CPU结构的微处理器保护装置与ARM和DSP设计。

_毕业设计外文文献及翻译_

_毕业设计外文文献及翻译_

_毕业设计外文文献及翻译_Graduation Thesis Foreign Literature Review and Chinese Translation1. Title: "The Impact of Artificial Intelligence on Society"Abstract:人工智能对社会的影响摘要:人工智能技术的快速发展引发了关于其对社会影响的讨论。

本文探讨了人工智能正在重塑不同行业(包括医疗保健、交通运输和教育)的各种方式。

还讨论了AI实施的潜在益处和挑战,以及伦理考量。

总体而言,本文旨在提供对人工智能对社会影响的全面概述。

2. Title: "The Future of Work: Automation and Job Displacement"Abstract:With the rise of automation technologies, there is growing concern about the potential displacement of workers in various industries. This paper examines the trends in automation and its impact on jobs, as well as the implications for workforce development and retraining programs. The ethical and social implications of automation are also discussed, along with potential strategies for mitigating job displacement effects.工作的未来:自动化和失业摘要:随着自动化技术的兴起,人们越来越担心各行业工人可能被替代的问题。

毕业设计外文资料翻译译文

毕业设计外文资料翻译译文

附件1:外文资料翻译译文包装对食品发展的影响一个消费者对某个产品的第一印象来说包装是至关重要的,包括沟通的可取性,可接受性,健康饮食形象等。

食品能够提供广泛的产品和包装组合,传达自己加工的形象感知给消费者,例如新鲜包装/准备,冷藏,冷冻,超高温无菌,消毒(灭菌),烘干产品。

食物的最重要的质量属性之一,是它的味道,其影响人类的感官知觉,即味觉和嗅觉。

味道可以很大程度作退化的处理和/或扩展存储。

其他质量属性,也可能受到影响,包括颜色,质地和营养成分。

食品质量不仅取决于原材料,添加剂,加工和包装的方法,而且其预期的货架寿命(保质期)过程中遇到的分布和储存条件的质量。

越来越多的竞争当中,食品生产商,零售商和供应商;和质量审核供应商有显着提高食品质量以及急剧增加包装食品的选择。

这些改进也得益于严格的冷藏链中的温度控制和越来越挑剔的消费者。

保质期的一个定义是:在食品加工和包装组合下,在食品的容器和条件,在销售点分布在特定系统的时间能保持令人满意的食味品质。

保质期,可以用来作为一个新鲜的概念,促进营销的工具。

延期或保质期长的产品,还提供产品的使用时间,方便以及减少浪费食物的风险,消费者和/或零售商。

包装产品的质量和保质期的主题是在第3章中详细讨论。

包装为消费者提供有关产品的重要信息,在许多情况下,使用的包装和/或产品,包括事实信息如重量,体积,配料,制造商的细节,营养价值,烹饪和开放的指示,除了法律准则的最小尺寸的文字和数字,有定义的各类产品。

消费者寻求更详细的产品信息,同时,许多标签已经成为多语种。

标签的可读性是为视障人士的问题,这很可能成为一个对越来越多的老年人口越来越重要的问题。

食物的选择和包装创新的一个主要驱动力是为了方便消费者的需求。

这里有许多方便的现代包装所提供的属性,这些措施包括易于接入和开放,处置和处理,产品的知名度,再密封性能,微波加热性,延长保质期等。

在英国和其他发达经济体显示出生率下降和快速增长的一个相对富裕的老人人口趋势,伴随着更加苛刻的年轻消费者,他们将要求和期望改进包装的功能,如方便包揭开(百货配送研究所,IGD)。

毕业设计(论文)外文资料翻译

毕业设计(论文)外文资料翻译

1、外文原文(复印件)2、外文资料翻译译文节能智能照明控制系统Sherif Matta and Syed Masud Mahmud, SeniorMember, IEEE Wayne State University, Detroit,Michigan 48202Sherif.Matta@,smahmud@摘要节约能源已成为当今最具挑战性的问题之一。

最浪费能源的来自低效利用的电能消耗的人工光源设备(灯具或灯泡)。

本文提出了一种通过把人工照明的强度控制到令人满意的水平,来节约电能,并且有详细设计步骤的系统。

在白天使用照明设备时,尽可能的节约。

当记录超过预设的照明方案时,引入改善日光采集和控制的调光系统。

设计原理是,如果它可以通过利用日光这样的一种方式,去控制百叶窗或窗帘。

否则,它使用的是人工建筑内部的光源。

光通量是通过控制百叶窗帘的开启角度来控制,同时,人工光源的强度的控制,通过控制脉冲宽度来调制(PWM)对直流灯的发电量或剪切AC灯泡的AC波。

该系统采用控制器区域网络(CAN),作为传感器和致动器通信用的介质。

该系统是模块化的,可用来跨越大型建筑物。

该设计的优点是,它为用户提供了一个单点操作,而这个正是用户所希望的光的亮度。

该控制器的功能是确定一种方法来满足所需的最小能量消耗光的量。

考虑的主要问题之一是系统组件的易于安装和低成本。

该系统显示出了显著节省的能源量,和在实际中实施的可行性。

关键词:智能光控系统,节能,光通量,百叶帘控制,控制器区域网络(CAN),光强度的控制一简介多年来,随着建筑物的数量和建筑物房间内的数量急剧增加,能源的浪费、低效光控制和照明分布难以管理。

此外,依靠用户对光的手动控制,来节省能源是不实际的。

很多技术和传感器最近已经向管理过多的能量消耗转变,例如在一定区域内的检测活动采用运动检测。

当有人进入房间时,自动转向灯为他们提供了便利。

他们通过在最后人员离开房间后不久关闭转向灯来减少照明能源的使用。

b1-3毕业设计(论文)外文资料翻译(学生用).

b1-3毕业设计(论文)外文资料翻译(学生用).

南京理工大学紫金学院毕业设计(论文)外文资料翻译系:计算机专业:计算机科学与技术姓名:沈俊男学号:060601239外文出处:E. Jimenez-Ruiz,R. Berlanga. The Management and(用外文写)Integration of Biomedical[M/OL]. Castellon:Spanish Ministry of Education and Scienceproject,2004[2005-09098]./ftp/cs/papers/0609/0609144.pdf附件1:外文资料翻译译文管理和集成的生物医学知识:应用于Health-e-Child项目摘要:这个Health-e-Child项目的目的是为欧洲儿科学发展集成保健平台。

为了实现一个关于儿童健康的综合观点,一个复杂的生物医学数据、信息和知识的整合是必需的。

本体论将用于正式定义这个领域的专业知识,将塑造医学知识管理系统的基础。

文中介绍了一种对生物医学知识的垂直整合的新颖的方法。

该方法将会主要使临床医生中心化,并使定义本体碎片成为可能,连接这些碎片(语义桥接器),丰富了本体碎片(观点)。

这个策略为规格和捕获的碎片,桥接器和观点概述了初步的例子证明从医院数据库、生物医学本体、生物医学公共数据库的生物医学信息的征收。

关键词:垂直的知识集成、近似查询、本体观点、语义桥接器1.1 医学数据集成问题数据来源的集成已经在数据库社区成为传统的研究课题。

一个综合数据库系统主要的目标是允许用户均匀的访问一个分布和一个异构数据库。

数据集成的关键因素是定义一个全局性的模式,但是值得指出的是,我们必须区分三种全局模式:数据库模式、概念模式和域本体模式。

首先介绍了数据类型的信息存储、本地查询;其二,概括了这些图式采用更富有表达力的数据模型,如统一建模语言(UML)(TAMBIS和SEMEDA都遵循这个模式)。

最后,领域本体的概念及性质描述涉及领域(如生物医学)独立于任何数据模型,促进应用程序语义的表达(例如,通过语义标注)以及推理。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

理工学院毕业设计外文资料翻译专业:计算机科学与技术姓名:马艳丽学号: 12L0752218外文出处:The Design and Implementation of 3DElectronic Map of Campus Based on WEBGIS 附件: 1.外文资料翻译译文;2.外文原文。

附件1:外文资料翻译译文基于WebGIS的校园三维电子地图的设计与实现一.导言如今,数字化和信息化是当今时代的主题。

随着信息革命和计算机科学的发展,计算机技术已经渗透到科学的各个领域,并引起了许多革命性的变化,在这些科目,古代制图学也不例外。

随着技术和文化的不断进步,地图变化的形式和内容也随之更新。

在计算机图形学中,地理信息系统(GIS)不断应用到Web,制作和演示的传统方式经历了巨大的变化,由于先进的信息技术的发展,地图的应用已经大大延长。

在这些情况下,绘图将面临广阔的发展前景。

电子地图是随之应运而生的产品之一。

随着计算机技术,计算机图形学理论,遥感技术,航空摄影测量技术和其他相关技术的飞速发展。

用户需要的三维可视化,动态的交互性和展示自己的各种地理相关的数据处理和分析,如此多的关注应支付的研究三维地图。

东北石油大学及其周边地区的基础上本文设计并建立三维电子地图。

二.系统设计基于WebGIS的校园三维电子地图系统的具有普通地图的一般特性。

通过按键盘上的箭头键(上,下,左,右),可以使地图向相应的方向移动。

通过拖动鼠标,可以查看感兴趣的任何一个地方。

使用鼠标滚轮,可以控制地图的大小,根据用户的需求来查看不同缩放级别的地图。

在地图的左下角会显示当前鼠标的坐标。

在一个div层,我们描绘了一个新建筑物的热点,这层可以根据不同的地图图层的显示,它也可以自动调整。

通过点击热点,它可以显示热点的具体信息。

也可以输入到查询的信息,根据自己的需要,并得到一些相关的信息。

此外,通过点击鼠标,人们可以选择检查的三维地图和卫星地图。

主要功能包括:•用户信息管理:检查用户名和密码,根据权限设置级别的认证,允许不同权限的用户通过互联网登录系统。

•位置信息查询:系统可以为用户提供模糊查询和快速定位。

•地图管理:实现加载地图,地图查询,图层管理,以及其他常见的操作,例如距离测量和地图放大,缩小,鹰眼,标签,印刷等等。

•漫游地图:使用向上和向下键漫游的任何区域的地图,或拖动和拖放直接。

三.系统开发过程首先,我们收集了包含建筑外观的信息,并对道路设计了树的形状。

然后,我们建立的三维场景与3DS MAX的软件。

通过这种方式我们渲染场景,并实现高清晰度的地图之后,我们用切割图形程序将地图切割成小图片,最后我们建立HTML页面,它可以异步加载地图,并实现了电子地图的功能。

该系统开发的流程图将图1所示。

图1 系统开发流程图传统的地图在设计时对数学规律、地图符号和制图综合都有严格的要求。

网络景观电子地图的制作也有它自己的技术标准,这是优于传统地图的。

三维电子地图有不同缩放级别;因此,它并不需要严格的规模,但需要统一的生产标准。

地图符号通常尽可能地模仿真实世界,并尽可能的简单化。

屏幕的范围远远大于纸质地图的固定视觉。

制图概括重视抽象模型和实际的性能结果之间的平衡。

作为数据采集和管理,如引进用户索取地图信息是数据采集的最后结果。

一开始,我们收集所需的数据,包括名称、地址、介绍和建筑物的数码照片,并准备后续的三维建模。

收集的数据后,我们应该注意存档和备份文件,以防丢失的文件。

为了生成地图,配制好的标准场景设计是必要的。

我们设置的参数包括:垫、灯、海拔高度、渲染效果等等,以确保我们努力的成果最后具有均匀的效果。

空间实体的表现通常以点、线、面的形式显示在三维电子地图。

与矢量图形相比,网格图形具有无可比拟的优势。

网格图形和WebGIS的背景出版技术的组合,可以提高系统的响应速度和节省系统的输入。

系统通过JavaScript 语言实现了地图的交互。

各种浏览器支持的脚本语言的支持存在差异,所以在不同的浏览器测试地图的各个功能是不可缺少的步骤。

四.关键技术三维电子地图的发展与相关领域的发展分不开的,并且借鉴了其他领域的的研究方法,技术和工具。

而其其他领域的研究直接应用到了三维电子地图的开发和建设,计算机图形学,三维GIS,虚拟现实和地理数据基础,虚拟场景的建模,并因此成为三维电子地图系统的技术支持。

校园三维电子地图系统是基于WebGIS技术的一个标准的软件技术,这意味着没有任何商业软件的支持。

本系统的开发利用常见的现有技术包括JavaScript技术,Ajax技术,XML技术等等。

Ajax是一种开源的技术,它是一个将多种技术混合在一起,包括文档对象的网页显示,层次结构的DOM文档对象模型和用来定义风格元素的CSS,和数据交换格式的XML或JSON,实现和XMLHttpRequest异步服务器请求的JavaScript客户端脚本语言。

Ajax的利用非同步的交互技术,这意味着没有必要刷新全部的页面,因此,它减轻了用户的等待时间。

这就是它为什么会更容易被大众所接受的原因。

EXT是用JavaScript编写的优秀Ajax开源框架;它与后端技术无关,可以用来开发一个华丽的外观富客户端应用程序。

该系统使EXT结合JSP实现的其页面电子地图功能。

该系统结合了EXT原型框架,创造一个丰富的客户端和高度交互的Web应用程序,有效地实现富客户端的应用程序,并可以在一个安全控制的方式管理客户端的安全。

JavaScript 是系统在设计和实施过程中的原理技术。

它允许仅在客户机上,就可以完成各种各样的任务,不需要网络服务器的参与,用于支持分布式计算和处理,因此减少了不必要的资源浪费。

JavaScript既不允许访问本地硬盘,也不能数据将数据到服务器,更不用说修改和删除网络文件。

浏览网页信息并实现动态交互的唯一方式是通过浏览器,它可以有效地防范数据丢失,从而是系统达到了较高的安全系数。

JavaScript可被用来根据不同用户定制浏览器,更加人性化设计的网页,更容易为用户掌握的方法。

JavaScript技术是指通过小块的方法来实现编程。

正如其他脚本语言,JavaScript是一种解释型语言,它提供了一个方便的开发环境。

在系统中,我们利用JavaScript脚本语言实现的关键功能,如加载地图,缩放地图,地理位置,以及其他相关的辅助功能,如地图图标显示,测距,鹰眼,标签。

Oracle 数据库后台管理中所用的数据满足需要,JSP,XML和HTML一起实现用户的身份验证以及添加,删除,修改,查询信息等等。

该系统的主要功能是通过实现WebGIS技术在浏览器中显示三维电子地图。

由于JavaScript技术和WebGIS开发模型的组合,我们可以降低系统的成本,同时提高互操作性和系统性能。

由于AJAX技术的应用,我们可以在加载动态地图时得到进一步改进。

所有我们使用的技术将减少反应时间,这将对用户留下一个快速和有效的印象。

五.系统实现A.创建三维场景和地图的场景渲染。

基于WebGIS的校园三维电子地图,是一个以东北石油大学为原型的电子地图系统。

为了实现这个系统,我们需要完成三维场景和场景渲染地图的制作,所以我们选择了操作简单而灵活的3DMAX模型。

给出了电子地图的需要,三维模型应该是微妙的变化。

由于东北石油大学太多复杂的建筑物,三维度模型的构建将占用大量的时间。

要完成三维场景我们应该先准备好来渲染场景。

其实网格图像三维电子地图是固定的角度来看旋转眼栅格地图。

空间三维实体建模后,选择合适的渲染方法,使固定摄像机角度定位在渲染(通常在45度角),然后渲染输出的参数设置,使它们进入相机从固定的角度大小的图片。

B.加载地图在Web中,主要通过div层表现,有三层显示地图。

一层是用来作为一个窗口载体地图,该层的大小是一样大的地图,我们通常看到的通过浏览器(以下简称为窗口层)。

另一层是用来遵循(称为移动图层)的鼠标拖动移动一层。

另一个层是移动的层,用于跟随鼠标的拖动(以下简称为移动层)的,其他是介乎在窗口层和移动层的被覆层。

由用户操作在地图窗口是由上述的三层,地图的基本操作是通过设置在不同的图层功能实现。

当加载地图,我们使用栅格数据,即我们通常所说的图像数据。

栅格数据包括图像数据,二维地图和三维模拟的电子地图。

这个系统中的栅格数据是三维模拟的电子地图。

抽象的二维地图使一些普通用户很难了解他们需要的信息,但三维模拟的地图模拟真实世界的信息准确,因此用户可以轻松地看到真实的世界。

这个系统主要显示地图图片,当您查看或拖动地图,它就像一张完整的地图图片的当前窗口,但事实上的小图片拼凑而成。

这些小图片是通过特定的切图程序将完整地图切割而成;所有的图片卡的大小都相同,并有固定的命名规则,所以地图是速度更快和更容易地加载。

有完整的地图绘制的方法很多,系统使用方形板的方法将地图切割到256像素*256像素的地图,然后写基于命名规则脚本完成图片加载。

C.地图的基本功能拖动,缩放,平移地图的基本功能,也是不同于一个简单的地图图像的重要特征。

以下是一个简要说明的实现方法。

要实现拖动,第一件事就是设置鼠标事件功能。

这些事件包括按下鼠标和松开鼠标左键。

因此,两种功能的组合可以完成地图导航。

鼠标按下事件主要是用来记录拖动的状态以及目前的位置,当鼠标功能将捕获的拖动完成状态,然后使用地图显示功能加载地图。

实现缩放的功能如下过程:放大和适当的比例值,需要放大的增益比前值。

计算地图放大后中心的坐标。

公式:(point.x / oldpercent)* newpercent。

修改图标层中的数据(图标层逻辑操作CMAP“_ Base.js”)。

删除当前地图的图层,并强制内存回收。

加载所需的地图文件。

与这些基本的功能,用户可以观察整个校园简洁清晰地建筑物。

地图分为五个缩放级别,用户可以放大出来要查看更多的建筑物,也可以放大以检查建筑细节。

D.其他实用功能1)突出显示以及弹出提示框对于一些热点建筑物的查询,我们使用JSON 数据创建一个div 图层,填充颜色,然后设置为半透明,当鼠标移动到图层,该区域将突出选择。

当鼠标点击突出显示的区域,会弹出一个小窗口,显示了建筑的细节。

以一个体育场为例,当鼠标不在体育场,建筑没有什么变化,但在体育场上空,移动鼠标时,建筑物的轮廓显示。

当点击的亮点体育场,体育场将弹出的一些基本信息,如体育场办公室的电话,详细地址,基本轮廓。

2)范围经度和纬度与校园电子地图坐标之间的相互转换,我们可以先变换校园电子地图坐标的经度和纬度坐标,然后通过计算两个点之间的距离的纬度和经度坐标,这种方法是简单和精确。

3)标签显示和隐藏为了提示一些关键的地方(如公共交通站、路牌),图中使用中其标签进行标记的新图层,很方便的为用户认识到特定的位置,但标记信息将影响显示整个场景,因此,用户可以选择在需要的时候显示标签。

相关文档
最新文档