Graduation Project Foreign Literature Translation (Original English Text)
Financial Systems, Financing Constraints and Investment: Empirical Evidence from OECD Countries
R. Semenov, Department of Economics, University of Nijmegen, Nijmegen, the Netherlands
This paper examines the effect of cash flow on firm investment in eleven OECD countries. We find that the sensitivity of investment to internally available funds differs significantly across countries, and that it is lower in countries with close bank-firm relationships than in countries where banks deal with firms at arm's length. At the same time, we find no relationship between financing constraints and aggregate indicators of financial development. Our results are consistent with the view that information and incentive problems in capital markets have an important effect on firm investment, and that close bank-firm relationships reduce these problems and thereby improve firms' access to external finance.
1. Introduction
Firms in different countries operate under markedly different financial systems.
Differences in the level of financial development (for example, credit relative to GDP and stock market capitalisation relative to GDP), in the patterns of owner-manager and firm-creditor relationships, and in the level of activity of the market for corporate control are well documented. In a perfect capital market, a firm with positive net present value investment opportunities would always obtain funding.
However, economic theory suggests that market frictions, such as information asymmetries and incentive problems, make external capital more expensive, so that firms with profitable investment opportunities may not always be able to raise the capital they need. This implies that financing factors, such as the amount of internally generated funds and the availability of new debt and equity, help determine firms' investment decisions. There is now a large body of empirical work examining the effect of the availability of external funds on investment decisions (see, for example, Fazzari (1998), Hoshi (1991), Chapman (1996), Samuel (1998)). Most of these studies find that financial variables such as cash flow help explain firms' investment levels.
This finding has been interpreted as evidence that firm investment is constrained by the availability of external funds.
Many models emphasise that well-functioning financial intermediaries and markets help reduce information asymmetries and transaction costs, channel savings into long-term, high-return projects, and improve the efficiency of resource allocation (see Levine (1997) for a review).
We would therefore expect firms in countries with more developed financial systems to have easier access to external finance. Several authors have pointed out that relationships between firms and financial intermediaries can further mitigate financial market frictions.
Graduation Project Foreign Literature and Translation 1
CHANGING ROLES OF THE CLIENTS, ARCHITECTS AND CONTRACTORS THROUGH BIM

Abstract
Purpose – This paper aims to present a general review of the practical implications of building information modelling (BIM) based on literature and case studies. It seeks to address the necessity of applying BIM and re-organising the processes and roles in hospital building projects. This type of project is complex due to complicated functional and technical requirements, decision making involving a large number of stakeholders, and long-term development processes.
Design/methodology/approach – Through desk research and reference to the ongoing European research project InPro, the framework for integrated collaboration and the use of BIM are analysed.
Findings – One of the main findings is the identification of the main factors for a successful collaboration using BIM, which can be recognised as "POWER": product information sharing (P), organisational roles synergy (O), work processes coordination (W), environment for teamwork (E), and reference data consolidation (R).
Originality/value – This paper contributes to the current discussion in science and practice on the changing roles and processes that are required to develop and operate sustainable buildings with the support of integrated ICT frameworks and tools. It presents the state of the art of European research projects and some of the first real cases of BIM application in hospital building projects.
Keywords: Europe, Hospitals, The Netherlands, Construction works, Response flexibility, Project planning
Paper type: General review

1. Introduction
Hospital building projects are of key importance: they involve significant investment and usually take a long development period. They are also very complex due to complicated requirements regarding hygiene, safety, special equipment, and the handling of large amounts of data.
The building process is very dynamic and comprises iterative phases and intermediate changes. Many actors with shifting agendas, roles and responsibilities are actively involved, such as the healthcare institutions, national and local governments, project developers, financial institutions, architects, contractors, advisors, facility managers, and equipment manufacturers and suppliers. Such building projects are very much influenced by healthcare policy, which changes rapidly in response to medical, societal and technological developments, and varies greatly between countries (World Health Organization, 2000). In The Netherlands, for example, the way a building project in the healthcare sector is organised is undergoing a major reform due to a fundamental change in Dutch health policy introduced in 2008.

The rapidly changing context poses a need for buildings that are flexible over their life cycle. In order to incorporate life-cycle considerations in the building design, construction technique, and facility management strategy, multidisciplinary collaboration is required. Despite the attempts to establish integrated collaboration, healthcare building projects still face serious problems in practice, such as budget overruns, delays, and sub-optimal quality in terms of flexibility, end-users' dissatisfaction, and energy inefficiency. It is evident that the lack of communication and coordination between the actors involved in the different phases of a building project is among the most important reasons behind these problems. Communication between different stakeholders becomes critical, as each stakeholder possesses a different set of skills. As a result, the processes for the extraction, interpretation, and communication of complex design information from drawings and documents are often time-consuming and difficult.
Advanced visualisation technologies, like 4D planning, have tremendous potential to increase the communication efficiency and interpretation ability of the project team members. However, their use as an effective communication tool is still limited and not fully explored. There are also other barriers to information transfer and integration, for instance: many existing ICT systems do not support the openness of data and structure that is a prerequisite for effective collaboration between different building actors or disciplines.

Building information modelling (BIM) offers an integrated solution to the previously mentioned problems. Therefore, BIM is increasingly used as an ICT support in complex building projects. An effective multidisciplinary collaboration supported by an optimal use of BIM requires changing roles of the clients, architects, and contractors; new contractual relationships; and re-organised collaborative processes. Unfortunately, there are still gaps in the practical knowledge on how to manage the building actors so that they collaborate effectively in their changing roles, and how to develop and utilise BIM as an optimal ICT support for the collaboration.

This paper presents a general review of the practical implications of building information modelling (BIM) based on a literature review and case studies. In the next sections, based on literature and recent findings from the European research project InPro, the framework for integrated collaboration and the use of BIM are analysed. Subsequently, through the observation of two ongoing pilot projects in The Netherlands, the changing roles of clients, architects, and contractors through BIM application are investigated. In conclusion, the critical success factors as well as the main barriers to a successful integrated collaboration using BIM are identified.

2. Changing roles through integrated collaboration and life-cycle design approaches
A hospital building project involves various actors, roles, and knowledge domains. In The Netherlands, the changing roles of clients, architects, and contractors in hospital building projects are inevitable due to the new healthcare policy. Previously, under the Healthcare Institutions Act (WTZi), healthcare institutions were required to obtain both a license and a building permit for new construction projects and major renovations. The permit was issued by the Dutch Ministry of Health. The healthcare institutions were then eligible to receive financial support from the government. Since 2008, new legislation on the management of hospital building projects and real estate has come into force. Under this new legislation, a permit for a hospital building project under the WTZi is no longer obligatory, nor obtainable (Dutch Ministry of Health, Welfare and Sport, 2008). This change allows more freedom from state-directed policy and, correspondingly, allocates more responsibility to the healthcare organisations for the financing and management of their real estate. The new policy implies that the healthcare institutions are fully responsible for managing and financing their building projects and real estate. The government's support for the costs of healthcare facilities is no longer given separately, but is included in the fee for healthcare services. This means that healthcare institutions must earn back their investment in real estate through their services. The new policy is intended to stimulate sustainable innovations in the design, procurement and management of healthcare buildings, which will contribute to effective and efficient primary healthcare services.

The new strategy for building projects and real estate management endorses an integrated collaboration approach.
In order to assure sustainability during construction, use, and maintenance, the end-users, facility managers, contractors and specialist contractors need to be involved in the planning and design processes. The implications of the new strategy are reflected in the changing roles of the building actors and in the new procurement method.

In the traditional procurement method, the design and its details are developed by the architect and the design engineers. Then the client (the healthcare institution) sends an application to the Ministry of Health to obtain approval of the building permit and the financial support from the government. Following this, a contractor is selected through a tender process that emphasises the search for the lowest-price bidder. During the construction period, changes often take place due to constructability problems in the design and new requirements from the client. Because of the high level of technical complexity and, moreover, the decision-making complexities, the whole process from initiation until delivery of a hospital building project can take up to ten years. After the delivery, the healthcare institution is fully in charge of the operation of the facilities. Redesigns and changes also take place in the use phase to cope with new functions and developments in the medical world.

Integrated procurement introduces a new contractual relationship between the parties involved in a building project. Instead of a relationship between the client and architect for design, and between the client and contractor for construction, in an integrated procurement the client holds a contractual relationship only with the main party that is responsible for both design and construction. The traditional borders between tasks and occupational groups become blurred, since architects, consulting firms, contractors, subcontractors, and suppliers all stand on the supply side of the building process, while the client stands on the demand side.
Such a configuration puts the architect, engineer and contractor in a very different position, one that influences not only their roles but also their responsibilities, tasks and communication with the client, the users, the team and other stakeholders.

The transition from the traditional to the integrated procurement method requires a shift of mindset of the parties on both the demand and supply sides. It is essential for the client and contractor to have a fair and open collaboration in which both can optimally use their competencies. The effectiveness of integrated collaboration is also determined by the client's capacity and strategy to organise innovative tendering procedures.

A new challenge emerges when an architect is positioned in a partnership with the contractor instead of with the client. If the architect enters a partnership with the contractor, an important issue is how to ensure the realisation of the architectural values as well as innovative engineering through an efficient construction process. In another case, the architect can stand at the client's side in a strategic advisory role instead of being the designer. In this case, the architect's responsibility is to translate the client's requirements and wishes into the architectural values to be included in the design specification, and to evaluate the contractor's proposal against these. In any of these new roles, the architect holds the responsibilities of stakeholder interest facilitator, custodian of customer value, and custodian of design models.

The transition from the traditional to the integrated procurement method also has consequences for the payment schemes. In the traditional building process, the honorarium for the architect is usually based on a percentage of the project costs; this may simply mean that the more expensive the building is, the higher the honorarium will be. The engineer receives an honorarium based on the complexity of the design and the intensity of the assignment.
A highly complex building, which takes a number of redesigns, is usually favourable for the engineers in terms of honorarium. A traditional contractor usually receives the commission based on the tender to construct the building at the lowest price while meeting the minimum specifications given by the client. Extra work due to modifications is charged separately to the client. After the delivery, the contractor is no longer responsible for the long-term use of the building. In the traditional procurement method, all risks are placed with the client.

In the integrated procurement method, the payment is based on the achieved building performance; thus, the payment is non-adversarial. Since the architect, engineer and contractor have a wider responsibility for the quality of the design and the building, the payment is linked to a measurement system of the functional and technical performance of the building over a certain period of time. The honorarium becomes an incentive to achieve optimal quality. If the building actors succeed in delivering a higher added value that exceeds the client's minimum requirements, they will receive a bonus in accordance with the client's extra gain. The level of transparency is also improved. Open-book accounting is an excellent instrument, provided that the stakeholders agree on the information to be shared and its level of detail (InPro, 2009).

Next to the adoption of the integrated procurement method, the new real estate strategy for hospital building projects addresses innovative product development and life-cycle design approaches. A sustainable business case for the investment and exploitation of hospital buildings relies on dynamic life-cycle management that includes considerations and analysis of the market development over time, next to the building life-cycle costs (investment/initial cost, operational cost, and logistic cost).
Compared to the conventional life-cycle costing method, dynamic life-cycle management encompasses a shift from focusing only on minimising the costs to focusing on maximising the total benefit that can be gained. One of the determining factors for a successful implementation of dynamic life-cycle management is the sustainable design of the building and building components, which means that the design carries sufficient flexibility to accommodate possible changes in the long term (Prins, 1992).

Designing based on the principles of life-cycle management affects the role of the architect, as he needs to be well informed about the usage scenarios and related financial arrangements, the changing social and physical environments, and new technologies. Design needs to integrate people's activities and business strategies over time. In this context, the architect is required to align the design strategies with the organisational, local and global policies on finance, business operations, health and safety, environment, etc.

The combination of process and product innovation and the changing roles of the building actors can be accommodated by integrated project delivery, or IPD (AIA California Council, 2007). IPD is an approach that integrates people, systems, business structures and practices into a process that collaboratively harnesses the talents and insights of all participants to reduce waste and optimise efficiency through all phases of design, fabrication and construction. IPD principles can be applied to a variety of contractual arrangements. IPD teams will usually include members well beyond the basic triad of client, architect, and contractor. At a minimum, though, an integrated project should include a tight collaboration between the client, the architect, and the main contractor ultimately responsible for construction of the project, from early design until project handover.
The key to a successful IPD is assembling a team that is committed to collaborative processes and is capable of working together effectively. IPD is built on collaboration. As a result, it can only be successful if the participants share and apply common values and goals.

3. Changing roles through BIM application
A building information model (BIM) comprises ICT frameworks and tools that can support integrated collaboration based on a life-cycle design approach. BIM is a digital representation of the physical and functional characteristics of a facility. As such, it serves as a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its life cycle from inception onward (National Institute of Building Sciences, NIBS, 2007). BIM facilitates time- and place-independent collaborative working. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of each stakeholder. BIM in its ultimate form, as a shared digital representation founded on open standards for interoperability, can become a virtual information model to be handed from the design team to the contractor and subcontractors and then to the client.

BIM is not the same as the earlier known computer-aided design (CAD). BIM goes further than an application to generate digital (2D or 3D) drawings. BIM is an integrated model in which all process and product information is combined, stored, elaborated, and interactively distributed to all relevant building actors. As a central model for all involved actors throughout the project life cycle, BIM develops and evolves as the project progresses. Using BIM, the proposed design and engineering solutions can be measured against the client's requirements and the expected building performance.
The functionalities of BIM to support the design process extend to multiple dimensions (nD), including: three-dimensional visualisation and detailing, clash detection, material schedules, planning, cost estimates, production and logistic information, and as-built documents. During the construction process, BIM can support communication between the building site, the factory and the design office, which is crucial for effective and efficient prefabrication and assembly processes, as well as for preventing or solving problems related to unforeseen errors or modifications. When the building is in use, BIM can be used in combination with intelligent building systems to provide and maintain up-to-date information on the building's performance, including the life-cycle cost.

To unleash the full potential of more efficient information exchange in the AEC/FM industry in collaborative working using BIM, both high-quality open international standards and high-quality implementations of these standards must be in place. The IFC open standard is generally agreed to be of high quality and is widely implemented in software. Unfortunately, the certification process allows poor-quality implementations to be certified, which essentially renders the certified software useless for any practical usage with IFC. IFC-compliant BIM is actually used less than manual drafting by architects and contractors, and shows about the same usage among engineers. A recent survey shows that CAD (as a closed system) is still the major technique used in design work (over 60 per cent), while BIM is used in around 20 per cent of projects by architects and in around 10 per cent of projects by engineers and contractors.

The application of BIM to support an optimal cross-disciplinary and cross-phase collaboration opens a new dimension in the roles and relationships between the building actors.
The most relevant issues are: the new role of a model manager; the agreement on access rights and intellectual property rights (IPR); the liability and payment arrangements according to the type of contract and in relation to integrated procurement; and the use of open international standards.

Collaborative working using BIM demands a new expert role of a model manager, who possesses ICT as well as construction-process know-how (InPro, 2009). The model manager deals with the system as well as with the actors. He provides and maintains the technological solutions required for BIM functionalities, manages the information flow, and improves the ICT skills of the stakeholders. The model manager does not take decisions on design and engineering solutions, nor on the organisational processes, but his roles in the chain of decision making are focused on: the development of BIM, the definition of the structure and level of detail of the model, and the deployment of relevant BIM tools, such as those for model checking, merging, and clash detection; the contribution to collaboration methods, especially decision-making and communication protocols, task planning, and risk management; and the management of information, in terms of data flow and storage, identification of communication errors, and decision or process (re-)tracking.

Regarding the legal and organisational issues, one of the current questions is: "In what way does the intellectual property right (IPR) in collaborative working using BIM differ from the IPR in traditional teamwork?" In terms of combined work, the IPR of each element is attached to its creator. Although it seems to be a fully integrated design, a BIM actually results from a combination of works/elements; for instance, the outline of the building design is created by the architect, the design for the electrical system is created by the electrical contractor, and so on. Thus, in the case of BIM as a combined work, the IPR is similar to traditional teamwork.
Working with BIM with authorship-registration functionalities may actually make it easier to keep track of the IPR.

How does collaborative working using BIM affect the contractual relationship? On the one hand, collaborative working using BIM does not necessarily change the liability position in the contract, nor does it require an alliance contract. The General Principles of the BIM Addendum confirm: "This does not effectuate or require a restructuring of contractual relationships or shifting of risks between or among the Project Participants other than as specifically required per the Protocol Addendum and its Attachments" (ConsensusDOCS, 2008). On the other hand, changes in terms of payment schemes can be anticipated. Collaborative processes using BIM will lead to a shifting of activities to the early design phase. Much, if not all, of the activity in the detailed engineering and specification phase will be done in the earlier phases. This means that a significant payment for the engineering phase, which may amount to up to 40 per cent of the design cost, can no longer be expected. As engineering work is done concurrently with the design, a new proportioning of the payment towards the early design phase is necessary.

4. Review of ongoing hospital building projects using BIM
In The Netherlands, the changing roles in hospital building projects are part of the strategy that aims at achieving sustainable real estate in response to the changing healthcare policy. Referring to the literature and previous research, the main factors that influence the success of the changing roles can be summarised as: the implementation of an integrated procurement method and a life-cycle design approach for a sustainable collaborative process; the agreement on the BIM structure and the intellectual rights; and the integration of the role of a model manager. The preceding sections have discussed the conceptual thinking on how to deal with these factors effectively.
This section observes two actual projects and compares actual practice with the conceptual view. The main issues observed in the case studies are: the selected procurement method and the roles of the involved parties within this method; the implementation of the life-cycle design approach; the type, structure, and functionalities of the BIM used in the project; the openness in data sharing and transfer of the model, and the intended use of BIM in the future; and the roles and tasks of the model manager.

The pilot experience of hospital building projects using BIM in The Netherlands can be observed at the University Medical Centre St Radboud (hereafter referred to as UMC) and the Maxima Medical Centre (hereafter referred to as MMC). At UMC, the new building project for the Faculty of Dentistry in the city of Nijmegen has been dedicated as a BIM pilot project. At MMC, BIM is used in designing new buildings for the Medical Simulation and Mother-and-Child Centre in the city of Veldhoven.

The first case is a project at the University Medical Centre (UMC) St Radboud. UMC is more than just a hospital: it combines medical services, education and research. More than 8,500 staff and 3,000 students work at UMC. As part of its innovative real estate strategy, UMC has considered using BIM for its building projects.
The new development of the Faculty of Dentistry and the surrounding buildings on the Kapittelweg in Nijmegen has been chosen as a pilot project to gather practical knowledge and experience of collaborative processes with BIM support. The main ambitions to be achieved through the use of BIM in the building projects at UMC can be summarised as follows: using 3D visualisation to enhance the coordination and communication among the building actors, and user participation in design; integrating the architectural design with structural analysis, energy analysis, cost estimation, and planning; interactively evaluating the design solutions against the programme of requirements and specifications; reducing redesign/remake costs through clash detection during the design process; and optimising the management of the facility through the registration of medical installations and equipment, fixed and flexible furniture, product and output specifications, and operational data.

The second case is a project at the Maxima Medical Centre (MMC). MMC is a large hospital resulting from a merger between the Diaconessenhuis in Eindhoven and the St Joseph Hospital in Veldhoven. Annually, the 3,400 staff of MMC provide medical services to more than 450,000 visitors and patients. A large-scale extension of the hospital in Veldhoven is part of its real estate strategy. A medical simulation centre and a women-and-children medical centre are among the most important new facilities within this extension project. The design has been developed using 3D modelling with several functionalities of BIM.

The findings from both cases and the analysis are as follows. Both UMC and MMC opted for a traditional procurement method in which the client directly contracted an architect, a structural engineer, and a mechanical, electrical and plumbing (MEP) consultant in the design team. Once the design and detailed specifications are finished, a tender procedure will follow to select a contractor.
Despite the choice of this traditional method, many attempts have been made to achieve a closer and more effective multidisciplinary collaboration. UMC dedicated a relatively long preparation phase with the architect, structural engineer and MEP consultant before the design commenced. This preparation phase was aimed at creating a common vision on the optimal way of collaborating using BIM as an ICT support. Some results of this preparation phase are: a document that defines the common ambition for the project and the collaborative working process, and a semi-formal agreement that states the commitment of the building actors to collaboration. Unlike UMC, MMC selected an architecture firm with an in-house engineering department; thus, the collaboration between the architect and the structural engineer can take place within the same firm using the same software application.

Regarding the life-cycle design approach, the main attention is given to life-cycle costs, maintenance needs, and facility management. Using BIM, both hospitals intend to get a much better insight into these aspects over the life-cycle period. The life-cycle sustainability criteria are included in the assignments for the design teams. Multidisciplinary designers and engineers are asked to collaborate more closely and to interact with the end-users to address life-cycle requirements. However, ensuring that the building actors engage in an integrated collaboration to generate sustainable design solutions that meet the life-cycle
Graduation Project Foreign Literature Translation (Original Text and Translation)
Environmental problems caused by Istanbul subway excavation and suggestions for remediation
Ibrahim Ocak
Abstract: Many environmental problems caused by subway excavations have inevitably become an important point in city life. These problems can be categorised as the transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust and mud, and lack of supplies. Although these problems cause many difficulties, the most pressing for a big city like Istanbul is excavation, since the other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway, and they disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length, with over 200 km to be constructed in the near future. The amount of material excavated from ongoing construction projects is approximately 12 million m³. In this study, the problems caused by subway excavation, primarily the problem of excavation waste (EW), are analysed and suggestions for remediation are offered.
Graduation Project (Thesis) Foreign Reference Material and Translation
Original English text:
Java is a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic language. The main advantage of the Java language is that Java applications can be ported across hardware platforms and operating systems; this is because the JVM installed on each platform can understand the same byte code. The Java language and platform are highly scalable. Java was among the first open-standard technologies to support the enterprise, supporting the use of XML and Web services to share information and applications across business lines.

There are three versions of the Java platform, which allow software developers, service providers and equipment manufacturers to target specific markets for development:
1. Java SE, for desktop applications. Java SE includes classes that support Java Web-service development and provides the basis for the Java Platform, Enterprise Edition (Java EE). Most Java developers use Java SE 5, also known as Java 5.0 or "Tiger".
2. Java EE, formerly known as J2EE. The Enterprise Edition helps to develop and deploy portable, robust, scalable and secure server-side Java applications. Java EE is built on the foundation of Java SE and provides Web services, a component model, and management and communication APIs that can be used to implement enterprise-class service-oriented architectures and Web 2.0 applications.
3. Java ME, formerly known as J2ME. Java ME provides a robust and flexible environment for applications running on mobile and embedded devices. Java ME includes flexible user interfaces, a robust security model, many built-in network protocols, wide support for networked applications that can be dynamically downloaded, and support for offline applications. Applications based on the Java ME specification are written once and can be used on many devices, while still being able to use the native features of each device.

The Java language is simple.
The syntax of Java is very close to that of the C and C++ languages, but Java discards the features of C++ that are rarely used or hard to understand, such as operator overloading, multiple inheritance, and forced automatic type conversion. The Java language does not use pointers, and it provides automatic garbage collection. Java is an object-oriented language. It provides primitives such as classes, interfaces and inheritance; for simplicity, it supports only single inheritance between classes, but it supports multiple inheritance between interfaces, as well as the mechanism by which classes implement interfaces (the keyword implements). The Java language fully supports dynamic binding, whereas C++ uses dynamic binding only for virtual functions. In short, Java is a pure object-oriented programming language.

The Java language is distributed. Java supports the development of Internet applications, and Java's RMI (remote method invocation) mechanism is an important means of developing distributed applications. The Java language is robust. Java's strong type system, exception handling, and automatic garbage collection are important guarantees of robust Java programs. The Java language is secure. Java is often used in network environments, and for this reason it provides a security mechanism to prevent attacks by malicious code.

The Java language is portable. This portability comes from its architecture neutrality, and the Java system itself is highly portable. The Java language is multithreaded. In Java, a thread is a special object, which must be created through the Thread class or its subclasses. Java supports the simultaneous execution of multiple threads and provides synchronisation mechanisms between threads (the keyword synchronized).

These language features give Java applications unparalleled robustness and reliability, which also reduces application maintenance costs.
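The features described above, single inheritance between classes, multiple inheritance between interfaces via implements, and thread synchronisation with the synchronized keyword, can be sketched in a short example; all class and interface names here are invented for illustration and do not come from the original text.

```java
// A class may extend only one class but implement several interfaces.
interface Drivable { int wheels(); }
interface Floatable { boolean floats(); }

class Vehicle { }

// AmphibiousCar inherits from one class and two interfaces (keyword implements).
class AmphibiousCar extends Vehicle implements Drivable, Floatable {
    public int wheels() { return 4; }
    public boolean floats() { return true; }
}

// The synchronized keyword: only one thread at a time may execute a
// synchronized method on a given instance, preventing lost updates.
class Counter {
    private int count = 0;
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }
}

public class FeatureDemo {
    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        // Threads are created from the Thread class (or its subclasses).
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 2000: synchronized prevents lost updates
    }
}
```

Without synchronized on increment(), the two threads could interleave their read-modify-write steps and the final count would often be less than 2000.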
Java's full support for object technology and the embedded APIs of the Java platform reduce application development time and cost. Java's compile-once, run-anywhere property provides an open, multi-platform architecture and a low-cost way of transmitting information anywhere.

Hibernate
Hibernate is a lightweight object wrapper around JDBC. It is an independent object persistence framework with no necessary link to any application server or to EJB. Hibernate can be used anywhere JDBC can be used: in a Java application, in database access code, in the implementation class of a DAO interface, or even in the database access code inside a BMP entity bean. In this sense, Hibernate and EJB are not the same category of thing, and there is no either-or relationship between them. Hibernate is a framework closely related to JDBC: its compatibility depends on the JDBC driver and on the database, but it has no relationship with the application server the Java program runs in, so no compatibility issues arise there.

Hibernate provides two levels of cache. The first-level cache is the Session-level cache, whose scope is the transaction; it is managed by Hibernate and normally requires no intervention. The second-level cache is the SessionFactory-level cache, whose scope is the process or the cluster; it can be configured and changed, and can be dynamically loaded and unloaded. Hibernate also provides a query cache for query results, which depends on the second-level cache.

When the application calls Session's save(), update(), saveOrUpdate(), get() or load() methods, or calls list(), iterate() or filter() on the query interface, Hibernate puts the resulting object into the first-level cache if it is not already there. When the cache is flushed, Hibernate synchronizes the database with the state changes of the cached objects.
Session provides the application with two methods for managing the cache: evict(Object obj), which removes the specified persistent object from the cache, and clear(), which empties the cache of all persistent objects.

The general process of Hibernate's second-level cache strategy is as follows:
1) A conditional query always issues a SQL statement of the form select * from table_name where ... (selecting all fields) against the database, obtaining all the matching data objects at once.
2) All the data objects are placed into the second-level cache, keyed by ID.
3) When Hibernate accesses a data object by ID, it first checks the Session's first-level cache; if the object is not found there and a second-level cache is configured, it checks the second-level cache; if the object is still not found, it queries the database and puts the result into the cache keyed by ID.
4) When data is deleted, updated or inserted, the caches are updated at the same time. Hibernate's query cache applies the same approach to conditional queries.

Hibernate's object-relational mapping supports both lazy and non-lazy object initialization. With non-lazy loading, reading an object reads all of its related objects along with it. This can result in hundreds (if not thousands) of select statements being executed when the object is read. The problem often occurs when bidirectional relationships are used, and it can lead to the entire database being read during the initialization phase.
Of course, you could take the trouble to examine each object's relationships to other objects and delete the most expensive of them, but in the end you would lose the very convenience that the ORM tool was supposed to provide.

Comparison of the first-level cache and the second-level cache:
- Form of stored data: first-level — interrelated persistent objects; second-level — bulk data of objects.
- Cache scope: first-level — transaction scope; each transaction has its own separate first-level cache; second-level — process or cluster scope; the cache is shared by all transactions in the same process or cluster.
- Concurrent access policy: first-level — none needed; because each transaction has a separate first-level cache, concurrency problems do not occur; second-level — several transactions may access the same cached data simultaneously, so appropriate concurrent access policies must be provided to guarantee the required transaction isolation level.
- Data expiration policy: first-level — none; objects in the first-level cache never expire unless the application explicitly clears the cache or evicts a specific object; second-level — expiration policies must be provided, such as the maximum number of objects held in the memory cache, the longest time an object may stay in the cache, and the longest idle time allowed.
- Storage medium: first-level — memory; second-level — memory and hard disk; bulk object data is first stored in the memory-based cache, and when the number of in-memory objects reaches the limit specified by the expiration policy, the remaining objects are written to the hard-disk cache.
- Implementation: first-level — included in the Session implementation; second-level — provided by third parties; Hibernate provides only a cache adapter (CacheProvider) used to plug a particular cache into Hibernate.
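The lookup order implied by this comparison — first-level (session) cache, then second-level cache, then the database — can be sketched as a plain-Java simulation. This is not Hibernate's implementation, only an illustration of the order of checks, with invented names:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative simulation of the two-level lookup order described above.
class TwoLevelCacheDemo {
    static Map<Long, String> database = new HashMap<>();    // stands in for the real DB
    static Map<Long, String> secondLevel = new HashMap<>(); // SessionFactory-scoped, shared
    Map<Long, String> sessionCache = new HashMap<>();       // Session-scoped, per "transaction"
    int dbHits = 0;

    String load(Long id) {
        String obj = sessionCache.get(id);          // 1. first-level cache
        if (obj == null) obj = secondLevel.get(id); // 2. second-level cache
        if (obj == null) {                          // 3. database
            obj = database.get(id);
            dbHits++;
            secondLevel.put(id, obj);               // populate shared cache
        }
        sessionCache.put(id, obj);
        return obj;
    }

    public static void main(String[] args) {
        database.put(1L, "entity#1");
        TwoLevelCacheDemo s1 = new TwoLevelCacheDemo();
        s1.load(1L); s1.load(1L);   // second call served from the session cache
        TwoLevelCacheDemo s2 = new TwoLevelCacheDemo();
        s2.load(1L);                // new "session": served from the second-level cache
        System.out.println(s1.dbHits + " " + s2.dbHits); // 1 0
    }
}
```

The first session hits the database once; the second session never touches it, because the shared second-level cache already holds the object by ID, exactly the behavior step 3) above describes.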
How the caches are enabled: whenever the application performs save, update, delete, load or query operations through the Session interface, Hibernate enables the first-level cache and copies data from the database into the cache in the form of objects. For batch updates and batch deletes, if you do not want to enable the first-level cache, you can bypass the Hibernate API and perform the operation directly through the JDBC API. The second-level cache is configured per class or per collection. If instances of a class are frequently read but rarely modified, you can consider using the second-level cache for them; only when a second-level cache has been configured for a class or collection will Hibernate place its instances into the second-level cache at run time. Managing the first-level cache: its physical medium is memory, and because memory capacity is limited, the number of loaded objects must be limited through appropriate query strategies and retrieval methods. Session's evict() method can explicitly clear a specific object from the cache, but this method is not recommended. The second-level cache's physical media can be memory and hard disk, so it can store large amounts of data; the maxElementsInMemory property of the expiration policy controls the number of objects held in memory. Managing the second-level cache mainly involves two aspects: selecting the persistent classes that will use it and setting appropriate concurrency strategies; and choosing a cache adapter and setting appropriate data expiration policies.

One obvious solution to the over-fetching problem is Hibernate's lazy loading mechanism. With this initialization strategy, in a one-to-many or many-to-many relationship the related objects are read only when the relationship is actually navigated. The process is transparent to the developer, and because only a few database requests are made, an obvious performance gain results.
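The lazy-loading idea in the last paragraph can be sketched in plain Java. The names are invented, and Hibernate actually implements this with runtime proxies rather than an explicit Supplier; the sketch only shows the "load on first access" behavior:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

// Illustrative sketch of lazy loading: the "orders" collection is fetched
// from the (simulated) database only when it is first navigated.
class LazyDemo {
    static int dbQueries = 0;

    static List<String> fetchOrdersFromDb() { // stands in for a SQL select
        dbQueries++;
        return Arrays.asList("order-1", "order-2");
    }

    static class Customer {
        private List<String> orders;                   // loaded on demand
        private final Supplier<List<String>> loader;
        Customer(Supplier<List<String>> loader) { this.loader = loader; }
        List<String> getOrders() {
            if (orders == null) orders = loader.get(); // lazy initialization
            return orders;
        }
    }

    public static void main(String[] args) {
        Customer c = new Customer(LazyDemo::fetchOrdersFromDb);
        System.out.println(dbQueries);   // 0 -- constructing the Customer loads nothing
        c.getOrders(); c.getOrders();
        System.out.println(dbQueries);   // 1 -- loaded once, on first access only
    }
}
```

With non-lazy (eager) loading, the equivalent query would have run in the constructor, whether or not the orders were ever used.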
This raises a major problem when the DAO pattern is used to abstract persistence. For the persistence mechanism to be completely abstracted away, all database logic, including opening and closing sessions, must not appear in the application layer. The most common approach is to define simple DAO interfaces whose implementation classes completely encapsulate the database logic. A fast but clumsy alternative is to give up the DAO pattern and put the database connection logic in the application layer. This may be workable in small applications, but in large systems it is a serious design flaw that prevents the system from scaling.

Struts2
Struts2 is not actually a stranger among Web frameworks: it takes the design ideas of WebWork as its core and absorbs the strengths of Struts1, so Struts2 is the product of integrating Struts1 and WebWork.

About MVC: since Struts2, like WebWork and Struts1, is an MVC framework, a brief word about MVC is in order; it will stay brief, and readers who want to learn more can consult documents on MVC or any Struts1 book, which rarely skimps on MVC coverage. Returning to the point: the ultimate goal of today's Java frameworks is decoupling. Whether Spring, Hibernate, or the MVC frameworks, all are designed to reduce coupling and increase reuse. MVC decouples the View from the Model. MVC consists of three basic parts, Model, View and Controller, which work together to minimize coupling and to increase the program's scalability and maintainability. The implementation technologies of each part can be summarized as follows:
1) Model: JavaBeans, EJB entity beans
2) View: JSP, the Struts tag libraries (TagLib)
3) Controller: the Struts ActionServlet and Action classes
To sum up, the main advantages of MVC are:
1) One model can correspond to multiple views.
Through the MVC design pattern, one model can correspond to multiple views, which reduces code duplication and the maintenance burden; if the model changes, it is also easy to maintain.
2) The data returned by the model is separated from display logic. Model data can be rendered with any display technology, for example a JSP page, a Velocity template, or directly into an Excel document.
3) The application is separated into three layers, reducing the coupling between layers and improving the application's scalability.
4) The concept of layers is also very effective in organizing the application, because it puts different models and different views together to serve different requests; the control layer can thus be said to embody the concept of user request permissions.
5) MVC makes software more amenable to engineering management. Different layers perform their own duties, and the components within each layer share common characteristics, which benefits tool-based engineering and management of the program code.

Introduction to Struts2: Struts2 appears to have grown out of Struts1, but in fact the design ideas of the two frameworks are very different; Struts2 is based on WebWork's design at its core. Why did Struts2 not follow the design ideas of Struts1, given that Struts1 still holds a very large share of the enterprise application market? Struts1 has several shortcomings:
1) It supports only a single presentation layer.
2) It is seriously coupled to the Servlet API, as can be seen from the signature of the Action's execute method.
3) Code depends on the Struts1 API; the framework is invasive, as can be seen when writing Action classes and FormBeans: an Action must extend the Struts Action class.
The reason Struts2 takes WebWork as its core is WebWork's recent upward momentum: WebWork exhibits none of the above shortcomings of Struts1, follows the MVC design ideas more closely, and is more conducive to code reuse.
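Shortcomings 2) and 3) above are exactly what the WebWork-style design removes: an action becomes a plain POJO with no framework base class and no Servlet API in its signature, so it can be unit-tested like any other class. A hypothetical sketch (the class, property and result names are invented for illustration):

```java
// A Struts2-style action as a plain POJO: no base class to extend, no
// HttpServletRequest/Response parameters. The framework would populate
// the properties from request parameters and call execute().
class LoginAction {
    private String username;

    public void setUsername(String u) { this.username = u; }
    public String getUsername() { return username; }

    // Returns a logical result name that the configuration maps to a view.
    public String execute() {
        return (username != null && !username.isEmpty()) ? "success" : "error";
    }
}

class ActionDemo {
    public static void main(String[] args) {
        LoginAction a = new LoginAction(); // testable like any plain class
        a.setUsername("alice");
        System.out.println(a.execute()); // success
    }
}
```

Because nothing here touches the Servlet API, a test can instantiate the action, set its properties, and assert on the result string directly, which is the testability advantage point 4) of the comparison below describes.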
From the description above it can be seen that the architectures of Struts2 and Struts1 are very different. Struts1 uses the ActionServlet as its central processor, while Struts2 uses an interceptor-based filter (FilterDispatcher) as its central processor; one benefit of this is that the Action class is isolated from the Servlet API.

The simplified Struts2 request flow is as follows:
1) The browser sends a request.
2) The central processor looks up the corresponding Action class in struts.xml to handle the request.
3) The WebWork interceptor chain automatically applies common functionality to the request, such as workflow and validation.
4) If a method parameter is configured for the action in struts.xml, the corresponding method of the Action class is called; otherwise the execute method is called to handle the request.
5) The result returned by the Action method is rendered as the response to the browser.

Struts2 compared with Struts1:
1) A Struts2 Action class need not implement any class or interface; Struts2 provides an ActionSupport class, but it is not required.
2) A Struts1 Action class is a singleton and must be designed to be thread-safe, whereas Struts2 creates an instance for each request.
3) A Struts1 Action class depends on the Servlet API: as its signature shows, the execute method takes two Servlet parameters, HttpServletRequest and HttpServletResponse, whereas Struts2 does not depend on the Servlet API.
4) Because Struts1 depends on the Web elements of the Servlet API, a Struts1 Action is difficult to test and needs additional testing tools, whereas a Struts2 Action can be tested like any other class, such as a Service or Model-layer class.
5) Struts1 transfers data between the Action and the View through an ActionForm or its subclasses; despite the appearance of LazyValidationForm, it still cannot transfer data as a simple POJO the way other layers do, whereas Struts2 makes that wish a reality.
6) Struts1 is bound to JSTL for convenience in writing pages, whereas Struts2 integrates OGNL and can also use JSTL, so Struts2 has a more powerful expression language underneath.

Struts2 compared with WebWork: Struts2 is effectively WebWork 2.3; however, Struts2 still differs slightly from WebWork:
1) Struts2 no longer supports the built-in IoC container; it uses Spring's IoC container.
2) The Ajax features that WebWork implemented with some of its tags are replaced in Struts2 with Dojo.

Servlet
A Servlet is a server-side Java application with platform- and protocol-independent features that can generate dynamic Web pages. It acts as a middle layer between a client request (from a Web browser or other HTTP client) and the server's response (from an HTTP server, a database, or an application). A Servlet lives inside the Web server; unlike a traditional Java application started from the command line, a Servlet is loaded by the Web server, and the Web server must include a Java virtual machine to support Servlets.

An HTTP Servlet uses HTML forms to send and receive data. To create an HTTP Servlet, extend the HttpServlet class, a subclass of GenericServlet with special methods for handling HTML forms. An HTML form is defined by the <FORM> and </FORM> tags. A form typically includes input fields (such as text fields, check boxes, radio buttons and selection lists) and a button for submitting the data; when the information is submitted, the form also specifies which Servlet (or other program) the server should invoke. The HttpServlet class contains the init(), destroy() and service() methods, among others, where init() and destroy() are inherited.

The init() method: during the Servlet's lifetime, init() runs only once. It is executed when the server loads the Servlet. The server can be configured to load the Servlet when the server starts or when a client first accesses the Servlet. No matter how many clients access the Servlet, init() is never run again.
The default init() method usually meets the requirements, but it can be overridden with a custom init(), typically to manage server-side resources. For example, you might write a custom init() that loads a GIF image only once, improving performance when the Servlet returns that image in response to multiple client requests. Another example is initializing a database connection. The default init() method sets the Servlet's initialization parameters and uses its ServletConfig argument to start configuration, so any Servlet that overrides init() should call super.init() to ensure these tasks are still performed. The init() method is guaranteed to complete before service() is called.

The service() method: service() is the core of a Servlet. Whenever a client requests an HttpServlet object, that object's service() method must be called, and it is passed a "request" (ServletRequest) object and a "response" (ServletResponse) object as parameters. A service() method already exists in HttpServlet; its default behavior is to invoke the function corresponding to the HTTP request method. For example, if the HTTP request method is GET, doGet() is called by default. A Servlet should override the do-method for each HTTP method it supports. Because HttpServlet.service() checks the request method and calls the appropriate handler, there is no need to override service() itself; just override the corresponding do-method.

A Servlet's response can be of the following types: an output stream, which the browser interprets according to its content type (such as text/html); an HTTP error response; or a redirect to another URL, servlet, or JSP.

The doGet() method: when a client sends an HTTP GET request through an HTML form, or requests a URL directly, doGet() is called. Parameters associated with a GET request are appended to the URL and sent together with the request.
When server-side data is not being modified, you should use doGet(). The doPost() method: when a client sends an HTTP POST request through an HTML form, doPost() is called. Parameters associated with a POST request are sent from the browser to the server separately from the URL. When server-side data needs to be modified, you should use doPost().

The destroy() method: destroy() is executed only once, when the server stops and unloads the Servlet. Typically, Servlets are shut down as part of the server's shutdown process. The default destroy() method usually meets the requirements, but it can be overridden, typically to manage server-side resources. For example, if the Servlet accumulates statistics while running, you can write a destroy() method that saves the statistics to a file before the Servlet is unloaded. Another example is closing a database connection. When the server unloads a Servlet, it calls destroy() after all service() calls have completed, or after a specified time interval. Other threads may still be running the Servlet's service() method, so make sure those threads have terminated or completed before destroy() is called.

The getServletConfig() method: getServletConfig() returns a ServletConfig object, which is used to obtain the initialization parameters and the ServletContext. The ServletContext interface provides information about the servlet's environment. The getServletInfo() method: getServletInfo() is an optional method that provides information about the servlet, such as author, version, and copyright.

When the server calls the servlet's service(), doGet() or doPost() methods, a "request" and a "response" object are needed as parameters.
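The dispatch rule described above — service() inspects the HTTP method and routes to doGet() or doPost() — can be sketched in plain Java without the servlet API. MiniServlet and its return strings are invented for this illustration; the real HttpServlet works with request/response objects, not strings:

```java
// Plain-Java sketch of HttpServlet-style dispatch: service() routes by
// HTTP method name to the appropriate do-method.
class MiniServlet {
    String service(String method, String body) {
        if ("GET".equals(method)) return doGet();
        if ("POST".equals(method)) return doPost(body);
        return "405 Method Not Allowed";      // unsupported methods are rejected
    }
    String doGet() { return "read-only view"; }              // GET must not modify data
    String doPost(String body) { return "saved: " + body; }  // POST may modify data
}

class ServletDemo {
    public static void main(String[] args) {
        MiniServlet s = new MiniServlet();
        System.out.println(s.service("GET", null));    // read-only view
        System.out.println(s.service("POST", "x=1"));  // saved: x=1
    }
}
```

This is why a subclass normally overrides only doGet() or doPost() and leaves service() alone: the routing is already handled.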
"Request" object to provide the requested information, and the "response" object to provide a response message will be returned to the browser as a communications channel.javax.servlet packages in the relevant classes for the ServletResponse andServletRequest, while the javax.servlet.http package of related classes for the HttpServletRequest and HttpServletResponse. Servlet communication with the server through these objects and ultimately communicate with the client. Servlet through call "request" object approach informed the client environment, server environment, information and all information provided by the client. Servlet can call the "response" object methods to send response, the response is ready to send back to clientJSPJavaServerPages (JSP) technology provides a simple and fast way to create a display content dynamically generated Web pages. Leading from the industry, Sun has developed technology related to JSP specification that defines how the server and the interaction between the JSP page, the page also describes the format and syntax.JSP pages use XML tags and scriptlets (a way to use script code written in Java), encapsulates the logic of generating page content. It labels in various formats (HTML or XML) to respond directly passed back to the page. In this way, JSP pages to achieve a logical page design and display their separation.JSP technology is part of the Java family of technologies. JSP pages are compiled into a servlet, and may call JavaBeans components (beans) or EnterpriseJavaBeans components (enterprise beans), so that server-side processing. Therefore, JSP technology in building scalable web-based applications play an important role.JSP page is not confined to any particular platform or web server. 
The JSP specification has wide adaptability across the industry. JSP technology is the result of industry collaboration; its design is open and industry-standard, and it is supported by the vast majority of servers, browsers and related tools. By replacing heavy reliance on scripting languages in the page itself with reusable components and tags, JSP technology has greatly accelerated the pace of development. All JSP implementations support a scripting language based on the Java programming language, which has an inherent adaptability that supports complex operations.

jQuery
jQuery is another excellent JavaScript framework, following Prototype. Its purpose is: write less code, do more. It is a lightweight JS library (only 21 KB compressed), smaller than other JS libraries; it is CSS3-compatible and compatible with all major browsers (IE 6.0+, FF 1.5+, Safari 2.0+, Opera 9.0+). jQuery is a fast, simple JavaScript library that lets users more easily work with HTML documents and events, create animation effects, and add convenient AJAX interaction to a web site. Another big advantage of jQuery is that it is fully documented, its various uses are explained in great detail, and many mature plug-ins are available. jQuery allows the code in an HTML page to be kept separate from the HTML content: there is no need to insert piles of JS calls inside the HTML; you can simply define ids.

I had used Prototype only a little; it is simple and easy to understand. But after using jQuery I was immediately attracted by its elegance. Some people compare Prototype and jQuery with this metaphor: Prototype is like Java, and jQuery is like Ruby. Personally I prefer Java (having had little contact with Ruby), but the simplicity of jQuery does have considerable practical appeal! I made jQuery the only class package of my project's framework.
Along the way I also picked up a bit of experience; in fact, these ideas may already be covered in the jQuery documentation, but I am still writing them down as notes.

Translation: Java is a simple, object-oriented, distributed, interpreted, robust and secure, architecture-neutral, portable, high-performance, multithreaded, dynamic language.
Sample foreign-literature translation for a graduation project
Graduation project foreign-literature translation. Major / student name / class / student ID / supervisor: ____ (优集学院)
Title of the foreign material: Knowledge-Based Engineering Design Methodology
Source of the foreign material: Int. J. Engng Ed. Vol. 16, No. 1
Attachments: 1. Translation of the foreign material  2. The foreign original

Knowledge-Based Engineering (KBE) Design Methodology
D. E. CALKINS

1. Background
The development of complex systems requires a great deal of engineering and management knowledge and decision-making, and it must satisfy many competing requirements.
Design is considered the primary factor determining a product's final form, cost, reliability, and market acceptance. The high-level engineering design and analysis process (the conceptual design phase) is particularly important, because most of the life-cycle cost and the overall quality of the system are committed at this stage. Compression of product cost is most achievable in the earliest stages of product design. About seventy percent of the whole life-cycle cost is committed by the end of the conceptual design phase, so the key to shortening the design cycle is to shorten the conceptual design phase, which at the same time reduces the amount of engineering redesign. In the engineering trade-off process, conceptual design is carried out with good estimates and informal heuristics. Traditional CAD tools offer very limited support for the conceptual design phase. Communication and cooperation across several disciplines are needed for rapid design analysis (covering performance, cost, reliability, and so on). Finally, it must be possible to manage a large amount of domain-specific knowledge. The solution is to bring more resources into the conceptual design phase and to shorten overall product time by eliminating redesign. All of these factors argue for integrated design tools and environments that provide help in the early, integrated design phase. Such an integrated design tool enables engineers and designers from different disciplines to reach a consensus on design intent in the face of complex requirements and constraints. The tool lets the design team study more configuration detail at a higher level. The problem, then, is to architect a design tool that satisfies all of these requirements.

2. Virtual (digital) prototype model
What is needed now is a way of representing the product design that yields a true virtual prototype, a process that allows early development and evaluation of the product. The virtual prototype will replace the traditional physical prototype and allow design engineers to study "what-if" scenarios while repeatedly updating their designs. A true virtual prototype represents not only shape and form, i.e. the geometry; it also represents non-geometric attributes such as weight, material, performance, and manufacturing process.
Sample template: foreign-literature translation for a graduation project (thesis)
Guangzhou College, South China University of Technology — undergraduate graduation project (thesis) translation
English title: Review of Vibration Analysis Methods for Gearbox Diagnostics and Prognostics
Chinese title: A review of vibration analysis methods for gearbox diagnostics and prognostics
School: School of Automotive Engineering. Major and class: Vehicle Engineering, Class 7. Student name: Liu Jiaxian. Student ID: 201130085184. Supervisor: Li Liping. Date completed: March 15, 2015.
Source of the English original: Proceedings of the 54th Meeting of the Society for Machinery Failure Prevention Technology, Virginia Beach, VA, May 1-4, 2000, p. 623-634
Translation grade:    Supervisor (advisor group leader) signature:

Translation:

Introduction
Feature extraction techniques are described in the literature; however, most authors seem to gloss over the specific preprocessing functions required.
Some papers do not provide enough detail to reproduce their results, and there is no comprehensive comparison of the traditional features on transitional gearbox data. Common terms, such as "residual signal", refer to different techniques in different papers. This paper attempts to define the terms commonly used in the condition-based maintenance community and to establish the specific preprocessing each feature requires. The focus of this paper is on the features used for gear fault detection. The features are divided into five different groups based on their preprocessing needs. The first part of the paper gives an overview of the preprocessing flow and the processing scheme within which each feature is computed. In the next section, as the feature extraction techniques are described, each feature is discussed in more detail. The final section briefly overviews the Penn State University / Army Research Laboratory CBM Toolbox as used for gear fault diagnosis.

Overview of feature extraction
Many types of defects or damage increase machinery vibration levels. These vibration levels are then converted by accelerometers into electrical signals for data measurement. In principle, information about the health of the monitored machine is contained in this vibration signature. New or current vibration signatures can therefore be compared with previous signatures to determine whether the component is behaving normally or showing signs of failure. In practice, such a direct comparison does not work well. Because of large variations, direct comparison of signatures is difficult. Instead, a more useful technique can be used that involves extracting features from the vibration signature data.
Graduation project: foreign original and translation
Thermal analysis for the feed drive system of a CNC machine

Abstract
A high-speed drive system generates more heat through friction at contact areas, such as the ball-screw and the nut, thereby causing thermal expansion which adversely affects machining accuracy. Therefore, the thermal deformation of a ball-screw is one of the most important objects to consider for high-accuracy and high-speed machine tools. The objective of this work was to analyze the temperature increase and the thermal deformation of a ball-screw feed drive system. The temperature increase was measured using thermocouples, while a laser interferometer and a capacitance probe were applied to measure the thermal error of the ball-screw. Finite element method was used to analyze the thermal behavior of a ball-screw. The measured data were compared with numerical simulation results. Inverse analysis was applied to estimate the strength of the heat source from the measured temperature profile. The generated heat sources for different feed rates were investigated.

Keywords: Machine tool; Ball-screw; Thermal error; Finite element method; Thermocouple

1. Introduction
Precise positioning systems with high speed, high resolution and long stroke become more important in ultra-precision machining. The development of high-speed feed drive systems has been a major issue in the machine-tool industry. A high-speed feed drive system reduces the necessary non-cutting time. However, due to the backlash and friction force between the ball-screw and the nut, it is difficult to provide a highly precise feed drive system. Most current research is focused on the thermal error compensation of whole machine tools. Thermally induced error is a time-dependent nonlinear process caused by nonuniform temperature variation in the machine structure. The interaction between the heat source location, its intensity, the thermal expansion coefficient and the machine system configuration creates complex thermal behavior.
Researchers have employed various techniques, namely finite element methods, coordinate transformation methods, neural networks etc., in modelling the thermal characteristics. A high-speed drive system generates more heat through friction at contact areas, such as the ball-screw and the nut, thereby causing thermal expansion which adversely affects machining accuracy. Therefore, the thermal deformation of a ball-screw is one of the most important objects to consider for high-accuracy and high-speed machine tools [5]. In order to achieve high-precision positioning, pre-load on the ball-screw is necessary to eliminate backlash. Ball-screw pre-load also plays an important role in improving the rigidity, noise, accuracy and life of the positioning stage [6]. However, pre-load also produces significant friction between the ball-screw and the nut, which generates greater heat, leading to large thermal deformation of the ball-screw and causing low positioning accuracy. Consequently, the accuracy of the main system, such as a machine tool, is affected. Therefore, an optimum pre-load of the ball-screw is one of the most important things to consider for machine tools with high accuracy and great rigidity.

Only a few researchers have tackled this problem with some success. Huang used the multiple regression method to analyze the thermal deformation of a ball-screw feed drive system. Three temperature increases, at the front bearing, the nut and the back bearing, were selected as independent variables of the analysis model. The multiple-regression model may be used to predict the thermal deformation of the ball-screw. Kim et al. analyzed the temperature distribution along a ball-screw system using finite element methods with a bilinear type of element. Heat induced by friction is the main source of deformation in a ball-screw system, the heat generated being dependent on the pre-load, the lubrication of the nut and the assembly conditions.
The proposed FEM model was based on the assumption that the screw shaft and the nut are a solid and a hollow shaft, respectively. Yun et al. used the modified lumped capacitance method and a genetic algorithm to analyze the linear positioning error of the ball-screw.

The objective of this work was to analyze the temperature increase and the thermal deformation of a ball-screw feed drive system. The temperature increase was measured using thermocouples, while a laser interferometer and a capacitance probe were applied to measure the thermal error of the ball-screw. The finite element method was also applied to simulate the thermal behavior of the ball-screw. Measured data were compared with numerical simulation results. Inverse analysis was applied to estimate the strength of the heat source from the measured temperature profile. Generated heat sources for different feed rates were investigated.

2. Experimental set-up and procedure
In this study, the object used to investigate the thermal characteristics of a ball-screw system is a machine center as shown in Fig. 1. The maximum rapid speed along the x-axis of the machine center is 40 m/min and the x-axis travel is 800 mm. The table repeatedly moved along the x-axis with a stroke of 600 mm. The main heat source of the ball-screw system is the friction caused by the moving nut and the rotating bearings. The produced temperature increase and thermal deformation were measured to study the thermal characteristics of the ball-screw system.
Fig. 1. Photograph of machine center.

In order to measure the temperature increase and the thermal deformation of a ball-screw system under long-term movement of the nut, experiments were performed with the arrangement shown in Fig. 2. Temperatures at nine points were measured as shown in Fig. 2a. Two thermocouples (numbered 1 and 8) were located on the rear and front bearing surfaces, respectively. They were used to measure the surface temperatures of these two support bearings.
The last thermocouple (numbered 9) was used to measure the room temperature; the recorded room temperature was used to eliminate the effect of environmental variation. These three thermocouples were used for continuous acquisition under moving conditions. The other six thermocouples (numbered 2-7) were used to measure the surface temperatures of the ball-screw. Because the moving nut covered most of the ball-screw, these thermocouples could not be permanently fixed on the ball-screw. When a temperature measurement was needed, the ball-screw stopped running and the six thermocouples were quickly attached to the specified locations on the ball-screw. Having collected the required data, the thermocouples were quickly removed.

Thermal deformation errors were measured simultaneously with two methods. Because a thrust bearing is used on the driving side of the ball-screw, this end is considered to be fixed. A capacitance probe was installed next to the driven side of the ball-screw, oriented perpendicular to the side surface as shown in Fig. 2b. This probe was used to record the whole thermal deformation of the ball-screw; its values can be collected continuously during running conditions. The second method was used to measure the thermal error distribution at specified times. Before the feed drive system started to operate, the original positional error distribution was measured with a laser interferometer (HP5528A). The table moved step by step (the increment of each step was 100 mm) and the positioning error was recorded at each step. Then the feed drive system started to operate and generate heat. After a certain time interval, the feed drive system was stopped to measure thermal errors. In the same way, the positioning error distribution was again collected with the laser interferometer. Subtracting the actual error from the original error of the x-coordinates gives the net thermal errors.
Fig. 2. Locations of measured points for (a) temperatures and (b) thermal errors.
Having collected the temperature increases (with thermocouples) and the deformation distribution, the feed drive system started running again.

In this study, three feed rates (10, 15 and 20 m/min) along the x-axis and three different pre-loads (0, 150 and 300 kgf·cm) were used. The table moved along the x-axis in a reciprocating motion and the stroke was 600 mm. The point temperatures and thermal errors were measured at sampling intervals of 10 min. Each stopping time was only about 10 s. These procedures were repeated until the temperature reached a steady state.

3. Experimental results and discussion

The developed experimental set-up was utilized for three constant feed rates (10, 15 and 20 m/min). The table reciprocated until the point temperatures and thermal errors reached a steady state. Firstly, the ball-screw pre-load was zero and its thermal characteristics were studied. In Fig. 3, the temperature variations and thermal errors of the feed drive system are shown over time for a feed rate of 10 m/min. Measurements were also made for feed rates of 15 and 20 m/min. The measured data at steady state are shown in Tables 1 and 2. A brief discussion follows.

Fig. 3. (a) Measured temperature increase and (b) thermal error over time for feed rate of 10 m/min and zero pre-load.

1. A higher feed rate produces larger frictional heat at the interface between the ball-screw and the nut. The frictional heat generated by the support bearings and the motor also increases with the feed rate. Therefore, the temperature of the ball-screw increases with the feed rate.
2. The table travels over the middle part with a 600 mm stroke, so the central part of the ball-screw shows a higher temperature increase. The support bearings do not show a large temperature increase because the bearing pre-load is zero.
3. A higher rotational speed brings a larger thermal expansion of the ball-screw.
The middle part of the screw has a slightly larger thermal expansion because of its higher temperature increase; however, this phenomenon is not obvious. The thermal error at a specified point of the ball-screw is approximately proportional to the distance between this point and the front end (the motor-driving side of the screw).

Secondly, the ball-screw pre-load was set at 150 kgf·cm and its thermal characteristics were studied. In Figs. 4 and 5, the temperature variations around the feed drive system and the thermal errors are shown over time for feed rates of 10 and 15 m/min. Measured data are shown in Tables 1 and 2. The results reveal two interesting phenomena:

1. The temperature increases of the measured points grow gradually until the ball-screw reaches a steady state, except for the temperature increase of the bearing on the driven side. The temperature of this bearing quickly reaches a maximum value and then gradually drops.
2. The thermal errors of P6, P7 and P8 are negative at the steady state. This means that these three points move toward the driving side due to thermal expansion, while the other points move toward the driven side. Furthermore, the thermal errors of P4 to P8 show a gradual decrease after 60 min.

These phenomena differ from the previous results with no pre-load, so some experiments were carried out to study them. We found that the two bearing stands bend if the ball-screw is pre-loaded. After the pre-load was applied to the ball-screw, the original positional error distribution was measured using a laser interferometer; at this moment, the bending effects on the error distribution were included in the measured positioning error. When the feed drive system starts to run, the ball-screw expands. The expansion relaxes the pre-load of the ball-screw and the bending deformation of the two bearing stands. Therefore, the points on the driving side move closer to the motor and their thermal errors are negative, while the points on the driven side move toward the free end and their thermal errors are positive.

Table 1. Temperature distribution at steady state with different pre-loads and feed rates (unit: °C)
Table 2. Thermal error distribution at steady state with different pre-loads and feed rates (unit: µm)
Fig. 4. (a) Measured temperature increase and (b) thermal error over time for feed rate of 10 m/min and pre-load of 150 kgf·cm.
Fig. 5. (a) Measured temperature increase and (b) thermal error over time for feed rate of 15 m/min and pre-load of 150 kgf·cm.

The temperature change of the rear bearing was also investigated. A journal bearing is used on the driven side and a thrust bearing on the driving side. The pre-load of the ball-screw increases the pre-load of the bearing on the driven side. When the feed drive system runs, the bearing temperature on the driven side sharply increases due to the raised pre-load. However, the thermal expansion of the ball-screw relaxes the ball-screw and decreases the pre-load of the bearing on the driven side; therefore, the temperature gradually decreases to a steady state.

Finally, the ball-screw pre-load was set to 300 kgf·cm and its thermal characteristics were studied. In Figs. 6 and 7, the temperature variations around the feed drive system and the thermal errors are shown over time for feed rates of 10 and 15 m/min. The tendency with a 300 kgf·cm pre-load is similar to that with a 150 kgf·cm pre-load. Measured data are shown in Tables 1 and 2.

Fig. 6. (a) Measured temperature increase and (b) thermal error over time for feed rate of 10 m/min and pre-load of 300 kgf·cm.
Fig. 7. (a) Measured temperature increase and (b) thermal error over time for feed rate of 15 m/min and pre-load of 300 kgf·cm.

4. Numerical simulation

The main heat source of a ball-screw system is the friction caused by the moving nut and the support bearings.
In this study, the temperature distribution was calculated using the FEM based on the following assumptions:

1. The screw shaft is a solid cylinder.
2. Frictional heat generation between the moving nut and the screw shaft is uniform over the contacting surface and proportional to the contacting time.
3. Heat generation at the support bearings is also constant per unit area and unit time.
4. The convective heat transfer coefficients are constant during movement, and radiation is neglected.

The problem is defined as transient heat conduction in non-deforming media without radiation. A classical form of the initial/boundary value problem is:

ρc ∂T/∂t = ∇·(k∇T) + Q̇  in the domain, (1)
k ∂T/∂n = q  on heated boundaries, (2)
k ∂T/∂n = h(T∞ − T)  on convective boundaries, (3)

where Q̇ is the internal heat generation rate, q the entering heat flux, n the unit outward normal direction, T∞ the ambient temperature and h the convective heat-transfer coefficient at a given boundary. A simplified heat transfer model of the ball-screw system is described in Fig. 8 along with the boundary conditions. The nut moves reciprocally with a stroke s, and the length of the nut is w. According to assumption No. 2 above, the frictional heat fluxes on the ball-screw are shown in Fig. 8b. Both ends of the ball-screw are subjected to frictional heat fluxes caused by the support bearings: the heat fluxes on the rear and front ends are q1 and q3, respectively, while the nut friction flux shown in Fig. 8b is q2. The other surfaces are subjected to convective heat transfer as shown in Fig. 8c.

To obtain an approximate solution, Eqs. (1)-(3) may be transformed through discretization into algebraic expressions that can be solved for the unknowns. In order to replace the continuous system by an equivalent discrete system, the original domain is divided into elements; four-node tetrahedral elements are chosen in this study. The elements and nodes of the ball-screw for the FEM are shown in Fig. 9. Once the temperature distribution is obtained, the thermal expansion of the ball-screw may be predicted.
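The transient conduction problem of Eqs. (1)-(3) can be illustrated with a 1-D explicit finite-difference sketch; this is a stand-in for the paper's 3-D tetrahedral FEM, and all material values and fluxes below are invented for illustration:

```python
# 1-D explicit finite-difference analogue of Eqs. (1)-(3):
# rho*c*dT/dt = k*d2T/dx2, with an entering flux q at x = 0
# and convection h*(T_inf - T) at x = L.
L, n = 0.8, 21                   # rod length [m], number of nodes
dx = L / (n - 1)
k, rho, c = 50.0, 7800.0, 460.0  # steel-like properties (illustrative)
h, T_inf = 20.0, 25.0            # convection coefficient [W/m^2 K], ambient [deg C]
q = 2000.0                       # entering heat flux at x = 0 [W/m^2]
alpha = k / (rho * c)            # thermal diffusivity
dt = 0.4 * dx * dx / alpha       # stable explicit time step

T = [T_inf] * n                  # start at ambient temperature
for _ in range(15000):           # march in time toward steady state
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    Tn[0] = Tn[1] + q * dx / k                        # k*dT/dn = q at x = 0
    Tn[-1] = (k/dx * Tn[-2] + h*T_inf) / (k/dx + h)   # k*dT/dn = h*(T_inf - T)
    T = Tn

print(f"heated end {T[0]:.1f} C, convective end {T[-1]:.1f} C")
```

At steady state the entering flux must leave by convection, so the cooled end settles near T_inf + q/h and the temperature drop along the rod near q·L/k, which the march above approaches.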
In the case of a linearly elastic, isotropic, three-dimensional solid, the stress-strain relations are given by Hooke's law as [9]:

{σ} = [C]({ε} − {ε0}),

where [C] is the matrix of elastic coefficients and {ε0} is the vector of initial strains. In the case of heating of an isotropic material, the initial strain vector is given by:

{ε0} = αΔT [1 1 1 0 0 0]^T,

where α is the coefficient of thermal expansion and ΔT is the temperature change.

Fig. 8. Heat transfer model of ball-screw.
Fig. 9. Elements and nodes of ball-screw for FEM.

The three unknowns q1, q2 and q3 are determined with inverse analysis. Firstly, an initial guess of these heat fluxes is applied in the FEM simulation to obtain the temperature distribution of the ball-screw. If the numerical results do not agree with the measured temperature distribution, the values of q1, q2 and q3 are adjusted iteratively until the numerical and measured results are in good agreement.

The calculated values of q1, q2 and q3 for an un-pre-loaded ball-screw are listed in Table 3. Measured and simulated temperature distributions for feed rates of 10, 15 and 20 m/min are shown in Fig. 10. For each feed rate, the measured and simulated temperature distributions agree well. The numerical program can also be used to simulate the thermal expansion of the ball-screw based on the calculated heat fluxes. Measured and simulated thermal expansions of the ball-screw are compared in Table 4 and also show good agreement. From Table 3, the heat flux increases with the feed rate; an approximately linear relation can be found between the heat flux and the feed rate under the same operating conditions.

Table 3. Values of heat flux at different locations (unit: W/m²)
Fig. 10. Temperature increase from experimental measurement and numerical simulation for feed rate of (a) 10 m/min, (b) 15 m/min and (c) 20 m/min.

5. Conclusions

This paper proposes a systematic method to investigate the thermal characteristics of a feed drive system.
The approach measures the temperature increase and the thermal deformation under long-term movement of the working table. A simplified FEM model for the ball-screw was developed. The FEM model, incorporated with the measured temperature distribution, was used to determine the strength of the frictional heat source by inverse analysis. The strength of the heat source was then applied to the FEM model to calculate the thermal errors of the feed drive system. Calculated and measured thermal errors were found to agree with each other. From the results, the following conclusions can be drawn:

1. The positional accuracy is higher closer to the driving side of the ball-screw, and the thermal error increases with the distance from the driving side. The maximum thermal error occurs at the driven side of the ball-screw (free end); this value can be taken as the total thermal error of the ball-screw and may be measured with a capacitance probe.
2. The ball-screw pre-load raises the temperature increases of both support bearings, especially the bearing on the driven side. The surface temperature of the ball-screw decreases because the thermal effects relax the pre-load, thereby decreasing the friction between the nut and the ball-screw.
3. The thermal expansion of the ball-screw increases with the feed rate, thereby increasing the positional error. However, increasing the pre-load reduces the thermal errors and improves the positional accuracy of the feed drive system.
4. The two bearing stands may bend if the ball-screw is pre-loaded. The thermal expansion relaxes the pre-load of the ball-screw and the bending deformation of the two bearing stands. Therefore, the points on the motor side move closer to the motor and their thermal errors are negative, while the points on the free side move toward the free end and their thermal errors are positive.

Table 4. Thermal errors at different feed rates

Thermal Analysis of the Feed Drive System of a CNC Machining Center

Abstract: A high-speed drive system generates a large amount of heat through friction in contact areas (such as the ball-screw and nut), which leads to thermal expansion, and thermal expansion severely affects machining accuracy.
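The inverse analysis described in Section 4 (iteratively adjusting q1, q2 and q3 until simulated temperatures match measurements) can be sketched with a toy model: a linear forward model stands in for the FEM, and the fluxes are updated by simple least-squares gradient steps. The sensitivity matrix and all numbers are invented for illustration:

```python
# Toy inverse analysis: estimate heat fluxes (q1, q2, q3) from
# "measured" temperatures, with a known linear forward model
# T_sim = S @ q playing the role of the FEM simulation.
S = [  # sensitivity of 4 sensor temperatures to the 3 fluxes (invented)
    [0.8, 0.1, 0.0],
    [0.3, 0.9, 0.2],
    [0.1, 0.7, 0.4],
    [0.0, 0.2, 0.9],
]
q_true = [120.0, 300.0, 80.0]                 # the "unknown" fluxes [W/m^2]
T_meas = [sum(s * qv for s, qv in zip(row, q_true)) for row in S]

q = [0.0, 0.0, 0.0]                           # initial guess
lr = 0.4                                      # step size
for _ in range(2000):                         # iterative adjustment
    T_sim = [sum(s * qv for s, qv in zip(row, q)) for row in S]
    resid = [tm - ts for tm, ts in zip(T_meas, T_sim)]
    # gradient step on the squared residual: q += lr * S^T * resid
    for j in range(3):
        q[j] += lr * sum(S[i][j] * resid[i] for i in range(4))

print([round(v, 1) for v in q])  # -> [120.0, 300.0, 80.0]
```

Because the toy system is consistent and has full column rank, the iteration recovers the fluxes exactly; the real problem replaces each forward evaluation with an FEM run.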
Foreign Literature Translation for Software Engineering Graduation Projects
This article translates foreign literature relevant to software engineering graduation projects and may serve as a reference for students.
Foreign literature 1: Software Engineering Practices in Industry: A Case Study

Abstract

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The software engineering practices were identified through a survey questionnaire and a series of interviews with the company's software development managers, software engineers, and testers. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Introduction

Software engineering is the discipline of designing, developing, testing, and maintaining software products. There are a number of software engineering practices that are used in industry to ensure that software products are of high quality, reliable, and maintainable. These practices include software development processes, software configuration management, software testing, requirements engineering, and project management. Software engineering practices have evolved over the years as a result of the growth of the software industry and the increasing demands for high-quality software products.
The software industry has developed a number of software development models, such as the Capability Maturity Model Integration (CMMI), which provides a framework for software development organizations to improve their software development processes and practices.

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The objective of the study was to identify the software engineering practices used by the company and to investigate how these practices contribute to the production of quality software.

Research Methodology

The case study was conducted with a large US software development company that produces software for aerospace and medical applications. The study was conducted over a period of six months, during which a survey questionnaire was administered to the company's software development managers, software engineers, and testers. In addition, a series of interviews were conducted with the company's software development managers, software engineers, and testers to gain a deeper understanding of the software engineering practices used by the company. The survey questionnaire and the interview questions were designed to investigate the software engineering practices used by the company in relation to software development processes, software configuration management, software testing, requirements engineering, and project management.

Findings

The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company's software development process consists of five levels of maturity, starting with an ad hoc process (Level 1) and progressing to a fully defined and optimized process (Level 5). The company has achieved Level 3 maturity in its software development process.
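The five-level maturity scale mentioned above can be captured as a simple lookup; the level names below follow the standard CMMI model (the case-study company sits at Level 3):

```python
# The five CMMI maturity levels, from ad hoc (1) to optimized (5).
CMMI_LEVELS = {
    1: "Initial",
    2: "Managed",
    3: "Defined",
    4: "Quantitatively Managed",
    5: "Optimizing",
}

def maturity_name(level):
    """Return the CMMI name for a maturity level (1-5)."""
    return CMMI_LEVELS[level]

print(maturity_name(3))  # -> Defined
```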
The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The software engineering practices used by the company include:

- Software Configuration Management (SCM): The company uses SCM tools to manage software code, documentation, and other artifacts. The company follows a branching and merging strategy to manage changes to the software code.
- Software Testing: The company has adopted a formal testing approach that includes unit testing, integration testing, system testing, and acceptance testing. The testing process is automated where possible, and the company uses a range of testing tools.
- Requirements Engineering: The company has a well-defined requirements engineering process, which includes requirements capture, analysis, specification, and validation. The company uses a range of tools, including use case modeling, to capture and analyze requirements.
- Project Management: The company has a well-defined project management process that includes project planning, scheduling, monitoring, and control. The company uses a range of tools to support project management, including project management software, which is used to track project progress.

Conclusion

This paper has reported a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company uses a set of software engineering practices that ensure quality, reliability, and maintainability of the software products.
The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Foreign literature 2: Agile Software Development: Principles, Patterns, and Practices

Abstract

Agile software development is a set of values, principles, and practices for developing software. The Agile Manifesto represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. This paper presents an overview of agile software development, including its principles, patterns, and practices. The paper also discusses the benefits and challenges of agile software development.

Introduction

Agile software development is a set of values, principles, and practices for developing software. Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases.

Agile Software Development Principles

Agile software development is based on a set of principles:

- Customer satisfaction through early and continuous delivery of useful software.
- Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
- Deliver working software frequently, with a preference for the shorter timescale.
- Collaboration between the business stakeholders and developers throughout the project.
- Build projects around motivated individuals.
Give them the environment and support they need, and trust them to get the job done.
- The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
- Working software is the primary measure of progress.
- Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
- Continuous attention to technical excellence and good design enhances agility.
- Simplicity, the art of maximizing the amount of work not done, is essential.
- The best architectures, requirements, and designs emerge from self-organizing teams.

Agile Software Development Patterns

Agile software development patterns are reusable solutions to common software development problems. The following are some typical agile software development patterns:

- The Single Responsibility Principle (SRP)
- The Open/Closed Principle (OCP)
- The Liskov Substitution Principle (LSP)
- The Dependency Inversion Principle (DIP)
- The Interface Segregation Principle (ISP)
- The Model-View-Controller (MVC) Pattern
- The Observer Pattern
- The Strategy Pattern
- The Factory Method Pattern

Agile Software Development Practices

Agile software development practices are a set of activities and techniques used in agile software development. The following are some typical agile software development practices:

- Iterative Development
- Test-Driven Development (TDD)
- Continuous Integration
- Refactoring
- Pair Programming

Agile Software Development Benefits and Challenges

Agile software development has many benefits, including:

- Increased customer satisfaction
- Increased quality
- Increased productivity
- Increased flexibility
- Increased visibility
- Reduced risk

Agile software development also has some challenges, including:

- Requires discipline and training
- Requires an experienced team
- Requires good communication
- Requires a supportive management culture

Conclusion

Agile software development is a set of values, principles, and practices for developing software.
Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. Agile software development has many benefits, including increased customer satisfaction, increased quality, increased productivity, increased flexibility, increased visibility, and reduced risk. It also has some challenges, including the requirement for discipline and training, an experienced team, good communication, and a supportive management culture.
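As an illustration of one of the patterns named in the excerpt above, here is a minimal sketch of the Observer pattern in Python; the class and method names are our own, not from the paper:

```python
# Observer pattern: a subject notifies registered observers of events.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer.update(event)

class BuildStatusDisplay:
    """Observer that records the build events it is notified about."""
    def __init__(self):
        self.events = []

    def update(self, event):
        self.events.append(event)

ci_server = Subject()            # e.g. a continuous-integration server
display = BuildStatusDisplay()
ci_server.attach(display)
ci_server.notify("build #42 passed")
print(display.events)            # -> ['build #42 passed']
```

The subject stays decoupled from its observers: new displays, loggers, or notifiers can be attached without changing `Subject`, which is the flexibility agile design principles such as OCP aim for.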
Graduation Project (Thesis) Foreign Literature Translation
College: College of Art. Major: Environmental Design. Name: Student ID: Foreign source: The Swedish Country House. Attachments: 1. Translation of the foreign material; 2. Original foreign text.

Attachment 1: Translation of the foreign material

A Brief Overview of Interior Decoration

I. Elements of interior decoration design

1. Space. Making space rational and giving people a sense of beauty is the basic task of design.
Designers should boldly explore the new spatial images that the times and technology make possible, rather than clinging to spatial images formed in the past.

2. Color. Besides influencing the visual environment, interior color directly affects people's mood and psychology. Scientific use of color benefits work and health. Properly handled, color can both meet functional requirements and achieve aesthetic effects. In addition to following the general laws of color, interior color also varies with the aesthetic outlook of the times.

3. Light and shadow. People love the beauty of nature and often bring sunlight directly into interiors to dispel the sense of darkness and enclosure; top lighting and soft diffused light in particular make interior space feel more intimate and natural. The interplay of light and shadow makes an interior richer and more varied, offering many different impressions.

4. Decoration. Indispensable architectural components of the overall interior space, such as columns and walls, can be decorated according to functional needs and together form a complete interior environment. Making full use of the textural characteristics of different decorative materials yields endlessly varied interior artistic effects in different styles, while also expressing the historical and cultural character of a region.

5. Furnishings. Interior furniture, carpets, curtains and the like are daily necessities; their forms often have display qualities, and most play a decorative role. Practicality and decoration should be coordinated, seeking unity of function and form with variation, so that the interior space is comfortable, appropriate and full of individuality.

6. Greenery. In interior design, greenery has become an important means of improving the indoor environment. Bringing flowers and plants indoors, and using greenery and ornaments to connect indoor and outdoor environments, expand the sense of interior space and beautify it, all play a positive role.

II. Basic principles of interior decoration design

1. Interior decoration design must satisfy functional requirements. Interior design aims to create a good interior spatial environment that is rational, comfortable and scientific. It must take account of people's patterns of activity; handle spatial relationships, dimensions and proportions well; arrange furnishings and furniture sensibly; properly resolve ventilation, daylighting and lighting; and attend to the overall effect of the interior color scheme.

2. Interior decoration design must satisfy spiritual requirements. The spirit of interior design is to influence people's emotions, and even their will and actions; it must therefore study people's cognitive characteristics and patterns, their emotions and will, and the interaction between people and their environment.
Graduation Project Foreign Literature Translation [Template]
Graduation Project (Thesis) Foreign Literature Translation. Department: Major: Class: Name: Student ID: Foreign source: Attachments: 1. Original text; 2. Translation. March 2013.

Attachment 1:

A Rapidly Deployable Manipulator System

Christiaan J. J. Paredis, H. Benjamin Brown, Pradeep K. Khosla

Abstract: A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly create a manipulator which is custom-tailored for a given task. This article describes two main aspects of such a system, namely, the Reconfigurable Modular Manipulator System (RMMS) hardware and the corresponding control software.
1. Introduction

Robot manipulators can be easily reprogrammed to perform different tasks, yet the range of tasks that can be performed by a manipulator is limited by its mechanical structure.
For example, a manipulator well-suited for precise movement across the top of a table would probably not be capable of lifting heavy objects in the vertical direction. Therefore, to perform a given task, one needs to choose a manipulator with an appropriate mechanical structure. We propose the concept of a rapidly deployable manipulator system to address the above-mentioned shortcomings of fixed-configuration manipulators.
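The modular idea can be illustrated with a toy planar kinematics sketch: a manipulator is "assembled" as a list of link modules, and the end-effector position follows by composing each module's transform. The module lengths and angles below are invented; this is not the RMMS API:

```python
# Toy model of a reconfigurable modular manipulator: the arm is a list
# of (link_length, joint_angle) modules; the end-effector position is
# found by accumulating joint angles and summing link vectors in 2-D.
import math

def end_effector(modules):
    """modules: list of (length_m, joint_angle_rad) pairs, base first."""
    x = y = 0.0
    theta = 0.0
    for length, angle in modules:
        theta += angle                  # each joint rotates the rest of the chain
        x += length * math.cos(theta)   # advance along the current direction
        y += length * math.sin(theta)
    return x, y

# "Rapidly deploy" two different arms from the same kind of modules:
two_link = [(0.5, math.pi / 2), (0.5, -math.pi / 2)]   # up, then bend back to horizontal
three_link = [(0.3, 0.0), (0.3, 0.0), (0.3, 0.0)]      # straight arm along the x-axis

print(end_effector(two_link))
print(end_effector(three_link))
```

Swapping, adding, or removing modules changes the reachable workspace without touching the kinematics routine, which is the kind of task-tailoring the RMMS concept describes.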
Graduation Project (Thesis) Foreign Literature Translation. Major: Computer Science and Technology. Student name: Class: Student ID: Advisor: College of Information Engineering.

1. Foreign literature

The History of the Internet

The Beginning - ARPAnet

The Internet started as a project by the US government. The object of the project was to create a means of communications between long-distance points in the event of a nationwide emergency or, more specifically, nuclear war. The project was called ARPAnet, and it is what the Internet started as. It was funded specifically for military communication, and the engineers responsible for ARPAnet built what would become the "Internet." By definition, an 'Internet' is four or more computers connected by a network.

ARPAnet achieved its network by using a protocol called TCP/IP. The basic idea behind this protocol was that if information sent over a network failed to get through on one route, it would find another route to work with, as well as establishing a means for one computer to "talk" to another computer, regardless of whether it was a PC or a Macintosh.

By the '80s, ARPAnet was just years away from becoming the more well-known Internet, and by 1984 its network had continued to grow. In 1986 ARPAnet (supposedly) shut down, but only the organization shut down; the existing networks still existed between the more than 1000 computers. It shut down due to a failed link-up with NSF, which wanted to connect its 5 country-wide super computers into ARPAnet. With the funding of NSF, a new network was born in 1988. By that time, there were 28,174 computers on the (by then decided) Internet. In 1989 there were 80,000 computers on it. Another network was built to support the incredible number of people joining; it was constructed in 1992.

Today - The Internet

Today, millions of people go 'on line' to experience the wealth of information of the Internet, and it has been predicted that by the year 2003 every single person on the planet will be connected. The Internet is one of the defining technologies of our time and era, and it is evolving quickly. The Internet is not a 'thing' itself, and it cannot just "crash."
It functions the same way as the telephone system, only there is no Internet company that runs the Internet. The Internet is a collection of millions of computers that are all connected to each other, much like a home or office network. How does a computer in Houston reach a computer in Tokyo to view a webpage?

Internet communication, communication among computers connected to the Internet, is based on a language. This language is called TCP/IP. TCP/IP establishes a language for a computer to access and transmit data over the Internet system. But TCP/IP assumes that there is a physical connection between one computer and another. This is not usually the case. The connection that is required is established by way of modems, phone lines, and other modem cable connections (like cable modems or DSL). Modems on computers read and transmit data over established lines, which could be phone lines or data lines.

To explain this better, let's follow a webpage request:

1. The user connects to an Internet Service Provider (ISP). The ISP might in turn be connected to another ISP, or have a straight connection into the Internet backbone.
2. The user launches a web browser like Netscape or Internet Explorer and types in an internet location to go to.
3. Here's where the tricky part comes in. First, the computer sends data about its request to a router. The routers of the world make up what is called a "backbone," on which all the data on the Internet is transferred. The backbone presently operates at a speed of several gigabytes per second; such a speed compared to a normal modem is like comparing an iceberg to an ice-cube. Routers treat data packets similarly to envelopes: when the request for the webpage goes through, it uses TCP/IP protocols to tell the router what to do with the data, where it's going, and overall where the user wants to go.
4. The router sends these packets to other routers, eventually leading to the target computer. It's like whisper down the lane (only the information remains intact).
5.
When the information reaches the target web server, the web server then begins to send the web page back. A web server is the computer where the webpage is stored, running a program that sends it out in packets; these travel through routers and arrive at the user's computer, where the user can view the webpage once it is assembled. The packets which contain the data also contain special information that lets routers and other computers put them back in the right order.

With millions of web pages and millions of users, using the Internet is not always easy for a beginning user, especially for someone who is not entirely comfortable with using computers. Below you can find tips, tricks and services of the Internet.

Before you access webpages, you must have a web browser; most Internet Service Providers include one in the software they usually give to customers. The fact that you are viewing this page means that you already have one. Netscape and MSIE can each be found at their respective websites.

Sometimes websites have errors, and an error on a website is not the user's fault, of course. A 404 error means that the page you tried to go to does not exist. This could be because the site is still being constructed and the page has not been created yet, or because the site author made a typo in the page. There's nothing much to do about a 404 error except e-mailing the site administrator (of the page you wanted to go to) and telling them.

Other errors can come from the Javascript code of a website. Not all websites utilize Javascript, but many do. Javascript is different from Java, and most browsers now support Javascript. If you are using an old version of a web browser (Netscape 3.0 for example), you might get Javascript errors because sites utilize Javascript versions that your browser does not support. So, you can try getting a newer version of your web browser.

E-mail stands for Electronic Mail, and that's what it is.
E-mail enables people to send letters, and even files and pictures, to each other. To use e-mail, you must have an e-mail client, which is just like a personal post office, since it retrieves and stores e-mail. Secondly, you must have an e-mail account. Most Internet Service Providers provide free e-mail accounts, and some services offer free e-mail as well, like Hotmail and Geocities. After configuring your e-mail client with your POP3 and SMTP server address (your e-mail provider will give you that information), you are ready to receive mail.

An attachment is a file sent in a letter. If someone sends you an attachment and you don't know who it is, don't run the file, ever. It could be a virus or some other kind of nasty program. (You can't get a virus just by reading e-mail.) In a signature you can put a text graphic, your business information, anything you want.

Imagine that a computer on the Internet is an island in the sea, and the sea is filled with millions of islands. This is the Internet. An island communicates with other islands by sending and receiving ships, which is similar to how computers on the Internet exchange data over the network.

Telnet refers to accessing ports on a server directly with a text connection. Almost every kind of Internet function, like accessing web pages, "chatting," and e-mailing, is done over such a connection. Telnetting requires a Telnet client. A telnet program comes with the Windows system, so Windows users can access telnet by typing "telnet" (without the quotes) in the run dialog. Many server programs (e.g., a chat daemon) can be accessed via telnet, although they are not usually meant to be accessed in such a manner. For instance, it is possible to connect directly to a mail server and check your mail by interfacing with the e-mail server software, but it's easier to use an e-mail client (of course).

There are millions of WebPages that come from all over the world, and search engines keep a database of websites.
For instance, if you wanted to find a website on dogs, you'd search for "dog" or "dogs" or "dog information." Here are a few search engines: Altavista, Excite, Lycos and Metasearch. A web-spider-based search engine can literally map out as much of the Internet as its own time and speed allow for. An indexed collection, such as Yahoo's site uses, works by categories: you can click on Computers & the Internet, then on Hardware, then on Modems, etc., and along the way through the sections there are sites available which relate to the section you're in. Metasearch searches many search engines at the same time, finding the top choices from about 10 search engines, making searching a lot more effective. Once you are able to use search engines, you can effectively find the pages you want.

With the arrival of networking and multi-user systems, security has been on the minds of system developers and system operators since the dawn of AT&T and its phone network. Why should you be careful while making purchases via a website? Suppose you submit payment information to a webpage. Looks safe, right? Not necessarily. As the user submits the information, it is being streamed through a series of computers that make up the Internet backbone. The information is in little chunks, in packages called packets. Here's the problem: while the information is being transferred through this big backbone, what is preventing someone from intercepting it? There are methods of enforcing security, like password protection and, most importantly, encryption. Encryption means scrambling data into a code that can only be unscrambled on the "other end." Browsers like Netscape Communicator and Internet Explorer feature encryption support for making on-line transfers. Some encryption schemes work better than others.
The most advanced encryption system is called DES (Data Encryption Standard). It was adopted by the US Defense Department, which deemed it so difficult to 'crack' that it considered it a security risk if the system were to fall into another country's hands. A single key locks and unlocks an entire document, and with some 75 trillion possible keys, trying them all is a tremendous job. (A web spider, mentioned earlier, is a program used by search engines that follows any link it can find from one web page to another.)
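To get a feel for why exhaustive key search is treated as impractical, here is a quick back-of-the-envelope estimate, using the key count quoted above and an assumed guess rate (both numbers are illustrative only):

```python
keys = 75_000_000_000_000        # key count quoted in the text
guesses_per_second = 1_000_000   # assumed attacker hardware speed

# On average, an attacker must search half the key space.
seconds = (keys / 2) / guesses_per_second
years = seconds / (365 * 24 * 3600)
print(round(years, 2))  # about 1.19 years at this modest rate
```

Faster hardware shrinks this figure quickly, which is one reason modern ciphers use far larger key spaces than the early standards did.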
Graduation Design Foreign Literature and Translation
Graduation Thesis Foreign Literature Review and Chinese Translation

1. Title: "The Impact of Artificial Intelligence on Society"
Abstract: The rapid development of artificial intelligence technology has sparked discussion about its impact on society. This paper explores the various ways in which artificial intelligence is reshaping different industries, including healthcare, transportation, and education. It also discusses the potential benefits and challenges of AI implementation, as well as ethical considerations. Overall, the paper aims to provide a comprehensive overview of AI's impact on society.
2. Title: "The Future of Work: Automation and Job Displacement"
Abstract: With the rise of automation technologies, there is growing concern about the potential displacement of workers in various industries. This paper examines the trends in automation and its impact on jobs, as well as the implications for workforce development and retraining programs. The ethical and social implications of automation are also discussed, along with potential strategies for mitigating job displacement effects.
Chinese translation: The Future of Work: Automation and Job Displacement. Abstract: With the rise of automation technologies, people are increasingly worried that workers in various industries may be displaced.
Graduation Design Foreign Literature Translation
Graduation Design Foreign Literature Translation (700 words)

Title: The Impact of Artificial Intelligence on the Job Market

Introduction:
Artificial Intelligence (AI) is a rapidly growing field that has the potential to revolutionize various industries and job markets. With advancements in technologies such as machine learning and natural language processing, AI has become capable of performing tasks traditionally done by humans. This has raised concerns about the future of jobs and the impact AI will have on the job market. This literature review aims to explore the implications of AI for employment and job opportunities.

AI in the Workplace:
AI technologies are increasingly being integrated into the workplace, with the aim of automating routine and repetitive tasks. For example, automated chatbots are being used to handle customer service queries, while machine learning algorithms are being employed to analyze large data sets. This has resulted in increased efficiency and productivity in many industries. However, it has also led to concerns about job displacement and unemployment.

Job Displacement:
The rise of AI has raised concerns about job displacement, as AI technologies are becoming increasingly capable of performing tasks previously done by humans. For example, automated machines can now perform complex surgeries with greater precision than human surgeons. This has led to fears that certain jobs will become obsolete, leading to unemployment for those who were previously employed in these industries.

New Job Opportunities:
While AI may replace certain jobs, it also creates new job opportunities. As AI technologies continue to evolve, there will be a greater demand for individuals with technical skills in AI development and programming.
Additionally, jobs that require human interaction and emotional intelligence, such as social work or counseling, may become even more in demand, as they cannot be easily automated.

Job Transformation:
Another potential impact of AI on the job market is job transformation. AI technologies can augment human abilities rather than replacing them entirely. For example, AI-powered tools can assist professionals in making decisions, augmenting their expertise and productivity. This may result in changes in job roles and the need for individuals to adapt their skills to work alongside AI technologies.

Conclusion:
The impact of AI on the job market is still being studied and debated. While AI has the potential to automate certain tasks and potentially lead to job displacement, it also presents opportunities for new jobs and job transformation. It is essential for individuals and organizations to adapt and acquire the necessary skills to navigate these changes in order to stay competitive in the evolving job market. Further research is needed to fully understand the implications of AI for employment and job opportunities.
(Complete Version) Graduation Design (Thesis) Foreign Translation (Original Text)
Graduation Design (Thesis): Foreign Translation (Original Text)

NEW APPLICATIONS OF DATABASES

Relational databases have been in use for over two decades. A large portion of the applications of relational databases are in the commercial world, supporting such tasks as transaction processing for banks and stock exchanges, sales and reservations for a variety of businesses, and inventory and payroll for almost all companies. We study several new applications which have emerged in recent years.

First. Decision-support systems
As the online availability of data has grown, businesses have sought to exploit the available data to make better decisions about how to increase sales. We can extract much information for decision support by using simple SQL queries. Recently, decision support based on data analysis and data mining, or knowledge discovery, has emerged, using data from a variety of sources. Database applications can be broadly classified into transaction processing and decision support. Transaction-processing systems are widely used today, and companies have accumulated a vast amount of information generated by these systems.
The term data mining refers loosely to finding relevant information, or "discovering knowledge," from a large volume of data. Like knowledge discovery in artificial intelligence, data mining attempts to discover statistical rules and patterns automatically from data. However, data mining differs from machine learning in that it deals with large volumes of data, stored primarily on disk. Knowledge discovered from a database can be represented by a set of rules. We can discover rules from a database using one of two models: in the first model, the user is involved directly in the process of knowledge discovery; in the second model, the system is responsible for automatically discovering knowledge from the database, by detecting patterns and correlations in the data. Work on automatic discovery of rules has been influenced strongly by work in the artificial-intelligence community on machine learning. The main differences lie in the volume of data handled in databases, and in the need to access disk.
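The flavor of rule discovery can be shown with a tiny association-rule sketch over made-up market-basket data; this illustrates the support/confidence idea only, not a production mining algorithm:

```python
# Toy transactions: each set is one customer's basket (hypothetical data).
baskets = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions that contain every item in itemset."""
    return sum(itemset <= b for b in baskets) / len(baskets)

def confidence(lhs, rhs):
    """Estimate of P(rhs | lhs): joint support over antecedent support."""
    return support(lhs | rhs) / support(lhs)

# Candidate rule: customers who buy bread also buy milk.
print(support({"bread", "milk"}))                 # 0.5
print(round(confidence({"bread"}, {"milk"}), 3))  # 0.667
```

A real system would scan a disk-resident transaction table and prune candidate itemsets, but the statistics it reports have exactly this form.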
Specialized data-mining algorithms have been developed to handle large volumes of data; the form in which rules are discovered depends on the class of data-mining application. We illustrate rule discovery using two application classes: classification and associations.

Second. Spatial and Geographic Databases
Spatial databases store information related to spatial locations, and provide support for efficient querying and indexing based on spatial locations. Two types of spatial databases are particularly important: Design databases, or computer-aided-design (CAD) databases, are spatial databases used to store design information; typical examples are integrated-circuit and electronic-device layouts. Geographic databases are spatial databases used to store geographic information, such as maps; geographic databases are often called geographic information systems. Geographic data are spatial in nature, but differ from design data in certain ways. Maps and satellite images are typical examples of geographic data. Maps may provide not only location information, such as boundaries, rivers, and roads, but also much more detailed information associated with locations, such as elevation, soil type, land usage, and annual rainfall. Geographic data can be categorized into two types: raster data (which consist of bit maps or pixel maps, in two or more dimensions) and vector data (which are constructed from basic geographic objects). Map data are often represented in vector format.

Third. Multimedia Databases
Recently, there has been much interest in databases that store multimedia data, such as images, audio, and video. Today, multimedia data typically are stored outside the database, in file systems. When the number of multimedia objects is relatively small, features provided by databases are usually not important. Database functionality becomes important when the number of multimedia objects stored is large; issues such as transactional updates, querying facilities, and indexing then become important.
Multimedia objects often have descriptive attributes, such as when they were created, who created them, and to what category they belong. One approach to building a database for such multimedia objects is to use a database for storing the descriptive attributes and for keeping track of the files in which the multimedia objects themselves are stored. However, storing multimedia data outside the database makes it harder to query on the basis of actual multimedia data content. It can also lead to inconsistencies, such as a file that is noted in the database but whose contents are missing, or vice versa. It is therefore desirable to store the data themselves in the database.

Fourth. Mobility and Personal Databases
Large-scale commercial databases have traditionally been stored in central computing facilities. In the case of distributed database applications, there has usually been strong central database and network administration. Two technology trends have combined to create a situation in which this assumption of central control and administration is not entirely correct:
1. The increasingly widespread use of personal computers, and, more important, of laptop or "notebook" computers.
2. The development of a relatively low-cost wireless digital communication infrastructure, based on wireless local-area networks, cellular digital packet networks, and other technologies.
Wireless computing creates a situation where machines no longer have fixed locations at which to materialize the result of a query. In some cases, the location of the user is a parameter of the query. An example is a traveler's information system that provides data on the current route; such queries must be processed based on knowledge of the user's location, direction of motion, and speed. Energy (battery power) is a scarce resource for mobile computers. This limitation influences many aspects of system design. Among the more interesting consequences of the need for energy efficiency is the use of scheduled data broadcasts to reduce the need for mobile systems to transmit queries. Increasing amounts of data may reside on machines administered by users, rather than by database administrators.
Furthermore, these machines may, at times, be disconnected from the network.

Summary
Decision-support systems are gaining importance as companies realize the value of the on-line data collected by their on-line transaction-processing systems. Proposed extensions to SQL, such as the cube operation, ease the analysis of summary data. Data mining seeks to discover knowledge automatically, in the form of statistical rules and patterns, from large databases. Data visualization systems help users to examine large volumes of data and to detect patterns visually. Spatial databases store design data as well as geographic data: design data are stored primarily as vector data, while geographic data consist of a combination of vector and raster data. Multimedia databases are growing in importance; issues such as similarity-based retrieval and delivery of data at guaranteed rates are topics of current research. Mobile computing systems have become common, leading to interest in database systems that can run on such systems. Query processing in such systems may involve lookups on server databases.

Graduation Design (Thesis): Foreign Translation (Translated Text)

New Applications of Databases
We have been using relational databases for more than 20 years. A large portion of relational database applications are in the commercial field, supporting transaction processing for banks and stock exchanges, sales and reservations for various kinds of business, and the inventory and payroll management needed by almost every company.
Undergraduate Graduation Design Foreign Literature and Translation 1
Title: Transit Route Network Design Problem: Review
Source: Internet
Publication date: 2007.1
School (Department): xxx
Major: xxx
Class: xxx
Name: xxx
Student ID: xxx
Supervisor: xxx
Translation date: xxx

Foreign literature:

Transit Route Network Design Problem: Review

Abstract: Efficient design of public transportation networks has attracted much interest in the transport literature and practice, with many models and approaches for formulating the associated transit route network design problem (TRNDP) having been developed. The present paper systematically presents and reviews research on the TRNDP based on the three distinctive parts of the TRNDP setup: design objectives, operating environment parameters, and solution approach.

Introduction
Public transportation is largely considered a viable option for sustainable transportation in urban areas, offering advantages such as mobility enhancement, traffic congestion and air pollution reduction, and energy conservation, while still preserving social equity considerations. Nevertheless, in the past decades, factors such as socioeconomic growth, the need for personalized mobility, the increase in private vehicle ownership, and urban sprawl have led to a shift towards private vehicles and a decrease in public transportation's share in daily commuting (Sinha 2003; TRB 2001; EMTA 2004; ECMT 2002; Pucher et al. 2007). Efforts for encouraging public transportation use focus on improving provided services such as line capacity, service frequency, coverage, reliability, comfort, and service quality, which are among the most important parameters for an efficient public transportation system (Sinha 2003; Vuchic 2004). In this context, planning and designing a cost- and service-efficient public transportation network is necessary for improving its competitiveness and market share.
The problem that formally describes the design of such a public transportation network is referred to as the transit route network design problem (TRNDP); it focuses on the optimization of a number of objectives representing the efficiency of public transportation networks under operational and resource constraints such as the number and length of public transportation routes, allowable service frequencies, and number of available buses (Chakroborty 2003; Fan and Machemehl 2006a,b). The practical importance of designing public transportation networks has attracted considerable interest in the research community, which has developed a variety of approaches and models for the TRNDP, including different levels of design detail and complexity as well as interesting algorithmic innovations. In this paper we offer a structured review of approaches for the TRNDP; researchers will obtain a basis for evaluating existing research and identifying future research paths for further improving TRNDP models. Moreover, practitioners will acquire a detailed presentation of both the process and potential tools for automating the design of public transportation networks, their characteristics, capabilities, and strengths.

Design of Public Transportation Networks
Network design is an important part of the public transportation operational planning process (Ceder 2001). It includes the design of route layouts and the determination of associated operational characteristics such as frequencies, rolling stock types, and so on. As noted by Ceder and Wilson (1986), network design elements are part of the overall operational planning process for public transportation networks; the process includes five steps: (1) design of routes; (2) setting frequencies; (3) developing timetables; (4) scheduling buses; and (5) scheduling drivers.
Route layout design is guided by passenger flows: routes are established to provide direct or indirect connection between locations and areas that generate and attract demand for transit travel, such as residential and activity-related centers (Levinson 1992). For example, passenger flows between a central business district (CBD) and suburbs dictate the design of radial routes, while demand for trips between different neighborhoods may lead to the selection of a circular route connecting them. Anticipated service coverage, transfers, desirable route shapes, and available resources usually determine the structure of the route network. Route shapes are usually constrained by their length and directness (route directness implies that route shapes are as straight as possible between connected points), the usage of given roads, and the overlapping with other transit routes. The desirable outcome is a set of routes connecting locations within a service area, conforming to given design criteria. For each route, frequencies and bus types are the operational characteristics typically determined through design.
Calculations are based on expected passenger volumes along routes, which are estimated empirically or by applying transit assignment techniques, under frequency requirement constraints (minimum and maximum allowed frequencies guaranteeing safety and tolerable waiting times, respectively), desired load factors, fleet size, and availability. These steps, as well as the overall design process, have been largely based upon practical guidelines, the expert judgment of transit planners, and operators' experience (Baaj and Mahmassani 1991). Two handbooks, by Black (1995) and Vuchic (2004), outline frameworks to be followed by planners when designing a public transportation network that include: (1) establishing the objectives for the network; (2) defining the operational environment of the network (road structure, demand patterns, and characteristics); (3) developing; and (4) evaluating alternative public transportation networks. Despite the extensive use of practical guidelines and experience for designing transit networks, researchers have argued that empirical rules may not be sufficient for designing an efficient transit network, and that improvements may lead to better quality and more efficient services.
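The frequency-setting calculation described above, based on peak passenger volume, a target load factor, and allowable frequency bounds, can be sketched in a few lines. The numbers are hypothetical, and this is only the simplest textbook form of the computation:

```python
def route_frequency(peak_volume, bus_capacity, load_factor,
                    f_min=2.0, f_max=20.0):
    """Buses per hour needed to carry the peak hourly passenger volume
    at the desired load factor, clamped to the minimum and maximum
    allowed frequencies (tolerable waiting times and safety)."""
    needed = peak_volume / (bus_capacity * load_factor)
    return min(max(needed, f_min), f_max)

# Hypothetical route: 450 passengers/h, 60-seat buses, 0.9 target load.
print(round(route_frequency(450, 60, 0.9), 2))  # 8.33 buses per hour
```

Real frequency setting also accounts for fleet availability and assignment results along the whole route, but the clamped volume-over-capacity ratio is the core of it.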
For example, Fan and Machemehl (2004) noted that researchers and practitioners have been realizing that systematic and integrated approaches are essential for designing economically and operationally efficient transit networks. A systematic design process implies clear and consistent steps and associated techniques for designing a public transportation network, which is the scope of the TRNDP.

TRNDP: Overview
Research has extensively examined the TRNDP since the late 1960s. In 1979, Newell discussed previous research on the optimal design of bus routes, and Hasselström (1981) analyzed relevant studies and identified the major features of the TRNDP as demand characteristics, objective functions, constraints, passenger behavior, solution techniques, and computational time for solving the problem. An extensive review of existing work on transit network design was provided by Chua (1984), who reported five types of transit system planning: (1) manual; (2) market analysis; (3) systems analysis; (4) systems analysis with interactive graphics; and (5) the mathematical optimization approach. Axhausen and Smith (1984) analyzed existing heuristic algorithms for formulating the TRNDP in Europe, tested them, and discussed their potential implementation in the United States. Ceder and Wilson (1986) reported prior work on the TRNDP and distinguished studies into those that deal with idealized networks and those that focus on actual routes, suggesting that the main features of the TRNDP include demand characteristics, objectives and constraints, and solution methods. At the same period, Van Nes et al. (1988) grouped TRNDP models into six categories: (1) analytical models for relating parameters of the public transportation system; (2) models determining the links to be used for public transportation route construction; (3) models determining routes only; (4) models assigning frequencies to a set of routes; (5) two-stage models for constructing routes and then assigning frequencies; and (6) models for simultaneously determining routes and
frequencies. Spacovic et al. (1994) and Spacovic and Schonfeld (1994) proposed a matrix organization and classified each study according to design parameters examined, objectives anticipated, network geometry, and demand characteristics. Ceder and Israeli (1997) suggested broad categorizations for TRNDP models into passenger flow simulation and mathematical programming models. Russo (1998) adopted the same categorization and noted that mathematical programming models guarantee optimal transit network design but sacrifice the level of detail in passenger representation and design parameters, while simulation models address passenger behavior but use heuristic procedures for obtaining a TRNDP solution. Ceder (2001) enhanced his earlier categorization by classifying TRNDP models into simulation, ideal network, and mathematical programming models. Finally, in a recent series of studies, Fan and Machemehl (2004, 2006a,b) divided TRNDP approaches into practical approaches, analytical optimization models for idealized conditions, and metaheuristic procedures for practical problems. The TRNDP is an optimization problem where objectives are defined, its constraints are determined, and a methodology is selected and validated for obtaining an optimal solution. The TRNDP is described by the objectives of the public transportation network service to be achieved, the operational characteristics and environment under which the network will operate, and the methodological approach for obtaining the optimal network design. Based on this description of the TRNDP, we propose a three-layer structure for organizing TRNDP approaches (Objectives, Parameters, and Methodology). Each layer includes one or more items that characterize each study. The "Objectives" layer incorporates the goals set when designing a public transportation system, such as the minimization of the costs of the system or the maximization of the quality of services provided. The "Parameters" layer describes the operating environment and includes both the design
variables expected to be derived for the transit network (route layouts, frequencies) as well as environmental and operational parameters affecting and constraining that network (for example, allowable frequencies, desired load factors, fleet availability, demand characteristics and patterns, and so on). Finally, the "Methodology" layer covers the logical-mathematical framework and algorithmic tools necessary to formulate and solve the TRNDP. The proposed structure follows the basic concepts toward setting up a TRNDP: deciding upon the objectives, selecting the transit network items and characteristics to be designed, setting the necessary constraints for the operating environment, and formulating and solving the problem.

TRNDP: Objectives
Public transportation serves a very important social role while attempting to do so at the lowest possible operating cost. Objectives for designing daily operations of a public transportation system should encompass both angles. The literature suggests that most studies actually focus on both service and economic efficiency when designing such a system. Practical goals for the TRNDP can be briefly summarized as follows (Fielding 1987; van Oudheusden et al. 1987; Black 1995): (1) user benefit maximization; (2) operator cost minimization; (3) total welfare maximization; (4) capacity maximization; (5) energy conservation and protection of the environment; and (6) individual parameter optimization. Mandl (1980) indicated that public transportation systems have different objectives to meet.
He commented, "even a single objective problem is difficult to attack" (p. 401). Often, these objectives are controversial, since cutbacks in operating costs may require reductions in the quality of services. Van Nes and Bovy (2000) pointed out that selected objectives influence the attractiveness and performance of a public transportation network. According to Ceder and Wilson (1986), minimization of generalized cost or time, or maximization of consumer surplus, were the most common objectives selected when developing transit network design models. Berechman (1993) agreed that maximization of total welfare is the most suitable objective for designing a public transportation system, while Van Nes and Bovy (2000) argued that the minimization of total user and system costs seems the most suitable and least complicated objective (compared to total welfare), and that profit maximization leads to nonattractive public transportation networks. As can be seen in Table 1, most studies seek to optimize total welfare, which incorporates benefits to the user and to the operator. User benefits may include travel, access and waiting cost minimization, minimization of transfers, and maximization of coverage, while benefits for the system are maximum utilization and quality of service, minimization of operating costs, maximization of profits, and minimization of the fleet size used. Most commonly, total welfare is represented by the minimization of user and system costs. Some studies address specific objectives from the user, the operator, or the environmental perspective. Passenger convenience, the number of transfers, profit and capacity maximization, travel time minimization, and fuel consumption minimization are such objectives. These studies either attempt to simplify the complex objective functions needed to set up the TRNDP (Newell 1979; Baaj and Mahmassani 1991; Chakroborty and Dwivedi 2002), or investigate specific aspects of the problem, such as objectives (Delle Site and Fillipi 2001) and the solution methodology (Zhao and Zeng 2006; Yu and
Yang 2006). Total welfare is, in a sense, a compromise between objectives. Moreover, as reported by researchers such as Baaj and Mahmassani (1991), Bielli et al. (2002), Chakroborty and Dwivedi (2002), and Chakroborty (2003), transit network design is inherently a multiobjective problem. Multiobjective models for solving the TRNDP have been based on the calculation of indicators representing different objectives for the problem at hand, both from the user and operator perspectives, such as travel and waiting times (user), and capacity and operating costs (operator). In their multiobjective model for the TRNDP, Baaj and Mahmassani (1991) relied on the planner's judgment and experience for selecting the optimal public transportation network, based on a set of indicators. In contrast, Bielli et al. (2002) and Chakroborty and Dwivedi (2002) combined indicators into an overall, weighted-sum value, which served as the criterion for determining the optimal transit network.

TRNDP: Parameters
There are multiple characteristics and design attributes to consider for a realistic representation of a public transportation network. These form the parameters for the TRNDP. Part of these parameters is the problem's set of decision variables, which define its layout and operational characteristics (frequencies, vehicle size, etc.). Another set of design parameters represents the operating environment (network structure, demand characteristics, and patterns), operational strategies and rules, and available resources for the public transportation network.
These form the constraints needed to formulate the TRNDP and are a priori fixed, decided upon, or assumed.

Decision Variables
The most common decision variables for the TRNDP are the routes and frequencies of the public transportation network (Table 1). Simplified early studies derived optimal route spacing between predetermined parallel or radial routes, along with optimal frequencies per route (Holroyd 1967; Byrne and Vuchic 1972; Byrne 1975, 1976; Kocur and Hendrickson 1982; Vaughan 1986), while later models dealt with the development of optimal route layouts and frequency determination. Other studies additionally considered fares (Kocur and Hendrickson 1982; Morlok and Viton 1984; Chang and Schonfeld 1991; Chien and Spacovic 2001), zones (Tsao and Schonfeld 1983; Chang and Schonfeld 1993a), stop locations (Black 1979; Spacovic and Schonfeld 1994; Spacovic et al. 1994; Van Nes 2003; Yu and Yang 2006), and bus types (Delle Site and Filippi 2001).

Network Structure
Some early studies focused on the design of systems in simplified radial (Byrne 1975; Black 1979; Vaughan 1986) or rectangular grid road networks (Hurdle 1973; Byrne and Vuchic 1972; Tsao and Schonfeld 1984). However, most approaches since the 1980s were either applied to realistic, irregular grid networks, or the network structure was of no importance for the proposed model and therefore not specified at all.

Demand Patterns
Demand patterns describe the nature of the flows of passengers expected to be accommodated by the public transportation network and therefore dictate its structure. For example, transit trips from a number of origins (for example, stops in a neighborhood) to a single destination (such as a bus terminal in the CBD of a city), and vice versa, are characterized as many-to-one (or one-to-many) transit demand patterns. These patterns are typically encountered in public transportation systems connecting CBDs with suburbs, and imply a structure of radial or parallel routes ending at a single point; models for patterns of that type have been proposed by Byrne and
Vuchic (1972), Salzborn (1972), Byrne (1975, 1976), Kocur and Hendrickson (1982), Morlok and Viton (1984), Chang and Schonfeld (1991, 1993a), Spacovic and Schonfeld (1994), Spacovic et al. (1994), Van Nes (2003), and Chien et al. (2003). On the other hand, many-to-many demand patterns correspond to flows between multiple origins and destinations within an urban area, suggesting that the public transportation network is expected to connect various points in an area.

Demand Characteristics
Demand can be characterized either as "fixed" (or "inelastic") or as "elastic"; the latter means that demand is affected by the performance and services provided by the public transportation network. Lee and Vuchic (2005) distinguished between two types of elastic demand: (1) demand per mode affected by transportation services, with total demand for travel kept constant; and (2) total demand for travel varying as a result of the performance of the transportation system and its modes. Fan and Machemehl (2006b) noted that the complexity of the TRNDP has led researchers into assuming fixed demand, despite its inherent elastic nature. However, since the early 1980s, studies have included aspects of elastic demand in modeling the TRNDP (Hasselström 1981; Kocur and Hendrickson 1982). Van Nes et al. (1988) applied a simultaneous distribution-modal split model based on transit deterrence for estimating demand for public transportation. In a series of studies, Chang and Schonfeld (1991, 1993a,b) and Spacovic et al. (1994) estimated demand as a direct function of travel times and fares with respect to their elasticities, while Chien and Spacovic (2001) followed the same approach, assuming that demand is additionally affected by headways, route spacing, and fares. Finally, studies by Leblanc (1988), Imam (1998), Cipriani et al. (2005), Lee and Vuchic (2005), and Fan and Machemehl (2006a) based demand estimation on mode choice models for estimating transit demand as a function of total demand for
travel.

Chinese translation: Transit Route Network Design Problem: Review. Abstract: The efficient design of public transportation networks has attracted much attention in transportation theory and practice, and many models and methods have accordingly been developed for formulating the associated transit route network design problem (TRNDP).
Graduation Design (Thesis) Foreign Material Translation
1. Foreign original (photocopy)  2. Translation of the foreign material

Energy-Saving Intelligent Lighting Control System
Sherif Matta and Syed Masud Mahmud, Senior Member, IEEE
Wayne State University, Detroit, Michigan 48202
Sherif.Matta@, smahmud@

Abstract
Energy conservation has become one of the most challenging issues today.
One of the biggest wastes of energy comes from the inefficient use of electricity consumed by artificial light sources (luminaires or bulbs). This paper presents a system, with detailed design steps, that saves electrical energy by regulating the intensity of artificial lighting to a satisfactory level. When lighting is used during the day, it economizes as much as possible: when sensor readings exceed the preset lighting scheme, a dimming system that improves daylight harvesting and control is brought in. The design principle is to control the blinds or curtains, where possible, in such a way that daylight is exploited; otherwise, the system uses the building's artificial interior light sources. Luminous flux is controlled by adjusting the opening angle of the blinds, while the intensity of the artificial light sources is controlled through pulse-width modulation (PWM) of the power delivered to DC lamps, or by chopping the AC wave for AC bulbs.
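For the DC case, the effect of PWM dimming on a resistive lamp can be sketched as follows. The lamp values are hypothetical, and real luminaires are not purely resistive, so this is only a first-order picture of the technique:

```python
def pwm_average_power(v_supply, r_lamp, duty_cycle):
    """Average power delivered to a resistive DC lamp when its supply
    is switched at the given PWM duty cycle (0.0 to 1.0)."""
    full_power = v_supply ** 2 / r_lamp   # power when fully on
    return full_power * duty_cycle

# Hypothetical 12 V lamp with a 6-ohm filament, dimmed to 40% duty.
print(round(pwm_average_power(12.0, 6.0, 0.4), 2))  # 9.6 (vs 24.0 fully on)
```

The controller's job, in these terms, is to pick the lowest duty cycle (and blind angle) that still meets the user's requested light level.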
The system uses a Controller Area Network (CAN) as the medium for sensor and actuator communication. It is modular and can span large buildings. An advantage of the design is that it gives the user a single point of operation: the light level the user desires. The controller's function is to determine a way to deliver the required amount of light with minimum energy consumption. One of the main considerations is ease of installation and low cost of the system components. The system shows significant energy savings and is feasible to implement in practice.
Keywords: intelligent lighting control system, energy saving, luminous flux, blind control, Controller Area Network (CAN), light intensity control

I. Introduction
Over the years, as the number of buildings and of rooms within buildings has increased dramatically, wasted energy, inefficient light control, and lighting distribution have become difficult to manage. Moreover, relying on users' manual control of lighting to save energy is impractical. Many technologies and sensors have recently turned to managing excessive energy consumption, for example by using motion detection to sense activity within an area. Automatically turning the lights on when someone enters a room provides convenience, and such systems reduce lighting energy use by turning the lights off shortly after the last occupant leaves the room.
Graduation Design Foreign Translation: English Original Manuscript
Harmonic source identification and current separation in distribution systems

Yong Zhao a,b, Jianhua Li a, Daozhi Xia a,*
a Department of Electrical Engineering, Xi'an Jiaotong University, 28 West Xianning Road, Xi'an, Shaanxi 710049, China
b Fujian Electric Power Dispatch and Telecommunication Center, 264 Wusi Road, Fuzhou, Fujian, 350003, China

Abstract
To effectively diminish harmonic distortions, the locations of harmonic sources have to be identified, and their currents have to be separated from those absorbed by conventional linear loads connected to the same CCP. In this paper, based on the intrinsic difference between linear and nonlinear loads in their V–I characteristics, and by utilizing a new simplified harmonic source model, a new principle for harmonic source identification and harmonic current separation is proposed. By using this method, not only can the existence of a harmonic source be determined, but the contributions of the harmonic source and the linear loads to harmonic voltage distortion can also be distinguished. The detailed procedure based on least squares approximation is given. The effectiveness of the approach is illustrated by test results on a composite load.
2004 Elsevier Ltd. All rights reserved.

Keywords: Distribution system; Harmonic source identification; Harmonic current separation; Least squares approximation

1. Introduction
Harmonic distortion has experienced a continuous increase in distribution systems owing to the growing use of nonlinear loads. Many studies have shown that harmonics may cause serious effects on power systems, communication systems, and various apparatus [1–3]. Harmonic voltages at each point on a distribution network are not only determined by the harmonic currents produced by harmonic sources (nonlinear loads), but are also related to all linear loads (harmonic current sinks) as well as to the structure and parameters of the network.
To effectively evaluate and diminish the harmonic distortion in power systems, the locations of harmonic sources have to be identified and the responsibility for the distortion caused by individual customers has to be separated.

As to harmonic source identification, the negative harmonic power is most commonly considered essential evidence of an existing harmonic source [4–7]. Several approaches aiming at evaluating the contribution of an individual customer can also be found in the literature. Schemes based on power factor measurement to penalize the customer's harmonic currents are discussed in Ref. [8]. However, it would be unfair to apply economic penalties if we could not distinguish whether the measured harmonic current comes from a nonlinear load or from a linear load.

In fact, the intrinsic difference between linear and nonlinear loads lies in their V–I characteristics. Harmonic currents of a linear load are in linear proportion to its supply harmonic voltages of the same order, whereas the harmonic currents of a nonlinear load are complex nonlinear functions of its supply fundamental and harmonic voltage components of all orders.
To successfully identify and isolate harmonic sources in an individual customer, or in several customers connected at the same point in the network, the V–I characteristics should be involved, and measurements of voltages and currents under several different supply conditions should be carried out. The existing approaches, based on measurements of the voltage and current spectrum or of harmonic power at a certain instant, cannot reflect the V–I characteristics, so they may not provide reliable information about the existence and contribution of harmonic sources; this has been substantiated by theoretical analysis and experimental research [9,10].

In this paper, to approximate the nonlinear characteristics and to facilitate harmonic source identification and harmonic current separation, a new simplified harmonic source model is proposed. Then, based on the difference between linear and nonlinear loads in their V–I characteristics, and by utilizing the harmonic source model, a new principle for harmonic source identification and harmonic current separation is presented. By using this method, not only can the existence of a harmonic source be determined, but the contributions of the harmonic sources and the linear loads can also be separated. A detailed procedure for harmonic source identification and harmonic current separation based on least squares approximation is presented. Finally, test results on a composite load containing linear and nonlinear loads are given to illustrate the effectiveness of the approach.

2. New principle for harmonic source identification and current separation

Consider a composite load to be studied in a distribution system, which may represent an individual consumer or a group of customers supplied by a common feeder in the system.
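The fundamental and harmonic phasors V_h ∠ θ_h used throughout the analysis below can be recovered from one sampled period of a waveform with a discrete Fourier transform. The following sketch is illustrative (the sample count and the synthetic test signal are assumptions); it uses the sine-referenced convention v(t) = Σ_h V_h sin(2πht/T + θ_h) of Eq. (1).

```python
import cmath
import math

def harmonic_phasor(samples, h):
    """Return (V_h, theta_h) of the h-th harmonic of one sampled period,
    for the sine-referenced series v(t) = sum_h V_h*sin(2*pi*h*t/T + theta_h)."""
    n = len(samples)
    # complex Fourier coefficient of order h, scaled to peak amplitude
    c = sum(samples[k] * cmath.exp(-2j * math.pi * h * k / n)
            for k in range(n)) * 2 / n
    # for V*sin(x + theta), c = V * exp(j*(theta - pi/2)); undo the offset
    return abs(c), cmath.phase(c) + math.pi / 2

# synthesize one period with a fundamental and a 5th harmonic (test values)
N = 1024
v = [1.0 * math.sin(2 * math.pi * k / N)
     + 0.2 * math.sin(2 * math.pi * 5 * k / N + 0.3) for k in range(N)]
V5, th5 = harmonic_phasor(v, 5)   # recovers amplitude 0.2, phase 0.3
```

With integer-period sampling there is no spectral leakage, so each harmonic is recovered exactly up to floating-point error.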
To identify whether it contains any harmonic source, and to separate the harmonic currents generated by the harmonic sources from those absorbed by conventional linear loads in the measured total harmonic currents of the composite load, the following assumptions are made.

(a) The supply voltage and the load currents are both periodical waveforms with period T, so that they can be expressed by Fourier series as

v(t) = Σ_{h=1}^{∞} V_h sin(2πht/T + θ_h),  i(t) = Σ_{h=1}^{∞} I_h sin(2πht/T + φ_h)    (1)

The fundamental frequency and harmonic components can further be represented by the corresponding phasors

V_h = V_hr + jV_hi = V_h ∠ θ_h,  I_h = I_hr + jI_hi = I_h ∠ φ_h,  h = 1, 2, 3, ..., n    (2)

(b) During the period of identification the composite load is stationary, i.e. both its composition and the circuit parameters of all individual loads keep unchanged.

Under the above assumptions, the relationship between the total harmonic currents of the harmonic sources (denoted by subscript N) in the composite load and the supply voltage, i.e. the V–I characteristics, can be described by the nonlinear equation

i_N(t) = f(v(t))    (3)

and can also be represented in terms of phasors as

I_Nh = [I_Nhr(V_1, V_2r, V_2i, ..., V_nr, V_ni); I_Nhi(V_1, V_2r, V_2i, ..., V_nr, V_ni)],  h = 2, 3, ..., n    (4)

Note that in Eq. (4) the initial (reference) time of the voltage waveform has been selected such that the phase angle θ_1 becomes 0, so that V_1i = 0 and V_1r = V_1 in Eq. (2), for simplicity.

The V–I characteristics of the linear part of the composite load (denoted by subscript L) can be represented by its equivalent harmonic admittance Y_Lh = G_Lh + jB_Lh, and the total harmonic currents absorbed by the linear part can be described as

[I_Lhr; I_Lhi] = [G_Lh, −B_Lh; B_Lh, G_Lh] [V_hr; V_hi],  h = 2, 3, ..., n    (5)

From Eqs.
(4) and (5), the whole harmonic currents absorbed by the composite load can be expressed as

[I_hr; I_hi] = [I_Lhr; I_Lhi] − [I_Nhr(V_1, V_2r, V_2i, ..., V_nr, V_ni); I_Nhi(V_1, V_2r, V_2i, ..., V_nr, V_ni)],  h = 2, 3, ..., n    (6)

As the V–I characteristics of a harmonic source are nonlinear, Eq. (6) can neither be used directly for harmonic source identification nor for harmonic current separation; to facilitate the work in practice, simplified methods should be involved. The common practice in harmonic studies is to represent nonlinear loads by means of current harmonic sources or equivalent Norton models [11,12]. However, these models are not of sufficient precision, and a new simplified model is needed.

From the engineering point of view, the variations of V_hr and V_hi ordinarily fall within a ±3% bound of the rated bus voltage, while the change of V_1 is usually less than ±5%. Within such a range of supply voltages, the following simplified linear relation is used in this paper to approximate the harmonic source characteristics of Eq. (4):

I_Nh = [a_h0 + a_h1 V_1 + a_h2r V_2r + a_h2i V_2i + ... + a_hnr V_nr + a_hni V_ni; b_h0 + b_h1 V_1 + b_h2r V_2r + b_h2i V_2i + ... + b_hnr V_nr + b_hni V_ni],  h = 2, 3, ..., n    (7)

The precision and superiority of this simplified model will be illustrated in Section 4 by test results on several kinds of typical harmonic sources.

The total harmonic current (Eq. (6)) then becomes

[I_hr; I_hi] = [G_Lh, −B_Lh; B_Lh, G_Lh] [V_hr; V_hi] − [a_h0 + a_h1 V_1 + ... + a_hnr V_nr + a_hni V_ni; b_h0 + b_h1 V_1 + ... + b_hnr V_nr + b_hni V_ni],  h = 2, 3, ..., n    (8)

It can be seen from the above equations that the harmonic currents of the harmonic sources (nonlinear loads) and of the linear loads differ intrinsically in their V–I characteristics. The harmonic current component drawn by the linear loads is uniquely determined by the harmonic voltage component of the same order in the supply voltage.
On the other hand, the harmonic current component of the nonlinear loads contains not only a term caused by the same-order harmonic voltage but also a constant term and terms caused by the fundamental and harmonic voltages of all other orders. This property will be used for identifying the existence of harmonic sources in a composite load.

The test results in Section 4 demonstrate that, in the harmonic current of nonlinear loads, the sum of the constant term and the component related to the fundamental-frequency voltage is dominant, whereas the other components are negligible. A further approximation of Eq. (7) can therefore be made as follows. Let

I'_Nh = [a_h0 + a_h1 V_1 + Σ_{k=2, k≠h}^{n} (a_hkr V_kr + a_hki V_ki); b_h0 + b_h1 V_1 + Σ_{k=2, k≠h}^{n} (b_hkr V_kr + b_hki V_ki)]

I''_Nh = [a_hhr, a_hhi; b_hhr, b_hhi] [V_hr; V_hi]

I''_Lh = I_Lh − I''_Nh = [a''_hhr, a''_hhi; b''_hhr, b''_hhi] [V_hr; V_hi]

where

[a''_hhr, a''_hhi; b''_hhr, b''_hhi] = [G_Lh, −B_Lh; B_Lh, G_Lh] − [a_hhr, a_hhi; b_hhr, b_hhi],  h = 2, 3, ..., n

The total harmonic current of the composite load becomes

I_h = [I_hr; I_hi] = I''_Lh − I'_Nh = [a''_hhr, a''_hhi; b''_hhr, b''_hhi] [V_hr; V_hi] − [a_h0 + a_h1 V_1 + Σ_{k=2, k≠h}^{n} (a_hkr V_kr + a_hki V_ki); b_h0 + b_h1 V_1 + Σ_{k=2, k≠h}^{n} (b_hkr V_kr + b_hki V_ki)],  h = 2, 3, ..., n    (9)

By neglecting I''_Nh in the harmonic current of the nonlinear load and adding it to the harmonic current of the linear load, I'_Nh can be deemed the harmonic current of the nonlinear load, while I''_Lh can be taken as the harmonic current of the linear load. I'_Nh = 0 means the composite load contains no harmonic sources, while I'_Nh ≠ 0 signifies that harmonic sources may exist in the composite load. As the neglected term I''_Nh is not dominant, this simplification does not introduce significant error into the total harmonic current of the nonlinear load; yet it is what makes harmonic source identification and current separation possible.

3.
Identification procedure

In order to identify the existence of harmonic sources in a composite load, the parameters in Eq. (9) should first be determined, i.e.

C_hr = [a_h0, a_h1, a_h2r, a_h2i, ..., a''_hhr, a''_hhi, ..., a_hnr, a_hni]
C_hi = [b_h0, b_h1, b_h2r, b_h2i, ..., b''_hhr, b''_hhi, ..., b_hnr, b_hni]

For this purpose, measurement of different supply voltages and the corresponding harmonic currents of the composite load should be repeated several times within a short period while the composite load stays stationary. The change of supply voltage can, for example, be obtained by switching some shunt capacitors in or out, disconnecting a parallel transformer, or changing the tap position of transformers with OLTC. The least squares approach can then be used to estimate the parameters from the measured voltages and currents. The identification procedure is as follows.

(1) Perform the test m (m ≥ 2n) times to get the measured fundamental-frequency and harmonic voltage and current phasors V_h^(k) ∠ θ_h^(k), I_h^(k) ∠ φ_h^(k), k = 1, 2, ..., m; h = 1, 2, ..., n.

(2) For k = 1, 2, ..., m, shift the phasors so that the fundamental voltage phase angle becomes zero (θ_1^(k) = 0) and change them into orthogonal components, i.e.

V_1r^(k) = V_1^(k),  V_1i^(k) = 0
V_hr^(k) = V_h^(k) cos(θ_h^(k) − h θ_1^(k)),  V_hi^(k) = V_h^(k) sin(θ_h^(k) − h θ_1^(k))
I_hr^(k) = I_h^(k) cos(φ_h^(k) − h θ_1^(k)),  I_hi^(k) = I_h^(k) sin(φ_h^(k) − h θ_1^(k)),  h = 2, 3, ..., n

(3) Let

V^(k) = [1, V_1^(k), V_2r^(k), V_2i^(k), ..., V_hr^(k), V_hi^(k), ..., V_nr^(k), V_ni^(k)]^T,  k = 1, 2, ..., m
X = [V^(1), V^(2), ..., V^(m)]^T
W_hr = [I_hr^(1), I_hr^(2), ..., I_hr^(m)]^T
W_hi = [I_hi^(1), I_hi^(2), ..., I_hi^(m)]^T

Minimize Σ_{k=1}^{m} (I_hr^(k) − C_hr V^(k))^2 and Σ_{k=1}^{m} (I_hi^(k) − C_hi V^(k))^2, and determine the parameters C_hr and C_hi by the least squares approach as [13]

C_hr = (X^T X)^{-1} X^T W_hr,  C_hi = (X^T X)^{-1} X^T W_hi    (10)

(4) By using Eq.
(9), calculate I''_Lh and I'_Nh with the obtained C_hr and C_hi; the existence of harmonic sources is then identified and the harmonic current separated.

It can be seen that in the course of model construction, harmonic source identification and harmonic current separation, the operating condition of the supply system must be changed, and the harmonic voltages and currents measured, m times. The more accurate the model, the more manipulations are necessary. To compromise between the number of switching operations needed and the accuracy of the results, the proposed models for the nonlinear load (Eq. (7)) and the composite load (Eq. (9)) can be further simplified by considering only the dominant terms of Eq. (7), i.e.

I_Nh = [I_Nhr; I_Nhi] = [a_h0 + a_h1 V_1; b_h0 + b_h1 V_1] + [a_hhr, a_hhi; b_hhr, b_hhi] [V_hr; V_hi],  h = 2, 3, ..., n    (11)

I'_Nh = [a_h0 + a_h1 V_1; b_h0 + b_h1 V_1]

I_h = [I_hr; I_hi] = I'_Lh − I'_Nh = [a''_hhr, a''_hhi; b''_hhr, b''_hhi] [V_hr; V_hi] − [a_h0 + a_h1 V_1; b_h0 + b_h1 V_1],  h = 2, 3, ..., n    (12)

In this case, the corresponding equations in the previous procedure become

C_hr = [a_h0, a_h1, a''_hhr, a''_hhi],  C_hi = [b_h0, b_h1, b''_hhr, b''_hhi]
V^(k) = [1, V_1^(k), V_hr^(k), V_hi^(k)]^T

Similarly, I'_Nh and I'_Lh can still be taken as the harmonic currents caused by the nonlinear load and the linear load, respectively.

4. Experimental validation

4.1. Model accuracy

To demonstrate the validity of the proposed harmonic source models, simulations are performed on three kinds of typical nonlinear loads: a three-phase six-pulse rectifier, a single-phase capacitor-filtered rectifier, and an ac arc furnace under stationary operating conditions. Diagrams of the three-phase six-pulse rectifier and the single-phase capacitor-filtered rectifier are shown in Figs. 1 and 2 [14,15], respectively; the V–I characteristic of the arc furnace is simplified as shown in Fig. 3 [16]. The harmonic currents used in the simulation test are precisely calculated from their mathematical models.
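The least squares step of Eq. (10) can be illustrated in miniature with only the dominant terms a_h0 + a_h1 V_1 of Eq. (11). The closed-form two-parameter fit below stands in for the general (X^T X)^{-1} X^T W solve; the "true" coefficients reuse the fifth-harmonic values quoted later in Section 4 as ground truth, and the voltage samples are arbitrary points in the assumed ±5% band.

```python
def fit_affine(xs, ys):
    """Ordinary least squares for I = a0 + a1*V (two-parameter special case
    of the C = (X^T X)^-1 X^T W estimate of Eq. (10))."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a1 = sxy / sxx
    a0 = my - a1 * mx
    return a0, a1

# m "measurements" under slightly different supply voltages (hypothetical grid)
V1 = [0.95, 0.98, 1.00, 1.02, 1.05]
true_a0, true_a1 = -0.0074, 0.3939          # fifth-harmonic values from Section 4
I5r = [true_a0 + true_a1 * v for v in V1]   # noise-free for clarity
a0, a1 = fit_affine(V1, I5r)                # recovers the coefficients
```

With noise-free data the fit is exact; in practice measurement noise makes the m ≥ 2n redundancy of step (1) essential.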
As to the supply voltage, V_1^(k) is assumed to be uniformly distributed between 0.95 and 1.05, and V_hr^(k) and V_hi^(k) (k = 1, 2, ..., m) are uniformly distributed between −0.03 and 0.03, with base voltage 10 kV and base power 1 MVA.

Fig. 1. Diagram of three-phase six-pulse rectifier.
Fig. 2. Diagram of single-phase capacitor-filtered rectifier.
Fig. 3. Approximate V–I characteristics of arc furnace.

Three different models, including the harmonic current source (constant current) model, the Norton model and the proposed simplified model, are simulated and estimated by the least squares approach for comparison. For the three-phase six-pulse rectifier with fundamental current I_1 = 1.7621, the parameters in the simplified model for the fifth and seventh harmonic currents are listed in Table 1.

To compare the accuracy of the three models, the means and standard deviations of the errors in I_hr, I_hi and I_h between the estimated values and the simulated actual values are calculated for each model. The error comparison of the three models on the three-phase six-pulse rectifier is shown in Table 2, where m_hr, m_hi and m_ha denote the means, and s_hr, s_hi and s_ha the standard deviations. Note that I_1 and I_h in Table 2 are the current values caused by the rated pure sinusoidal supply voltage. Error comparisons for the single-phase capacitor-filtered rectifier and the arc furnace load are listed in Tables 3 and 4, respectively.

It can be seen from the above test results that the accuracy of the proposed model differs between nonlinear loads, and that for a given load the accuracy decreases as the harmonic order increases. However, the proposed model is always more accurate than the other two models. It can also be seen from Table 1 that the components a_50 + a_51 V_1 ≈ −0.0074 + 0.3939 ≈ 0.3865 and b_50 + b_51 V_1 ≈ 0.0263 + 0.0623 ≈ 0.0886, while the components a_55 V_5r and b_55 V_5i will not exceed 0.2676 × 0.03 ≈ 0.008 and 0.9675 × 0.03 ≈ 0.029, respectively.
The result shows that the fifth harmonic current caused by the sum of the constant term and the fundamental-voltage term is about 10 times that caused by the harmonic voltage of the same order, so the former is dominant in the harmonic current of the three-phase six-pulse rectifier. The same situation holds for other harmonic orders and other nonlinear loads.

4.2. Effectiveness of harmonic source identification and current separation

To show the effectiveness of the proposed harmonic source identification method, simulations are performed on a composite load containing a linear load (30%) and nonlinear loads consisting of a three-phase six-pulse rectifier (30%), a single-phase capacitor-filtered rectifier (20%) and an ac arc furnace load (20%). For simplicity, only the errors of the third-order harmonic currents of the linear and nonlinear loads are listed in Table 5, where I_N3 denotes the third-order harmonic current corresponding to the rated pure sinusoidal supply voltage; m_N3r, m_N3i, m_N3a and m_L3r, m_L3i, m_L3a are the error means of I_N3r, I_N3i, I_N3 and I_L3r, I_L3i, I_L3 between the simulated actual values and the estimated values; s_N3r, s_N3i, s_N3a and s_L3r, s_L3i, s_L3a are the standard deviations.

Table 4. Error comparison on the arc furnace.

It can be seen from Table 5 that the current errors of the linear load are smaller than those of the nonlinear loads. This is because the errors in the nonlinear load currents are due both to the model error and to neglecting the components related to harmonic voltages of the same order, whereas only the latter introduces errors into the linear load currents. Moreover, the more precise the composite load model is, the less error is introduced. However, even with the very simple model of Eq. (12), the existence of harmonic sources can be correctly identified and the harmonic currents of the linear and nonlinear loads can be effectively separated.

5.
Conclusions

In this paper, from an engineering point of view, a new linear model is first presented for representing harmonic sources. On the basis of the intrinsic difference between linear and nonlinear loads in their V–I characteristics, and by using the proposed harmonic source model, a new concise principle for identifying harmonic sources and separating harmonic source currents from those of linear loads is proposed. The detailed modelling and identification procedure is also developed, based on the least squares approximation approach. Test results on several kinds of typical harmonic sources reveal that the simplified model is of sufficient precision and is superior to other existing models. The effectiveness of the harmonic source identification approach is illustrated using a composite nonlinear load.

Acknowledgements

The authors wish to acknowledge the financial support of the National Natural Science Foundation of China for this project, under Research Program Grant No. 59737140.

References

[1] IEEE Working Group on Power System Harmonics. The effects of power system harmonics on power system equipment and loads. IEEE Trans Power Apparatus Syst 1985;9:2555–63.
[2] IEEE Working Group on Power System Harmonics. Power line harmonic effects on communication line interference. IEEE Trans Power Apparatus Syst 1985;104(9):2578–87.
[3] IEEE Task Force on the Effects of Harmonics. Effects of harmonics on equipment. IEEE Trans Power Deliv 1993;8(2):681–8.
[4] Heydt GT. Identification of harmonic sources by a state estimation technique. IEEE Trans Power Deliv 1989;4(1):569–75.
[5] Ferach JE, Grady WM, Arapostathis A. An optimal procedure for placing sensors and estimating the locations of harmonic sources in power systems. IEEE Trans Power Deliv 1993;8(3):1303–10.
[6] Ma H, Girgis AA. Identification and tracking of harmonic sources in a power system using a Kalman filter. IEEE Trans Power Deliv 1996;11(3):1659–65.
[7] Hong YY, Chen YC.
Application of algorithms and artificial intelligence approach for locating multiple harmonics in distribution systems. IEE Proc. Gener. Transm. Distrib. 1999;146(3):325–9.
[8] Mceachern A, Grady WM, Moncerief WA, Heydt GT, Mcgranaghan M. Revenue and harmonics: an evaluation of some proposed rate structures. IEEE Trans Power Deliv 1995;10(1):474–82.
[9] Xu W. Power direction method cannot be used for harmonic source detection. Power Engineering Society Summer Meeting, IEEE; 2000. p. 873–6.
[10] Sasdelli R, Peretto L. A VI-based measurement system for sharing the customer and supply responsibility for harmonic distortion. IEEE Trans Instrum Meas 1998;47(5):1335–40.
[11] Arrillaga J, Bradley DA, Bodger PS. Power system harmonics. New York: Wiley; 1985.
[12] Thunberg E, Soder L. A Norton approach to distribution network modeling for harmonic studies. IEEE Trans Power Deliv 1999;14(1):272–7.
[13] Giordano AA, Hsu FM. Least squares estimation with applications to digital signal processing. New York: Wiley; 1985.
[14] Xia D, Heydt GT. Harmonic power flow studies. Part I. Formulation and solution. IEEE Trans Power Apparatus Syst 1982;101(6):1257–65.
[15] Mansoor A, Grady WM, Thallam RS, Doyle MT, Krein SD, Samotyj MJ. Effect of supply voltage harmonics on the input current of single-phase diode bridge rectifier loads. IEEE Trans Power Deliv 1995;10(3):1416–22.
[16] Varadan S, Makram EB, Girgis AA. A new time domain voltage source model for an arc furnace using EMTP. IEEE Trans Power Deliv 1996;11(3):1416–22.
Graduation Project (Thesis) Foreign Literature: Original Text and Translation
Chapter 11. Cipher Techniques

11.1 Problems

The use of a cipher without consideration of the environment in which it is to be used may not provide the security that the user expects. Three examples will make this point clear.

11.1.1 Precomputing the Possible Messages

Simmons discusses the use of a "forward search" to decipher messages enciphered for confidentiality using a public key cryptosystem [923]. His approach is to focus on the entropy (uncertainty) in the message. To use an example from Section 10.1 (page 246), Cathy knows that Alice will send one of two messages—BUY or SELL—to Bob. The uncertainty is which one Alice will send. So Cathy enciphers both messages with Bob's public key. When Alice sends the message, Cathy intercepts it and compares the ciphertext with the two she computed. From this, she knows which message Alice sent.

Simmons' point is that if the plaintext corresponding to intercepted ciphertext is drawn from a (relatively) small set of possible plaintexts, the cryptanalyst can encipher the set of possible plaintexts and simply search that set for the intercepted ciphertext. Simmons demonstrates that the size of the set of possible plaintexts may not be obvious. As an example, he uses digitized sound. The initial calculations suggest that the number of possible plaintexts for each block is 2^32. Using forward search on such a set is clearly impractical, but after some analysis of the redundancy in human speech, Simmons reduces the number of potential plaintexts to about 100,000. This number is small enough so that forward searches become a threat.

This attack is similar to attacks to derive the cryptographic key of symmetric ciphers based on chosen plaintext (see, for example, Hellman's time-memory tradeoff attack [465]). However, Simmons' attack is for public key cryptosystems and does not reveal the private key.
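The forward search is easy to sketch with textbook (unpadded, hence deterministic) RSA. The tiny primes and the message encoding below are illustrative assumptions only.

```python
# Textbook RSA with toy parameters -- deterministic, hence searchable.
p, q, e = 61, 53, 17
n = p * q                                     # Bob's public modulus (3233)

def encrypt(m_int):
    """Unpadded RSA encryption with Bob's public key (e, n)."""
    return pow(m_int, e, n)

def forward_search(ciphertext, candidates):
    """Cathy's attack: encipher every plausible plaintext under Bob's
    public key and look the intercepted ciphertext up in the table."""
    table = {encrypt(int.from_bytes(m.encode(), 'big') % n): m
             for m in candidates}
    return table.get(ciphertext)

# Alice sends one of two known messages; Cathy intercepts the ciphertext.
intercepted = encrypt(int.from_bytes(b'BUY', 'big') % n)
guess = forward_search(intercepted, ['BUY', 'SELL'])
```

Randomized padding (e.g. OAEP) defeats this search by making encipherment nondeterministic, which is why real systems never use raw RSA. The attack recovers the message without ever touching Bob's private key.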
It only reveals the plaintext message.

11.1.2 Misordered Blocks

Denning [269] points out that in certain cases, parts of a ciphertext message can be deleted, replayed, or reordered.

11.1.3 Statistical Regularities

The independence of parts of ciphertext can give information relating to the structure of the enciphered message, even if the message itself is unintelligible. The regularity arises because each part is enciphered separately, so the same plaintext always produces the same ciphertext. This type of encipherment is called code book mode, because each part is effectively looked up in a list of plaintext-ciphertext pairs.

11.1.4 Summary

Despite the use of sophisticated cryptosystems and random keys, cipher systems may provide inadequate security if not used carefully. The protocols directing how these cipher systems are used, and the ancillary information that the protocols add to messages and sessions, overcome these problems. This emphasizes that ciphers and codes are not enough. The methods, or protocols, for their use also affect the security of systems.

11.2 Stream and Block Ciphers

Some ciphers divide a message into a sequence of parts, or blocks, and encipher each block with the same key.

Definition 11–1. Let E be an encipherment algorithm, and let E_k(b) be the encipherment of message b with key k. Let a message m = b_1 b_2 ..., where each b_i is of a fixed length. Then a block cipher is a cipher for which E_k(m) = E_k(b_1) E_k(b_2) ....

Other ciphers use a nonrepeating stream of key elements to encipher characters of a message.

Definition 11–2. Let E be an encipherment algorithm, and let E_k(b) be the encipherment of message b with key k. Let a message m = b_1 b_2 ..., where each b_i is of a fixed length, and let k = k_1 k_2 .... Then a stream cipher is a cipher for which E_k(m) = E_{k1}(b_1) E_{k2}(b_2) ....

If the key stream k of a stream cipher repeats itself, it is a periodic cipher.

11.2.1 Stream Ciphers

The one-time pad is a cipher that can be proven secure (see Section 9.2.2.2, "One-Time Pad").
Bit-oriented ciphers implement the one-time pad by exclusive-oring each bit of the key with one bit of the message. For example, if the message is 00101 and the key is 10010, the ciphertext is 0⊕1 || 0⊕0 || 1⊕0 || 0⊕1 || 1⊕0, or 10111. But how can one generate a random, infinitely long key?

11.2.1.1 Synchronous Stream Ciphers

To simulate a random, infinitely long key, synchronous stream ciphers generate bits from a source other than the message itself. The simplest such cipher extracts bits from a register to use as the key. The contents of the register change on the basis of the current contents of the register.

Definition 11–3. An n-stage linear feedback shift register (LFSR) consists of an n-bit register r = r_0...r_{n-1} and an n-bit tap sequence t = t_0...t_{n-1}. To obtain a key bit, r_{n-1} is used, the register is shifted one bit to the right, and the new bit r_0 t_0 ⊕ ... ⊕ r_{n-1} t_{n-1} is inserted.

The LFSR method is an attempt to simulate a one-time pad by generating a long key sequence from a little information. As with any such attempt, if the key is shorter than the message, breaking part of the ciphertext gives the cryptanalyst information about other parts of the ciphertext. For an LFSR, a known plaintext attack can reveal parts of the key sequence. If the known plaintext is of length 2n, the tap sequence for an n-stage LFSR can be determined completely.

Nonlinear feedback shift registers do not use tap sequences; instead, the new bit is any function of the current register bits.

Definition 11–4. An n-stage nonlinear feedback shift register (NLFSR) consists of an n-bit register r = r_0...r_{n-1}. Whenever a key bit is required, r_{n-1} is used, the register is shifted one bit to the right, and the new bit is set to f(r_0...r_{n-1}), where f is any function of n inputs.

NLFSRs are not common because there is no body of theory about how to build NLFSRs with long periods.
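Definition 11–3 can be exercised directly. The sketch below uses a 4-stage register; the particular tap sequence is an assumption chosen so that the period is the maximal 2^4 − 1 = 15.

```python
def lfsr(register, taps, nbits):
    """n-stage LFSR per Definition 11-3: output r_{n-1}, shift right, and
    insert the new bit r_0*t_0 xor ... xor r_{n-1}*t_{n-1} on the left."""
    r = list(register)
    out = []
    for _ in range(nbits):
        out.append(r[-1])                 # the key bit is r_{n-1}
        new = 0
        for bit, tap in zip(r, taps):
            new ^= bit & tap              # feedback: XOR of the tapped bits
        r = [new] + r[:-1]                # shift right, insert new bit on the left
    return out

# 4-stage register, taps t = 1001 (feedback r_0 xor r_3): maximal period 15
stream = lfsr([0, 0, 1, 0], [1, 0, 0, 1], 30)
```

Thirty output bits show the keystream repeating with period 15, the longest any 4-stage LFSR can achieve; a maximal-period keystream contains 2^{n-1} ones per period.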
By contrast, it is known how to design n-stage LFSRs with a period of 2^n − 1, and that period is maximal.

A second technique for eliminating linearity is called output feedback mode. Let E be an encipherment function. Define k as a cryptographic key and r as a register. To obtain a bit for the key, compute E_k(r) and put that value into the register. The rightmost bit of the result is exclusive-or'ed with one bit of the message. The process is repeated until the message is enciphered. The key k and the initial value in r are the keys for this method. This method differs from the NLFSR in that the register is never shifted. It is repeatedly enciphered.

A variant of output feedback mode is called the counter method. Instead of using a register r, simply use a counter that is incremented for every encipherment. The initial value of the counter replaces r as part of the key. This method enables one to generate the ith bit of the key without generating the bits 0...i − 1. If the initial counter value is i_0, set the register to i + i_0. In output feedback mode, one must generate all the preceding key bits.

11.2.1.2 Self-Synchronous Stream Ciphers

Self-synchronous ciphers obtain the key from the message itself. The simplest self-synchronous cipher is called an autokey cipher and uses the message itself for the key.

The problem with this cipher is the selection of the key. Unlike a one-time pad, any statistical regularities in the plaintext show up in the key. For example, the last two letters of the ciphertext associated with the plaintext word THE are always AL, because H is enciphered with the key letter T and E is enciphered with the key letter H. Furthermore, if the analyst can guess any letter of the plaintext, she can determine all successive plaintext letters.

An alternative is to use the ciphertext as the key stream.
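A ciphertext-autokey cipher of this kind, and the weakness the text goes on to describe, can be sketched over a toy A–Z alphabet (the alphabet, the shift arithmetic, and the sample message are assumptions for illustration).

```python
A = ord('A')

def autokey_encrypt(plaintext, first_key_letter):
    """Ciphertext-autokey Vigenere: each ciphertext letter keys the next one."""
    key = first_key_letter
    out = []
    for ch in plaintext:
        c = chr((ord(ch) - A + ord(key) - A) % 26 + A)
        out.append(c)
        key = c                          # the ciphertext becomes the key stream
    return ''.join(out)

def recover_tail(ciphertext):
    """The analyst's attack: every letter after the first deciphers directly,
    since its key letter is the visible preceding ciphertext letter."""
    return ''.join(chr((ord(ciphertext[i]) - ord(ciphertext[i - 1])) % 26 + A)
                   for i in range(1, len(ciphertext)))

ct = autokey_encrypt('HELLO', 'Q')       # 'XBMXL'
tail = recover_tail(ct)                  # 'ELLO': everything but the first letter
```

Note that deciphering X with key Q yields H, matching the QX example in the text: the attacker needs no key material at all beyond the ciphertext itself.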
A good cipher will produce pseudorandom ciphertext, which approximates a random one-time pad better than a message with nonrandom characteristics (such as a meaningful English sentence).

This type of autokey cipher is weak, because plaintext can be deduced from the ciphertext. For example, consider the first two characters of the ciphertext, QX. The X is the ciphertext resulting from enciphering some letter with the key Q. Deciphering, the unknown letter is H. Continuing in this fashion, the analyst can reconstruct all of the plaintext except for the first letter.

A variant of the autokey method, cipher feedback mode, uses a shift register. Let E be an encipherment function. Define k as a cryptographic key and r as a register. To obtain a bit for the key, compute E_k(r). The rightmost bit of the result is exclusive-or'ed with one bit of the message, and the other bits of the result are discarded. The resulting ciphertext is fed back into the leftmost bit of the register, which is right shifted one bit. (See Figure 11-1.)

Figure 11-1. Diagram of cipher feedback mode. The register r is enciphered with key k and algorithm E. The rightmost bit of the result is exclusive-or'ed with one bit of the plaintext m_i to produce the ciphertext bit c_i. The register r is right shifted one bit, and c_i is fed back into the leftmost bit of r.

Cipher feedback mode has a self-healing property. If a bit is corrupted in transmission of the ciphertext, the next n bits will be deciphered incorrectly. But after n uncorrupted bits have been received, the shift register will be reinitialized to the value used for encipherment and the ciphertext will decipher properly from that point on.

As in the counter method, one can decipher parts of messages enciphered in cipher feedback mode without deciphering the entire message. Let the shift register contain n bits. The analyst obtains the previous n bits of ciphertext.
This is the value in the shift register before the bit under consideration was enciphered. The decipherment can then continue from that bit on.

11.2.2 Block Ciphers

Block ciphers encipher and decipher multiple bits at once, rather than one bit at a time. For this reason, software implementations of block ciphers run faster than software implementations of stream ciphers. Errors in transmitting one block generally do not affect other blocks, but as each block is enciphered independently, using the same key, identical plaintext blocks produce identical ciphertext blocks. This allows the analyst to search for data by determining what the encipherment of a specific plaintext block is. For example, if the word INCOME is enciphered as one block, all occurrences of the word produce the same ciphertext.

To prevent this type of attack, some information related to the block's position is inserted into the plaintext block before it is enciphered. The information can be bits from the preceding ciphertext block [343] or a sequence number [561]. The disadvantage is that the effective block size is reduced, because fewer message bits are present in a block.

Cipher block chaining does not require the extra information to occupy bit spaces, so every bit in the block is part of the message. Before a plaintext block is enciphered, that block is exclusive-or'ed with the preceding ciphertext block. In addition to the key, this technique requires an initialization vector with which to exclusive-or the initial plaintext block. Taking E_k to be the encipherment algorithm with key k, and I to be the initialization vector, the cipher block chaining technique is

c_0 = E_k(m_0 ⊕ I)
c_i = E_k(m_i ⊕ c_{i−1}) for i > 0

11.2.2.1 Multiple Encryption

Other approaches involve multiple encryption. Using two keys k and k' to encipher a message as c = E_{k'}(E_k(m)) looks attractive because it has an effective key length of 2n, whereas the keys to E are of length n.
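Why the doubled key length is illusory can be seen by sketching the classic meet-in-the-middle idea: tabulate E_{k1}(m) for every k1, then test whether D_{k2}(c) lands in the table for some k2. The 16-bit "cipher" and 8-bit keys below are toy assumptions, not any real algorithm.

```python
MASK = 0xFFFF
INV3 = 43691                     # 3 * 43691 = 1 (mod 2^16), so E is invertible

def E(k, m):
    """Toy invertible 16-bit 'cipher' with an 8-bit key (illustration only)."""
    return ((m + k) * 3) & MASK

def D(k, c):
    return ((c * INV3) - k) & MASK

def meet_in_the_middle(m, c):
    """Given one pair (m, c) with c = E_k2(E_k1(m)), find a consistent key
    pair with about 2^(n+1) cipher operations instead of 2^(2n)."""
    table = {E(k1, m): k1 for k1 in range(256)}   # all forward half-encryptions
    for k2 in range(256):
        mid = D(k2, c)                            # meet them from the back
        if mid in table:
            return table[mid], k2
    return None

m = 0x1234
c = E(0xC3, E(0x5A, m))          # the "unknown" double encryption
k1, k2 = meet_in_the_middle(m, c)
```

With a single known pair the attack may return any key pair consistent with it; a second plaintext-ciphertext pair is normally used to eliminate the false candidates. Either way, the recovered pair re-encrypts m to c.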
However, Merkle and Hellman [700] have shown that this encryption technique can be broken using 2^{n+1} encryptions, rather than the expected 2^{2n} (see Exercise 3).

Using three encipherments improves the strength of the cipher. There are several ways to do this. Tuchman [1006] suggested using two keys k and k':

c = E_k(D_{k'}(E_k(m)))

This mode, called Encrypt-Decrypt-Encrypt (EDE) mode, collapses to a single encipherment when k = k'. The DES in EDE mode is widely used in the financial community and is a standard (ANSI X9.17 and ISO 8732). It is not vulnerable to the attack outlined earlier. However, it is vulnerable to a chosen plaintext and a known plaintext attack. If b is the block size in bits, and n is the key length, the chosen plaintext attack takes O(2^n) time, O(2^n) space, and requires 2^n chosen plaintexts. The known plaintext attack requires p known plaintexts, and takes O(2^{n+b}/p) time and O(p) memory.

A second version of triple encipherment is the triple encryption mode [700]. In this mode, three keys are used in a chain of encipherments.

c = E_k(E_{k'}(E_{k''}(m)))

The best attack against this scheme is similar to the attack on double encipherment, but requires O(2^{2n}) time and O(2^n) memory. If the key length is 56 bits, this attack is computationally infeasible.

11.3 Networks and Cryptography

Before we discuss Internet protocols, a review of the relevant properties of networks is in order. The ISO/OSI model [990] provides an abstract representation of networks suitable for our purposes. Recall that the ISO/OSI model is composed of a series of layers (see Figure 11-2). Each host, conceptually, has a principal at each layer that communicates with a peer on other hosts. These principals communicate with principals at the same layer on other hosts. Layer 1, 2, and 3 principals interact only with similar principals at neighboring (directly connected) hosts. Principals at layers 4, 5, 6, and 7 interact only with similar principals at the other end of the communication.
(For convenience, "host" refers to the appropriate principal in the following discussion.)

Figure 11-2. The ISO/OSI model. The dashed arrows indicate peer-to-peer communication; for example, the transport layers communicate with each other. The solid arrows indicate the actual flow of bits; for example, the transport layer invokes network layer routines on the local host, which invoke data link layer routines, which put the bits onto the network. The physical layer passes the bits to the next "hop," or host, on the path. When the message reaches the destination, it is passed up to the appropriate level.

Each host in the network is connected to some set of other hosts, and it exchanges messages with those hosts. If host nob wants to send a message to host windsor, nob determines which of its immediate neighbors is closest to windsor (using an appropriate routing protocol) and forwards the message to it. That host, baton, determines which of its neighbors is closest to windsor and forwards the message to it. This process continues until a host, sunapee, receives the message and determines that windsor is an immediate neighbor. The message is forwarded to windsor, its endpoint.

Definition 11-5. Let hosts C_0, …, C_n be such that C_i and C_{i+1} are directly connected, for 0 ≤ i < n. A communications protocol that has C_0 and C_n as its endpoints is called an end-to-end protocol. A communications protocol that has C_j and C_{j+1} as its endpoints is called a link protocol.

The difference between an end-to-end protocol and a link protocol is that the intermediate hosts play no part in an end-to-end protocol other than forwarding messages, whereas a link protocol describes how each pair of adjacent hosts processes each message.

The protocols involved can be cryptographic protocols. If the cryptographic processing is done only at the source and at the destination, the protocol is an end-to-end protocol.
If cryptographic processing occurs at each host along the path from source to destination, the protocol is a link protocol. When encryption is used with either protocol, we use the terms end-to-end encryption and link encryption, respectively.

In link encryption, each host shares a cryptographic key with its neighbor. (If public key cryptography is used, each host has its neighbor's public key; link encryption based on public keys is rare.) The keys may be set on a per-host basis or a per-host-pair basis. Consider a network with four hosts called windsor, stripe, facer, and seaview, each directly connected to the other three. With keys distributed on a per-host basis, each host has its own key, making four keys in all. Each host has the keys of the other three neighbors, as well as its own, and all hosts use the same key to communicate with windsor. With keys distributed on a per-host-pair basis, each host has one key per possible connection, making six keys in all. Unlike the per-host case, each host uses a different key to communicate with windsor. Either way, the message is deciphered at each intermediate host, reenciphered for the next hop, and forwarded. Attackers monitoring the network medium will not be able to read the messages, but attackers at the intermediate hosts will be able to do so.

In end-to-end encryption, each host shares a cryptographic key with each destination. (Again, if the encryption is based on public key cryptography, each host has, or can obtain, the public key of each destination.) As with link encryption, the keys may be selected on a per-host or per-host-pair basis. The sending host enciphers the message and forwards it to the first intermediate host. The intermediate host forwards it to the next host, and the process continues until the message reaches its destination. The destination host then deciphers it. The message is enciphered throughout its journey.
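The key-count arithmetic and the difference in exposure at an intermediate host can be sketched as follows. The four host names come from the text; the integer keys, the toy XOR cipher, and the three-hop path are illustrative assumptions.

```python
# Sketch: per-host vs. per-host-pair key counts, and what an intermediate
# host sees under link vs. end-to-end encryption. Toy XOR cipher only.
from itertools import combinations

hosts = ["windsor", "stripe", "facer", "seaview"]

# Per-host keys: one key per host -> 4 keys.
per_host = {h: i + 1 for i, h in enumerate(hosts)}
assert len(per_host) == 4

# Per-host-pair keys: one key per possible connection -> C(4, 2) = 6 keys.
per_pair = {frozenset(p): i + 1 for i, p in enumerate(combinations(hosts, 2))}
assert len(per_pair) == 6

def xor(block: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in block)

msg = b"attack at dawn"   # sent from windsor via stripe to facer

# Link encryption: deciphered and reenciphered at every hop, so the
# intermediate host (stripe) handles the plaintext.
hop1 = xor(msg, per_pair[frozenset(("windsor", "stripe"))])
seen_at_stripe = xor(hop1, per_pair[frozenset(("windsor", "stripe"))])
assert seen_at_stripe == msg   # intermediate host can read it

# End-to-end encryption: stripe only forwards; the block stays enciphered.
e2e = xor(msg, per_pair[frozenset(("windsor", "facer"))])
assert e2e != msg              # opaque at the intermediate host
assert xor(e2e, per_pair[frozenset(("windsor", "facer"))]) == msg
```

In general, per-host distribution needs n keys and per-host-pair distribution needs n(n-1)/2, which is why the four-host example yields 4 and 6 keys respectively.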
Neither attackers monitoring the network nor attackers on the intermediate hosts can read the message. However, attackers can read the routing information used to forward the message.

These differences affect a form of cryptanalysis known as traffic analysis. A cryptanalyst can sometimes deduce information not from the content of the message but from its sender and recipient. For example, during the Allied invasion of Normandy in World War II, the Germans deduced which vessels were the command ships by observing which ships were sending and receiving the most signals. The content of the signals was not relevant; their source and destination were. Similar deductions can reveal information in the electronic world.

Chapter 11 Cipher Techniques

11.1 Problems

When the environment in which encryption must operate is not taken into account, the use of encryption may not provide the security the user expects.
Graduation Project Foreign Literature

Topic: Object-Oriented Technology. School of Information Science and Engineering, major in Computer Science and Technology, Class Soft 1202. Student: Liu Guibin, Student No. 20121214073. Advisors: Shi Guixian, Wang Haiyan. March 20, 2016.

Object Technology. 2004, Vol. 14 (2), p. 20.

Object Technology

Timothy A. Budd

Abstract: Object technology is a new approach to developing software that allows programmers to create objects, a combination of data and program instructions. This technology has been steadily developed since the late 1960s and promises to be one of the major ingredients in the response to the ongoing software crisis.

Keywords: object technology, optimization

1.1 Introduction to OT

There exists a critical technology that is changing the way we conceive, build, use, and evolve our computer systems. It is a technology that many companies are adopting to increase their efficiency, reduce costs, and adapt to a dynamic marketplace. It is called Object Technology (OT).

By allowing the integration of disparate and incompatible sources, OT has the potential to precipitate a revolution in information systems design on a par with that caused in computer hardware by the introduction of the computer chip. Yet OT is not a new phenomenon. Development and product releases have been ongoing since its origin many years ago. However, the recent emphasis on enterprise information technology integration has brought OT into the spotlight.

OT promises to provide component-level software objects that can be quickly combined to build new applications that respond to changing business conditions. Once used, objects may be reused in other applications, lowering development costs and speeding up the development process. Because objects communicate by sending messages that can be understood by other objects, large integrated systems are easier to assemble.

Each object is responsible for a specific function within either an application or a distributed system. That means that as the business changes, an individual object may be easily upgraded, augmented, or replaced, leaving the rest of the system untouched.
This directly reduces the cost of maintenance and improves the timeliness and extendibility of new systems.

1.2 OT-based Products

The current market for OT-based products can be divided into four major segments:

· Languages and programming tools
· Developers' toolkits
· Object-oriented databases
· Object-oriented CASE tools

The largest segment of the current market for OT-based products is languages and programming tools. Products in this area include language compilers for C++, Smalltalk, Common Lisp Object System (CLOS), Eiffel, Ada, and Objective-C, as well as extensions to Pascal and Modula-2. Products in this category are available from a variety of vendors. Increasingly, the trend in this group is to offer the language compilers with associated development tools as part of a complete development environment.

Developers' toolkits account for the next largest part of the OT market. These products are designed so that a developer can easily do one of two things. The first is interfacing an application to a distributed environment. The second is developing a graphical screen through a product. By providing developers with higher-level description languages and reusable components, products in this category give developers an easy and cost-effective way to begin producing object-oriented systems.

An important component in this category is the relatively new area of end-user tools. This element is important because organizing and analyzing the increasingly large amounts of data that computer systems are capable of collecting is a key problem.

Object-oriented database management systems are one of the most interesting and rapidly growing segments of the OT market. A number of companies, including systems vendors like Digital and HP, and start-ups such as Object Design, Servio, and Objectivity, have all produced products. These products, dubbed "objectbases," fill an important need by storing complex objects as single entities.
The objectbase products allow objects to be stored, retrieved, and shared in much the same way as data is stored in a relational database management system. The value of an objectbase, as opposed to a database, is best described as follows: "Object databases offer a better way to store objects because they provide all of the traditional database services without the overhead of disassembling and reassembling objects every time they are stored and retrieved. Compared with an object database, storing complex objects in a relational database is tedious at best. It's like having to disassemble your car each night rather than just putting it into the garage!"

Over the next few years, a shift from proprietary CASE implementations to those based on the object paradigm can be expected. This area has lagged growth from earlier projections. OT-based CASE tools will have to emerge as a viable product category to address the wide-scale development of large systems. This category also includes those tools that are methodological in nature.

1.3 Object-oriented Programming

Object-oriented programming (OOP) is a new approach to developing software that allows programmers to create objects, a combination of data and program instructions. Traditional programming methods keep data, such as files, independent of the programs that work with the data. Each traditional program, therefore, must define how the data will be used for that particular program. This often results in redundant programming code that must be changed every time the structure of the data is changed, such as when a new field is added to a file. With OOP, the program instructions and data are combined into objects that can be used repeatedly by programmers whenever they need them. Specific instructions, called methods, define how the object acts when it is used by a program.

With OOP, programmers define classes of objects. Each class contains the methods that are unique to that class.
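The "disassemble the car" contrast above can be made concrete: a relational store must flatten a complex object into rows across tables, while an objectbase stores and retrieves it as one entity. The Car/Wheel structure below is invented for illustration, and Python's pickle module merely stands in for an objectbase's whole-object storage.

```python
# Sketch of the "disassemble the car" contrast: a relational store must
# flatten a complex object into rows, while an objectbase keeps it whole.
# The Car/Wheel structure and the pickle stand-in are illustrative only.
import pickle

class Wheel:
    def __init__(self, position: str):
        self.position = position

class Car:
    def __init__(self, model: str):
        self.model = model
        self.wheels = [Wheel(p) for p in ("FL", "FR", "RL", "RR")]

car = Car("roadster")

# Relational style: disassemble into flat rows, one table per part type;
# reassembly later requires joins on the shared car id.
car_row = (1, car.model)
wheel_rows = [(1, w.position) for w in car.wheels]
assert len(wheel_rows) == 4

# Objectbase style (pickle as a stand-in): store and retrieve one entity.
blob = pickle.dumps(car)
same_car = pickle.loads(blob)
assert same_car.model == "roadster"
assert [w.position for w in same_car.wheels] == ["FL", "FR", "RL", "RR"]
```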
Each class can have one or more subclasses. Each subclass contains the methods of its higher-level classes plus whatever methods are unique to the subclass. The OOP capability to pass methods to lower levels is called "inheritance." A specific instance of an object contains all methods from its higher-level classes plus any methods that are unique to the object. When an OOP object is sent an instruction to do something, called a message, unlike a traditional program, the message does not have to tell the OOP object exactly what to do. What to do is defined by the methods that the OOP object contains or has inherited.

Object-oriented programming can bring many advantages to users. It can bring productivity gains as high as 1,000 to 1,500 percent, instead of the 10 or 15 percent gains available from structured programming methods. It allows large, complex systems to be built that are not economically feasible using traditional programming techniques. It allows program modifications to be made more easily. It could mean two different user interfaces within an application: one for the user who likes to type, and another for the user who just wants to shout at the terminal.

Objects can be viewed as reusable components, and once the programmer has developed a library of these components, he can minimize the amount of new coding required. One user envisions a commercial library of objects which could be purchased by programmers and reused for various applications. But creating a library is no simple task, because the integrity of the original software design is critical. Reusability can be a mixed blessing for users, too, as a programmer has to be able to find the object he needs. But if productivity is your aim, reusability is worth the risks.

The long-term productivity of systems is enhanced by object-oriented programming. Because of the modular nature of the code, programs are more malleable.
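The class/subclass, inheritance, and message-passing ideas above can be sketched in a few lines of Python. The class names and methods (Account, SavingsAccount, deposit, add_interest) are invented for illustration; they are not from the article.

```python
# Sketch of classes, subclasses, inheritance, and messages.
# Names (Account, SavingsAccount, ...) are invented for illustration.

class Account:
    def __init__(self, balance: float):
        self.balance = balance

    def deposit(self, amount: float):      # method inherited by subclasses
        self.balance += amount

    def describe(self) -> str:
        return f"{type(self).__name__}: {self.balance:.2f}"

class SavingsAccount(Account):
    def add_interest(self, rate: float):   # method unique to the subclass
        self.balance *= 1 + rate

s = SavingsAccount(100.0)
s.deposit(50.0)          # "message" handled by the inherited method
s.add_interest(0.10)     # message handled by the subclass's own method
assert abs(s.balance - 165.0) < 1e-9
assert s.describe() == "SavingsAccount: 165.00"
```

The `deposit` message does not say how to update the balance; the receiving object decides, using a method it inherited, which is exactly the point the paragraph makes.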
This is particularly beneficial for applications that will be used for many years, during which company needs may change and make software modifications necessary.

Software reliability can be improved by object-oriented programming. Since the objects are repeatedly tested in a variety of applications, bugs are more likely to be found and corrected. Object-oriented programming also has potential benefits in parallel processing: execution speed under object-oriented methods will improve with parallel processing.

1.4 Object-oriented DBMS

A shift toward object-oriented DBMSs does not have to replace relational DBMSs. As its name implies, it is an orientation rather than a full-blown DBMS model. As such, it can blend with and build on the relational schema.

Object-oriented DBMSs integrate a variety of real-world data types, such as business procedures and policies, graphics, pictures, voice, and annotated text. Current relational products are not equipped to handle them efficiently. Data types in RDBMSs are more commonly record-oriented and expressed in numbers and text.

Object orientation also contributes to application development efficiency: it makes an object's functions, attributes, and relationships an integral part of the object. In this way, objects can be reused and replicated.
You can query the data on its functions, attributes, and relationships. By contrast, most RDBMSs demand that the knowledge associated with the data be written into and maintained separately in each application program.

Object orientation is going to be available in two forms: one for those who need and want a radical change, and one for those who want some of its advantages without going through a major conversion.

The first form of object-oriented DBMS focused largely on the computer-aided design (CAD) market, which needed to store complex data types such as the graphics involved in an aircraft design.

The second form is made up of the leading RDBMS vendors, who support the concept of integrating object management capabilities with their current line of relational products. Sybase, Inc., the first vendor to introduce an object-oriented capability, offers Sybase, which enables the user to program a limited number of business procedures along with the data types in a server's database engine. Any client attempting a transaction that does not conform to these procedures is simply rejected by the database. That capability enables users to shorten the development cycle, since integrity logic and business rules no longer need to be programmed into each application.

This approach reduces maintenance costs as well, since any changes in the procedures can be made once at the server level instead of several times within all the affected applications. Last, the server-level procedures increase the system's performance, since the operations take place closer to where the data is actually stored.
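The server-level rejection described above can be sketched generically. This is not Sybase's actual API; the Engine class, the withdrawal rule, and the account data are all invented to model one idea: the engine enforces the business rule once, so no client has to reimplement it.

```python
# Generic sketch of a server-level business procedure: the engine itself
# rejects transactions that break the rule, so no client reimplements it.
# The rule and API are invented for illustration (not Sybase's actual API).

class RuleViolation(Exception):
    pass

class Engine:
    def __init__(self):
        self.balances = {"acct1": 100}

    def withdraw(self, acct: str, amount: int):
        # Business rule enforced once, at the server level.
        if amount > self.balances[acct]:
            raise RuleViolation("withdrawal exceeds balance")
        self.balances[acct] -= amount

db = Engine()
db.withdraw("acct1", 40)              # conforming transaction succeeds
assert db.balances["acct1"] == 60

try:
    db.withdraw("acct1", 1000)        # nonconforming transaction
    rejected = False
except RuleViolation:
    rejected = True
assert rejected and db.balances["acct1"] == 60   # state left unchanged
```

Changing the rule means editing one method in the engine, not every client application, which is the maintenance saving the paragraph describes.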