Foreign Literature Translation (2): Software
EDA Technology and Software: Foreign Literature Translation
EDA Technology and Software

EDA is the abbreviation of Electronic Design Automation. It emerged in the early 1990s from the concepts of computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided testing (CAT), and computer-aided engineering (CAE). In EDA technology the computer is the tool: the designer completes the design files in a hardware description language such as VHDL on an EDA software platform, and the computer then automatically performs logic compilation, simplification, partitioning, synthesis, optimization, placement and routing, and simulation for the target chip, through to adapter compilation, logic mapping, and programming download.

1. EDA technology concepts

EDA technology developed on the basis of electronic CAD technology. It is a computer software system that takes the computer as the working platform and fuses the latest achievements of electronic technology, computer technology, information processing, and intelligent technology to automate the design of electronic products. Using EDA tools, an electronic designer can begin the design of an electronic system from the concept, algorithm, or protocol; much of the work can be finished by the computer, and the whole process from circuit design and performance analysis to producing an IC layout or PCB layout can be completed automatically by the computer.

The scope of EDA today is very wide: machinery, electronics, communications, aerospace, chemical engineering, mining, biology, medicine, the military, and other fields all have EDA applications. EDA technology is now used extensively in large companies, enterprises, institutions, and teaching and research departments. In aircraft manufacturing, for example, EDA technology may be involved everywhere from design, performance testing, and characteristic analysis to flight simulation. EDA technology is applied mainly in electronic circuit design, PCB design, and IC design, and it can be divided into the system level, the circuit level, and the physical implementation level.

2. Development environments: MAX+PLUS II and Quartus II

Altera Corporation is one of the world's three major CPLD/FPGA manufacturers. Its devices achieve the highest performance and integration not only because of advanced process technology and new logic structures, but also because it provides a modern design tool: the MAX+PLUS II programmable logic development software, the third generation of PLD development systems launched by Altera. It provides an architecture-independent design environment in which designers of Altera CPLDs can easily perform design entry, fast processing, and device programming. MAX+PLUS II provides comprehensive logic design capabilities, including schematic, text, and waveform design entry, compilation, logic synthesis, simulation and timing analysis, device programming, and many other features. For schematic entry in particular, MAX+PLUS II is considered the easiest-to-use PLD development software with the friendliest man-machine interface. MAX+PLUS II can develop all Altera CPLD/FPGA families except APEX20K.

The MAX+PLUS II development system has many outstanding features:

① Open interfaces.

② Architecture independence: MAX+PLUS II supports Altera's Classic, ACEX 1K, MAX 3000, MAX 5000, MAX 7000, MAX 9000, FLEX 6000, FLEX 8000, and FLEX 10K families of programmable logic devices, with gate counts from 600 to 250,000 gates, offering the industry a truly architecture-independent programmable logic design environment.
The MAX+PLUS II compiler also provides powerful logic synthesis and optimization to reduce the burden on the user's design.

③ Runs on multiple platforms: the MAX+PLUS II software runs on PCs under the Windows NT 4.0, Windows 98, and Windows 2000 operating systems, and also on workstations such as Sun SPARCstations, the HP 9000 Series 700/800, and the IBM RISC System/6000.

④ Fully integrated: the MAX+PLUS II software's design entry, processing, and verification functions are fully integrated within one programmable logic development tool, so designs can be debugged more quickly and the development cycle shortened.

⑤ Modular tools: designers can choose among a variety of design entry, editing, verification, and device programming tools to form a development environment in their own style and, when necessary, add new features while retaining the original ones. The MAX+PLUS II series supports a variety of devices, so designers do not have to learn new development tools in order to develop for new device architectures.

⑥ Hardware description languages (HDL): the MAX+PLUS II software supports a variety of HDLs for design entry, including standard VHDL, Verilog HDL, and Altera's own hardware description language, AHDL.

⑦ MegaCore functions: MegaCores are pre-verified HDL netlist files for implementing complex system-level functions. They provide optimal designs for ACEX 1K, MAX 7000, MAX 9000, FLEX 6000, FLEX 8000, and FLEX 10K devices. Users can purchase MegaCores from Altera; using them reduces the design task, so designers can devote more time and energy to improving the design and the final product.

⑧ OpenCore feature: the MAX+PLUS II software has an open-kernel characteristic; OpenCore lets designers evaluate cores in their own designs before buying the products.

At the same time, MAX+PLUS II offers many design entry methods, including:

① Graphic (schematic) design entry: MAX+PLUS II graphic design entry is easier to use than other software because MAX+PLUS II provides a rich library of units for the designer to call. In particular, the mf library provided in MAX2LIB includes almost all 74-series devices, and the prim library provides all the discrete digital circuit devices. Anyone with a knowledge of digital circuits can use MAX+PLUS II for CPLD/FPGA design with almost no additional learning. MAX+PLUS II also includes a variety of special logic macrofunctions (Macro-Functions) and parameterized megafunctions (Mega-Functions). Making full use of these modules greatly reduces the designer's workload and shortens the design cycle considerably.

② Text editor entry: the MAX+PLUS II text entry and compilation system supports three input methods: the AHDL, VHDL, and Verilog languages.

③ Waveform entry: if the input and output waveforms are known, waveform entry can also be used.

④ Hybrid entry: the MAX+PLUS II design and development environment supports mixed editing of graphic design entry, text entry, and waveform entry. That is, in graphic or waveform editing, a text-edited module can be called through include "module name.inc" or through a Function (...) Returns (...) declaration. Similarly, a text-edited module can be called from the graphic editor; AHDL compilation results can be used in the VHDL language, and VHDL compilation results can likewise be used in AHDL or in graphic entry.
Such flexible entry methods bring great convenience to the design user.

Altera's Quartus II is a comprehensive PLD development software package. It supports schematic, VHDL, Verilog HDL, and AHDL (Altera Hardware Description Language) design entry, includes an integrated simulator of its own, and can carry the design from design entry through to hardware configuration of the PLD.

Quartus II runs on Windows XP, Linux, and Unix. In addition to allowing the design flow to be completed with Tcl scripts, it provides a complete graphical user interface for design. It is fast, has a unified interface and an integrated feature set, and is easy to use.

Altera's Quartus II supports IP cores, including the LPM/MegaFunction macrofunction module library, allowing users to take full advantage of mature modules, simplifying design complexity and speeding up design. Good support for third-party EDA tools also allows users to employ familiar third-party EDA tools at each stage of the design process.

In addition, Quartus II, through the DSP Builder tool combined with Matlab/Simulink, can conveniently implement a variety of DSP applications. It supports Altera's system-on-a-programmable-chip (SOPC) development, combining system-level design, embedded software development, and programmable logic design in one comprehensive development platform.

As the previous generation of Altera's PLD design software, MAX+PLUS II was widely used because of its excellent ease of use. Altera has now stopped updating and supporting MAX+PLUS II. Quartus II not only supports a richer set of device types but also updates the graphical interface. Altera has included in Quartus II many design aids such as SignalTap II, Chip Editor, and RTL Viewer, integrated the SOPC and HardCopy design flows, and inherited MAX+PLUS II's friendly graphical interface and ease of use. As a programmable logic design environment, Altera's Quartus II is welcomed by more and more digital system designers because of its strong design capability and intuitive interface.

Altera's Quartus II is the fourth generation of its programmable logic (PLD) software development platform. The platform supports workgroup-based design requirements, including support for Internet-based collaborative design. The Quartus platform is compatible with development tools from EDA vendors such as Cadence, Exemplar Logic, Mentor Graphics, Synopsys, and Synplicity. The software improves the LogicLock block-based design feature, adds the FastFit compiler option, promotes network editing performance, and improves debugging capability, and it supports MAX7000/MAX3000 and other device families.

3. The development language VHDL

VHDL (Very High Speed Integrated Circuit Hardware Description Language) is a hardware description language that can describe the function of a hardware circuit, its signal connections, and its timing. It can express the characteristics of a hardware circuit more effectively than a circuit diagram.
Using the VHDL language, one can proceed from the general requirements of the system down to the detailed design content, and finally complete the overall design of the system hardware. The VHDL language has become an IEEE industry standard, which makes design reuse and the sharing of results convenient. At present it cannot be applied to analog circuit design, but that application is being researched.

A VHDL program's structure includes the entity (Entity), architecture (Architecture), configuration (Configuration), package collection (Package), and library (Library). Among them, the entity is the basic unit of a VHDL program, and a design consists of the entity and architecture parts: the entity is used to describe the external interface signals of the designed system, while the architecture is used to describe the behavior of the system, the system's processes, or the system's data-structure form. A configuration selects the required design units from a library to form different versions with different specifications, so that the function of the designed system can be changed. A package records the data types, constants, subroutines, and so on shared by the design modules. Libraries store compiled entities, architectures, packages, and configurations: some are working libraries developed by the user, others are the manufacturers' libraries.

The main features of VHDL are:

① Power and flexibility: VHDL has powerful language constructs; clear and concise code can be used to design complex control logic. VHDL also supports hierarchical design, design libraries, and the building of reusable components. VHDL has become a standard hardware description language for design, simulation, and synthesis.

② Device independence: VHDL allows designers to produce a design without first selecting a specific device. The same design description can be implemented on a variety of different device architectures, so during the design-description stage one can focus on the design ideas. After design and simulation, a specific device is specified for synthesis and fitting.

③ Portability: VHDL is a standard language, so a design written in VHDL is supported by different EDA tools. It can be transplanted from one simulation tool to another, from one synthesis tool to another, and from one working platform to another. Skills learned on one EDA tool can also be used with other tools.

④ Top-down design method: the traditional design approaches are bottom-up design and flat design. The bottom-up design method starts with the design of the bottom-level modules and gradually forms a complex circuit from functional modules. Its advantage is obvious: because it is a hierarchical circuit design in which the sub-modules are generally divided according to structure or function, the circuit levels are clear and the structure is distinct, which makes development, design archiving, and communication easy. The shortcoming of bottom-up design is also very obvious: months of low-level design work are often wasted because the overall design concept has not been settled. Flat design is a design containing only one circuit module; the circuit is designed directly, with no division by structure or function, so there is no circuit hierarchy.
The advantage of flat design is that small circuits can be designed quickly with little effort, but as circuit complexity increases, its shortcomings become glaring. The top-down design method first writes a description of the top-level circuit (the top model) and then simulates the top level with EDA software; if the simulation results of the top-level design meet the requirements, the top level is divided into lower-level modules, which are designed and simulated level by level, until eventually the entire circuit is completed. Compared with the first two methods, the top-down design method has obvious advantages.

⑤ Rich data types: as a hardware description language, VHDL has very rich data types. In addition to the dozens of data types predefined by the language itself, users can define their own data types in VHDL programs. In particular, the use of the std_logic data type allows VHDL to model the complex signals in a circuit most realistically.

⑥ Convenient modeling: VHDL statements can be used both for synthesis and for simulation, and the language has strong behavioral description capability, so it is particularly suitable for signal modeling. Current VHDL synthesizers can synthesize complex arithmetic descriptions (for example, Quartus II 2.0 and later versions can add, subtract, multiply, and divide data of the std_logic_vector type), so for complex circuit modeling and simulation, VHDL is very appropriate whether the description is for simulation or for synthesis.

⑦ Rich libraries and packages: the packages supported by VHDL are now very rich, mostly stored as libraries in specific directories, which the user can call at any time, such as the std_logic_1164, std_logic_arith, and std_logic_unsigned packages collected in the IEEE library. For CPLD/FPGA synthesis, EDA software vendors also provide various libraries and packages. Results produced by users in VHDL can likewise be stored in a library and reused in subsequent designs.

⑧ VHDL is a modeling hardware description language and therefore differs greatly from ordinary computer languages. An ordinary computer language executes according to the CPU clock beat, one instruction after another, so instructions are sequential, that is, executed in order, and each instruction takes a specific time to execute. The result described in VHDL corresponds to a hardware circuit, so it follows the characteristics of hardware: statements have no execution order but are executed concurrently, and a statement does not, like an instruction in ordinary software, take a certain time; it only follows the delay of its own hardware.
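To make the program structure described in Section 3 concrete, here is a minimal VHDL sketch of a 2-to-1 multiplexer (the design itself is only an illustration, not taken from the original text): the entity declares the external interface signals, and the architecture describes the behavior with a concurrent statement.

```vhdl
-- Library and package collection: compiled design units are referenced from here.
library IEEE;
use IEEE.std_logic_1164.all;

-- Entity: the basic design unit; it describes the external interface signals.
entity mux2 is
  port (
    a, b : in  std_logic;   -- data inputs
    sel  : in  std_logic;   -- select input
    y    : out std_logic    -- output
  );
end entity mux2;

-- Architecture: describes the behavior of the design (here, concurrently).
architecture behavioral of mux2 is
begin
  y <= a when sel = '0' else b;
end architecture behavioral;
```

In a larger design, a configuration could bind this entity to a different architecture, and the IEEE library supplies the std_logic_1164 package that defines the std_logic type used above.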
Software Engineering (Foreign Literature Translation)
Foreign Literature

1. Software Engineering

Software is the sequences of instructions in one or more programming languages that comprise a computer application to automate some business function. Engineering is the use of tools and techniques in problem solving. Putting the two words together, software engineering is the systematic application of tools and techniques in the development of computer-based applications.

The software engineering process describes the steps it takes to develop the system. We begin a development project with the notion that there is a problem to be solved via automation. The process is how you get from problem recognition to a working solution. A quality process is desirable because it is more likely to lead to a quality product. The process followed by a project team during the development life cycle of an application should be orderly, goal-oriented, enjoyable, and a learning experience.

Object-oriented methodology is an approach to system life-cycle development that takes a top-down view of data objects, their allowable actions, and the underlying communication requirements to define a system architecture. The data and action components are encapsulated, that is, they are combined together to form abstract data types. Encapsulation means that if I know what data I want, I also know the allowable processes against that data. Data are designed as lattice hierarchies of relationships to ensure that top-down, hierarchic inheritance and sideways relationships are accommodated. Encapsulated objects are constrained only to communicate via messages. At a minimum, messages indicate the receiver and the action requested. Messages may be more elaborate, including the sender and the data to be acted upon.

That we try to apply engineering discipline to software development does not mean that we have all the answers about how to build applications. On the contrary, we still build systems that are not useful and thus are not used. Part of the reason for continuing problems in application development is that we are constantly trying to hit a moving target. Both the technology and the types of applications needed by businesses are constantly changing and becoming more complex. Our ability to develop and disseminate knowledge about how to successfully build systems for new technologies and new application types seriously lags behind technological and business changes.

Another reason for continuing problems in application development is that we aren't always free to do what we like, and it is hard to change habits and cultures from the old way of doing things, as well as to get users to agree with a new sequence of events or an unfamiliar format for documentation.

You might ask then, if many organizations don't use good software engineering practices, why should I bother learning them? There are two good answers to this question. First, if you never know the right thing to do, you have no chance of ever using it. Second, organizations will frequently accept evolutionary, small steps of change instead of revolutionary, massive change. You can learn individual techniques that can be applied without complete devotion to one way of developing systems. In this way, software engineers can speed change in their organizations by demonstrating how the tools and techniques enhance the quality of both the product and the process of building a system.

2. Data Base System

1. Introduction

The development of corporate databases will be one of the most important data-processing activities for the rest of the 1970s.
Data will be increasingly regarded as a vital corporate resource, which must be organized so as to maximize their value. In addition to the databases within an organization, a vast new demand is growing for database services, which will collect, organize, and sell data.

The files of data which computers can use are growing at a staggering rate. The growth rate in the size of computer storage is greater than the growth in the size or power of any other component in the exploding data-processing industry. The more data the computers have access to, the greater is their potential power. In all walks of life and in all areas of industry, data banks will change the areas of what it is possible for man to do. At the end of this century, historians will look back to the coming of computer data banks and their associated facilities as a step which changed the nature of the evolution of society, perhaps eventually having a greater effect on the human condition than even the invention of the printing press.

Some of the most impressive corporate growth stories of the generation are largely attributable to the explosive growth in the need for information.

The vast majority of this information is not yet computerized. However, the cost of data-storage hardware is dropping more rapidly than other costs in data processing. It will become cheaper to store data on computer files than to store them on paper. Not only printed information will be stored. The computer industry is improving its capability to store line drawings, data in facsimile form, photographs, human speech, etc. In fact, any form of information other than the most intimate communications between humans can be transmitted and stored digitally.

There are two main technology developments likely to become available in the near future. First, there are electromagnetic devices that will hold much more data than disks but have much longer access times. Second, there are solid-state technologies that will give microsecond access times but whose capacities are smaller than those of disks.

Disks themselves may be increased in capacity somewhat. For the longer-term future there are a number of new technologies currently being worked on in research labs which may replace disks and may provide very large microsecond-access-time devices. A steady stream of new storage devices is thus likely to reach the marketplace over the next 5 years, rapidly lowering the cost of storing data.

Given the available technologies, it is likely that on-line databases will use two or three levels of storage: one solid-state with microsecond access time, and one electromagnetic with an access time of a fraction of a second. If two, three, or four levels of storage are used, physical storage organization will become more complex, probably with paging mechanisms to move data between the levels; solid-state storage offers the possibility of parallel search operations and associative memory.

Both the quantity of data stored and the complexity of their organization are going up by leaps and bounds. The first trillion-bit on-line stores are now in use. In a few years' time, stores of this size may be common.

A particularly important consideration in database design is to store the data so that they can be used for a wide variety of applications and so that the way they are used can be changed quickly and easily. On computer installations prior to the database era it has been remarkably difficult to change the way data are used.
Different programmers view the data in different ways and constantly want to modify them as new needs arise. Modification, however, can set off a chain reaction of changes to existing programs and hence can be exceedingly expensive to accomplish. Consequently, data processing has tended to become frozen into its old data structures.

To achieve the flexibility of data usage that is essential in most commercial situations, two aspects of database design are important. First, it should be possible to interrogate and search the database without the lengthy operation of writing programs in conventional programming languages. Second, the data should be independent of the programs which use them, so that they can be added to or restructured without the programs being changed.

The work of designing a database is becoming increasingly difficult, especially if it is to perform in an optimal fashion. There are many different ways in which data can be structured, and different types of data need to be organized in different ways. Different data have different characteristics, which ought to affect the data organization, and different users have fundamentally different requirements. So we need a kind of database management system (DBMS) to manage data.

Database design using the entity-relationship model begins with a list of the entity types involved and the relationships among them. The philosophy of assuming that the designer knows what the entity types are at the outset is significantly different from the philosophy behind the normalization-based approach. The entity-relationship (E-R) approach uses entity-relationship diagrams. The E-R approach requires several steps to produce a structure that is acceptable to the particular DBMS. These steps are:

(1) Data analysis.

(2) Producing and optimizing the entity model.

(3) Logical schema development.

(4) Physical database design.

Developing a database structure from user requirements is called database design. Most practitioners agree that there are two separate phases to the database design process: the design of a logical database structure that is processable by the database management system (DBMS) and describes the user's view of data, and the selection of a physical structure, such as the indexed sequential or direct access method, of the intended DBMS.

Current database design technology shows many residual effects of its outgrowth from single-record file design methods. File design is primarily application-program dependent, since the data have been defined and structured in terms of the individual applications that use them. The advent of the DBMS revised the emphasis in data and program design approaches.

There are many interlocking questions in the design of database systems, and many types of technique that one can use in answer to the questions; so many, in fact, that one often sees valuable approaches being overlooked in the design and vital questions not being asked.

There will soon be new storage devices, new software techniques, and new types of databases. The details will change, but most of the principles will remain.
Therefore, the reader should concentrate on the principles.

2. Database Systems

The conceptions used for describing files and databases have varied substantially, even within the same organization.

A database may be defined as a collection of interrelated data stored together, with as little redundancy as possible, to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use them; and a common and controlled approach is used in adding new data and in modifying and retrieving existing data within the database. One system is said to contain a collection of databases if they are entirely separate in structure.

A database may be designed for batch processing, real-time processing, or in-line processing. A database system involves application programs, the DBMS, and the database itself.

One of the most important characteristics of most databases is that they will constantly need to change and grow. Easy restructuring of the database must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a database can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation.

The term data independence is often quoted as being one of the main attributes of a database. It implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, database organization is concerned with the representation of relationships between data items and records, as well as how and where the data are stored. A database used for many applications can have multiple interconnections between the data items about which we may wish to record. It can describe the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes, and one attribute has a special significance in that it identifies the entity.

An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as the key used to uniquely identify one record or tuple. The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm.

If the function of a database were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. Describing the data logically is different from describing them physically.

The logical database description is referred to as a schema. A schema is a chart of the types of data that are used. It gives the names of the entities and attributes, and specifies the relationships between them. It is a framework into which the values of the data items can be fitted.

We must distinguish between a record type and an instance of the record. When we talk about a "personnel record", this is really a record type. There are no data values associated with it. The term schema is used to mean an overall chart of all of the data-item types and record types stored in a database.
Many different subschemas can be derived from one schema. The schema and the subschemas are both used by the database management system, the primary function of which is to serve the application programs by executing their data operations.

A DBMS will usually be handling multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a "data model". The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small databases, many databases involve millions of bytes, and an inefficient implementation can be disastrous. We will discuss data models in the following.

3. Three Data Models

Logical schemas are defined as data models with the underlying structure of particular database management systems superimposed on them. At the present time, there are three main underlying structures for database management systems:

(1) Relational.

(2) Hierarchical.

(3) Network.

The hierarchical and network structures have been used for DBMS since the 1960s. The relational structure was introduced in the early 1970s.

In the relational model, the entities and their relationships are represented by two-dimensional tables. Every table represents an entity and is made up of rows and columns. Relationships between entities are represented by common columns containing identical values from a domain or range of possible values.

The end user is presented with a simple data model. His or her requests are formulated in terms of the information content and do not reflect any complexities due to system-oriented aspects. A relational data model is what the user sees, but it is not necessarily what will be implemented physically.

The relational data model removes the details of storage structure and access strategy from the user interface. The model provides a relatively high degree of data independence. To be able to make use of this property of the relational data model, however, the design of the relations must be complete and accurate.

Although some DBMS based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with the required efficiency on a large scale. It appears today that technological improvements in providing faster and more reliable hardware may answer the question positively.

The hierarchical data model is based on a tree-like structure made up of nodes and branches. A node is a collection of data attributes describing the entity at that point. The highest node of the hierarchical tree structure is called the root. The nodes at succeeding lower levels are called children. A hierarchical data model always starts with a root node. Every node consists of one or more attributes describing the entity at that node. Dependent nodes can follow at the succeeding levels. The node at the preceding level becomes the parent node of the new dependent nodes. A parent node can have one child node as a dependent, or many children nodes.
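As an illustration of the tree structure just described, here is a minimal sketch in Java (the node and entity names are hypothetical): each node carries the attributes of its entity and holds its dependent child nodes, and every record is reached only through its parent.

```java
import java.util.ArrayList;
import java.util.List;

// A node in a hierarchical data model: attributes describing the entity at
// that point, plus dependent child nodes at the succeeding lower level.
class Node {
    final String name;                        // stands in for the node's attributes
    final List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
    Node addChild(String childName) {         // one parent, possibly many children
        Node child = new Node(childName);
        children.add(child);
        return child;
    }
}

public class HierarchySketch {
    public static void main(String[] args) {
        Node root = new Node("department");      // the root of the tree
        Node employee = root.addChild("employee");
        employee.addChild("skill");              // reachable only through its parent
        // A child is accessed only via its parent, which is why many-to-many
        // relationships are clumsy in this model.
        System.out.println(root.children.get(0).children.get(0).name); // -> skill
    }
}
```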
The major advantage of the hierarchical data model is the existence of proven database management systems that use the hierarchical data model as the basic structure. There is a reduction of data dependency, but any child node is accessible only through its parent node, and the many-to-many relationship can be implemented only in a clumsy way. This often results in redundancy in stored data.

The network data model interconnects the entities of an enterprise into a network. In the network data model a database consists of a number of areas. An area contains records. In turn, a record may consist of fields. A set, which is a grouping of records, may reside in an area or span a number of areas. A set type is based on the owner record type and the member record type. The many-to-many relationship, which occurs quite frequently in real life, can be implemented easily. The network data model is very complex; the application programmer must be familiar with the logical structure of the database.

4. Logical Design and Physical Design

Logical design of databases is mainly concerned with superimposing the constructs of the database management system on the logical data model. There are three main models: hierarchical, relational, and network, as we have mentioned above.

The physical model is a framework of the database to be stored on physical devices. The model must be constructed with every regard given to the performance of the resulting database. One should carry out an analysis of the physical model with average frequencies of occurrence of the groupings of the data elements, with expected space estimates, and with respect to time estimates for retrieving and maintaining the data.

The database designer may find it necessary to have multiple entry points into a database, or to access a particular segment type with more than one key. To provide this type of access, it may be necessary to invert the segment on the keys. The physical designer must have expertise in the functions of the DBMS, an understanding of the characteristics of direct-access devices, and knowledge of the applications.

Many databases have links between one record and another, called pointers. A pointer is a field in one record which indicates where a second record is located on the storage devices.

Records exist on storage devices in a given physical sequence. This sequencing may be employed for some purpose. The most common purpose is that records are needed in a given sequence by certain data-processing operations, and so they are stored in that sequence. Different applications may need records in different sequences.

The most common method of ordering records is to have them in sequence by a key: that key which is most commonly used for addressing them. An index is required to find any record without a lengthy search of the file. If the data records are laid out sequentially by key, the index for that key can be much smaller than if they are nonsequential.

Hashing has been used for addressing random-access storages since they first came into existence in the mid-1950s. But nobody had the temerity to use the word hashing until 1968. Many systems analysts have avoided the use of hashing in the suspicion that it is complicated. In fact, it is simple to use and has two important advantages over indexing. First, it finds most records with only one seek; and second, insertions and deletions can be handled without added complexity.
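A minimal sketch of the hash-addressing idea, in Java (the key format and table size are hypothetical, and collision handling uses simple linear probing): the key is converted directly to a bucket address, so most records are found with a single access and insertions require no index maintenance.

```java
// A minimal hash-addressed "file": the key is hashed straight to a bucket
// address, so a record is normally located on the first probe (one seek).
// Linear probing resolves collisions; table-full handling is omitted for brevity.
public class HashFile {
    private final String[] buckets = new String[101]; // prime table size (hypothetical)

    private int addressOf(String key) {
        // Convert the key to a bucket address; floorMod keeps it non-negative.
        return Math.floorMod(key.hashCode(), buckets.length);
    }

    public void insert(String key, String record) {
        int h = addressOf(key);
        while (buckets[h] != null && !buckets[h].startsWith(key + "|")) {
            h = (h + 1) % buckets.length; // collision: probe the next bucket
        }
        buckets[h] = key + "|" + record;
    }

    public String find(String key) {
        int h = addressOf(key);
        while (buckets[h] != null) {
            if (buckets[h].startsWith(key + "|")) {
                return buckets[h].substring(key.length() + 1); // usually the first probe
            }
            h = (h + 1) % buckets.length;
        }
        return null; // no such record
    }

    public static void main(String[] args) {
        HashFile file = new HashFile();
        file.insert("EMP042", "Smith, J., Accounting");
        System.out.println(file.find("EMP042")); // -> Smith, J., Accounting
    }
}
```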
Indexing, however, can be used with a file which is sequential by prime key, and this is an overriding advantage for some batch-processing applications.

Many database systems use chains to interconnect records also. A chain refers to a group of records scattered within the files and interconnected by a sequence of pointers. The software that is used to retrieve the chained records will make them appear to the application programmer as a contiguous logical file.

The primary disadvantage of chained records is that many read operations are needed in order to follow lengthy chains. Sometimes this does not matter because the records have to be read anyway. In most search operations, however, the chains have to be followed through records which would not otherwise be read. In some file organizations the chains can be contained within blocked physical records so that excessive reads do not occur.

Rings have been used in many file organizations. They are used to eliminate redundancy. When a ring or a chain is entered at a point some distance from its head, it may be desirable to obtain the information at the head quickly without stepping through all the intervening links.

5. Data Description Languages

It is necessary for both the programmers and the data administrator to be able to describe their data precisely; they do so by means of data description languages. A data description language is the means of declaring to the database management system what data structures will be used. A data description language giving a logical data description should perform the following functions:

(1) It should give a unique name to each data-item type, file type, database, and other data subdivision.

(2) It should identify the types of data subdivision, such as data item, segment, record, and database file.

(3) It may define the type of encoding the program uses in the data items (binary, character, bit string, etc.).

(4) It may define the length of the data items and the range of the values that a data item can assume.

(5) It may specify the sequence of records in a file or the sequence of groups of records in the database.

(6) It may specify means of checking for errors in the data.

(7) It may specify privacy locks for preventing unauthorized reading or modification of the data. These may operate at the data-item, segment, record, file, or database level and, if necessary, may be extended to the contents (values) of individual data items. The authorization may, on the other hand, be separately defined. It is more subject to change than the data structures, and changes in authorization procedures should not force changes in application programs.

A logical data description should not specify addressing, indexing, or searching techniques, or specify the placement of data on the storage units, because these topics are in the domain of physical, not logical, organization. It may give an indication of how the data will be used or of searching requirements, so that the physical techniques can be selected optimally, but such indications should not be logically limiting.

Most DBMS have their own languages for defining the schemas that are used. In most cases these data description languages are different from other programming languages, because other programming languages do not have the capability to define the variety of relationships that may exist in the schemas.
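SQL's CREATE TABLE statement is today's most familiar example of such a logical data description: it gives each data item a unique name, defines its type and length, constrains its values, and declares the primary key, while saying nothing about physical placement. A minimal sketch via JDBC, assuming a hypothetical table and an H2 in-memory database driver on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical in-memory H2 database; any JDBC driver would do.
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement st = con.createStatement()) {
            // Each clause mirrors one function of a logical data description:
            // a unique name, a type and length, a value check, and a primary key.
            st.execute("CREATE TABLE employee ("
                    + " emp_no CHAR(6) PRIMARY KEY,"             // unique identifier (key)
                    + " name   VARCHAR(40) NOT NULL,"            // data-item type and length
                    + " salary NUMERIC(9,2) CHECK (salary >= 0)" // range of allowed values
                    + ")");
            System.out.println("schema created");
        }
    }
}
```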
SYSTRAN Machine Translation Software Technical Training
Beijing, 7 July 2006

Part One: An overview of the SYSTRAN 6 system and its functions

Main contents:

SYSTRAN: strong translation capability, widely used. An added value for multiple enterprise applications (inbound/inward communication):

• Self-service translator over the intranet (Office, email, chat, collaboration)
• Market intelligence based on foreign-language content sources
• Cross-lingual research and translation on knowledge bases

Other capabilities and recent additions: terminology extraction; new language pairs (FAEN, HIEN, UREN, CSEN, UKEN, SKEN, PLEN, ARFR, SREN); SYSTRAN Lite on PDA.

MT system complexity: roughly (nbrules, sizedictionary) × nbLPs; high flexibility required; stability; intrinsic complexity of language description.
An Introduction to English Translation Tools

This is a collection of translation tools offering comprehensive information about English translation tools; I hope it is useful to you. There are all kinds of English translation tools on the web nowadays, but which of them are genuinely useful? Based on several years of English translation experience, here is a summary of a few good English translation tools.

1. Google Translate. When it comes to multilingual translation, Google is what people know best; it provides instant translation among 80 languages worldwide. It is the English translation tool that covers the most languages of all online translation tools.

2. 译客传说. 译客传说 is a mobile platform tailor-made for translators. It connects translators in real time and improves their translation efficiency, providing translation logs, terminology recording with access to a cloud term base, translation industry news, translation job postings, one-click résumé sending, translation business, and community networking: a pocket playground for translators.

3. iCAT computer-aided translation software. The iCAT CAT tool provides a cloud terminology management platform, with more than 20 million terms already available for translators to collect and use. The languages it can translate include Simplified Chinese, Traditional Chinese, English, Japanese, Korean, German, French, Russian, Spanish, and others. It can export the finished translation in three formats: translation only, paragraph-by-paragraph bilingual, and side-by-side bilingual.

4. 高校译云. This is the only university translation platform, a dedicated communication channel that brings together university translation resources and market translation demand. If you have translation needs, you can find a suitable university translation team there.

Anyone who translates regularly knows that translation software plays a large role in the work; a good tool can multiply translation speed while also improving translation quality.

1. iol8 火云译客 translation software. Its slogan is: born for translators! The software lives up to its slogan: every functional module follows translators' needs. It integrates terminology management, dictionary lookup, term sharing, online translation, and online discussion and sharing into one tool. Relying on the strong resources of the iol8 language network, it not only offers a term base of 20 million authoritative entries but also helps users annotate and review translations quickly, eliminating mistranslations and omissions, so translation speed improves rapidly.

2. TOLQ: a crowdsourced web-page translation service platform. Tolq is a crowdsourced web-page translation service platform that helps enterprises with language translation. It can quickly translate web pages into as many as 35 languages, so that your website can be understood quickly and without barriers by most of the world.

3. WorldLingo. WorldLingo is a well-known international translation company. Its website provides online translation services that can translate texts, documents, websites, and email, with a word-count limit.
Software Engineering Graduation Thesis: Literature Translation (Chinese-English)
Student Graduation Design (Thesis) Foreign Literature Translation
Student name:        Student ID:
Major: Software Engineering
Translation title (Chinese and English): Qt Creator白皮书 (Qt Creator Whitepaper)
Translation source: Qt network
Advisor's review signature:

Translated text:

Qt Creator Whitepaper

Qt Creator is a complete integrated development environment (IDE) for creating applications with the Qt application framework. Qt is designed for developing applications and user interfaces once and deploying them across several desktop and mobile operating systems. This paper provides an introduction to Qt Creator and the features it offers Qt developers throughout the application development life cycle.

Introduction to Qt Creator

One of the main advantages of Qt Creator is that it allows a team of developers to share a project across different development platforms (Microsoft Windows, Mac OS X, and Linux) with a common tool for development and debugging. The main goal of Qt Creator is to meet the development needs of Qt developers who are looking for simplicity, ease of use, productivity, extensibility, and openness, while aiming to lower the barrier of entry for newcomers to Qt.

The key features of Qt Creator allow developers to accomplish the following tasks:

- Get started with Qt application development quickly and easily, with project wizards and fast access to recent projects and sessions.

- Design Qt widget-based application user interfaces with the integrated user-interface editor, Qt Designer.

- Develop applications with the advanced C++ code editor, which provides powerful features such as code completion, code snippets, refactoring, and viewing the outline of a file (that is, its symbol hierarchy).

- Build, run, and deploy Qt projects that target multiple desktop and mobile platforms, such as Microsoft Windows, Mac OS X, Linux, Nokia's MeeGo, and Maemo.

- Debug with the GNU and CDB debuggers, using a debugging graphical user interface enhanced with awareness of the Qt class structure.

- Use code analysis tools to check for memory-management issues in your applications.

- Deploy applications to MeeGo mobile devices, and create application installation packages for Symbian and Maemo devices that can be published in the Ovi Store and other channels.

- Easily access information with the integrated, context-sensitive Qt help system.
Software Engineering: Foreign Literature Translation
English original:

SSH is an integration framework of Spring + Struts + Hibernate and is one of the more popular open-source frameworks for Web applications.

Spring

Lightweight: Spring is lightweight in terms of both size and overhead. The complete Spring framework can be distributed in a single JAR file of little more than 1 MB, and the processing overhead required by Spring is negligible.

Inversion of control: Spring promotes loose coupling through a technique known as inversion of control (IoC). When IoC is used, the objects that an object depends on are passed in passively, rather than the object creating or finding its dependencies itself. You can think of IoC as JNDI in reverse: instead of the object asking the container for its dependencies, the container hands the object the dependencies it relies on when the object is initialized, without waiting to be asked.

Aspect-oriented programming: Spring provides rich support for aspect-oriented programming, allowing cohesive development by separating the application's business logic from system-level services. Application objects only implement what they are supposed to do: the business logic. They are not responsible for other system-level concerns.

Container: Spring contains and manages the configuration and life cycle of application objects; in this sense it is a container. You can configure how each of your beans is to be created: based on a configurable prototype, your beans can be created as a single instance or as a new instance every time one is needed, and you can configure how they are related to one another. However, Spring should not be confused with traditional heavyweight EJB containers, which are often large and unwieldy and difficult to use.

Struts

Struts provides corresponding components for the Model, the View, and the Controller.

ActionServlet: this is the core Struts controller, responsible for intercepting requests from users.

Action: this class is usually provided by the user. As a controller, it receives requests from ActionServlet and, according to the request, calls the Model's business-logic methods to process the request, returning the results to a JSP page for display.

The Model part

The Model consists of ActionForm and JavaBeans. An ActionForm is used to package the user's request parameters into an ActionForm object; ActionServlet forwards the object to the Action, and the Action processes the user's request according to the request parameters in the ActionForm. JavaBeans encapsulate the underlying business logic, including database access.

The View part

This part is implemented by JSP. Struts provides a rich tag library; using the tag library reduces scripting, and a custom tag library can interact effectively with the Model and adds practical functionality.

The Controller component

The Controller component is composed of two parts: the system core controller and the business-logic controllers. The system core controller corresponds to ActionServlet. This controller is provided by the Struts framework and inherits from the HttpServlet class, so it can be configured as a standard Servlet. The controller is responsible for all HTTP requests and then decides, according to the user request, whether to hand the request over to a business-logic controller. A business-logic controller is responsible for processing the user request; it has no processing capability of its own, but calls the Model to complete the processing.
The business-logic controllers correspond to the Action part.

Hibernate

Hibernate is an open-source object-relational mapping framework. It provides a very lightweight object wrapper around JDBC, so that Java programmers can manipulate the database with an object-oriented programming mindset, working with arbitrary objects. Hibernate can be applied wherever JDBC is used: in Java client programs, in Servlet/JSP Web applications, and, most revolutionary of all, in a J2EE architecture based on EJB, replacing CMP to accomplish data persistence.

Hibernate has five core interfaces: Session, SessionFactory, Query, Transaction, and Configuration. These five core interfaces are used in any development. Through these interfaces one can not only access persistent objects but also carry out transaction control.
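To illustrate how the five core interfaces cooperate, here is a minimal sketch in the classic Hibernate API style (the User class, its mapping file, and hibernate.cfg.xml are hypothetical and assumed to exist):

```java
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class HibernateSketch {
    public static void main(String[] args) {
        // Configuration reads hibernate.cfg.xml and the O/R mapping metadata,
        // and is used once at startup to build the SessionFactory.
        SessionFactory factory = new Configuration().configure().buildSessionFactory();

        Session session = factory.openSession(); // one unit of work with the database
        Transaction tx = null;
        try {
            tx = session.beginTransaction();  // transaction control via Transaction

            User user = new User();           // User is a hypothetical mapped class
            user.setName("alice");
            session.save(user);               // persist an object, no hand-written SQL

            // Query retrieves persistent objects through HQL.
            Query query = session.createQuery("from User u where u.name = :name");
            query.setParameter("name", "alice");
            User found = (User) query.uniqueResult();
            System.out.println(found.getName());

            tx.commit();
        } catch (RuntimeException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
            factory.close();
        }
    }
}

// Hypothetical persistent class; its mapping (e.g., User.hbm.xml) is assumed.
class User {
    private Long id;
    private String name;
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

Note the division of labor: the SessionFactory is expensive and built once, while Sessions are cheap and opened per unit of work.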
Foreign Literature Translation: Software Testing Strategies
Appendix: English literature

SOFTWARE TESTING STRATEGIES

A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software. Just as important, a software testing strategy provides a road map for the software developer, the quality assurance organization, and the customer: a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time, and resources will be required. Therefore, any testing strategy must incorporate test planning, test case design, test execution, and resultant data collection.

1 INTEGRATION TESTING

A neophyte in the software world might ask a seemingly legitimate question once all modules have been unit tested: "If they all work individually, why do you doubt that they'll work when we put them together?" The problem, of course, is "putting them together": interfacing. Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; subfunctions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; global data structures can present problems. Sadly, the list goes on and on.

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build a program structure that has been dictated by design.

There is often a tendency to attempt non-incremental integration; that is, to construct the program using a "big bang" approach. All modules are combined in advance, and the entire program is tested as a whole. And chaos usually results! A set of errors is encountered. Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.

Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small segments, where errors are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied. In the sections that follow, a number of different incremental integration strategies are discussed.

1.1 Top-Down Integration

Top-down integration is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or a breadth-first manner.

Depth-first integration would integrate all modules on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, modules M1, M2, and M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then the central and right-hand control paths are built. Breadth-first integration incorporates all modules directly subordinate at each level, moving across the structure horizontally. From the figure, modules M2, M3, and M4 would be integrated first.
The next control level, M5, M6, and so on, follows.

The integration process is performed in a series of steps:

(1) The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.

(2) Depending on the integration approach selected (i.e., depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.

(3) Tests are conducted as each module is integrated.

(4) On completion of each set of tests, another stub is replaced with the real module.

(5) Regression testing may be conducted to ensure that new errors have not been introduced.

The process continues from step 2 until the program structure is built.

The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated. For example, consider a classic transaction structure in which a complex series of interactive inputs are requested, acquired, and validated via an incoming path. The incoming path may be integrated in a top-down manner. All input processing (for subsequent transaction dispatching) may be demonstrated before other elements of the structure have been integrated. Early demonstration of functional capability is a confidence builder for both the developer and the customer.

The top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore no significant data can flow upward in the program structure. The tester is left with three choices: (1) delay many tests until stubs are replaced with actual modules, (2) develop stubs that perform limited functions that simulate the actual module, or (3) integrate the software from the bottom of the hierarchy upward.

The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between specific tests and incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable, but can lead to significant overhead, as stubs become more and more complex. The third approach is called bottom-up testing.

1.2 Bottom-Up Integration

Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., modules at the lowest level in the program structure). Because modules are integrated from the bottom up, processing required for modules subordinate to a given level is always available and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:

(1) Low-level modules are combined into clusters (sometimes called builds) that perform a specific software subfunction.

(2) A driver (a control program for testing) is written to coordinate test case input and output.

(3) The cluster is tested.

(4) Drivers are removed and clusters are combined moving upward in the program structure.

Modules are combined to form clusters 1, 2, and 3.
Each of the clusters is tested using a driver (shown as a dashed block). Modules in clusters 1 and 2 are subordinate to M1. Drivers D1 and D2 are removed, and the clusters are interfaced directly to M1. Similarly, driver D3 for cluster 3 is removed prior to integration with module M2. Both M1 and M2 will ultimately be integrated with M3, and so forth.

As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of the program structure are integrated top-down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.

1.3 Regression Testing

Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In this context, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.

In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed. Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.

Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture/playback tools. Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison. The regression test suite (the subset of tests to be executed) contains three different classes of test cases:

(1) A representative sample of tests that will exercise all software functions.

(2) Additional tests that focus on software functions that are likely to be affected by the change.

(3) Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite large. Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function
As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics: (1) it addresses several software requirements; (2) it has a high level of control (resides relatively high in the program structure); (3) it is complex or error-prone (cyclomatic complexity may be used as an indicator); or (4) it has definite performance requirements. Critical modules should be tested as early as possible. In addition, regression tests should focus on critical module functions.

2 SYSTEM TESTING
2.1 Recovery Testing
Many computer-based systems must recover from faults and resume processing within a prespecified time. In some cases, a system must be fault tolerant; that is, a processing fault must not cause overall system function to cease. In other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur.

Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), re-initialization, checkpointing mechanisms, data recovery, and restart are each evaluated for correctness. If recovery requires human intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits.

2.2 Security Testing
Any computer-based system that manages sensitive information or causes actions that can improperly harm (or benefit) individuals is a target for improper or illegal penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport; disgruntled employees who attempt to penetrate for revenge; and dishonest individuals who attempt to penetrate for illicit personal gain.

Security testing attempts to verify that protection mechanisms built into a system will in fact protect it from improper penetration. To quote Beizer: "The system's security must, of course, be tested for invulnerability from frontal attack—but must also be tested for invulnerability from flank or rear attack."

During security testing, the tester plays the role of the individual who desires to penetrate the system. Anything goes! The tester may attempt to acquire passwords through external clerical means; may attack the system with custom software designed to break down any defenses that have been constructed; may overwhelm the system, or purposely cause system errors, hoping to penetrate during recovery; may browse through insecure data, hoping to find the key to system entry; and so on.

Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make the penetration cost greater than the value of the information that will be obtained.

2.3 Stress Testing
During earlier software testing steps, white-box techniques resulted in a thorough evaluation of normal program functions and performance. Stress tests are designed to confront programs with abnormal situations. In essence, the tester who performs stress testing asks: "How high can we crank this up before it fails?"

Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example, (1) special tests may be designed that generate 10 interrupts per second, when one or two is the average rate; (2) input data rates may be increased by an order of magnitude to determine how input functions will respond; (3) test cases that require maximum memory or other resources may be executed; (4) test cases that may cause thrashing in a virtual operating system may be designed; or (5) test cases that may cause excessive hunting for disk-resident data may be created. Essentially, the tester attempts to break the program.
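A minimal sketch of the arrival-rate idea in item (2) above: drive a handler at several times its normal event rate and count the ticks it cannot keep up with. The handler, the rates, and the 1 ms per-event cost are invented for the example.

```python
import time

def handle(event):
    time.sleep(0.001)            # stand-in for real per-event work (~1 ms)

def stress(rate_hz, duration_s):
    """Fire events at rate_hz for duration_s; count missed deadlines."""
    interval = 1.0 / rate_hz
    missed = 0
    t_end = time.time() + duration_s
    while time.time() < t_end:
        start = time.time()
        handle({"ts": start})
        elapsed = time.time() - start
        if elapsed > interval:
            missed += 1          # handler could not keep up this tick
        else:
            time.sleep(interval - elapsed)
    return missed

# normal load might be ~2 events/s; stress at an order of magnitude more
print("missed deadlines at 20 Hz:", stress(20, 2.0))
```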
A variation of stress testing is a technique called sensitivity testing. In some situations (the most common occur in mathematical algorithms), a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or profound performance degradation. This situation is analogous to a singularity in a mathematical function. Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing.

2.4 Performance Testing
For real-time and embedded systems, software that provides the required function but does not conform to performance requirements is unacceptable. Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.

Software Testing Strategies (translated summary)
A software testing strategy integrates the design methods for software test cases into a carefully planned series of steps, so that software development can be completed successfully.
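To make the sensitivity-testing idea above concrete, here is a small sketch that probes a numerically fragile function across its valid input range. The function and thresholds are chosen purely for illustration: mathematically f(x) tends to 0.5 as x approaches 0, but the naive floating-point evaluation collapses for tiny x, exactly the kind of narrow "singular" region that sensitivity testing tries to expose.

```python
import math

def f(x):
    # mathematically well-behaved: f(x) -> 0.5 as x -> 0
    return (1.0 - math.cos(x)) / (x * x)

def sensitivity_probe(lo, hi, steps=12):
    """Scan a valid input range on a log scale and flag unstable outputs."""
    for i in range(steps):
        x = lo * (hi / lo) ** (i / (steps - 1))
        y = f(x)
        status = "UNSTABLE" if abs(y - 0.5) > 1e-3 else "ok"
        print(f"x={x:.1e}  f(x)={y:.6f}  {status}")

sensitivity_probe(1e-9, 1e-1)   # tiny inputs expose catastrophic cancellation
```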
Research on the Application of Computer-Aided Translation in Sci-Tech English Translation

With the continual progress of science and technology, computer-aided translation has become the mainstream tool for sci-tech English translation.
It has brought enormous gains in efficiency and quality to translation work. This paper explores its application in sci-tech English translation from three aspects: the definition of computer-aided translation, its functions, and examples of its use.

I. Definition of computer-aided translation
Computer-aided translation (Computer Assisted Translation, CAT) is the process of rapidly identifying and translating text with the help of CAT software. In this process, translators can use the computer to assist with translating, looking up words, checking, editing, and managing terminology. Computer-aided translation sits midway between human translation and machine translation: it retains the thinking patterns of human translation while drawing on the translation resources that machine translation systems provide, such as dictionaries, sentence banks, and terminology bases.

II. Functions of computer-aided translation
The main functions of CAT software include translation memory, terminology management, and machine translation.

1. Translation memory. A translation memory is a database of the terms and phrases that individuals and enterprises use in the course of translation. It stores pairs of source sentences and their translations; each time the user works in the translation tool, the translation memory supports matching and automatic reuse that speed up translation and improve its quality. The chief function of translation memory is to improve translation consistency, avoid obscure terminology and duplicated translation, and raise working efficiency.
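A minimal sketch of the reuse idea behind translation memory, using fuzzy string matching from the Python standard library. The two stored sentence pairs and the 0.75 similarity threshold are invented for the example; real CAT tools use far more sophisticated segment matching.

```python
import difflib

# toy translation memory: source sentences paired with stored translations
TM = {
    "Click the Start button to begin installation.": "单击开始按钮以开始安装。",
    "The system will restart automatically.": "系统将自动重新启动。",
}

def tm_lookup(sentence, threshold=0.75):
    """Return (stored translation, similarity) for the closest match, if any."""
    hits = difflib.get_close_matches(sentence, TM.keys(), n=1, cutoff=threshold)
    if not hits:
        return None, 0.0
    ratio = difflib.SequenceMatcher(None, sentence, hits[0]).ratio()
    return TM[hits[0]], ratio

# a near-duplicate of a stored segment is recognized and its translation reused
print(tm_lookup("Click the Start button to begin the installation."))
```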
2. Terminology management. Terminology management is a system for managing the terms specific to each industry. A terminology management system can automatically search the relevant terms in an enterprise's term base, helping translators find the right vocabulary quickly and improving the accuracy, consistency, and professionalism of the translation. Its advantages include greater translation accuracy, a higher rate of standardization, simpler translation work in multilingual settings, and stronger translation quality control.

3. Machine translation. Machine translation (Machine Translation) is the process of using computer software to automatically convert text in one language into another language. It simulates the human translation process with computer programs so that the computer can automatically translate a source text into the target language. Although the accuracy of machine translation still falls short of human translation, it plays an important role in sci-tech English translation: it can quickly produce a first-draft translation that human translators then proofread, reducing translation costs and speeding up the work.
EDA Technology and Software: Chinese-English Foreign Literature Translation

Chinese-English Translation: EDA Technology and Software
EDA is the abbreviation of Electronic Design Automation, a concept that developed in the early 1990s from computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided testing (CAT), and computer-aided engineering (CAE). EDA technology takes the computer as its tool: the designer completes the design files in a hardware description language (HDL) on an EDA software platform, and the computer then automatically carries out logic compilation, simplification, partitioning, synthesis, optimization, placement, routing, and simulation, through to fitting compilation for a specific target chip, logic mapping, and programming download.

1 The concept of EDA technology
EDA technology is a computer software system developed on the basis of electronic CAD technology. With the computer as its working platform, it fuses the latest achievements of applied electronic technology, computer technology, information processing, and intelligent technology to carry out the automatic design of electronic products. Using EDA tools, an electronic designer can design an electronic system starting from concepts, algorithms, and protocols; a great deal of the work can be done by the computer, and the entire process from circuit design and performance analysis to producing the IC layout or PCB layout can be completed automatically on the computer.

The concept, or scope, of EDA is now used very broadly. EDA applications are found in machinery, electronics, communications, aerospace, chemical engineering, mining, biology, medicine, the military, and many other fields. At present, EDA technology is widely used in large companies, enterprises, public institutions, and research and teaching departments. For example, in aircraft manufacturing, EDA technology may be involved at every stage from design through performance testing and characteristic analysis to flight simulation. The EDA technology discussed here mainly concerns electronic circuit design, PCB design, and IC design. EDA design can be divided into the system level, the circuit level, and the physical implementation level.

2 Common EDA software
EDA tools emerge in an endless stream. EDA software that has entered China and gained wide influence includes multiSIM7 (the latest version of the former EWB), PSPICE, OrCAD, PCAD, Protel, Viewlogic, Mentor Graphics, Synopsys, LSI Logic, Cadence, MicroSim, and so on. These tools are all quite powerful, and each can generally serve several purposes; for example, many packages support circuit design and simulation, can also perform automatic PCB placement and routing, and can output various netlist files for interfacing with third-party software.
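As a rough illustration of what "outputting a netlist" means, the sketch below represents a small circuit as a component list and emits it as SPICE-style netlist text. The RC circuit and the formatting are invented for the example; real EDA tools emit many dialects (SPICE, EDIF, vendor formats) with far more detail.

```python
# A circuit as (name, node1, node2, value) tuples: an RC low-pass filter.
components = [
    ("V1", "in", "0", "AC 1"),
    ("R1", "in", "out", "1k"),
    ("C1", "out", "0", "100n"),
]

def write_netlist(title, comps):
    """Emit a SPICE-style netlist: one component per line, .END terminator."""
    lines = [f"* {title}"]
    lines += [f"{name} {n1} {n2} {value}" for name, n1, n2, value in comps]
    lines.append(".END")
    return "\n".join(lines)

print(write_netlist("RC low-pass filter", components))
```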
Foreign Literature Translation: Software and Software Engineering
Software and Software Engineering: The Emergence of Software

As the decade of the 1980s began, a front-page story in Business Week magazine trumpeted the following headline: "Software: The New Driving Force." Software had come of age: it had become a topic for management concern. During the mid-1980s, a cover story in Fortune lamented "A Growing Gap in Software," and at the close of the decade, Business Week warned managers about "the Software Trap: Automate or Else." As the 1990s dawned, a feature story in Newsweek asked "Can We Trust Our Software?" and The Wall Street Journal related a major software company's travails with a front-page article entitled "Creating New Software Was an Agonizing Task..." These headlines, and many others like them, were a harbinger of a new understanding of the importance of computer software: the opportunities that it offers and the dangers that it poses.

Software has now surpassed hardware as the key to the success of many computer-based systems. Whether a computer is used to run a business, control a product, or enable a system, software is the factor that differentiates. The completeness and timeliness of information provided by software (and related databases) differentiate one company from its competitors. The design and "human friendliness" of a software product differentiate it from competing products with an otherwise similar function. The intelligence and function provided by embedded software often differentiate two similar industrial or consumer products. It is software that can make the difference.

During the first three decades of the computing era, the primary challenge was to develop computer hardware that reduced the cost of processing and storing data. Throughout the decade of the 1980s, advances in microelectronics resulted in more computing power at increasingly lower cost. Today, the problem is different. The primary challenge during the 1990s is to improve the quality (and reduce the cost) of computer-based solutions: solutions that are implemented with software.

The power of a 1980s-era mainframe computer is available now on a desktop. The awesome processing and storage capabilities of modern hardware represent computing potential. Software is the mechanism that enables us to harness and tap this potential.

The context in which software has been developed is closely coupled to almost five decades of computer system evolution. Better hardware performance, smaller size, and lower cost have precipitated more sophisticated computer-based systems. We have moved from vacuum tube processors to microelectronic devices that are capable of processing 200 million connections per second. In popular books on "the computer revolution," Osborne characterized a "new industrial revolution," Toffler called the advent of microelectronics part of "the third wave of change" in human history, and Naisbitt predicted that the transformation from an industrial society to an "information society" will have a profound impact on our lives. Feigenbaum and McCorduck suggested that information and knowledge will be the focal point for power in the twenty-first century, and Stoll argued that the "electronic community" created by networks and software is the key to knowledge interchange throughout the world.
As the 1990s began, Toffler described a "power shift" in which old power structures (governmental, educational, industrial, economic, and military) will disintegrate as computers and software lead to a "democratization of knowledge."

Figure 1-1 depicts the evolution of software within the context of computer-based system application areas. During the early years of computer system development, hardware underwent continual change while software was viewed by many as an afterthought. Computer programming was a "seat-of-the-pants" art for which few systematic methods existed. Software development was virtually unmanaged until schedules slipped or costs began to escalate. During this period, a batch orientation was used for most systems. Notable exceptions were interactive systems such as the early American Airlines reservation system and real-time defense-oriented systems such as SAGE. For the most part, however, hardware was dedicated to the execution of a single program that in turn was dedicated to a specific application.

(Figure 1-1: Evolution of software)

During the early years, general-purpose hardware became commonplace. Software, on the other hand, was custom-designed for each application and had a relatively limited distribution. Product software (i.e., programs developed to be sold to one or more customers) was in its infancy. Most software was developed and ultimately used by the same person or organization. You wrote it, you got it running, and if it failed, you fixed it. Because job mobility was low, managers could rest assured that you'd be there when bugs were encountered.

Because of this personalized software environment, design was an implicit process performed in one's head, and documentation was often nonexistent. During the early years we learned much about the implementation of computer-based systems, but relatively little about computer system engineering. In fairness, however, we must acknowledge the many outstanding computer-based systems that were developed during this era. Some of these remain in use today and provide landmark achievements that continue to justify admiration.

The second era of computer system evolution (Figure 1-1) spanned the decade from the mid-1960s to the late 1970s. Multiprogramming and multiuser systems introduced new concepts of human-machine interaction. Interactive techniques opened a new world of applications and new levels of hardware and software sophistication. Real-time systems could collect, analyze, and transform data from multiple sources, thereby controlling processes and producing output in milliseconds rather than minutes. Advances in online storage led to the first generation of database management systems.

The second era was also characterized by the use of product software and the advent of "software houses." Software was developed for widespread distribution in a multidisciplinary market. Programs for mainframes and minicomputers were distributed to hundreds and sometimes thousands of users. Entrepreneurs from industry, government, and academia broke away to "develop the ultimate software package" and earn a bundle of money.

As the number of computer-based systems grew, libraries of computer software began to expand. In-house development projects produced tens of thousands of program source statements. Software products purchased from the outside added hundreds of thousands of new statements. A dark cloud appeared on the horizon.
All of these programs, all of these source statements, had to be corrected when faults were detected, modified as user requirements changed, or adapted to new hardware that was purchased. These activities were collectively called software maintenance. Effort spent on software maintenance began to absorb resources at an alarming rate. Worse yet, the personalized nature of many programs made them virtually unmaintainable. A "software crisis" loomed on the horizon.

The third era of computer system evolution began in the mid-1970s and continues today. The distributed system (multiple computers, each performing functions concurrently and communicating with one another) greatly increased the complexity of computer-based systems. Global and local area networks, high-bandwidth digital communications, and increasing demands for "instantaneous" data access put heavy demands on software developers.

The third era has also been characterized by the advent and widespread use of microprocessors, personal computers, and powerful desktop workstations. The microprocessor has spawned a wide array of intelligent products, from automobiles to microwave ovens, from industrial robots to blood serum diagnostic equipment. In many cases, software technology is being integrated into products by technical staff who understand hardware but are often novices in software development.

The personal computer has been the catalyst for the growth of many software companies. While the software companies of the second era sold hundreds or thousands of copies of their programs, the software companies of the third era sell tens and even hundreds of thousands of copies. Personal computer hardware is rapidly becoming a commodity, while software provides the differentiating characteristic. In fact, as the rate of personal computer sales growth flattened during the mid-1980s, software-product sales continued to grow. Many people in industry and at home spent more money on software than they did to purchase the computer on which the software would run.

The fourth era in computer software is just beginning. Object-oriented technologies (Chapters 8 and 12) are rapidly displacing more conventional software development approaches in many application areas. Authors such as Feigenbaum and McCorduck [FEI83] and Allman [ALL89] predict that "fifth-generation" computers, radically different computing architectures, and their related software will have a profound impact on the balance of political and industrial power throughout the world. Already, "fourth-generation" techniques for software development (discussed later in this chapter) are changing the manner in which some segments of the software community build computer programs. Expert systems and artificial intelligence software have finally moved from the laboratory into practical application for wide-ranging problems in the real world.
Artificial neural network software has opened exciting possibilities for pattern recognition and human-like information processing abilities.

As we move into the fourth era, the problems associated with computer software continue to intensify:
Hardware sophistication has outpaced our ability to build software that taps hardware's potential.
Our ability to build new programs cannot keep pace with the demand for new programs.
Our ability to maintain existing programs is threatened by poor design and inadequate resources.
In response to these problems, software engineering practices (the topic to which this book is dedicated) are being adopted throughout the industry.

An Industry Perspective
In the early days of computing, computer-based systems were developed using hardware-oriented management. Project managers focused on hardware because it was the single largest budget item for system development. To control hardware costs, managers instituted formal controls and technical standards. They demanded thorough analysis and design before something was built. They measured the process to determine where improvements could be made. Stated simply, they applied the controls, methods, and tools that we recognize as hardware engineering. Sadly, software was often little more than an afterthought.

In the early days, programming was viewed as an "art form." Few formal methods existed and fewer people used them. The programmer often learned his or her craft by trial and error. The jargon and challenges of building computer software created a mystique that few managers cared to penetrate. The software world was virtually undisciplined, and many practitioners of the day loved it!

Today, the distribution of costs for the development of computer-based systems has changed dramatically. Software, rather than hardware, is often the largest single cost item. For the past decade, managers and many technical practitioners have asked the following questions:
Why does it take so long to get programs finished?
Why are costs so high?
Why can't we find all errors before we give the software to our customers?
Why do we have difficulty in measuring progress as software is being developed?
These, and many other questions, are a manifestation of the concern about software and the manner in which it is developed, a concern that has led to the adoption of software engineering practices.
Foreign Literature Translation for Software Engineering Graduation Projects

This document translates foreign literature for software engineering graduation projects and may serve as a reference for students in the field.
Foreign Literature 1: Software Engineering Practices in Industry: A Case Study

Abstract
This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The software engineering practices were identified through a survey questionnaire and a series of interviews with the company's software development managers, software engineers, and testers. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company follows a set of software engineering practices that ensure the quality, reliability, and maintainability of its software products. The findings of this study provide valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Introduction
Software engineering is the discipline of designing, developing, testing, and maintaining software products. A number of software engineering practices are used in industry to ensure that software products are high quality, reliable, and maintainable. These practices include software development processes, software configuration management, software testing, requirements engineering, and project management. Software engineering practices have evolved over the years as a result of the growth of the software industry and the increasing demand for high-quality software products. The software industry has developed a number of software development models, such as the Capability Maturity Model Integration (CMMI), which provides a framework for software development organizations to improve their software development processes and practices.

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The objective of the study was to identify the software engineering practices used by the company and to investigate how these practices contribute to the production of quality software.

Research Methodology
The case study was conducted with a large US software development company that produces software for aerospace and medical applications. The study was conducted over a period of six months, during which a survey questionnaire was administered to the company's software development managers, software engineers, and testers. In addition, a series of interviews was conducted with the same groups to gain a deeper understanding of the software engineering practices used by the company. The survey questionnaire and the interview questions were designed to investigate the company's practices in software development processes, software configuration management, software testing, requirements engineering, and project management.

Findings
The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI).
The company's software development process model defines five levels of maturity, starting with an ad hoc process (Level 1) and progressing to a fully defined and optimized process (Level 5). The company has achieved Level 3 maturity in its software development process. The company follows a set of software engineering practices that ensure the quality, reliability, and maintainability of its software products. The software engineering practices used by the company include:

Software Configuration Management (SCM): The company uses SCM tools to manage software code, documentation, and other artifacts. The company follows a branching and merging strategy to manage changes to the software code.

Software Testing: The company has adopted a formal testing approach that includes unit testing, integration testing, system testing, and acceptance testing. The testing process is automated where possible, and the company uses a range of testing tools.

Requirements Engineering: The company has a well-defined requirements engineering process, which includes requirements capture, analysis, specification, and validation. The company uses a range of tools, including use case modeling, to capture and analyze requirements.

Project Management: The company has a well-defined project management process that includes project planning, scheduling, monitoring, and control. The company uses a range of tools to support project management, including project management software, which is used to track project progress.

Conclusion
This paper has reported a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company uses a set of software engineering practices that ensure the quality, reliability, and maintainability of its software products. The findings of this study provide valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Foreign Literature 2: Agile Software Development: Principles, Patterns, and Practices

Abstract
Agile software development is a set of values, principles, and practices for developing software. The Agile Manifesto represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. This paper presents an overview of agile software development, including its principles, patterns, and practices. The paper also discusses the benefits and challenges of agile software development.

Introduction
Agile software development is a set of values, principles, and practices for developing software. It is based on the Agile Manifesto, which represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change.
Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases.

Agile Software Development Principles
Agile software development is based on a set of principles:
1. Customer satisfaction through early and continuous delivery of useful software.
2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
3. Deliver working software frequently, with a preference for the shorter timescale.
4. Collaboration between the business stakeholders and developers throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity, the art of maximizing the amount of work not done, is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.

Agile Software Development Patterns
Agile software development patterns are reusable solutions to common software development problems. Typical patterns include:
The Single Responsibility Principle (SRP)
The Open/Closed Principle (OCP)
The Liskov Substitution Principle (LSP)
The Dependency Inversion Principle (DIP)
The Interface Segregation Principle (ISP)
The Model-View-Controller (MVC) Pattern
The Observer Pattern
The Strategy Pattern
The Factory Method Pattern

Agile Software Development Practices
Agile software development practices are a set of activities and techniques used in agile software development. Typical practices include:
Iterative Development
Test-Driven Development (TDD) (a short sketch follows this section)
Continuous Integration
Refactoring
Pair Programming

Agile Software Development Benefits and Challenges
Agile software development has many benefits, including increased customer satisfaction, increased quality, increased productivity, increased flexibility, increased visibility, and reduced risk. It also has some challenges: it requires discipline and training, an experienced team, good communication, and a supportive management culture.

Conclusion
Agile software development is a set of values, principles, and practices for developing software, based on the Agile Manifesto, which represents the values and principles of the agile approach. Agile practices include iterative development, test-driven development, continuous integration, and frequent releases. Agile development has many benefits, including increased customer satisfaction, quality, productivity, flexibility, and visibility, and reduced risk. It also has some challenges: the need for discipline and training, an experienced team, good communication, and a supportive management culture.
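As referenced in the practices list above, here is a minimal test-driven development sketch: the tests are written first, and the simplest passing implementation follows. The discount rule, names, and threshold are invented for the example.

```python
import unittest

# Step 1 (red): write the tests before the production code exists.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_off_orders_over_100(self):
        self.assertEqual(apply_discount(200.0), 180.0)

    def test_no_discount_at_or_below_threshold(self):
        self.assertEqual(apply_discount(100.0), 100.0)

# Step 2 (green): the simplest code that makes the tests pass.
def apply_discount(amount):
    return amount * 0.9 if amount > 100.0 else amount

# Step 3 (refactor): clean up while keeping the tests green.
if __name__ == "__main__":
    unittest.main()
```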
The 5 Best Free Translation Software Downloads (Yidu Software)

1. Babylon (Chinese edition): billed as the world's most professional translation software
Babylon is well-known free translation software that supports Chinese, Spanish, Japanese, Korean, German, French, Italian, Portuguese, Dutch, Hebrew, and other languages, helping you quickly translate web pages, documents, and other content. Babylon also supports Chinese input and helps with learning English grammar and inflection, for example when you are not sure whether to use "GO" or

3. Google-Kingsoft Ciba (谷歌金山词霸)
Google-Kingsoft Ciba is translation software jointly launched by the well-known Chinese software developer Kingsoft and Google, the world's largest search engine company. It is powerful, compact, and completely free. Its extensive online dictionaries cover new and trendy words: the iCIBA encyclopedia dictionary, built by millions of iCIBA users, and the vast Google online dictionary keep the content up to date.
Foreign Literature Translation: Software Engineering

Software Engineering
From: /zh-cn/%E8%BD%AF%E4%BB%B6%E5%B7%A5%E7%A8%8B

Software engineering is the discipline that studies the use of engineering methods to build and maintain effective, practical, and high-quality software. It involves programming languages, databases, software development tools, system platforms, standards, design patterns, and so on.

In modern society, software is used in many ways. Typical software includes email, embedded systems, human-machine interfaces, office suites, operating systems, compilers, databases, and games. Meanwhile, almost all sectors, such as industry, agriculture, banking, aviation, and government departments, have computer software applications. These applications promote economic and social development, make people's work more efficient, and improve the quality of life. Software engineers are the people who create software applications; by specialty, they can be divided into system analysts, software designers, system architects, programmers, testers, and so on. "Programmer" is also often used as a collective term for the various kinds of software engineers.

Origin
In view of the difficulties encountered in software development, the North Atlantic Treaty Organization (NATO) organized the first conference on software engineering in 1968. At the conference the term "software engineering" was put forward to define the knowledge required for software development, and it was suggested that software development activities should be managed like engineering projects. Since software engineering was formally proposed in 1968, a large body of research results has accumulated and a great deal of technical practice has been carried out; through the joint efforts of academia and industry, software engineering has gradually developed into a professional discipline.

Definitions
The creation and use of sound engineering principles in order to obtain reliable and economically efficient software.
The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.
The theories, methods, and tools related to developing, managing, and updating software products.
A knowledge area or discipline aimed at producing software of good quality, delivered on time, within budget, and meeting users' needs.
The practical application of scientific knowledge to the design and construction of computer programs and the accompanying documentation, and to their subsequent operation and maintenance.
The systematic use of the technology and management expertise related to producing and maintaining software products, so that software can be developed and changed within limited time and cost.
The discipline concerned with the knowledge that teams of engineers use to develop large software systems.
A systematic approach to the analysis, design, implementation, and maintenance of software.
The systematic application of tools and techniques in the development of computer-based applications.

Software Engineering and Computer Science
Whether software development is, in the end, a science or an engineering discipline is a question that has long been debated. In fact, software development has characteristics of both, but this does not mean the two can be confused with each other. Many people think that software engineering is based on computer science and information science in the same way that traditional engineering disciplines are based on physics and chemistry. In the U.S., about 40% of software engineers have a degree in computer science.
Elsewhere in the world, the ratio is similar. Such engineers will not necessarily use computer science knowledge every day, but they do use software engineering knowledge every day.

For example, Peter McBreen holds that software "engineering" implies a higher degree of rigor and proven processes, and is not suitable for every type or stage of software development. In his book "Software Craftsmanship: The New Imperative," McBreen puts forward the "craftsmanship" argument: a key factor in the success of software development is the developers' skill, not the process of "manufacturing" software.

Software engineering and computer programming
Software engineering is present in every kind of application and in all aspects of software development. Programming typically includes the iterative process of program design and coding; it is one stage of software development.

Software engineering seeks to provide guidance for all aspects of a software project, from the feasibility analysis through to the maintenance work after the software is completed. Software engineering holds that software development is closely related to marketing activities, such as software sales, user training, and the installation of associated hardware and software. Software engineering methodology holds that programmers should not develop independently of the team, and that programming cannot be divorced from software requirements, design, and customer interests. Software engineering embodies an industrial approach to the design and development of computer programs.

Software crisis
Software engineering is rooted in the software crisis that arose in the 1960s, 1970s, and 1980s. At that time, many software projects came to a tragic end. Many significantly overran their planned schedules. Some projects caused the loss of property, and some software even led to casualties. Meanwhile, software developers found software development increasingly difficult.

The OS/360 operating system is considered a typical case. It is still used on IBM 360-series mainframes. This decades-long, extremely complex software project even produced a working system that was not included in the original design. OS/360 was the first large software project, employing about 1,000 programmers. Fred Brooks, in his subsequent masterpiece "The Mythical Man-Month," admitted that in managing the project he made a multimillion-dollar mistake.

Property losses: software errors may result in significant property damage. The explosion of the European Ariane rocket is one of the most painful lessons.

Casualties: computer software is widely used, including in hospitals and other settings closely tied to human life, so software errors may also result in injury or death. A case widely cited in software engineering is the Therac-25 accidents: between June 1985 and January 1987, six known incidents of radiation overdose from the Therac-25 led to deaths or severe radiation burns. In industry, some embedded-system failures have caused machines to malfunction, putting people in danger.

Methodology
Software engineering methodology covers many aspects, including project management, analysis, design, programming, testing, and quality control. Software design methods can be divided into heavyweight and lightweight methods. Heavyweight methods produce large amounts of official documentation.
Heavyweight development methodologies include the famous ISO 9000, CMM, and the Unified Process (RUP).

Lightweight development processes do not require producing large amounts of official documentation. Lightweight methods include the well-known Extreme Programming (XP) and agile processes (Agile Processes).

According to the article "The New Methodology," heavyweight methods present a "defensive" posture. In software organizations that apply heavyweight methods, because the software project manager participates little or not at all in program design, he or she cannot grasp the project's progress from its details and so develops a "fear" of losing control, constantly requiring the programmers to write large amounts of software development documentation. Lightweight methods, by contrast, present an "aggressive" attitude, reflected especially in the four values the XP method emphasizes: communication, simplicity, feedback, and courage. Some people think that heavyweight methods suit large software teams (dozens of people or more), while lightweight methods suit small teams (a few to a dozen people). Of course, there is much debate about the merits and drawbacks of heavyweight and lightweight methods, and the various methods keep evolving.

Some methodologists think that these methods should be strictly followed in development and implementation. But not everyone has the conditions to implement them. In fact, which method is used for software development depends on many factors and is subject to environmental constraints.

Software development process
The software development process has evolved and improved along with technology. From the early waterfall (Waterfall) model, through the later spiral iterative (Spiral) development, to the recently rising agile development methodologies (Agile), each reflects a different era's awareness and understanding of the development process and a different approach for different types of projects.

Note the important distinction between the software development process and software process improvement. Terms such as ISO 15504, ISO 9000, CMM, and CMMI are elaborated within the framework of software process improvement; they provide a series of standards and policies to guide software organizations in improving the quality of their software development processes and the capability of the organizations themselves, and they do not give a specific definition of the development process.

Development of software engineering
"Agile development" (Agile Development) is considered an important development in software engineering. It stresses that software development should be able to respond comprehensively to possible future changes and uncertainties.

Agile development is considered a "lightweight" approach. Among lightweight approaches, the most prestigious should be "Extreme Programming" (Extreme Programming, XP).

Corresponding to lightweight methods are heavyweight methods, which emphasize the development process as the center rather than being people-centered. Examples of heavyweight methods include CMM/PSP/TSP.

Aspect-oriented programming (Aspect Oriented Programming, AOP) is considered another important recent development in software engineering. An aspect refers to the collection of objects and functions that together complete a particular function.
Related to aspects are the topics of generic programming (Generic Programming) and templates.
Software Engineering: Foreign Literature Translation

The Strategic Role of Management Information Systems

After studying this chapter, you will be able to:
1. Analyze the six major information systems in organizations.
2. Describe the relationships among the various types of information systems.
3. Understand the characteristics of a strategic information system.
4. Describe how information systems can be used for business strategy at three levels.
5. Explain the problems of establishing and maintaining strategic information systems.
Orchids Paper Company: Back on the Right Track
For fifty years, Orchids Paper Co., Ltd. has been a low-cost paper manufacturer producing napkins, handkerchief paper, facial tissue, and toilet paper. In the mid-1990s, however, the company lost its way. To take advantage of the booming economy of the 1980s, management squeezed into the ascendant private-label paper market in California (the company's headquarters at that time). Unfortunately, Orchids nearly went bankrupt under the dual pressure of this high-cost strategy and the debt from a leveraged buyout; at that point its raw-material and production costs exceeded the revenue from its customers. Orchids was forced to file for bankruptcy in 1992 and again in 1995.

Orchids' new management team, led by general manager Mike Safe and chief financial officer Jim Swagerty, decided to focus on core markets with value-seeking customers. They moved the company from California to Pryor, Oklahoma, where utility costs were low (papermaking is a resource-intensive industry) and the company's recycled papers were salable. They adopted a low-cost strategy that maximizes the firm's production capacity while emphasizing timely delivery and letting customers clearly track the fulfillment of their orders. Orchids' target market spans from Oklahoma to Atlanta.

Before the reorganization, Orchids was well known for poor service and late delivery. The company had not implemented sound operating and reporting practices, and the financial department could not provide timely and accurate information. Orchids installed a new manufacturing resource planning system (MRP-II) and a financial system. The two management software packages, from a vendor in Marion, Ohio, monitor and coordinate sales, inventory, and financial data, and provide chart-based views of the company's daily operations. Workers in all departments can directly access product and order information through 25 personal computers linked to a central server where the data are stored. Finance staff can also use the system to provide timely and accurate information about operating capacity, transportation, and product availability, and to answer customers' questions; they can therefore apply the system's financial capabilities to more controlling and customer-service work. Because employees can easily obtain the immediate, accurate information needed to deliver orders, Orchids can keep operating costs low. The system also allows Orchids to run properly without a bloated organization and with a sharply reduced total headcount. Orchids became profitable again, and its organizational and technological changes won it a place in an industry traditionally monopolized by large companies.

Orchids Paper used information systems to take the lead in competitive advantage by providing low-cost, high-service products. However, maintaining this competitive edge matters more than the technological leap itself. Managers need to find ways to sustain this competitive advantage for many years. Specifically, managers need to face the following challenges:
1. Comprehensive integration: although the different systems in a company are designed to serve different levels and different departments, more and more companies are discovering the benefits of integrated systems. Many companies are pursuing enterprise resource planning (ERP). However, integrating systems so that different organizational levels and functions can exchange information through technology is difficult and costly. Managers need to determine which information systems to integrate and at what cost.

2. The ability to maintain competitive advantage: the competitive advantage brought by strategic information systems may not last long enough to ensure long-term profitability. Competitors can also install strategic information systems. Competitive advantage is not always maintainable when the market changes rapidly; the business and economic environment changes as well. The Internet can make some of a company's competitive advantages disappear quickly. Technology and customer expectations also keep changing. Classic strategic information systems, such as American Airlines' SABRE computer reservation system, Citibank's ATM system, and Federal Express's package-tracking system, benefited their owners because they were first in their respective industries, but competitors later deployed corresponding systems. Relying on information systems alone cannot secure a lasting business advantage. Information systems originally used for strategic decision making often become mere survival tools (measures every company must take in order to survive in the industry), or may even inhibit an organization from making the decisions necessary for its future success.

Orchids Paper Company's experience shows that information systems are very important in supporting an organization's goals and placing the company in a leading competitive position. In this chapter, we introduce the functions of the various information systems in an organization. Then we present the competitive issues companies face and the ways in which information systems provide competitive advantage at three different business levels.

2.1 The functions of the major information systems in the organization
Because an organization's various departments have different goals, characteristics, and levels of concern, there are different kinds of information systems. No single system can provide an organization with all the information it needs. Figure 2-1 describes the kinds of information systems in an organization. In the figure, the organization is divided into the strategic level, the management level, the knowledge level, and the operational level, and then further divided into functional areas such as sales, marketing, production, finance, accounting, and human resources. Information systems are built to meet these different organizational requirements.

2.2.1 Four different information systems
There are four kinds of information systems, serving different levels of the organization: operational-level systems, knowledge-level systems, management-level systems, and strategic-level systems.

Operational-level systems support operational managers by tracking the organization's basic business activities and transactions, such as sales progress, cash deposits, payroll, customer credit decisions, and plant logistics.
At this level, the main purpose of the system is to answer routine questions and to track the organization's flow of transactions, logistics, and inventory figures. What is Mr. William's payment, and what is the problem? To answer such questions, the information must be easily available, current, and accurate. Examples of operational-level information systems include a system that uses ATM data to record bank deposits and a system that records the daily hours worked by factory employees.

Knowledge-level information systems support the organization's knowledge and data workers. They are intended to help the business discover new knowledge, integrate that knowledge into the enterprise, and help the company control the flow of documents.
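A toy sketch of the kind of routine operational-level query described above ("What is Mr. William's payment?"), using an in-memory SQLite database; the table, names, and figures are all invented for the example.

```python
import sqlite3

# in-memory stand-in for an operational-level transaction database
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payments (customer TEXT, amount REAL, paid_on TEXT)")
db.executemany("INSERT INTO payments VALUES (?, ?, ?)", [
    ("Mr. William", 1250.00, "2023-04-01"),
    ("Acme Stores", 980.50, "2023-04-02"),
])

# the routine operational question: what is Mr. William's payment?
row = db.execute(
    "SELECT amount, paid_on FROM payments WHERE customer = ?",
    ("Mr. William",),
).fetchone()
print(f"Mr. William paid {row[0]:.2f} on {row[1]}")
```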
Adobe Acrobat DC Word-Selection Translation

Adobe Acrobat DC Word-Selection Translation (practical edition)

Contents: 1. Introduction to Adobe Acrobat DC; 2. Overview of the word-selection translation feature; 3. Steps for word-selection translation; 4. Advantages and disadvantages; 5. Summary.

1. Introduction to Adobe Acrobat DC
Adobe Acrobat DC is professional PDF editing and reading software developed by Adobe. It offers powerful PDF-processing functions such as creating, editing, merging, splitting, rotating, scaling, and annotating documents. According to this guide, Adobe Acrobat DC also supports multiple operating systems, such as Windows, Mac OS, and Linux, meeting the needs of different users.

2. Overview of the word-selection translation feature
Word-selection translation in Adobe Acrobat DC means that, while reading a PDF file, the user can select the text to be translated with a mouse or stylus and then use the software's built-in translation function to translate the selected text into another language. This feature is very practical for users who need to read or study foreign-language material.

3. Steps for word-selection translation
1. Open Adobe Acrobat DC and load the PDF file to be translated.
2. In the PDF file, select the text to be translated with the mouse or a stylus.
3. Right-click the selected text and choose the "Translate" option, or choose "Edit" > "Copy Text" > "Translate" from the top menu bar.
4. The software automatically translates the selected text into your default language. If you need to change the translation language, you can set it under "Edit" > "Preferences" > "Documents".
5. After translation, you can insert the translated text into the original document or copy it into another document.

4. Advantages and disadvantages
Advantages:
1. Word-selection translation lets users quickly grasp foreign-language content while reading PDF files.
2. Multiple languages are supported, meeting the needs of different users.
3. It is simple to operate and easy to learn.
Disadvantages:
1. The translation quality may not match that of dedicated translation software; specialized terms and complex sentences may translate poorly.
2. The translation function is an add-on to Adobe Acrobat DC, which users must purchase separately.

5. Summary
In short, Adobe Acrobat DC's word-selection translation gives users a quick way to understand foreign-language PDF content, and it is especially suitable for scholars and researchers who need to read foreign materials.
Foreign Literature Translation for Mechatronics Technology Education: Computer-Aided Design and Manufacturing
Original text: Modern Design and Manufacturing (CAD/CAM)

CAD/CAM is a term which means computer-aided design and computer-aided manufacturing. It is the technology concerned with the use of digital computers to perform certain functions in design and production. This technology is moving in the direction of greater integration of design and manufacturing, two activities which have traditionally been treated as distinct and separate functions in a production firm. Ultimately, CAD/CAM will provide the technology base for the computer-integrated factory of the future.

Computer-aided design (CAD) can be defined as the use of computer systems to assist in the creation, modification, analysis, or optimization of a design. The computer systems consist of the hardware and software to perform the specialized design functions required by the particular user firm. The CAD hardware typically includes the computer, one or more graphics display terminals, keyboards, and other peripheral equipment. The CAD software consists of the computer programs that implement computer graphics and facilitate the engineering functions of the user company. Examples of these application programs include stress-strain analysis of components, dynamic response of mechanisms, heat-transfer calculations, and numerical control part programming. The collection of application programs varies from one user firm to the next because their product lines, manufacturing processes, and customer markets are different; these factors give rise to differences in CAD system requirements.

Computer-aided manufacturing (CAM) can be defined as the use of computer systems to plan, manage, and control the operations of a manufacturing plant through either direct or indirect computer interface with the plant's production resources. As indicated by the definition, the applications of computer-aided manufacturing fall into two broad categories:
1. computer monitoring and control;
2. manufacturing support applications.
The distinction between the two categories is fundamental to an understanding of computer-aided manufacturing.

In addition to the applications involving a direct computer-process interface for the purpose of process monitoring and control, computer-aided manufacturing also includes indirect applications in which the computer serves a support role in the manufacturing operations of the plant. In these applications, the computer is not linked directly to the manufacturing process. Instead, the computer is used "off-line" to provide plans, schedules, forecasts, instructions, and information by which the firm's production resources can be managed more effectively. The form of the relationship between the computer and the process is represented symbolically in the figure given below. Dashed lines are used to indicate that the communication and control link is an off-line connection, with human beings often required to consummate the interface. Human beings are presently required in the application either to provide input to the computer programs or to interpret the computer output and implement the required action.

(Figure: CAM for manufacturing support)

What is CAD/CAM software?
Many toolpaths are simply too difficult and expensive to program manually. For these situations, we need the help of a computer to write an NC part program. The fundamental concept of CAD/CAM is that we can use a Computer-Aided Drafting (CAD) system to draw the geometry of a workpiece on a computer.
Once the geometry is completed, we can use a Computer-Aided Manufacturing (CAM) system to generate an NC toolpath based on the CAD geometry. The progression from a CAD drawing all the way to working NC code is illustrated as follows:

Step 1: The geometry is defined in a CAD drawing. This workpiece contains a pocket to be machined. It might take several hours to manually write the code for this pocket; however, we can use a CAM program to create the NC code in a matter of minutes.

Step 2: The model is next imported into the CAM module. We can then select the proper geometry and define the style of toolpath to create, which in this case is a pocket. We must also tell the CAM system which tools to use, the type of material, and the feed and depth-of-cut information.

Step 3: The CAM model is then verified to ensure that the toolpaths are correct. If any mistakes are found, it is simple to make changes at this point.

Step 4: The final product of the CAD/CAM process is the NC code. The NC code is produced by post-processing the model; the code is customized to accommodate the particular variety of CNC control.

Another acronym that we may run into is CAPP, which stands for Computer-Aided Part Programming. CAPP is the process of using computers to aid in the programming of NC toolpaths. However, the acronym CAPP never really gained widespread acceptance, and today we seldom hear this term. Instead, the more marketable CAD/CAM is used to express the idea of using computers to help generate NC part programs. This is unfortunate, because CAM is an entire group of technologies related to manufacturing design and automation, not just the software that is used to program CNC machine tools.

Description of CAD/CAM Components and Functions
CAD/CAM systems contain both CAD and CAM capabilities, each of which has a number of functional elements. It will help to take a short look at some of these elements in order to understand the entire process.

1. CAD Module
The CAD portion of the system is used to create the geometry as a CAD model. The CAD model is an electronic description of the workpiece geometry that is mathematically precise. The CAD system, whether standalone or part of a CAD/CAM package, tends to be available at several different levels of sophistication:

2-D line drawings: Geometry is represented in two axes, much like drawing on a sheet of paper. Z-level depths will have to be added on the CAM end.

3-D wireframe models: Geometry is represented in three-dimensional space by connecting elements that represent edges and boundaries. Wireframes can be difficult to visualize, but all Z-axis information is available for the CAM operations.

3-D surface models: These are similar to wireframes except that a thin skin has been stretched over the wireframe model to aid in visualization. Inside, the model is empty. Complex contoured surfaces are possible with surface models.

3-D solid modeling: This is the current state-of-the-market technology that is used by all high-end software. The geometry is represented as a solid feature that contains mass. Solid models can be sliced open to reveal internal features, not just a thin skin.

2. CAM Module
The CAM module is used to create the machining process model based upon the geometry supplied in the CAD model.
For example, the CAD model may contain a feature that we recognize as a pocket. We could apply a pocketing routine to the geometry, and then all of the toolpaths would be automatically created to produce the pocket. Likewise, the CAD model may contain geometry that should be produced with drilling operations. We can simply select the geometry and instruct the CAM system to drill holes at the selected locations.

The CAM system will generate a generic intermediate code that describes the machining operations, which can later be used to produce G & M code or conversational programs. Some systems create intermediate code in their own proprietary language, while others use open standards such as APT for their intermediate files.

The CAM modules also come in several classes and levels of sophistication. First, there is usually a different module available for milling, turning, wire EDM, and fabrication. Each of these processes is unique enough that the modules are typically sold as add-ins. Each module may also be available with different levels of capability. For example, CAM modules for milling are often broken into stages as follows, starting with very simple capabilities and ending with complex, multi-axis toolpaths:
● 2½-axis machining
● Three-axis machining with fourth-axis positioning
● Surface machining
● Simultaneous five-axis machining
Each of these represents a higher level of capability that may not be needed in all manufacturing environments. A job shop might only require 3-axis capability. An aerospace contractor might need a sophisticated 5-axis CAM package that is capable of complex machining. This class of software might start at $5,000 per installation, but the most sophisticated modules can cost $15,000 or more. Therefore, there is no need to buy software at such a high level that we will not be able to use it to its full potential.

3. Geometry vs. toolpath
One important concept we must understand is that the geometry represented by the CAD drawing may not be exactly the same geometry that is produced on the CNC machine tool. CNC machine tools are equipped to produce very accurate toolpaths as long as the toolpaths are either straight lines or circular arcs. CAD systems are also capable of producing highly accurate geometry of straight lines and circular arcs, but they can also produce a number of other classes of curves. Most often these curves are represented as Non-Uniform Rational B-Splines (NURBS). NURBS curves can represent virtually any geometry, ranging from a straight line or circular arc to complex surfaces.

Take, for example, the geometric entity that we call an ellipse. An ellipse is a class of curve that is mathematically different from a circular arc. An ellipse is easily produced on a CAD system with a click of the mouse. However, a standard CNC machine tool cannot be used to directly produce an ellipse; it can only create lines and circular arcs. The CAM system will reconcile this problem by approximating the curve with line segments.
Consider the case of an ellipse: the generated toolpath consists of line segments that are contained within a tolerance zone. The CAM system generates a bounding geometry on either side of the true curve to form the tolerance zone, and then produces a toolpath from line segments that stay within that zone. The resulting toolpath is not mathematically exact; the CAM system is only able to approximate the surface. This basic method is used to produce approximated toolpaths for both 2-D curves and 3-D surface curves. Some CAM programs also have the ability to convert the line segments into arc segments, which can reduce the number of blocks in the program and lead to smoother surfaces.

The programmer can control the size of the tolerance zone to create a toolpath that is as accurate as needed. Smaller tolerance zones produce finer toolpaths and more numerous line segments, while larger tolerance zones produce fewer line segments and coarser toolpaths. Each line segment requires a block of code in the NC program, so the NC part program can grow very large when this technique is used.

We must use caution when machining surfaces. It is easy to rely on the computer to generate the correct toolpath, but finished surfaces are further approximated during machining with ball end mills. If we do not pay attention to the limitations of these techniques, the accuracy of the finished workpiece may be compromised.
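The subdivision idea described above is easy to sketch in code. The following Java sketch is an illustration only, not any particular CAM system's algorithm; the ellipse radii, tolerance value, and class names are invented for the example. It walks around an ellipse and keeps splitting each span until the chordal deviation from the true curve fits inside the tolerance zone:

```java
import java.util.ArrayList;
import java.util.List;

/** Approximates an ellipse with line segments that stay inside a chordal tolerance zone. */
public class EllipseTessellator {

    /** A 2-D point on the toolpath. */
    record Point(double x, double y) {}

    /** Point on an ellipse with semi-axes a and b at parameter angle t. */
    static Point ellipse(double a, double b, double t) {
        return new Point(a * Math.cos(t), b * Math.sin(t));
    }

    /** Deviation between the arc and its chord, estimated at the midpoint parameter. */
    static double chordError(double a, double b, double t0, double t1) {
        Point p0 = ellipse(a, b, t0), p1 = ellipse(a, b, t1);
        Point mid = ellipse(a, b, (t0 + t1) / 2);
        Point chordMid = new Point((p0.x() + p1.x()) / 2, (p0.y() + p1.y()) / 2);
        return Math.hypot(mid.x() - chordMid.x(), mid.y() - chordMid.y());
    }

    /** Recursively subdivides until every chord lies within the tolerance zone. */
    static void subdivide(double a, double b, double t0, double t1, double tol, List<Point> out) {
        if (chordError(a, b, t0, t1) <= tol) {
            out.add(ellipse(a, b, t1));      // chord is close enough: emit one segment
        } else {
            double tm = (t0 + t1) / 2;       // otherwise split the span and recurse
            subdivide(a, b, t0, tm, tol, out);
            subdivide(a, b, tm, t1, tol, out);
        }
    }

    public static void main(String[] args) {
        List<Point> toolpath = new ArrayList<>();
        toolpath.add(ellipse(40.0, 25.0, 0.0));                    // start point
        subdivide(40.0, 25.0, 0.0, 2 * Math.PI, 0.01, toolpath);   // assumed 0.01 mm tolerance
        System.out.println(toolpath.size() - 1 + " line segments (one NC block each)");
    }
}
```

Tightening the tolerance multiplies the number of segments, and since each segment becomes one block of NC code, the part program grows accordingly, exactly as noted above.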
4. Tool and Material Libraries

To create the machining operations, the CAM system needs to know which cutting tools are available and what material we are machining. CAM systems take care of this by providing customizable libraries of cutting tools and materials. Tool libraries contain information about the shape and style of each tool; material libraries contain information that is used to optimize the cutting speeds and feeds. The CAM system uses this information together to create the correct toolpaths and machining parameters. The format of these tool and material libraries is often proprietary and can present portability issues: proprietary tool and material files cannot easily be modified or used on another system. More progressive CAM developers tend to produce their tool and material libraries as database files that can be easily modified and customized for other applications.

5. Verification and Post-Processor

CAM systems usually provide the ability to verify that the proposed toolpaths are correct. This can be via a simple backplot of the tool centerline or via a sophisticated solid model of the machining operations. The solids verification is often third-party software that the CAD/CAM software company has licensed, although it may also be available as a standalone package. The post-processor is a software program that takes the generic intermediate code and formats the NC code for each particular machine tool control. The post-processor can often be customized through templates and variables to provide the required customization.

6. Portability

Portability of electronic data is the Achilles' heel of CAD/CAM systems and continues to be a time-consuming concern. CAD files are created in a number of formats and have to be shared between many organizations. It is very expensive to create a complex model on a CAD system; therefore, we want to maximize the portability of our models and minimize the need for recreating the geometry on another system. DXF, DWG, IGES, SAT, STL, and Parasolid are a few of the common formats for CAD data exchange.

CAM process models are not nearly as portable as CAD models. We cannot usually take a CAM model developed in one system and transfer it to another platform. The only widely accepted standard for CAM model interchange is a version of Automatically Programmed Tool (APT). APT is a programming language used to describe machining operations. It is an open standard that is well documented and can be accessed by third-party software developers. A number of CAD/CAM systems can export to this standard, and the CAM file can later be used by post-processors and verification software.

There are some circumstances in which the proprietary intermediate files created by certain CAD/CAM systems can be fed directly into a machine tool without any additional post-processing. This is an ideal solution, but there is currently no standard governing this exchange.

One other option for CAD/CAM model exchange is to use a reverse post-processor. A reverse post-processor can create a CAD/CAM model from the G & M code of an NC part program. These programs do work; however, the programmer must spend a considerable amount of time determining the design intent of the model and separating the toolpaths from the geometry. Overall, reverse post-processing has very limited applications.

Software Issues and Trends

Throughout industry, numerous software packages are used for CAD and CAD/CAM. Pure CAD systems are used in all areas of design, and virtually any product today is designed with CAD software; gone are the days of pencil-and-paper drawings. CAD/CAM software, on the other hand, is more specialized. CAD/CAM is a small but important niche confined to machining and fabrication organizations, and it is found in much smaller numbers than its CAD big brother.

CAD/CAM systems contain both the software for CAD design and the CAM software for creating toolpaths and NC code. However, the CAD portion is often weak and unrefined when compared to much of the leading pure CAD software. This mismatch sets up the classic argument between CAD designers and CAD/CAM programmers over the best way to approach CAD/CAM.

A great argument can be made for creating all geometry on an industry-leading CAD system and then importing the geometry into a CAD/CAM system. A business is much better off if its engineers only have to create a CAD model one time and in one format. The geometry can then be imported into the CAD/CAM package for process modeling. Furthermore, industry-leading CAD software tends to set an unofficial standard; the greater the acceptance of the standard, the greater the return on investment for the businesses that own the software.

The counter-argument comes from small organizations that do not have the need or resources to own both an expensive, industry-standard CAD package and an expensive CAD/CAM package. They tend to have to redraw the geometry from the paper engineering drawing or import models with imperfect translators. Any original models will end up being stored as highly non-standardized CAD/CAM files. These models have dubious prospects of ever being translated to a more standardized version.

Regardless of the path that is chosen, organizations and individuals tend to become entrenched in a particular technology. If they have invested tremendous effort and time into learning and assimilating a technology, it becomes very difficult to change to a new one, even when presented with overwhelming evidence of a better method. It can be quite painful to change. Of course, if we had a crystal ball and could see into the future, this would never happen; but the fact is that we cannot always predict what the dominant technology will be even a few years down the road. The result is technology entrenchment that can be very difficult and expensive to get out from under. About the only protection we can find is to select the technology that appears to be the most standardized (even if it is imperfect) and stay with it; then, if major changes appear down the road, we will be in a better position to adapt.
Software Engineering: Foreign Literature Translation
Artificial Immune Systems: A Novel Paradigm to Pattern Recognition

Abstract: This chapter introduces a new computational intelligence paradigm to perform pattern recognition, named Artificial Immune Systems (AIS). AIS take inspiration from the immune system in order to build novel computational tools to solve problems in a vast range of domain areas. The basic immune theories used to explain how the immune system performs pattern recognition are described, and their corresponding computational models are presented. This is followed by a survey of the literature on AIS applied to pattern recognition. The chapter concludes with a trade-off between AIS and artificial neural networks as pattern recognition paradigms.

Keywords: Artificial Immune Systems; Negative Selection; Clonal Selection; Immune Network

1 Introduction

The vertebrate immune system (IS) is one of the most intricate bodily systems, and its complexity is sometimes compared to that of the brain. With the advances in biology and molecular genetics, the comprehension of how the immune system behaves is increasing very rapidly. The knowledge about the functioning of the IS has unraveled several of its main operative mechanisms. These mechanisms have proved to be very interesting not only from a biological standpoint, but also from a computational perspective. Similarly to the way the nervous system inspired the development of artificial neural networks (ANN), the immune system has now led to the emergence of artificial immune systems (AIS) as a novel computational intelligence paradigm.

Artificial immune systems can be defined as abstract or metaphorical computational systems developed using ideas, theories, and components extracted from the immune system. Most AIS aim at solving complex computational or engineering problems, such as pattern recognition, elimination, and optimization. This is a crucial distinction between AIS and theoretical immune system models: while the former are devoted primarily to computing, the latter are focused on modeling the IS in order to understand its behavior, so that contributions can be made to the biological sciences. The two approaches are not mutually exclusive, however, and indeed theoretical models of the IS have contributed to the development of AIS.

This chapter is organized as follows. Section 2 describes immune theories relevant to pattern recognition and introduces their computational counterparts. In Section 3, we briefly describe how to model pattern recognition in artificial immune systems and present a simple illustrative example. Section 4 contains a survey of AIS for pattern recognition, and Section 5 contrasts the use of AIS with the use of ANN when applied to pattern recognition tasks. The chapter is concluded in Section 6.

2 Biological and Artificial Immune Systems

All living organisms are capable of presenting some type of defense against foreign attack. The evolution of the species that resulted in the emergence of the vertebrates also led to the evolution of the immune system of these species. The vertebrate immune system is particularly interesting due to its several computational capabilities, as will be discussed throughout this section.

The immune system of vertebrates is composed of a great variety of molecules, cells, and organs spread all over the body. There is no central organ controlling the functioning of the immune system, and there are several elements in transit and in different compartments performing complementary roles.

The main task of the immune system is to survey the organism in search of malfunctioning cells from its own body (e.g., cancer and tumour cells) and foreign disease-causing elements (e.g., viruses and bacteria). Every element that can be recognized by the immune system is called an antigen (Ag). The cells that originally belong to our body and are harmless to its functioning are termed self (or self antigens), while the disease-causing elements are named nonself (or nonself antigens). The immune system thus has to be capable of distinguishing what is self from what is nonself, a process called self/nonself discrimination, performed basically through pattern recognition events.

From a pattern recognition perspective, the most appealing characteristic of the IS is the presence of receptor molecules, on the surface of immune cells, capable of recognising an almost limitless range of antigenic patterns. One can identify two major groups of immune cells, known as B-cells and T-cells. These two types of cells are rather similar, but differ in how they recognise antigens and in their functional roles. B-cells are capable of recognising antigens free in solution (e.g., in the blood stream), while T-cells require antigens to be presented by other accessory cells.

Antigenic recognition is the first prerequisite for the immune system to be activated and to mount an immune response. The recognition has to satisfy some criteria. First, the cell receptor recognises an antigen with a certain affinity, and a binding between the receptor and the antigen occurs with strength proportional to this affinity. If the affinity is greater than a given threshold, named the affinity threshold, then the immune system is activated. The nature of the antigen, the type of recognising cell, and the recognition site also influence the outcome of an encounter between an antigen and a cell receptor.

The human immune system contains an organ called the thymus, located behind the breastbone, which performs a crucial role in the maturation of T-cells. After T-cells are generated, they migrate into the thymus, where they mature. During this maturation, all T-cells that recognise self antigens are excluded from the population of T-cells, a process termed negative selection. If a B-cell encounters a nonself antigen with sufficient affinity, it proliferates and differentiates into memory and effector cells, a process named clonal selection. In contrast, if a B-cell recognises a self antigen, the result might be suppression, as proposed by the immune network theory. In the following subsections, each of these processes (negative selection, clonal selection, and network theory) is described separately, along with its computational algorithm counterpart.

2.1 Negative Selection

The thymus is responsible for the maturation of T-cells, and is protected by a blood barrier capable of efficiently excluding nonself antigens from the thymic environment. Thus, most elements found within the thymus are representative of self instead of nonself. As an outcome, the T-cells containing receptors capable of recognising these self antigens presented in the thymus are eliminated from the repertoire of T-cells through a process named negative selection.
All T-cells that leave the thymus to circulate throughout the body are said to be tolerant to self, i.e., they do not respond to self.

From an information processing perspective, negative selection presents an alternative paradigm to perform pattern recognition: information is stored about the complement set (nonself) of the patterns to be recognised (self). A negative selection algorithm has been proposed in the literature, with applications focused on the problem of anomaly detection, such as computer and network intrusion detection, time series prediction, image inspection and segmentation, and hardware fault tolerance. Given an appropriate problem representation (Section 3), define the set of patterns to be protected and call it the self-set (P). Based upon the negative selection algorithm, generate a set of detectors (M) that will be responsible for identifying all elements that do not belong to the self-set, i.e., the nonself elements.

After generating the set of detectors (M), the next stage of the algorithm consists in monitoring the system for the presence of nonself patterns. In this case, assume a set P* of patterns to be protected. This set might be composed of the set P plus other new patterns, or it can be a completely novel set. For each element of the detector set, which corresponds to the nonself patterns, check whether it recognises (matches) an element of P*; if so, a nonself pattern has been recognized and an action has to be taken. The resulting action upon detecting nonself varies according to the problem under evaluation and falls beyond the pattern recognition scope of this chapter.
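As a rough Java sketch of the two stages just described (censoring and monitoring), the toy program below uses 16-bit binary strings, takes affinity to be the number of complementary bits (the Hamming distance discussed further in Section 3), and flags a pattern as nonself when any detector's affinity with it reaches a threshold. The string length, detector count, threshold, and self set are all invented for the illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Toy negative selection over 16-bit strings (all parameters are illustrative). */
public class NegativeSelection {
    static final int L = 16;
    static final int THRESHOLD = 13;   // affinity >= 13 complementary bits means "recognised"

    /** Affinity: the Hamming distance, i.e. the number of complementary bit positions. */
    static int affinity(int a, int b) {
        return Integer.bitCount((a ^ b) & 0xFFFF);
    }

    static boolean matches(int detector, int pattern) {
        return affinity(detector, pattern) >= THRESHOLD;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        List<Integer> self = List.of(0b1010101010101010, 0b1111000011110000, 0b0000111100001111);

        // Censoring: keep only random candidates that recognise no self pattern.
        List<Integer> detectors = new ArrayList<>();
        while (detectors.size() < 50) {
            int cand = rnd.nextInt(1 << L);
            if (self.stream().noneMatch(s -> matches(cand, s))) detectors.add(cand);
        }

        // Monitoring: a pattern recognised by any detector is flagged as nonself.
        int selfProbe = self.get(0);
        int nonselfProbe = 0b0011110000111100;   // an arbitrary pattern outside the self set
        System.out.println("self flagged?    "
                + detectors.stream().anyMatch(d -> matches(d, selfProbe)));    // false by construction
        System.out.println("nonself flagged? "
                + detectors.stream().anyMatch(d -> matches(d, nonselfProbe))); // probabilistic: coverage can have holes
    }
}
```

Note that self is never flagged, by construction, while detection of a given nonself pattern depends on how well the randomly generated detector set happens to cover the nonself space.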
2.2 Clonal Selection

Complementary to the role of negative selection, clonal selection is the theory used to explain how an immune response is mounted when a nonself antigenic pattern is recognised by a B-cell. In brief, when a B-cell receptor recognises a nonself antigen with a certain affinity, it is selected to proliferate and produce antibodies in high volumes. The antibodies are soluble forms of the B-cell receptors that are released from the B-cell surface to cope with the invading nonself antigen. Antibodies bind to antigens, leading to their eventual elimination by other immune cells. Proliferation in the case of immune cells is asexual, a mitotic process: the cells divide themselves (there is no crossover). During reproduction, the B-cell progenies (clones) undergo a hypermutation process that, together with a strong selective pressure, results in B-cells with antigenic receptors presenting higher affinities with the selective antigen. This whole process of mutation and selection is known as the maturation of the immune response and is analogous to the natural selection of species. In addition to differentiating into antibody-producing cells, the activated B-cells with high antigenic affinities are selected to become memory cells with long life spans. These memory cells are pre-eminent in future responses to this same antigenic pattern, or a similar one.

Other important features of clonal selection, relevant from the viewpoint of computation, are:
1. An antigen selects several immune cells to proliferate. The proliferation rate of each immune cell is proportional to its affinity with the selective antigen: the higher the affinity, the higher the number of offspring generated, and vice-versa;
2. In complete opposition to the proliferation rate, the mutation suffered by each immune cell during reproduction is inversely proportional to the affinity of the cell receptor with the antigen: the higher the affinity, the smaller the mutation, and vice-versa.

Some authors have argued that a genetic algorithm without crossover is a reasonable model of clonal selection. However, the standard genetic algorithm does not account for important properties such as affinity-proportional reproduction and mutation. Other authors proposed a clonal selection algorithm, named CLONALG, to fulfil these basic processes involved in clonal selection. This algorithm was initially proposed to perform pattern recognition and was then adapted to solve multi-modal optimisation tasks. Given a set of patterns to be recognised (P), the basic steps of the CLONALG algorithm are as follows:
1. Randomly initialise a population of individuals (M);
2. For each pattern of P, present it to the population M and determine its affinity (match) with each element of the population M;
3. Select n1 of the highest-affinity elements of M and generate copies of these individuals proportionally to their affinity with the antigen: the higher the affinity, the higher the number of copies, and vice-versa;
4. Mutate all these copies with a rate inversely proportional to their affinity with the input pattern: the higher the affinity, the smaller the mutation rate, and vice-versa;
5. Add these mutated individuals to the population M and reselect n2 of these maturated (optimised) individuals to be kept as memories of the system;
6. Repeat Steps 2 to 5 until a certain criterion is met, such as a minimum pattern recognition or classification error.

Note that this algorithm allows the artificial immune system to become increasingly better at its task of recognising patterns (antigens). Thus, based upon an evolutionary-like behaviour, CLONALG learns to recognise patterns.
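The following Java sketch implements the loop above for a single antigen in a binary Hamming shape-space, here using a similarity affinity (the number of matching bits). The population size, the n1/n2 values, and the clone and mutation schedules are arbitrary choices for the illustration, not the reference CLONALG parameters:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

/** A minimal CLONALG-style loop recognising one 16-bit antigen (illustrative parameters). */
public class Clonalg {
    static final int L = 16;
    static final Random RND = new Random(7);

    /** Similarity affinity: the number of matching bit positions. */
    static int affinity(int antibody, int antigen) {
        return L - Integer.bitCount((antibody ^ antigen) & 0xFFFF);
    }

    /** Somatic hypermutation: flip the given number of randomly chosen bits. */
    static int mutate(int antibody, int flips) {
        for (int i = 0; i < flips; i++) antibody ^= 1 << RND.nextInt(L);
        return antibody;
    }

    public static void main(String[] args) {
        final int antigen = 0b1011001110001101;
        List<Integer> pop = new ArrayList<>();
        for (int i = 0; i < 20; i++) pop.add(RND.nextInt(1 << L)); // step 1: random repertoire

        for (int gen = 0; gen < 100; gen++) {
            // step 2: rank the repertoire by affinity with the antigen
            pop.sort(Comparator.comparingInt((Integer ab) -> affinity(ab, antigen)).reversed());
            if (affinity(pop.get(0), antigen) == L) break;       // step 6: perfect recognition reached

            List<Integer> next = new ArrayList<>(pop.subList(0, 5)); // step 5: n2 = 5 kept as memory
            for (int rank = 0; rank < 5; rank++) {                   // step 3: n1 = 5 selected
                int copies = 8 - rank;   // higher affinity (lower rank) -> more clones
                int flips = 1 + rank;    // step 4: higher affinity -> gentler mutation
                for (int c = 0; c < copies; c++) next.add(mutate(pop.get(rank), flips));
            }
            pop = next;
        }
        pop.sort(Comparator.comparingInt((Integer ab) -> affinity(ab, antigen)).reversed());
        System.out.println("best affinity: " + affinity(pop.get(0), antigen) + " / " + L);
    }
}
```

Run over successive generations, the best affinity climbs toward a perfect match, which is the maturation of the response described above.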
2.3 Immune Network

The immune network theory proposes that the immune system has a dynamic behaviour even in the absence of external stimuli. It is suggested that the immune cells and molecules are capable of recognising each other, which endows the system with an eigen-behaviour that is not dependent on foreign stimulation. Several immunologists have refuted this theory; however, its computational aspects are relevant, and it has proved to be a powerful model for computational systems.

According to the immune network theory, the receptor molecules contained in the surface of the immune cells present markers, named idiotopes, which can be recognized by receptors on other immune cells. These idiotopes are displayed in and/or around the same portions of the receptors that recognise nonself antigens. To explain the network theory, assume that a receptor (antibody) Ab1 on a B-cell recognises a nonself antigen Ag. Assume now that this same receptor Ab1 also recognises an idiotope i2 on another B-cell receptor Ab2. Keeping track of the fact that i2 is part of Ab2, Ab1 is capable of recognising both Ag and Ab2. Thus, Ab2 is said to be the internal image of Ag; more precisely, i2 is the internal image of Ag. The recognition of idiotopes on a cell receptor by other cell receptors leads to ever-increasing sets of connected cell receptors and molecules. Note that the network in this case is a network of affinities, different from the 'hardwired' network of the nervous system.

As a result of the network recognition events, it was suggested that the recognition of a cell receptor by another cell receptor results in network suppression, whilst the recognition of an antigen by a cell receptor results in network activation and cell proliferation. The original theory did not account explicitly for the results of network activation and/or suppression, and the various artificial immune networks found in the literature model them in their own particular forms.

3 Modelling Pattern Recognition in AIS

Up to this point, the most relevant immune principles and their corresponding computational counterparts for performing pattern recognition have been presented. In order to apply these algorithms to computational problems, a limited number of other aspects of artificial immune systems, not yet covered, need to be specified. The first aspect is the most relevant representations to be used to model self and nonself patterns. Here the self patterns correspond to the components of the AIS responsible for recognising the input patterns (nonself). Secondly, the mechanism for evaluating the degree of match (affinity), or degree of recognition, of an input pattern by an element of the AIS has to be discussed.

To model immune cells, molecules, and the antigenic patterns, the shape-space approach is usually adopted. Although AIS model recognition through pattern matching, given certain affinity functions to be described further, performing pattern recognition through complementarity or similarity is based more on practical aspects than on biological plausibility. The shape-space approach proposes that an attribute string s = ⟨s1, s2, …, sL⟩ in an L-dimensional shape-space S (s ∈ S^L) can represent any immune cell or molecule. Each attribute of this string is supposed to represent a feature of the immune cell or molecule, such as its charge, van der Waals interactions, etc. In the development of AIS, the mapping from the attributes to their biological counterparts is usually not relevant. The type of attribute used in the string partially defines the shape-space under study and is highly dependent on the problem domain. Any shape-space constructed from a finite alphabet of size k constitutes a k-ary Hamming shape-space. As an example, an attribute string built upon the set of binary elements {0,1} corresponds to a binary Hamming shape-space. One can think, in this case, of a problem of recognising a set of characters represented by matrices composed of 0's and 1's, where each element of a matrix corresponds to a pixel in the character. If the elements of s are represented by real-valued vectors, then we have a Euclidean shape-space. Most of the AIS found in the literature employ binary Hamming or Euclidean shape-spaces. Other types of shape-spaces are also possible, such as symbolic shape-spaces, which combine different (symbolic) attributes in the representation of a single string s. These are usually found in data mining applications, where the data might contain symbolic information like the age, name, etc., of a set of patterns.

Another important characteristic of artificial immune systems is that most of them are population based. This means that they are composed of a set of individuals, representing immune cells and molecules, which have to perform a given role; in our context, pattern recognition.
If we recapitulate the three immune processes reviewed (negative selection, clonal selection, and immune network), all of them rely on a population M of individuals to recognise a set P of patterns. The negative selection algorithm has to define a set of detectors for nonself patterns; clonal selection reproduces, maturates, and selects self-cells to recognise a set of nonself; and the immune network maintains a set of individuals, connected as a network, to recognise self and nonself.

Consider first the binary Hamming shape-space case, which is the most widely used. Several expressions can be employed to determine the degree of match, or affinity, between an element of P and an element of M. The simplest is to calculate the Hamming distance (D_H) between the two elements, as given by Eq. (1). Another approach is to search for a sequence of r contiguous bits: if the longest run of matching positions between the strings is at least the threshold r, then recognition is said to have occurred. As the last approach mentioned here, we can describe the affinity measure of Hunt, given by Eq. (2). This last method has the advantage that it favours sequences of complementary matches, thus searching for similar regions between the attribute strings (patterns).

$$D_H = \sum_{i=1}^{L} \delta_i, \qquad \delta_i = \begin{cases} 1 & \text{if } p_i \neq m_i \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

$$D = D_H + \sum_i 2^{l_i} \qquad (2)$$

where l_i is the length of the i-th sequence of matching bits longer than 2.

In the case of Euclidean shape-spaces, the Euclidean distance can be used to evaluate the affinity between any two components of the system. Other approaches, such as the Manhattan distance, may also be employed. Note that all the methods described rely basically on determining the match between strings. However, there are AIS in the literature that take into account other aspects, such as the number of patterns matched by each antibody.
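These affinity measures translate directly into code. Below is a small Java sketch of Eq. (1), the r-contiguous rule, and Eq. (2), using boolean arrays as attribute strings; the example strings and the value of r are arbitrary:

```java
/** The three binary affinity measures of Section 3 (illustrative). */
public class Affinity {

    /** Eq. (1): Hamming distance, the number of complementary positions. */
    static int hamming(boolean[] p, boolean[] m) {
        int d = 0;
        for (int i = 0; i < p.length; i++) if (p[i] != m[i]) d++;
        return d;
    }

    /** r-contiguous rule: recognition iff some run of at least r complementary bits exists. */
    static boolean rContiguous(boolean[] p, boolean[] m, int r) {
        int run = 0;
        for (int i = 0; i < p.length; i++) {
            run = (p[i] != m[i]) ? run + 1 : 0;
            if (run >= r) return true;
        }
        return false;
    }

    /** Eq. (2): Hunt's measure, Hamming distance plus 2^l for each complementary run l > 2. */
    static double hunt(boolean[] p, boolean[] m) {
        double d = hamming(p, m);
        int run = 0;
        for (int i = 0; i <= p.length; i++) {
            boolean comp = i < p.length && p[i] != m[i];
            if (comp) run++;
            else { if (run > 2) d += Math.pow(2, run); run = 0; }
        }
        return d;
    }

    public static void main(String[] args) {
        boolean[] p = {true, false, true, true, false, false, true, false};
        boolean[] m = {false, true, false, true, true, false, false, false};
        System.out.println("DH = " + hamming(p, m));
        System.out.println("r=3 match: " + rContiguous(p, m, 3));
        System.out.println("Hunt D = " + hunt(p, m));
    }
}
```

For Euclidean shape-spaces one would replace these with the Euclidean (or Manhattan) distance over real-valued vectors.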
4 A Survey of AIS for Pattern Recognition

The applications of artificial immune systems are vast, ranging from machine learning to robotic autonomous navigation. This section reviews some of the works from the AIS literature applied to the pattern recognition domain. The rationale is to provide a guide to the literature and a brief description of the scope of applications of the algorithms. The section is divided into two parts for ease of comprehension: 1) computer security, and 2) other applications.

The problem of protecting computers (or networks of computers) from viruses, unauthorised users, and so on constitutes a rich field of research for pattern recognition systems. Due mainly to the appealing, intuitive metaphor of building artificial immune systems to detect computer viruses, there has been great interest from the computer science community in this particular application. The negative and clonal selection algorithms have been widely tested on it: the former because it is an inherent anomaly (change) detection system, constituting a particular case of a pattern recognition device; the latter because of its learning capabilities, for which it has been used in conjunction with negative selection.

Other more classical pattern recognition tasks, such as character recognition and data analysis, have also been studied within artificial immune systems.

5 AIS and ANN for Pattern Recognition

Similar to the use of artificial neural networks, performing pattern recognition with an AIS usually involves three stages: 1) defining a representation for the patterns; 2) adapting (learning or evolving) the system to identify a set of typical data; and 3) applying the system to recognise a set of new patterns (which might contain patterns used in the adaptive phase).

Referring to the three immune algorithms presented (negative selection, clonal selection, and immune network), coupled with the process of modelling pattern recognition in the immune system as described in Section 3, this section contrasts AIS and ANN with a focus on pattern recognition applications. The discussion is based on computational aspects, such as basic components, adaptation mechanisms, etc. Common neural networks for pattern recognition are considered, such as single- and multi-layer perceptrons, associative memories, and self-organising networks. All these networks are characterised by a set (or sets) of units (artificial neurons); they adapt to the environment through a learning (or storage) algorithm, they can have their architectures dynamically adapted along with the weights, and they have their basic knowledge stored in the connection strengths.

Component: The basic unit of an AIS is an attribute string s (along with its connections, in network models) represented in the appropriate shape-space. This string s might correspond to an immune cell or molecule. In an ANN, the basic unit is an artificial neuron composed of an activation function, a summing junction, connection strengths, and an activation threshold. While artificial neurons are usually processing elements, attribute strings representing immune cells and molecules are information storage and processing components.

Location of the components: In immune network models, the cells and molecules usually present a dynamic behaviour that tries to mimic or counteract the environment; hence the network elements will be located according to the environmental stimuli. Unlike the immune network models, ANN have their neurons positioned at fixed, predefined locations in the network. Some neural network models also adopt fixed neighbourhood patterns for the neurons. If a network pattern of connectivity is not adopted for the AIS, each individual element will have a position in the population that might vary dynamically. Also, a metadynamic process might allow the introduction and/or elimination of particular units.

Structure: In negative and clonal selection AIS, the components are usually structured around matrices representing repertoires or populations of individuals. These matrices might have fixed or variable dimensions. In artificial immune networks and artificial neural networks, the components of the population are interconnected and structured around patterns of connectivity. Artificial immune networks usually have an architecture that follows the spatial distribution of the antigens represented in shape-space, while ANN usually have predefined architectures, with weights biased by the environment.

Memory: The attribute strings representing the repertoire(s) of immune cells and molecules, and their respective numbers, constitute most of the knowledge contained in an artificial immune system. Furthermore, parameters like the affinity threshold can also be considered part of the memory of an AIS. In artificial immune network models, the connection strengths among units also carry endogenous and exogenous information, i.e., they quantify the interactions of the elements of the AIS among themselves and with the environment. In most cases, memory is content-addressable and distributed. In the standard (earliest) neural network models, knowledge was stored only in the connection strengths of individual neurons. In more sophisticated strategies, such as constructive and pruning algorithms and networks with self-adaptive parameters, the final number of network layers, neurons, and connections, and the shapes of their respective activation functions, are also part of the network knowledge. The memory is usually self-associative or content-addressable, and distributed.

Adaptation: Adaptation usually refers to the alteration or adjustment in the structure or behaviour of a system such that its pattern of response to other components of the system and to the environment changes. Although both evolutionary and learning processes involve adaptation, there is a conceptual difference between them. Evolution can be seen as a change in the genetic composition of a population of individuals during successive generations; it is a result of natural selection acting on the genetic variation among individuals. In contrast, learning can be seen as a long-lasting change in behaviour as a result of previous experience. While AIS might present both types of adaptation, learning and evolution, ANN adapt basically through learning procedures.

Plasticity and diversity: Metadynamics refers basically to two processes: 1) the recruitment of new components into the system, and 2) the elimination of useless elements from the system. As consequences of metadynamics, the architecture of the system can be more appropriately adapted to the environment, and its search capability (diversity) increased. In addition, metadynamics reduces redundancy within the system by eliminating useless components. Metadynamics in the immune algorithms corresponds to a continuous insertion and elimination of the basic elements (cells/molecules) composing the system. In ANN, metadynamics is equivalent to the pruning and/or insertion of new connections, units, and layers in the network.

Interaction with other components: The interaction among cells and molecules in AIS occurs through the recognition (matching) of attribute strings by cell receptors (other attribute strings). In immune network models, the cells usually have weighted connections that allow them to interact with (recognise and be recognised by) other cells. These weights can be stimulatory or suppressive, indicating the degree of interaction with other cells. Artificial neural networks are composed of a set (or sets) of interconnected neurons whose connection strengths assume any positive or negative values, indicating an excitatory or inhibitory activation. The interaction with other neurons in the network occurs explicitly through these connection strengths, where a single neuron receives and processes inputs from the environment (or from network neurons) in the same or other layer(s). An individual neuron can also receive an input from itself.

Interaction with the environment: In pattern recognition applications, the environment is usually represented as a set of input patterns to be learnt, recognised, and/or classified. In AIS, an attribute string represents the genetic information of the immune cells and molecules. This string is compared with the patterns received from the environment. If there is an explicit antigenic population to be recognised (a set of patterns), all or some antigens can be presented to the whole AIS or to parts of it. At the end of the learning or recognition phase, each component of the AIS might recognise some of the input patterns. The artificial neurons have connections that receive input signals from the environment. These signals are processed by neurons and compared with the information contained in the artificial neural network, such as the connection strengths. After learning, the whole ANN might (approximately) recognise the input patterns.

Threshold: Under the shape-space formalism, each component of the AIS interacts with other cells or molecules whose complements lie within a small surrounding region, characterised by a parameter named the affinity threshold. This threshold determines the degree of recognition between the immune cells and the presented input pattern. Most current models of neurons include a bias (or threshold). This threshold determines the neuron activation, i.e., it indicates how sensitive the neuron activation will be with relation to the input signal.

Robustness: Both paradigms are highly robust, due mainly to the presence of populations or networks of components. These elements (cells, molecules, and neurons) can act collectively, co-operatively, and competitively to accomplish their particular tasks. As knowledge is distributed over the many components of the system, damage or failure of individual elements might not significantly deteriorate the overall performance. Both AIS and ANN are highly flexible and noise tolerant. An interesting property of immune network models and negative selection algorithms is that they are also self-tolerant, i.e., they learn to recognise themselves. In immune network models, the cells interact with each other and usually present connection strengths quantifying these interactions. In negative selection algorithms, self-knowledge is achieved by storing information about the complement of self.

State: At each iteration, time step, or interval, the state of an AIS corresponds to the concentrations of the immune cells and molecules, and/or their affinities. In the case of immune network models, the connection strengths among units are also part of the current state of the system. In artificial neural networks, the activation level of the output neurons determines the state of the system. Notice that this activation level takes into account the connection strengths and their respective values, the shapes of the activation functions, and the network dimension.

Control: Any immune principle, theory, or process can be used to control the types of interaction among the many components of an AIS. As examples, clonal selection can be employed to build an antibody repertoire capable of recognising a set of antigenic patterns, and negative selection can be used to define a set of antibodies (detectors) for the recognition of anomalous patterns. Differential or difference equations can be applied to control how an artificial immune network will interact with itself and the environment. Basically, three learning paradigms can be used to train an ANN: 1) supervised, 2) unsupervised, and 3) reinforcement learning.

Generalisation capability: In the AIS case, cells and molecules capable of recognising a certain pattern can recognise not only this specific pattern, but also any structurally related pattern.
Zetore PDF Translate (2)

Title: Zetore PDF Translation: A Tool for Boosting Efficiency

Introduction: In today's information age, PDF documents are used more and more frequently. For people who do not read foreign languages, however, reading and understanding foreign-language PDF documents is often a major difficulty. Fortunately, a PDF translation tool named Zetore can help solve this problem. This article describes the functions and advantages of Zetore PDF translation under five main headings.

Main body:

1. Multi-language translation
1.1 Support for mainstream languages. The Zetore PDF translation tool has built-in translation for a number of mainstream languages, including English, French, German, Spanish, and Japanese. Users can choose the appropriate language for their needs, which greatly improves the efficiency of reading and understanding foreign-language PDF documents.
1.2 Automatic language detection. The tool can also detect the source language automatically: when the user opens a foreign-language PDF document, it detects the language used in the document and offers the corresponding translation options. This saves the user the tedious step of selecting the language manually and improves the accuracy and efficiency of translation.
1.3 Support for multiple translation engines. In addition to its built-in translation engine, the tool lets users choose other translation engines. Users can select a suitable engine for their needs in order to obtain more accurate and more specialised translation results.
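Functionally, "support for multiple translation engines" is a plug-in architecture. The sketch below is purely hypothetical Java (it is not Zetore's actual API, which is not documented here); it only shows how such a pluggable engine contract might look:

```java
import java.util.Locale;

/** Hypothetical plug-in contract for translation engines (not Zetore's real API). */
interface TranslationEngine {
    String name();
    /** Translates text into the target language; the source language may be auto-detected. */
    String translate(String text, Locale target);
}

/** A stub standing in for the tool's built-in engine. */
class BuiltInEngine implements TranslationEngine {
    public String name() { return "built-in"; }
    public String translate(String text, Locale target) {
        return "[" + target.getLanguage() + "] " + text; // placeholder: a real engine would call a model or service
    }
}

public class EngineDemo {
    public static void main(String[] args) {
        TranslationEngine engine = new BuiltInEngine(); // a user-chosen engine could be swapped in here
        System.out.println(engine.translate("Bonjour", Locale.ENGLISH));
    }
}
```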
2. Multiple translation modes
2.1 Text translation. The tool can translate the text content of a PDF document and display the result directly in the PDF. By reading the translated document, users can understand the content of the foreign-language original much more conveniently.
2.2 Translate-on-select. The tool also supports translating selected text: the user simply highlights the words to be translated, and the translation pops up automatically. This mode provides instant results and avoids the overhead of translating the entire PDF document.
2.3 Voice translation. In addition to text translation and translate-on-select, the tool supports voice translation. The user can speak the foreign-language content aloud; the tool converts the speech to text and translates it. This mode is quick and convenient, and it also meets some users' need for voice input.
Classic Foreign Literature Translation (Electrical Engineering English 2)

Power Electronics Building Blocks: A Systematic Approach to Power Electronics

Over the past five years, the US Navy has invested heavily in power electronics technology through the Power Electronics Building Block (PEBB) program. This investment is vital to today's and tomorrow's US Navy ships, and at the same time it is vital to the power electronics industry. Some investments, like the Internet, benefit society as a whole. The US Defense Advanced Research Projects Agency (DARPA) developed the Internet to serve defense functions, and a vast industry has since grown up around that investment. No single company or private organization could have borne the cost of developing the Internet on its own. The return on such an investment can be of great benefit to society as a whole, yet for any single organization the cost would be an enormous burden.

Like the Internet, the PEBB program concentrates on core problems and seeks to ensure that the future needs of the US Navy can be met by commercial off-the-shelf technology. This is an ideal win-win situation: the Navy obtains affordable power electronics, while the power electronics industry obtains support for core technology and science that it could not otherwise afford. The US Navy promotes open access to these core concepts and competition among the corresponding technical implementations.

Many modern design paradigms have been adapted to power electronics through PEBB research, among them open plug-and-play architectures, multi-cell design, multi-level design, integration, and concurrent engineering.

Plug-and-play power. A plug-and-play architecture makes a power electronics system design more like a personal computer: power modules are plugged into their applications and run by themselves. The operating program can determine what has been plugged in, who made it, and how to run it, and each power module maintains its own safe operating limits. Realizing this vision requires organizations to develop standard interfaces and protocols.

One motivation for the plug-and-play architecture is lower cost and more application functionality. The demand for new power electronics products has outgrown the resources available to supply these functions, and people capable of this work are very hard to find. The next generation of engineers is more inclined to become computer or software engineers than power electronics engineers. They want to complete their system designs on their computers and obtain the power sections as readily as the rest of their electronics.

The vast majority of power electronics today is traditional custom design. Would there be a personal computer industry if every computer sold required designers to lay out all the boards and packaging, and software engineers to write the assembly language and develop the operating programs? Is that not the old IBM 360 (mainframe computer) way of doing things? An open design architecture that packages expertise into devices multiplies what a designer can accomplish and allows that expertise to be applied to more and more applications.
Learning to Use Computer Translation Software

Chapter 1: Definition and Development of Computer Translation Software

Computer translation software refers to computer programs that can translate one natural language into another. It achieves automatic translation by means of technologies such as artificial intelligence, machine learning, and big data. With the rapid development of computer technology, translation software has evolved through several stages. Early translation software mainly used rule-based methods, translating by means of hand-written grammars and dictionaries; this approach, however, suffered from complex rules and poor adaptability. Later, statistical machine translation (SMT) improved accuracy by learning translation models from the analysis of large bilingual parallel corpora. In recent years, neural machine translation (NMT) based on neural networks has gradually become the mainstream: deep learning techniques capture the semantic and syntactic relationships between sentences better, improving the fluency of translation.

Chapter 2: Application Areas of Computer Translation Software

Computer translation software is widely used across modern society. In business communication, it helps enterprises with cross-border cooperation, import and export trade, and international conferences, so that language barriers between countries are no longer an obstacle. In tourism, it helps travellers understand signs, menus, and transport information in unfamiliar languages, making travel more convenient. In academic research, it helps scholars read and understand foreign-language literature, promoting academic exchange and the spread of knowledge. It is also widely used in news media, legal documents, healthcare, and other fields.

Chapter 3: Advantages and Challenges of Computer Translation Software

Compared with human translation, computer translation software has some obvious advantages. First, it can complete a large volume of translation work in a short time, improving efficiency. Second, it can reduce the errors and subjective influence of the manual translation process, improving accuracy. Third, it can learn and optimise continuously, steadily raising translation quality.

Nevertheless, computer translation software still faces a number of challenges. First, the semantic and cultural differences between languages make translation more complex, and the software still has difficulty handling ambiguity and polysemy. Second, it can hardly achieve true understanding of context and is prone to producing incorrect translations. In addition, because of the specialised terminology and background knowledge of certain domains, its translation quality in particular specialist fields is relatively low.

Chapter 4: Tips and Precautions for Using Computer Translation Software

When using computer translation software, a number of tips and precautions can help improve translation quality.
Beijing ********** Graduation Project (Thesis) Literature Translation
Title: Design of a Fare Calculation System for the Beijing Subway
Student name:    Student ID:    Major: Software Engineering    Year: 2011
Advisor:    Title: Lecturer    Department: Department of Computer Science and Technology
December 11, 2014

JSP

JSP (JavaServer Pages) is a dynamic web page technology standard established by Sun Microsystems together with the many companies it invited to participate. JSP technology is similar to ASP technology: Java program segments (scriptlets) and JSP tags are inserted into a conventional HTML file (*.htm, *.html) to form a JSP file (*.jsp). Web applications developed with JSP are cross-platform; they run on Linux as well as on other operating systems.

JSP technology uses XML-like tags and scriptlets written in the Java programming language to encapsulate the processing logic that generates dynamic pages. Through scriptlets and tags, a page can also access application logic residing in server-side resources. JSP separates the page's logic from its design and display, and supports reusable, component-based design, making the development of web applications rapid and easy.

When a web server receives a request to access a JSP page, it first executes the program segments in the page and then returns the execution results, together with the HTML source of the JSP file, to the client. The inserted Java program segments can operate on databases, redirect pages, and so on, implementing whatever functionality the dynamic page requires. Like JSP, Java servlets execute on the server side; what is normally returned to the client is HTML text, so the client needs nothing more than a browser.

The final version of the JSP 1.0 specification was released in September 1999, and the 1.1 specification followed in December. There is now a newer JSP 1.2 specification, and a draft of the JSP 2.0 specification has also appeared.

A JSP page consists of HTML source code with Java embedded in it. The server processes the embedded Java code when the page is requested by a client, and then returns the generated HTML page to the client's browser.
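A minimal JSP file makes this concrete: the static HTML passes through unchanged, while the scriptlet and the expressions are executed on the server before the response is sent. The file name, parameter, and content are invented for the example:

```jsp
<%-- hello.jsp : a hypothetical minimal JSP page --%>
<html>
  <body>
    <h1>Fare lookup</h1>
    <%-- scriptlet: ordinary Java, executed on the server for each request --%>
    <% String station = request.getParameter("station");
       if (station == null) station = "unknown"; %>
    <%-- expressions: their values are inserted into the generated HTML --%>
    <p>Departure station: <%= station %></p>
    <p>Server time: <%= new java.util.Date() %></p>
  </body>
</html>
```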
Java servlets are the technical foundation of JSP, and the development of large web applications requires Java servlets and JSP working together. JSP possesses all the characteristics of Java technology: it is simple and easy to use, fully object-oriented, platform-independent, secure and reliable, and oriented chiefly toward the Internet.

A few years ago, Marty was invited to attend a small (20-person) seminar on software technology. The person sitting next to Marty was James Gosling, the inventor of the Java programming language. A few seats away sat a senior manager from a large software company in Washington. During the discussion, the chairman of the seminar brought up the topic of Jini, which was then a new Java technology, and asked the manager what he thought of it. The manager said that they were watching the technology closely and that, if it became popular, they would follow their company's "embrace and extend" strategy. At this point, Gosling interjected offhandedly: "So what you really mean is disgrace and distend."

A JSP page is ultimately converted into a servlet. Fundamentally, therefore, any task a JSP page can perform can also be accomplished with a servlet. However, this underlying equivalence does not mean that servlets and JSP pages are equally suitable in all situations. The issue is not the power of the technology; it is the difference between the two in convenience, productivity, and maintainability. After all, anything that can be done in the Java programming language on a particular platform could also be done in assembly language, yet the choice of language still matters a great deal.

Compared with using servlets alone, JSP offers the following advantages:

a. With JSP, writing and maintaining the HTML is easier (see the sketch at the end of this comparison). Regular HTML can be used in a JSP page: no extra backslashes, no extra double quotes, and no lurking Java syntax.

b. Standard website development tools can be used. Even HTML tools that know nothing about JSP can be used, because they simply ignore the JSP tags.

c. The development team can be divided. The Java programmers work on the dynamic code, while the web developers concentrate on the presentation layer. For large projects, this division is extremely important. Depending on the size of the development team and the complexity of the project, one can opt for a weaker or a stronger separation between the static HTML and the dynamic content.

This discussion does not mean that you should stop using servlets and rely on JSP alone. Almost all projects use both technologies; for some requests in a project you will combine the two under the MVC architecture. We always want to use the appropriate tool for the task at hand, and servlets alone do not fill out your toolbox.
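Advantage (a) is easiest to see side by side. Generating even a single line of HTML from a servlet buries the markup in Java string literals and escaped quotes. A sketch using the javax.servlet API (the class and parameter names are invented for the example):

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** The servlet way: every tag and attribute lives inside escaped Java strings. */
public class GreetingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>");
        out.println("<p style=\"color:navy\">Hello, " + req.getParameter("name") + "!</p>");
        out.println("</body></html>");
    }
}
```

The JSP equivalent of the middle line is simply <p style="color:navy">Hello, <%= request.getParameter("name") %>!</p>, ordinary HTML that an HTML-only tool can still open and edit, which is exactly the maintainability point being made.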
Strengths of JSP technology:

(1) Write once, run anywhere. On this point Java does even better than PHP: apart from the system itself, no code needs to be changed.

(2) Multi-platform support. A system can be developed in almost any environment on any platform, deployed in any environment, and extended in any environment. Compared with the limitations of ASP/PHP, the advantage is obvious.

(3) Strong scalability. From a single small Jar file that can run servlets/JSP, to clustering and load balancing across multiple servers, to multiple application servers for transaction and message processing; from one server to countless servers, Java demonstrates tremendous vitality.

(4) Diverse and powerful development tool support. Much as with ASP, Java already has many excellent development tools, many of them available free of charge, and many of them run smoothly on a variety of platforms.

Weaknesses of JSP technology:

(1) As with ASP, some of Java's advantages are also its fatal problems. Precisely because of its cross-platform capability and its extreme scalability, the complexity of the product increases greatly.

(2) Java's running speed is achieved by keeping classes resident in memory, so in some cases the memory it consumes per user is indeed "the worst price/performance ratio". On top of that, it consumes disk space for a series of .java and .class files, as well as the corresponding versioned files.

What you need to know about servlets:

(1) JSP pages are converted into servlets. You cannot understand how JSP works without understanding servlets.

(2) JSP consists of static HTML, special-purpose JSP tags, and Java code. What kind of Java code? Servlet code, of course! If you do not understand servlet programming, you cannot write that code.

(3) Some tasks are better accomplished with servlets than with JSP. JSP is good at generating pages consisting of large amounts of well-organized, structured HTML or other character data. Servlets are good at generating binary data, building pages whose structure varies, and performing tasks that produce little or no output (such as redirection).

(4) Some tasks are best suited to a combination of servlets and JSP, rather than to servlets or JSP used alone.

Compared with JavaScript: JavaScript and the Java programming language are entirely different things. JavaScript generates dynamic HTML on the client side, building part of the page while the browser loads it. This is a useful capability, and it partially overlaps the functionality of JSP (which runs only on the server side). Like a traditional HTML page, a JSP page may contain JavaScript inside SCRIPT tags. In fact, JSP can even be used to dynamically generate the JavaScript that is sent to the client. JavaScript is therefore not a competing technology but a complementary one. JavaScript can also be used on the server side, most notably in SUN ONE (formerly iPlanet), IIS, and BroadVision servers; however, Java is more powerful, more flexible, more reliable, and more portable.
The nine built-in objects of JSP are: request, response, out, session, application, config, pageContext, page, and exception.

1. The request object encapsulates the information submitted by the user; by calling the appropriate methods of this object, the encapsulated information can be retrieved. In other words, this object is used to obtain the information the user has submitted.

2. The response object makes a dynamic response to the client's request and sends data back to the client.

3. The session object.

(1) What a session is: the session object is a JSP built-in object that is created automatically when the first JSP page is loaded, to carry out session management. A session is the period from the moment a client opens a browser and connects to the server until the client closes the browser and leaves the server. When a client visits a server, it may connect repeatedly to several pages of that server and refresh a page repeatedly; the server needs some way of knowing that it is still the same client, and this is what the session object is for.

(2) The session object's ID: when a client visits a JSP page on a server for the first time, the JSP engine creates a session object and assigns it an ID number of type String, which the engine also sends to the client, to be stored in a Cookie; in this way a correspondence is established between the client and its session object. When the client goes on to connect to other pages of this server, no new session object is allocated to the client; only after the client closes the browser is the session object on the server side cancelled, and the correspondence between it and the client disappears. When the client reopens the browser and reconnects to the server, the server creates a new session object for that client.
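As an illustration of this per-client correspondence, a scriptlet like the following (a hypothetical page) counts one client's visits using the built-in session object; a different browser session receives its own session object and therefore its own counter:

```jsp
<%-- counter.jsp : hypothetical per-session visit counter --%>
<% Integer visits = (Integer) session.getAttribute("visits");  // null on the session's first visit
   visits = (visits == null) ? 1 : visits + 1;
   session.setAttribute("visits", visits); %>
<p>Session ID: <%= session.getId() %></p>
<p>You have viewed this page <%= visits %> time(s) during this session.</p>
```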
4. The application object.

(1) What application is: the application object comes into existence when the server starts. As clients visit and browse among the pages of the site, there is only this one application object, shared by all clients, until the server shuts down.
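By contrast with the per-client session object, a value stored on the application object is shared by every client of the site. A hypothetical sketch of a site-wide hit counter (synchronized because all clients update the same object):

```jsp
<%-- hits.jsp : hypothetical site-wide hit counter shared by all clients --%>
<% Integer hits;
   synchronized (application) {               // one application object for the whole site
       hits = (Integer) application.getAttribute("hits");
       hits = (hits == null) ? 1 : hits + 1;
       application.setAttribute("hits", hits);
   } %>
<p>Visits since the server started: <%= hits %></p>
```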