Software Engineering (Translated Foreign Literature)
Foreign Literature

1、Software Engineering

Software is the sequence of instructions in one or more programming languages that comprises a computer application to automate some business function. Engineering is the use of tools and techniques in problem solving. Putting the two words together, software engineering is the systematic application of tools and techniques in the development of computer-based applications.

The software engineering process describes the steps it takes to develop the system. We begin a development project with the notion that there is a problem to be solved via automation. The process is how you get from problem recognition to a working solution. A quality process is desirable because it is more likely to lead to a quality product. The process followed by a project team during the development life cycle of an application should be orderly, goal-oriented, enjoyable, and a learning experience.

Object-oriented methodology is an approach to system lifecycle development that takes a top-down view of data objects, their allowable actions, and the underlying communication requirement to define a system architecture. The data and action components are encapsulated, that is, they are combined together to form abstract data types. Encapsulation means that if I know what data I want, I also know the allowable processes against that data. Data are designed as lattice hierarchies of relationships to ensure that top-down, hierarchic inheritance and sideways relationships are accommodated. Encapsulated objects are constrained only to communicate via messages. At a minimum, messages indicate the receiver and the action requested. Messages may be more elaborate, including the sender and the data to be acted upon (a minimal code sketch of these ideas appears at the end of this section).

That we try to apply engineering discipline to software development does not mean that we have all the answers about how to build applications. On the contrary, we still build systems that are not useful and thus are not used. Part of the reason for continuing problems in application development is that we are constantly trying to hit a moving target. Both the technology and the type of applications needed by businesses are constantly changing and becoming more complex. Our ability to develop and disseminate knowledge about how to successfully build systems for new technologies and new application types seriously lags behind technological and business changes.

Another reason for continuing problems in application development is that we aren't always free to do what we like, and it is hard to change habits and cultures from the old way of doing things, as well as to get users to agree with a new sequence of events or an unfamiliar format for documentation.

You might ask, then: if many organizations don't use good software engineering practices, why should I bother learning them? There are two good answers to this question. First, if you never know the right thing to do, you have no chance of ever using it. Second, organizations will frequently accept evolutionary, small steps of change instead of revolutionary, massive change. You can learn individual techniques that can be applied without complete devotion to one way of developing systems. In this way, software engineers can speed change in their organizations by demonstrating how the tools and techniques enhance the quality of both the product and the process of building a system.
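To make the encapsulation and message-passing ideas above concrete, here is a minimal Java sketch (Java is used for all examples in this compilation); the Account class and its names are illustrative, not taken from the text. The data and its allowable actions live together in one abstract data type, and other code interacts with it only by sending a message that names the receiver, the action, and the data to be acted upon.

```java
public class Account {
    private long balanceCents; // encapsulated: reachable only through the methods below

    public Account(long openingCents) {
        this.balanceCents = openingCents;
    }

    // An allowable process against the encapsulated data.
    public void deposit(long cents) {
        if (cents < 0) throw new IllegalArgumentException("negative deposit");
        balanceCents += cents;
    }

    public long balance() {
        return balanceCents;
    }

    public static void main(String[] args) {
        Account acct = new Account(10_000);
        acct.deposit(2_500); // message: receiver = acct, action = deposit, data = 2500
        System.out.println(acct.balance()); // prints 12500
    }
}
```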
2、Data Base System

1、Introduction

The development of corporate databases will be one of the most important data-processing activities for the rest of the 1970s. Data will be increasingly regarded as a vital corporate resource, which must be organized so as to maximize their value. In addition to the databases within an organization, a vast new demand is growing for database services, which will collect, organize, and sell data.

The files of data which computers can use are growing at a staggering rate. The growth rate in the size of computer storage is greater than the growth in the size or power of any other component in the exploding data-processing industry. The more data the computers have access to, the greater is their potential power. In all walks of life and in all areas of industry, data banks will change the realm of what it is possible for man to do. By the end of this century, historians will look back to the coming of computer data banks and their associated facilities as a step which changed the nature of the evolution of society, perhaps eventually having a greater effect on the human condition than even the invention of the printing press.

Some of the most impressive corporate growth stories of this generation are largely attributable to the explosive growth in the need for information. The vast majority of this information is not yet computerized. However, the cost of data-storage hardware is dropping more rapidly than other costs in data processing. It will become cheaper to store data on computer files than to store them on paper. Not only printed information will be stored. The computer industry is improving its capability to store line drawings, data in facsimile form, photographs, human speech, etc. In fact, any form of information other than the most intimate communications between humans can be transmitted and stored digitally.

There are two main technology developments likely to become available in the near future. First, there are electromagnetic devices that will hold much more data than disks but have much longer access times. Second, there are solid-state technologies that will give microsecond access times but whose capacities are smaller than those of disks. Disks themselves may be increased in capacity somewhat. For the longer-term future there are a number of new technologies currently being worked on in research labs which may replace disks and may provide very large microsecond-access-time devices. A steady stream of new storage devices is thus likely to reach the marketplace over the next 5 years, rapidly lowering the cost of storing data.

Given the available technologies, it is likely that on-line data bases will use two or three levels of storage: one solid-state with microsecond access times, and one electromagnetic with access times of a fraction of a second. If two, three, or four levels of storage are used, physical storage organization will become more complex, probably with paging mechanisms to move data between the levels; solid-state storage offers the possibility of parallel search operations and associative memory.

Both the quantity of data stored and the complexity of their organization are going up by leaps and bounds. The first trillion-bit on-line stores are now in use. In a few years' time, stores of this size may be common.

A particularly important consideration in data base design is to store the data so that they can be used for a wide variety of applications, and so that the way they are used can be changed quickly and easily. On computer installations prior to the data base era it has been remarkably difficult to change the way data are used.
Different programmers view the data in different ways and constantly want to modify them as new needs arise. Modification, however, can set off a chain reaction of changes to existing programs and hence can be exceedingly expensive to accomplish. Consequently, data processing has tended to become frozen into its old data structures.

To achieve the flexibility of data usage that is essential in most commercial situations, two aspects of data base design are important. First, it should be possible to interrogate and search the data base without the lengthy operation of writing programs in conventional programming languages. Second, the data should be independent of the programs which use them, so that they can be added to or restructured without the programs being changed.

The work of designing a data base is becoming increasingly difficult, especially if it is to perform in an optimal fashion. There are many different ways in which data can be structured, and different types of data need to be organized in different ways. Different data have different characteristics, which ought to affect the data organization, and different users have fundamentally different requirements. So we need a kind of data base management system (DBMS) to manage data.

Data base design using the entity-relationship model begins with a list of the entity types involved and the relationships among them. The philosophy of assuming that the designer knows what the entity types are at the outset is significantly different from the philosophy behind the normalization-based approach. The entity-relationship (E-R) approach uses entity-relationship diagrams. The E-R approach requires several steps to produce a structure that is acceptable to the particular DBMS. These steps are:

(1) Data analysis.
(2) Producing and optimizing the entity model.
(3) Logical schema development.
(4) Physical data base design process.

Developing a data base structure from user requirements is called data base design. Most practitioners agree that there are two separate phases to the data base design process: the design of a logical database structure that is processable by the data base management system (DBMS) and describes the user's view of data, and the selection of a physical structure such as the indexed sequential or direct access method of the intended DBMS.

Current data base design technology shows many residual effects of its outgrowth from single-record file design methods. File design is primarily application-program dependent, since the data have been defined and structured in terms of the individual applications that use them. The advent of the DBMS revised the emphasis in data and program design approaches.

There are many interlocking questions in the design of data-base systems, and many types of technique that one can use in answer to them; so many, in fact, that one often sees valuable approaches being overlooked in the design and vital questions not being asked.

There will soon be new storage devices, new software techniques, and new types of data bases. The details will change, but most of the principles will remain.
Therefore, the reader should concentrate on the principles.

2、Data base system

The conceptions used for describing files and data bases have varied substantially in the same organization.

A data base may be defined as a collection of interrelated data stored together with as little redundancy as possible to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use the data; and a common and controlled approach is used in adding new data and in modifying and retrieving existing data within the data base. One system is said to contain a collection of data bases if they are entirely separate in structure.

A data base may be designed for batch processing, real-time processing, or in-line processing. A data base system involves application programs, a DBMS, and the data base itself.

One of the most important characteristics of most data bases is that they will constantly need to change and grow. Easy restructuring of the data base must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a data base can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation.

The term data independence is often quoted as being one of the main attributes of a data base. It implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, data-base organization is concerned with the representation of relationships between data items and records, as well as with how and where the data are stored. A data base used for many applications can have multiple interconnections between the data items about which we may wish to record; it can thus describe the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes, and one attribute has a special significance in that it identifies the entity.

An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as the key used to uniquely identify one record or tuple. The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm.

If the function of a data base were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. It is also necessary to distinguish between the logical and the physical description of the data.

The logical data base description is referred to as a schema. A schema is a chart of the types of data that are used. It gives the names of the entities and attributes and specifies the relations between them. It is a framework into which the values of the data items can be fitted.

We must distinguish between a record type and an instance of the record. When we talk about a "personnel record," this is really a record type: there are no data values associated with it. The term schema is used to mean an overall chart of all of the data-item types and record types stored in a data base.
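A minimal Java sketch of the two distinctions just made: a record type versus an instance of it, and a primary key used to locate one record. The Employee type, its attributes, and the in-memory map standing in for the DBMS's index or addressing algorithm are all illustrative assumptions (Java 16+ record syntax), not details from the text.

```java
import java.util.HashMap;
import java.util.Map;

public class PrimaryKeyDemo {
    // The record TYPE: a chart of attributes with no data values attached.
    record Employee(String empNo, String name, String dept) {}

    public static void main(String[] args) {
        // An index from primary key to record, standing in for the
        // DBMS's index or addressing algorithm.
        Map<String, Employee> byKey = new HashMap<>();

        // INSTANCES of the record type, each uniquely identified by empNo.
        byKey.put("E042", new Employee("E042", "Lin", "Sales"));
        byKey.put("E117", new Employee("E117", "Park", "Audit"));

        // The primary key locates exactly one record.
        System.out.println(byKey.get("E117")); // Employee[empNo=E117, name=Park, dept=Audit]
    }
}
```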
Many different subschemas can be derived from one schema. The schema and the subschemas are both used by the data-base management system, the primary function of which is to serve the application programs by executing their data operations.

A DBMS will usually be handling multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a "data model." The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small data bases, many data bases involve millions of bytes, and an inefficient implementation can be disastrous. We will discuss data models in the following.

3、Three Data Models

Logical schemas are defined as data models with the underlying structure of particular database management systems superimposed on them. At the present time, there are three main underlying structures for database management systems: relational, hierarchical, and network.

The hierarchical and network structures have been used for DBMS since the 1960s. The relational structure was introduced in the early 1970s.

In the relational model, the entities and their relationships are represented by two-dimensional tables. Every table represents an entity and is made up of rows and columns. Relationships between entities are represented by common columns containing identical values from a domain or range of possible values.

The end user is presented with a simple data model. His or her requests are formulated in terms of the information content and do not reflect any complexities due to system-oriented aspects. A relational data model is what the user sees, but it is not necessarily what will be implemented physically.

The relational data model removes the details of storage structure and access strategy from the user interface. The model provides a relatively high degree of data independence. To be able to make use of this property of the relational data model, however, the design of the relations must be complete and accurate.

Although some DBMS based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with the required efficiency on a large scale. It appears today that technological improvements in providing faster and more reliable hardware may answer the question positively.
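A minimal Java sketch of the relational idea described above, under the assumption that an in-memory list of records can stand in for a table: every "table" is rows and columns, and the relationship between the two tables is carried only by a common column (deptNo) holding identical values. All table, column, and value names are invented for illustration.

```java
import java.util.List;

public class RelationalSketch {
    record Dept(String deptNo, String deptName) {}
    record Emp(String empNo, String name, String deptNo) {} // deptNo is the common column

    public static void main(String[] args) {
        List<Dept> depts = List.of(new Dept("D1", "Sales"), new Dept("D2", "Audit"));
        List<Emp> emps = List.of(new Emp("E042", "Lin", "D1"),
                                 new Emp("E117", "Park", "D2"));

        // "Joining" the two relations by matching values in the common column.
        for (Emp e : emps)
            for (Dept d : depts)
                if (e.deptNo().equals(d.deptNo()))
                    System.out.println(e.name() + " works in " + d.deptName());
    }
}
```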
The hierarchical data model is based on a tree-like structure made up of nodes and branches. A node is a collection of data attributes describing the entity at that point. The highest node of the hierarchical tree structure is called the root. The nodes at succeeding lower levels are called children. A hierarchical data model always starts with a root node. Every node consists of one or more attributes describing the entity at that node. Dependent nodes can follow at the succeeding levels; the node at the preceding level becomes the parent node of the new dependent nodes. A parent node can have one child node as a dependent, or many children nodes.

The major advantage of the hierarchical data model is the existence of proven database management systems that use the hierarchical data model as the basic structure. There is a reduction of data dependency, but any child node is accessible only through its parent node, and the many-to-many relationship can be implemented only in a clumsy way. This often results in redundancy in stored data.

The network data model interconnects the entities of an enterprise into a network. In the network data model a data base consists of a number of areas. An area contains records. In turn, a record may consist of fields. A set, which is a grouping of records, may reside in an area or span a number of areas. A set type is based on an owner record type and a member record type. The many-to-many relationship, which occurs quite frequently in real life, can be implemented easily. The network data model is very complex, however, and the application programmer must be familiar with the logical structure of the data base.

4、Logical Design and Physical Design

Logical design of databases is mainly concerned with superimposing the constructs of the data base management system on the logical data model. There are three main models, as mentioned above: hierarchical, relational, and network.

The physical model is a framework of the database to be stored on physical devices. The model must be constructed with every regard given to the performance of the resulting database. One should carry out an analysis of the physical model with average frequencies of occurrence of the groupings of the data elements, with expected space estimates, and with respect to time estimates for retrieving and maintaining the data.

The database designer may find it necessary to have multiple entry points into a database, or to access a particular segment type with more than one key. To provide this type of access, it may be necessary to invert the segment on the keys. The physical designer must have expertise in the functions of the DBMS, an understanding of the characteristics of direct-access devices, and knowledge of the applications.

Many data bases have links between one record and another, called pointers. A pointer is a field in one record which indicates where a second record is located on the storage devices.

Records exist on storage devices in a given physical sequence. This sequencing may be employed for some purpose. The most common purpose is that records are needed in a given sequence by certain data-processing operations, and so they are stored in that sequence. Different applications may need the records in different sequences.

The most common method of ordering records is to have them in sequence by a key, namely the key which is most commonly used for addressing them. An index is required to find any record without a lengthy search of the file. If the data records are laid out sequentially by key, the index for that key can be much smaller than if they were nonsequential.

Hashing has been used for addressing random-access storage since it first came into existence in the mid-1950s, but nobody had the temerity to use the word hashing until 1968. Many systems analysts have avoided the use of hashing in the suspicion that it is complicated. In fact, it is simple to use and has two important advantages over indexing. First, it finds most records with only one seek; second, insertions and deletions can be handled without added complexity.
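A minimal Java sketch of this trade-off, using in-memory maps as stand-ins for file organizations: HashMap plays the role of a hashed file (one probe per lookup, easy insertion, no useful ordering), while TreeMap plays the role of a file kept sequential by prime key, whose key-order scans suit batch processing. The keys and values are illustrative.

```java
import java.util.HashMap;
import java.util.TreeMap;

public class HashVsIndex {
    public static void main(String[] args) {
        var hashed  = new HashMap<String, String>();
        var indexed = new TreeMap<String, String>();
        for (String key : new String[]{"E117", "E042", "E203"}) {
            hashed.put(key, "record " + key);
            indexed.put(key, "record " + key);
        }

        // Hashed: direct one-probe retrieval; insertions need no reorganization.
        System.out.println(hashed.get("E042"));

        // Indexed-sequential stand-in: records come back in prime-key order,
        // which is what batch runs reading the whole file in sequence want.
        indexed.forEach((k, v) -> System.out.println(k + " -> " + v));
    }
}
```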
Indexing, however, can be used with a file which is sequential by prime key, and this is an overriding advantage for some batch-processing applications.

Many data-base systems also use chains to interconnect records. A chain refers to a group of records scattered within the files and interconnected by a sequence of pointers. The software that is used to retrieve the chained records will make them appear to the application programmer as a contiguous logical file.

The primary disadvantage of chained records is that many read operations are needed in order to follow lengthy chains. Sometimes this does not matter because the records have to be read anyway. In most search operations, however, the chains have to be followed through records which would not otherwise be read. In some file organizations the chains can be contained within blocked physical records so that excessive reads do not occur.

Rings have been used in many file organizations. They are used to eliminate redundancy. When a ring or a chain is entered at a point some distance from its head, it may be desirable to obtain the information at the head quickly without stepping through all the intervening links.

5、Data Description Languages

It is necessary for both the programmers and the data administrator to be able to describe their data precisely; they do so by means of data description languages. A data description language is the means of declaring to the data-base management system what data structures will be used.

A data description language giving a logical data description should perform the following functions:

It should give a unique name to each data-item type, file type, data base, and other data subdivision.

It should identify the types of data subdivision, such as data item, segment, record, and base file.

It may define the type of encoding the program uses in the data items (binary, character, bit string, etc.).

It may define the length of the data items and the range of values that a data item can assume.

It may specify the sequence of records in a file or the sequence of groups of records in the data base.

It may specify means of checking for errors in the data.

It may specify privacy locks for preventing unauthorized reading or modification of the data. These may operate at the data-item, segment, record, file, or data-base level and, if necessary, may be extended to the contents (values) of individual data items. The authorization may, on the other hand, be separately defined; it is more subject to change than the data structures, and changes in authorization procedures should not force changes in application programs.

A logical data description should not specify addressing, indexing, or searching techniques, or specify the placement of data on the storage units, because these topics are in the domain of physical, not logical, organization. It may give an indication of how the data will be used, or of searching requirements, so that the physical technique can be selected optimally, but such indications should not be logically limiting.

Most DBMS have their own languages for defining the schemas that are used. In most cases these data description languages are different from other programming languages, because other programming languages do not have the capability to define the variety of relationships that may exist in the schemas.
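As an illustration of the logical-description functions listed above, here is a hypothetical Java model of the information such a description carries; this is not any real DBMS's data description language, and every name in it is invented. Note what is deliberately absent: nothing below says how the data are addressed, indexed, or placed on storage devices.

```java
import java.util.List;

public class LogicalDescriptionSketch {
    enum Encoding { BINARY, CHARACTER, BIT_STRING }

    // A unique name per data-item type, its encoding, its length, the range
    // of values it may assume, and a privacy lock, as the list requires.
    record ItemType(String name, Encoding encoding, int length,
                    long minValue, long maxValue, String privacyLock) {}

    record RecordType(String name, List<ItemType> items) {}

    public static void main(String[] args) {
        RecordType employee = new RecordType("EMPLOYEE", List.of(
            new ItemType("EMP-NO", Encoding.CHARACTER, 6, 0, 0, "NONE"),
            new ItemType("SALARY", Encoding.BINARY, 4, 0, 5_000_000, "PAYROLL-ADMIN")));
        System.out.println(employee);
    }
}
```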
ANALYSIS OF DATABASE PROGRAMMING IN VB

VB (Visual Basic) is a visual programming environment based on the Basic language and promoted by Microsoft Corporation. It is simple and easy to study yet powerful, so many computer amateurs really like it, and a lot of application software uses VB as the software development platform. When we use VB to develop application software, how to access the database and carry on the management of the database is a concern of all developers. VB has provided many tools and methods for database programming. Which method is used to access the database depends on users' different demands; a simple analysis of VB database programming follows.

1.DAO Technology

By using the Microsoft Jet Database Engine, DAO (Data Access Objects) technology mainly provides access to ISAM (Indexed Sequential Access Method) databases, realizing access to databases such as FoxPro, Access, and dBase.

1.1 Use Data Controls

Data controls are produced by using the "Data" button in the toolbox. A Data control has three basic properties: Connect, DatabaseName, and RecordSource. The Connect property specifies the database type that the Data control accesses; the default is the Access database. The value of the DatabaseName property is the database filename, including the complete path. The RecordSource property is the recordset that we access; it can also be a table or an SQL statement. If we want to access table stud of the database file teacher.mdb in the TEMP folder on drive D, then the Data control's Connect property is null, its DatabaseName property is "D:\temp\teacher.mdb", and the value of its RecordSource property is "stud". This accomplishes the binding between the Data control and the database records. Through the methods of the Data control, such as AddNew, Update, Delete, and MoveLast, we can access the database as each request demands. When we browse the content of a database, the Data control is frequently used with DBGrid, which provides data inquiry in grid form.

1.2 Use the DAO Object Library

The model of the DAO object library mainly uses a hierarchical structure. DBEngine is the object at the topmost level; below it are two collections, Errors and Workspaces, and under the Workspace object is the Databases collection. When an application references the DAO object library, it produces only one DBEngine object, which automatically produces a default workspace object, Workspaces(0). Unless stated otherwise, all database operations work in Workspaces(0), the default work area. But we must pay attention: the Jet engine does not start automatically after VB has been loaded.
Only when we choose References in the Project menu and select Microsoft DAO 3.5 Object Library can we use it. We create databases with the CreateDatabase method in DAO, use the CreateTableDef method to build tables, use the OpenDatabase method to open a database, use the OpenRecordset method to open a recordset, and use the AddNew, Update, Delete, MoveFirst, and Edit methods to realize each kind of operation on tables.

2.RDO Technology

RDO provides a connection to relational ODBC data sources. When we need to access other databases such as SQL Server or Oracle, especially to establish client/server applications, we may use the Remote Data Control (RDC) and Remote Data Objects (RDO) to realize access to the database through an ODBC driver. To use ODBC to access a database we must first install the corresponding driver and establish a data source; the corresponding database is then accessed through the assigned data source. To establish an ODBC data source, open the "Control Panel" window, double-click the ODBC administrator icon, click the "Add" button in the ODBC data source administrator dialog box to create the data source, and choose the corresponding database.

2.1 Use RDC Controls

Similar to the use of Data controls, we use the DataSourceName property to assign the data source that the control binds to, and we use the SQL property to assign the recordset. The difference is that we have to use SQL statements to assign the SQL property of RDC controls. When we browse the database, it too is frequently used with DBGrid.

2.2 Use the RDO Object Library

Before we use RDO objects, we should choose References in the Project menu and click "Microsoft Remote Data Object 2.0"; then we can continue. The steps for using RDO to access an ODBC data source are:

(1) Set up an RDO environment object.
(2) Open an ODBC data source with the OpenConnection method.
(3) Establish a resultset object with the OpenResultset method.
(4) Use the assigned methods to operate on the records of the resultset.

After the resultset object is established, it is used much like the DAO object library: by calling methods such as AddNew, Update, and Delete, each kind of request against the data source can be realized.

3.ADO Technology

ADO (ActiveX Data Objects) is the latest data access technology from Microsoft. It uses the data access interface UDA (Universal Data Access) to standardize all data as data sources: through an OLE DB provider, data are transformed in a uniform way into a general format, enabling applications to access them. OLE DB is an underlying level of data access interface; with it we may access all kinds of data sources, including traditional relational databases, as well as electronic mail systems and custom commercial objects.

3.1 Use ADO Controls

Click the Components command in the Project menu and select "Microsoft ADO Data Control 6.0 (OLE DB)" in the Components dialog box; this adds the ADO control to the toolbox. We set the OLE DB provider and the assigned database file by setting the ConnectionString property of the ADO control, and we set its RecordSource property to the record source that ADO connects to. Similar to DAO and RDO, with it we are able to access all kinds of databases quickly.

3.2 Use the ADO Object Library

Click the References command in the Project menu and select "Microsoft ActiveX Data Objects 2.0 Library" in the References dialog box; this adds the ADO objects.
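DAO, RDO, and ADO all follow the same underlying pattern: open a connection to a data source, open a recordset or resultset, then operate on its records. For comparison only, here is a minimal sketch of those same steps in Java's JDBC API (this is not VB/ADO code); it reuses the teacher/stud example from section 1.1, and the JDBC URL, driver availability, and credentials are placeholder assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// The same connect -> open resultset -> operate pattern, expressed in JDBC.
// A matching JDBC driver must be on the classpath for this URL.
public class FourStepSketch {
    public static void main(String[] args) throws Exception {
        try (Connection cn = DriverManager.getConnection(
                 "jdbc:somedb://host/teacher", "user", "password"); // open the data source
             Statement st = cn.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM stud")) { // establish the resultset
            while (rs.next()) {                                      // operate on the records
                System.out.println(rs.getString(1));
            }
        }
    }
}
```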
Old object models, like DAO and RDO, are hierarchical: a lower-level data object such as Recordset is a sub-object of higher-level objects such as Environment. ADO is actually different: it defines a group of flat, top-level objects. The most important ADO objects are Connection, Recordset, and Command. The Connection object is used to establish the connection between the application and the data source. The Command object is used to define an SQL statement, a stored procedure, or another command that operates on the data. The Recordset object preserves the recordset after execution. By using the methods of the Recordset object, we can modify, delete, and query the recordset.

4.Conclusions

VB provides many methods to accomplish operations on a database. DAO mainly accomplishes access to ISAM databases, and RDO provides a connection to ODBC data sources; both RDO and DAO have developed into mature technologies and were the main database access technologies before VB 6.0. However, ActiveX Data Objects (ADO), the new generation of database interface promoted by Microsoft, is designed to work with the new data access layer OLE DB so as to provide Universal Data Access. It offers quite a lot of advantages to programmers, including ease of use, a familiar interface, high speed, and a lower memory footprint. As a result of the above reasons, ADO will gradually replace the other data access interfaces and become the fundamental mode of VB database access.
Original English Text

Title: Business Applications of Java. Author: Erbschloe, Michael. Business Applications of Java -- Research Starters Business, 2008. Database: Research Starters - Business

Business Applications of Java

This article examines the growing use of Java technology in business applications. The history of Java is briefly reviewed along with the impact of open standards on the growth of the World Wide Web. Key components and concepts of the Java programming language are explained, including the Java Virtual Machine. Examples of how Java is being used by e-commerce leaders are provided, along with an explanation of how Java is used to develop data warehousing, data mining, and industrial automation applications. The concept of metadata modeling and the use of Extensible Markup Language (XML) are also explained.

Keywords: Application Programming Interfaces (APIs); Enterprise JavaBeans (EJB); Extensible Markup Language (XML); HyperText Markup Language (HTML); HyperText Transfer Protocol (HTTP); Java Authentication and Authorization Service (JAAS); Java Cryptography Architecture (JCA); Java Cryptography Extension (JCE); Java Programming Language; Java Virtual Machine (JVM); Java2 Platform, Enterprise Edition (J2EE); Metadata

Business Information Systems > Business Applications of Java

Overview

Open standards have driven the e-business revolution. Networking protocol standards, such as Transmission Control Protocol/Internet Protocol (TCP/IP), HyperText Transfer Protocol (HTTP), and the HyperText Markup Language (HTML) Web standards, have enabled universal communication via the Internet and the World Wide Web. As e-business continues to develop, various computing technologies help to drive its evolution.

The Java programming language and platform have emerged as major technologies for performing e-business functions. Java programming standards have enabled portability of applications and the reuse of application components across computing platforms. Sun Microsystems' Java Community Process continues to be a strong base for the growth of the Java infrastructure and language standards. This growth of open standards creates new opportunities for designers and developers of applications and services (Smith, 2001).

Creation of Java Technology

Java technology was created as a computer programming tool in a small, secret effort called "the Green Project" at Sun Microsystems in 1991. The Green Team, fully staffed at 13 people and led by James Gosling, locked themselves away in an anonymous office on Sand Hill Road in Menlo Park, cut off from all regular communications with Sun, and worked around the clock for 18 months. Their initial conclusion was that at least one significant trend would be the convergence of digitally controlled consumer devices and computers. A device-independent programming language code-named "Oak" was the result.

To demonstrate how this new language could power the future of digital devices, the Green Team developed an interactive, handheld home-entertainment device controller targeted at the digital cable television industry. But the idea was too far ahead of its time, and the digital cable television industry wasn't ready for the leap forward that Java technology offered them.
As it turns out, the Internet was ready for Java technology, and just in time for its initial public introduction in 1995, the team was able to announce that the Netscape Navigator Internet browser would incorporate Java technology ("Learn about Java," 2007).

Applications of Java

Java uses many familiar programming concepts and constructs and allows portability by providing a common interface through an external Java Virtual Machine (JVM). A virtual machine is a self-contained operating environment, created by a software layer that behaves as if it were a separate computer. Benefits of creating virtual machines include better exploitation of powerful computing resources and isolation of applications to prevent cross-corruption and improve security (Matlis, 2006).

The JVM allows computing devices with limited processors or memory to handle more advanced applications by calling up software instructions inside the JVM to perform most of the work. This also reduces the size and complexity of Java applications, because many of the core functions and processing instructions are built into the JVM. As a result, software developers no longer need to re-create the same application for every operating system. Java also provides security by instructing the application to interact with the virtual machine, which serves as a barrier between applications and the core system, effectively protecting systems from malicious code.

Among other things, Java is tailor-made for the growing Internet because it makes it easy to develop new, dynamic applications that can make the most of the Internet's power and capabilities. Java is now an open standard, meaning that no single entity controls its development, and the tools for writing programs in the language are available to everyone. The power of open standards like Java is the ability to break down barriers and speed up progress.

Today, you can find Java technology in networks and devices that range from the Internet and scientific supercomputers to laptops and cell phones, and from Wall Street market simulators to home game players and credit cards. There are over 3 million Java developers, and there are now several versions of the code. Most large corporations have in-house Java developers. In addition, the majority of key software vendors use Java in their commercial applications (Lazaridis, 2003).

Applications

Java on the World Wide Web

Java has found a place on some of the most popular websites in the world, and the uses of Java continue to grow. Java applications not only provide unique user interfaces, they also help to power the backends of websites. Two e-commerce giants that everybody is probably familiar with, eBay and Amazon, have been Java pioneers on the World Wide Web.

eBay

Founded in 1995, eBay enables e-commerce on a local, national, and international basis with an array of Web sites, including the eBay marketplaces, PayPal, and Skype, that bring together millions of buyers and sellers every day. You can find it on eBay, even if you didn't know it existed. On a typical day, more than 100 million items are listed on eBay in tens of thousands of categories. Recent listings have included a tunnel boring machine from the Chunnel project, a cup of water that once belonged to Elvis, and the Volkswagen that Pope Benedict XVI owned before he moved up to the Popemobile.
More than one hundred million items are available at any given time, from the massive to the miniature, the magical to the mundane, on eBay, the world's largest online marketplace.

eBay uses Java almost everywhere. To address some security issues, eBay chose Sun Microsystems' Java System Identity Manager as the platform for revamping its identity management system. The task at hand was to provide identity management for more than 12,000 eBay employees and contractors.

Now more than a thousand eBay software developers work daily with Java applications. Java's inherent portability allows eBay to move to new hardware to take advantage of new technology, packaging, or pricing, without having to rewrite Java code ("eBay drives explosive growth," 2007).

Amazon

Amazon (a large seller of books, CDs, and other products) has created a Web Service application that enables users to browse its product catalog and place orders. One sample client uses a Java application that searches the Amazon catalog for books whose subject matches a user-selected topic. The application displays ten books that match the chosen topic, and shows the author name, book title, list price, Amazon discount price, and the cover icon. The user may optionally view one review per displayed title and make a buying decision (Stearns & Garishakurthi, 2003).

Java in Data Warehousing & Mining

Although many companies currently benefit from data warehousing to support corporate decision making, new business intelligence approaches continue to emerge that can be powered by Java technology. Applications such as data warehousing, data mining, Enterprise Information Portals (EIPs), and Knowledge Management Systems (which can all comprise a business intelligence application) are able to provide insight into customer retention, purchasing patterns, and even future buying behavior.

These applications can tell not only what has happened but also why, and what may happen given certain business conditions, allowing "what if" scenarios to be explored. As a result of this information growth, people at all levels inside the enterprise, as well as suppliers, customers, and others in the value chain, are clamoring for subsets of the vast stores of information, such as billing, shipping, and inventory information, to help them make business decisions. While collecting and storing vast amounts of data is one thing, utilizing and deploying that data throughout the organization is another.

The technical challenges inherent in integrating disparate data formats, platforms, and applications are significant. However, emerging standards such as the Application Programming Interfaces (APIs) that comprise the Java platform, as well as Extensible Markup Language (XML) technologies, can facilitate the interchange of data and the development of next-generation data warehousing and business intelligence applications. While Java technology has been used extensively for client-side access and for presentation-layer challenges, it is rapidly emerging as a significant tool for developing scalable server-side programs. The Java2 Platform, Enterprise Edition (J2EE) provides the object, transaction, and security support for building such systems.

Metadata Issues

One of the key issues that business intelligence developers must solve is that of incompatible metadata formats. Metadata can be defined as information about data, or simply "data about data."
In practice, metadata is what most tools, databases, applications, and other information processes use to define, relate, and manipulate data objects within their own environments. It defines the structure and meaning of data objects managed by an application so that the application knows how to process requests or jobs involving those data objects. Developers can use this schema to create views for users. Also, users can browse the schema to better understand the structure and function of the database tables before launching a query.

To address the metadata issue, a group of companies (including Unisys, Oracle, IBM, SAS Institute, Hyperion, Inline Software, and Sun) have joined to develop the Java Metadata Interface (JMI) API. The JMI API permits the access and manipulation of metadata in Java with standard metadata services. JMI is based on the Meta Object Facility (MOF) specification from the Object Management Group (OMG). The MOF provides a model and a set of interfaces for the creation, storage, access, and interchange of metadata and metamodels (higher-level abstractions of metadata). Metamodel and metadata interchange is done via XML and uses the XML Metadata Interchange (XMI) specification, also from the OMG. JMI leverages Java technology to create an end-to-end data warehousing and business intelligence solutions framework.

Enterprise JavaBeans

A key tool provided by J2EE is Enterprise JavaBeans (EJB), an architecture for the development of component-based distributed business applications. Applications written using the EJB architecture are scalable, transactional, secure, and multi-user aware. These applications may be written once and then deployed on any server platform that supports J2EE. The EJB architecture makes it easy for developers to write components, since they do not need to understand or deal with complex, system-level details such as thread management, resource pooling, and transaction and security management. This allows for role-based development, where component assemblers, platform providers, and application assemblers can focus on their own areas of responsibility, further simplifying application development.
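A minimal sketch of what such a component can look like in the later EJB 3 annotation style, assuming a J2EE/Java EE container and the javax.ejb API on the classpath; the bean name, method, and pricing logic are illustrative, not from the article. Notice that the class contains only business logic: threading, pooling, transactions, and security are supplied by the container around each call.

```java
import javax.ejb.Stateless;

// Deployed into an EJB container; clients obtain it by injection or JNDI lookup.
@Stateless
public class ReservationBean {
    // Business method: the container wraps each invocation in a transaction
    // by default and manages pooling of bean instances.
    public double quoteNightlyRate(String hotelCode, int nights) {
        double base = 120.0; // illustrative fixed rate; a real bean would consult a data source
        return base * nights;
    }
}
```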
EJBs in the Travel Industry

A case study from the travel industry helps to illustrate how such applications could function. A travel company amasses a great deal of information about its operations in various applications distributed throughout multiple departments. Flight, hotel, and automobile reservation information is located in a database being accessed by travel agents worldwide. Another application contains information that must be updated with credit and billing history from a financial services company. Data is periodically extracted from the travel reservation system databases to spreadsheets for use in future sales and marketing analysis.

Utilizing J2EE, the company could consolidate application development within an EJB container, which can run on a variety of hardware and software platforms, allowing existing databases and applications to coexist with newly developed ones. EJBs can be developed to model various data sets important to the travel reservation business, including information about customers, hotels, car rental agencies, and other attributes.

Data Storage & Access

Data stored in existing applications can be accessed with specialized connectors. Integration and interoperability of these data sources is further enabled by the metadata repository, which contains metamodels of the data contained in the sources; these metamodels can then be accessed and interchanged uniformly via the JMI API. The metamodels capture the essential structure and semantics of business components, allowing them to be accessed and queried via the JMI API or interchanged via XML. Through all of these processes, the J2EE infrastructure ensures the security and integrity of the data through transaction management and propagation and the underlying security architecture.

To consolidate historical information for analysis of sales and marketing trends, a data warehouse is often the best solution. In this example, data can be extracted from the operational systems with a variety of Extract, Transform, and Load (ETL) tools. The metamodels allow EJBs designed for filtering, transformation, and consolidation of data to operate uniformly on data from diverse data sources, as a bean is able to query the metamodel to identify and extract the pertinent fields. Queries and reports can be run against the data warehouse that contains information from numerous sources in a consistent, enterprise-wide fashion through the use of the JMI API (Mosher & Oh, 2007).

Java in Industrial Settings

Many people know Java only as a tool on the World Wide Web that enables sites to perform some of their fancier functions, such as interactivity and animation. However, the actual uses for Java are much more widespread. Since Java is an object-oriented language like C++, the time needed for application development is minimal. Java also encourages good software engineering practices with clear separation of interfaces and implementations as well as easy exception handling.

In addition, Java's automatic memory management and lack of pointers remove some leading causes of programming errors. Most importantly, application developers do not need to create different versions of the software for different platforms. The advantages available through Java have even found their way into hardware. The emerging new Java devices are streamlined systems that exploit network servers for much of their processing power, storage, content, and administration.

Benefits of Java

The benefits of Java translate across many industries, and some are specific to the control and automation environment. For example, many plant-floor applications use relatively simple equipment; upgrading to PCs would be expensive and undesirable. Java's ability to run on any platform enables the organization to make use of the existing equipment while enhancing the application.

Integration

With few exceptions, applications running on the factory floor were never intended to exchange information with systems in the executive office, but managers have recently discovered the need for that type of information. Before Java, that often meant bringing together data from systems written on different platforms in different languages at different times. Integration was usually done on a piecemeal basis, resulting in a system that, once it worked, was unique to the two applications it was tying together. Additional integration required developing a brand-new system from scratch, raising the cost of integration.

Java makes system integration relatively easy. Foxboro Controls Inc., for example, used Java to make its dynamic-performance-monitor software package Internet-ready.
This software provides senior executives with strategic information about a plant's operation. The dynamic performance monitor takes data from instruments throughout the plant and performs various mathematical and statistical calculations on them, resulting in information (usually financial) that a manager can more readily absorb and use.

Scalability

Another benefit of Java in the industrial environment is its scalability. In a plant, embedded applications such as automated data collection and machine diagnostics provide critical data regarding production-line readiness or operation efficiency. These data form a critical ingredient for applications that examine the health of a production line or run. Users of these devices can take advantage of the benefits of Java without changing or upgrading hardware. For example, operations and maintenance personnel could carry a handheld, wireless, embedded-Java device anywhere in the plant to monitor production status or problems.

Even when internal compatibility is not an issue, companies often face difficulties when suppliers with whom they share information have incompatible systems. This becomes more of a problem as supply-chain management takes on a more critical role, which requires manufacturers to interact more with offshore suppliers and clients. The greatest efficiency comes when all systems can communicate with each other and share information seamlessly. Since Java is so ubiquitous, it often solves these problems (Paula, 1997).

Dynamic Web Page Development

Java has been used by both large and small organizations for a wide variety of applications beyond consumer-oriented websites. Sandia, a multiprogram laboratory of the U.S. Department of Energy's National Nuclear Security Administration, has developed a unique Java application. The lab was tasked with developing an enterprise-wide inventory tracking and equipment maintenance system that provides dynamic Web pages. The developers selected Java Studio Enterprise 7 for the project because of its Application Framework technology and Web Graphical User Interface (GUI) components, which allow the system to be indexed by an expandable catalog. The flexibility, scalability, and portability of Java helped to reduce development time and costs (Garcia, 2004).

Issue

Java Security for E-Business Applications

To support the expansion of their computing boundaries, businesses have deployed Web application servers (WAS). A WAS differs from a traditional Web server because it provides a more flexible foundation for dynamic transactions and objects, partly through the exploitation of Java technology. Traditional Web servers remain constrained to servicing standard HTTP requests, returning the contents of static HTML pages and images or the output from executed Common Gateway Interface (CGI) scripts.

An administrator can configure a WAS with policies based on security specifications for Java servlets and manage authentication and authorization with Java Authentication and Authorization Service (JAAS) modules. An authentication and authorization service can be written in Java code or can interface to an existing authentication or authorization infrastructure. For a cryptography-based security infrastructure, the security server may exploit the Java Cryptography Architecture (JCA) and the Java Cryptography Extension (JCE). To present the user with a usable interaction with the WAS environment, the Web server can readily employ a form of "single sign-on" to avoid redundant authentication requests.
A single sign-on preserves user authentication across multiple HTTP requests so that the user is not prompted many times for authentication data (i.e., user ID and password).

Based on the security policies, JAAS can be employed to handle the authentication process with the identity of the Java client. After successful authentication, the WAS security collaborator consults with the security server. The WAS environment's authentication requirements can be fairly complex. In a given deployment environment, all applications or solutions may not originate from the same vendor. In addition, these applications may be running on different operating systems. Although Java is often the language of choice for portability between platforms, it needs to marry its security features with those of the containing environment.

Authentication & Authorization

Authentication and authorization are key elements in any secure information handling system. Since the inception of Java technology, much of the authentication and authorization work has concerned downloadable code running in Web browsers. In many ways, this had been the correct set of issues to address, since the client's system needs to be protected from mobile code obtained from arbitrary sites on the Internet. As Java technology moved from a client-centric Web technology to a server-side scripting and integration technology, it required additional authentication and authorization technologies.

The kind of proof required for authentication may depend on the security requirements of a particular computing resource or on specific enterprise security policies. To provide such flexibility, the JAAS authentication framework is based on the concept of configurable authenticators. This architecture allows system administrators to configure, or plug in, the appropriate authenticators to meet the security requirements of the deployed application. The JAAS architecture also allows applications to remain independent from underlying authentication mechanisms. So, as new authenticators become available or as current authentication services are updated, system administrators can easily replace authenticators without having to modify or recompile existing applications.

At the end of a successful authentication, a request is associated with a user in the WAS user registry. After a successful authentication, the WAS consults security policies to determine whether the user has the required permissions to complete the requested action on the servlet. This policy can be enforced using the WAS configuration (declarative security), by the servlet itself (programmatic security), or by a combination of both.

The WAS environment pulls together many different technologies to service the enterprise. Because of the heterogeneous nature of the client and server entities, Java technology is a good choice for both administrators and developers. However, to service the diverse security needs of these entities and their tasks, many Java security technologies must be used, not only at a primary level between client and server entities, but also at a secondary level, from served objects. By using a synergistic mix of the various Java security technologies, administrators and developers can make not only their Web application servers secure, but their WAS environments secure as well (Koved, 2001).
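A minimal sketch of the configurable-authenticator idea using the standard JAAS classes; the login-configuration entry name ("WASLogin") and the empty callback handler are illustrative assumptions. Which LoginModules actually run is decided by the external JAAS configuration file, not by this code, which is why administrators can swap authenticators without recompiling the application.

```java
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasSketch {
    public static void main(String[] args) {
        try {
            // The handler is how LoginModules ask for user ID and password.
            CallbackHandler handler = callbacks -> { /* supply credentials here */ };

            // "WASLogin" names an entry in the JAAS login configuration;
            // that entry lists which LoginModules to run and in what order.
            LoginContext lc = new LoginContext("WASLogin", handler);
            lc.login();                      // runs the configured LoginModules
            Subject user = lc.getSubject();  // the authenticated identity
            System.out.println("Authenticated: " + user.getPrincipals());
            lc.logout();
        } catch (LoginException e) {
            System.err.println("Authentication failed: " + e.getMessage());
        }
    }
}
```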
Conclusion

Open standards have driven the e-business revolution. As e-business continues to develop, various computing technologies help to drive its evolution. The Java programming language and platform have emerged as major technologies for performing e-business functions. Java programming standards have enabled portability of applications and the reuse of application components. Java uses many familiar concepts and constructs and allows portability by providing a common interface through an external Java Virtual Machine (JVM). Today, you can find Java technology in networks and devices that range from the Internet and scientific supercomputers to laptops and cell phones, and from Wall Street market simulators to home game players and credit cards.

Java has found a place on some of the most popular websites in the world. Java applications not only provide unique user interfaces, they also help to power the backends of websites. While Java technology has been used extensively for client-side access and in the presentation layer, it is also emerging as a significant tool for developing scalable server-side programs.

Since Java is an object-oriented language like C++, the time needed for application development is minimal. Java also encourages good software engineering practices with clear separation of interfaces and implementations as well as easy exception handling. Java's automatic memory management and lack of pointers remove some leading causes of programming errors. The advantages available through Java have also found their way into hardware. The emerging new Java devices are streamlined systems that exploit network servers for much of their processing power, storage, content, and administration.
Software Engineering Foreign-Language Literature Translation
Xi'an Institute of Posts and Telecommunications. Graduation Design (Thesis) Foreign-Language Literature Translation. Department: School of Computer Science. Major: Software Engineering. Class: Software 0601. Student name: ; Advisor name: ; Title: Associate Professor. Duration: March 8, 2010 to June 11, 2010.

Classes

One of the most compelling features about Java is code reuse. But to be revolutionary, you've got to be able to do a lot more than copy code and change it. That's the approach used in procedural languages like C, and it hasn't worked very well. Like everything in Java, the solution revolves around the class. You reuse code by creating new classes, but instead of creating them from scratch, you use existing classes that someone has already built and debugged. The trick is to use the classes without soiling the existing code.

➢ Initializing the base class

Since there are now two classes involved—the base class and the derived class—instead of just one, it can be a bit confusing to try to imagine the resulting object produced by a derived class. From the outside, it looks like the new class has the same interface as the base class and maybe some additional methods and fields. But inheritance doesn't just copy the interface of the base class. When you create an object of the derived class, it contains within it a subobject of the base class. This subobject is the same as if you had created an object of the base class by itself. It's just that from the outside, the subobject of the base class is wrapped within the derived-class object.

Of course, it's essential that the base-class subobject be initialized correctly, and there's only one way to guarantee this: perform the initialization in the constructor by calling the base-class constructor, which has all the appropriate knowledge and privileges to perform the base-class initialization. Java automatically inserts calls to the base-class constructor in the derived-class constructor.

➢ Guaranteeing proper cleanup

Java doesn't have the C++ concept of a destructor, a method that is automatically called when an object is destroyed. The reason is probably that in Java, the practice is simply to forget about objects rather than to destroy them, allowing the garbage collector to reclaim the memory as necessary. Often this is fine, but there are times when your class might perform some activities during its lifetime that require cleanup. As mentioned in Chapter 4, you can't know when the garbage collector will be called, or if it will be called. So if you want something cleaned up for a class, you must explicitly write a special method to do it, and make sure that the client programmer knows that they must call this method.

Note that in your cleanup method, you must also pay attention to the calling order for the base-class and member-object cleanup methods in case one subobject depends on another. In general, you should follow the same form that is imposed by a C++ compiler on its destructors: first perform all of the cleanup work specific to your class, in the reverse order of creation. (In general, this requires that base-class elements still be viable.) Then call the base-class cleanup method, as demonstrated here.
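A minimal sketch of both mechanisms just described, with hypothetical class names: the compiler inserts the base-class constructor call automatically, while cleanup must be written and invoked explicitly, in reverse order of creation.

class Base {
    Base() { System.out.println("Base()"); }
    void dispose() { System.out.println("Base.dispose()"); }
}

class Member {
    Member() { System.out.println("Member()"); }
    void dispose() { System.out.println("Member.dispose()"); }
}

class Derived extends Base {
    private Member member = new Member();
    Derived() {
        // An implicit super() call is inserted here by the compiler, so
        // Base() runs before the member initializer and this constructor body.
        System.out.println("Derived()");
    }
    void dispose() {
        member.dispose(); // cleanup specific to this class first...
        super.dispose();  // ...then the base-class cleanup method
    }
}

public class InitAndCleanup {
    public static void main(String[] args) {
        Derived d = new Derived(); // prints Base(), Member(), Derived()
        d.dispose();               // prints Member.dispose(), Base.dispose()
    }
}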
➢ Name hiding

If a Java base class has a method name that's overloaded several times, redefining that method name in the derived class will not hide any of the base-class versions (unlike C++). Thus overloading works regardless of whether the method was defined at this level or in a base class. Even so, it's far more common to override methods of the same name, using exactly the same signature and return type as in the base class. It can be confusing otherwise (which is why C++ disallows it—to prevent you from making what is probably a mistake).

➢ Choosing composition vs. inheritance

Both composition and inheritance allow you to place subobjects inside your new class (composition does this explicitly—with inheritance it's implicit). You might wonder about the difference between the two, and when to choose one over the other.

Composition is generally used when you want the features of an existing class inside your new class, but not its interface. That is, you embed an object so that you can use it to implement functionality in your new class, but the user of your new class sees the interface you've defined for the new class rather than the interface from the embedded object. For this effect, you embed private objects of existing classes inside your new class.

Sometimes it makes sense to allow the class user to directly access the composition of your new class; that is, to make the member objects public. The member objects use implementation hiding themselves, so this is a safe thing to do. When the user knows you're assembling a bunch of parts, it makes the interface easier to understand.

When you inherit, you take an existing class and make a special version of it. In general, this means that you're taking a general-purpose class and specializing it for a particular need.

➢ The final keyword

Java's final keyword has slightly different meanings depending on the context, but in general it says "This cannot be changed." You might want to prevent changes for two reasons: design or efficiency. Because these two reasons are quite different, it's possible to misuse the final keyword. The following sections discuss the three places where final can be used: for data, methods, and classes.

➢ Final data

Many programming languages have a way to tell the compiler that a piece of data is "constant." A constant is useful for two reasons: it can be a compile-time constant that won't ever change, or it can be a value initialized at run time that you don't want changed.

In the case of a compile-time constant, the compiler is allowed to "fold" the constant value into any calculations in which it's used; that is, the calculation can be performed at compile time, eliminating some run-time overhead. In Java, these sorts of constants must be primitives and are expressed with the final keyword. A value must be given at the time of definition of such a constant. A field that is both static and final has only one piece of storage that cannot be changed.

When using final with object references rather than primitives, the meaning gets a bit confusing. With a primitive, final makes the value a constant, but with an object reference, final makes the reference a constant. Once the reference is initialized to an object, it can never be changed to point to another object. However, the object itself can be modified; Java does not provide a way to make any arbitrary object a constant. (You can, however, write your class so that objects have the effect of being constant.) This restriction includes arrays, which are also objects.
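A small sketch of the distinction just drawn (identifier names hypothetical): final on a primitive fixes the value, while final on a reference fixes only the reference, not the object it points to.

public class FinalData {
    static final int COMPILE_TIME_CONSTANT = 42; // primitive: value can be folded at compile time

    final int[] numbers = {1, 2, 3}; // final reference to a mutable array object

    public static void main(String[] args) {
        FinalData fd = new FinalData();
        fd.numbers[0] = 99;          // legal: the array object itself can be modified
        // fd.numbers = new int[5];  // illegal: the reference itself is constant
        System.out.println(fd.numbers[0]); // prints 99
    }
}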
➢ Final methods

There are two reasons for final methods. The first is to put a "lock" on the method to prevent any inheriting class from changing its meaning. This is done for design reasons when you want to make sure that a method's behavior is retained during inheritance and cannot be overridden.

The second reason for final methods is efficiency. If you make a method final, you are allowing the compiler to turn any calls to that method into inline calls. When the compiler sees a final method call, it can (at its discretion) skip the normal approach of inserting code to perform the method call mechanism (push arguments on the stack, hop over to the method code and execute it, hop back and clean off the stack arguments, and deal with the return value) and instead replace the method call with a copy of the actual code in the method body. This eliminates the overhead of the method call. Of course, if a method is big, then your code begins to bloat, and you probably won't see any performance gains from inlining, since any improvements will be dwarfed by the amount of time spent inside the method. It is implied that the Java compiler is able to detect these situations and choose wisely whether to inline a final method. However, it's best to let the compiler and JVM handle efficiency issues and make a method final only if you want to explicitly prevent overriding.

➢ Final classes

When you say that an entire class is final (by preceding its definition with the final keyword), you state that you don't want to inherit from this class or allow anyone else to do so. In other words, for some reason the design of your class is such that there is never a need to make any changes, or for safety or security reasons you don't want subclassing. Note that the fields of a final class can be final or not, as you choose; the same rules apply to final fields regardless of whether the class is defined as final. However, because it prevents inheritance, all methods in a final class are implicitly final, since there's no way to override them. You can add the final specifier to a method in a final class, but it doesn't add any meaning.

➢ Summary

Both inheritance and composition allow you to create a new type from existing types. Typically, however, composition reuses existing types as part of the underlying implementation of the new type, and inheritance reuses the interface. Since the derived class has the base-class interface, it can be upcast to the base, which is critical for polymorphism, as you'll see in the next chapter.

Despite the strong emphasis on inheritance in object-oriented programming, when you start a design you should generally prefer composition during the first cut and use inheritance only when it is clearly necessary. Composition tends to be more flexible. In addition, by using the added artifice of inheritance with your member type, you can change the exact type, and thus the behavior, of those member objects at run time. Therefore, you can change the behavior of the composed object at run time.

When designing a system, your goal is to find or create a set of classes in which each class has a specific use and is neither too big (encompassing so much functionality that it's unwieldy to reuse) nor annoyingly small (you can't use it by itself or without adding functionality).

Chinese translation: Classes. "One of the most compelling features of Java is code reuse, or regeneration."
Software Engineering Undergraduate Foreign-Language Literature Translation Materials
Software Engineering Undergraduate Foreign-Language Literature Translation. School code: 10128. Undergraduate Graduation Design Foreign-Language Literature Translation. January 2015.

The Test Library Management System Framework Based on SSH

The appeal of application systems for small or medium-sized enterprises lies in their greater flexibility, safety, and high performance-price ratio. The traditional J2EE framework cannot adapt to these needs, but application systems based on SSH (Struts + Spring + Hibernate) technology can satisfy them better. This paper analyzes the integration theory and key technologies of SSH and, based on that integration, constructs a lightweight WEB framework that unites the three technologies; the resulting SSH-based lightweight WEB framework has achieved good results in practical applications.

Introduction

The J2EE platform[27], generally used in large enterprise applications, can solve problems of reliability, safety, and stability well, but its price is high and its construction cycle is long. For small or medium enterprise applications, the alternative is a lightweight WEB system framework, most commonly built on Struts and Hibernate. With the wide adoption of Spring, the combination of the three technologies may be a better choice for a lightweight WEB framework. It uses a layered structure and provides a well-integrated framework for Web applications at all levels, minimizing interlayer coupling and increasing development efficiency. This framework can solve many problems, with good maintainability and scalability: the separation of user interface from business logic, the separation of business logic from database operations, correct program control logic, and so on. This paper studies the technologies and principles of Struts, Spring, and Hibernate, presenting a proven lightweight WEB application framework for enterprises.

Hierarchical Web Mechanism

A hierarchical Web framework includes a user presentation layer, a business logic layer, a data persistence layer, an expansion layer, and so on. Each layer performs a different function, and together they complete the whole application. The whole system is divided into logic modules that are relatively independent of one another, and each module can be implemented according to a different design. This enables parallel development, rapid integration, good maintainability, and scalability.

Struts MVC Framework

To ensure reuse and efficiency in the development process, building a Web application with J2EE technology requires selecting a system framework with good performance. Only in this way can we avoid wasting time on adjusting configuration and achieve efficient, rapid application development. In practice, programmers have arrived at successful development patterns that have proved practical, such as MVC and O/R mapping; many technologies, including the Struts and Hibernate frameworks, realize these patterns. However, the Struts framework only settles the separation between the view layer and the business logic and control layers; it does not provide flexible support for the complex data-persistence process. On the contrary, the Hibernate framework offers powerful and flexible support for complex data persistence. Therefore, how to integrate the two frameworks into a flexible, low-coupling solution that is easy to maintain is a research task that engineering staff are studying constantly.

Model-View-Controller (MVC) is a popular design pattern. It divides the interactive system into three components, and each of them specializes in one task. The model contains the application data and manages the core functionality. The visual display of the model and the feedback to the users are managed by the view. The controller not only interprets the inputs from the user, but also directs the model and the view to change appropriately. MVC separates the system functionality from the system interface so as to enhance the system's scalability and maintainability. Struts is a typical MVC framework[32], and it contains the three aforementioned components. The model level is composed of JavaBean and EJB components. The controller is realized by Action classes and the ActionServlet, and the view layer consists of JSP files. The central controller receives a request and redirects it to the appropriate module controller. Subsequently, the module controller processes the request and returns results to the central controller in a JavaBean object, which stores any object to be presented in the view layer, including an indication of the module view that must be presented. The central controller redirects the returned JavaBean object to the main view, which displays its information.
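A minimal sketch of a Struts 1-style module controller of the kind described above; the class name, the "questions" attribute, and the "success" forward are hypothetical and would be wired up in struts-config.xml.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

// The central controller (ActionServlet) routes matching requests here
// according to the mappings declared in struts-config.xml.
public class ListQuestionsAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {
        // Business processing would go here (e.g., loading test questions
        // through a DAO); the result is stored in a bean for the view layer.
        request.setAttribute("questions", java.util.Collections.emptyList());
        // Hand control back: "success" names the JSP view in the mapping.
        return mapping.findForward("success");
    }
}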
Spring Framework technology

Spring is a lightweight J2EE application development framework, which uses the model of Inversion of Control (IoC) to separate the actual application from its configuration and dependency rules. Committed to providing solutions at all levels of a J2EE application, Spring does not attempt to replace existing frameworks, but rather "welds" the objects of a J2EE application at all levels together through POJO management. In addition, developers are free to choose some or all of the Spring framework, since Spring's modules are not totally interdependent. For business-level wiring, Spring employs the idea of dependency injection to assemble code, improving the scalability and flexibility of the systems built with it. Through the Spring AOP module, systems achieve centralized business processing and a reduction of duplicated code.

Hibernate Persistent Framework

Hibernate is an open-source framework with DAO design patterns that achieves mapping (O/R mapping) between objects and a relational database.

During Web system development, the traditional approach interacts with the database directly through JDBC. However, this method imposes not only a heavy workload but also complex JDBC/SQL code that needs revising whenever the business logic changes slightly, so both developing and maintaining the system are inconvenient. Considering the large difference between the object-oriented model of Java and the structure of a relational database, it is necessary to introduce a direct mapping mechanism between objects and the database. This kind of mapping should rely on configuration files as far as possible, so that when the business logic changes in the future, only the mapping files need modifying rather than the Java source code. Therefore the O/R mapping pattern emerged, and Hibernate is one of its most outstanding realizations.

Hibernate encapsulates JDBC in a lightweight way, letting Java programmers operate a relational database with object-oriented programming thinking. It is an implementation technology for the persistence layer. Compared to other persistence-layer technologies such as JDBC, EJB, and JDO, Hibernate is easy to grasp and more in line with object-oriented programming thinking. Hibernate has its own query language (HQL), which is fully object-oriented. The basic structure of a Hibernate application is shown in Figure 6.1.

Hibernate is a data persistence framework whose core technology is object/relational mapping (ORM). Hibernate is generally considered a bridge between Java applications and the relational database, since it provides durable data services for applications and allows developers to use an object-oriented approach to the management and manipulation of the relational database. Furthermore, it furnishes an object-oriented query language, HQL.

Responsible for the mapping between Java classes and the relational database, Hibernate is essentially middleware providing database services. It supplies durable data services for applications by utilizing databases and several configuration files, such as hibernate.properties and the XML mapping files.
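A minimal Hibernate 3-era sketch of the configuration-file-driven approach just described. The question entity, its Question.hbm.xml mapping file, and hibernate.cfg.xml are assumptions; because the O/R mapping lives in the XML files, a schema change touches configuration rather than this code.

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class QuestionDao {
    // Reads hibernate.cfg.xml from the classpath, which in turn lists the
    // mapping files (e.g., Question.hbm.xml) that tie classes to tables.
    private static final SessionFactory FACTORY =
            new Configuration().configure().buildSessionFactory();

    public void save(Object question) { // a hypothetical mapped Question entity
        Session session = FACTORY.openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.save(question); // object-oriented insert; no hand-written SQL
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}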
Web services technologies

The introduction of annotations into Java EE 5 makes it simple to create sophisticated Web service endpoints and clients with less code and a shorter learning curve than was possible with earlier Java EE versions. Annotations — first introduced in Java SE 5 — are modifiers you can add to your code as metadata. They don't affect program semantics directly, but the compiler, development tools, and runtime libraries can process them to produce additional Java language source files, XML documents, or other artifacts and behavior that augment the code containing the annotations (see Resources). Later in the article, you'll see how you can easily turn a regular Java class into a Web service by adding simple annotations.

Web application technologies

Java EE 5 welcomes two major pieces of front-end technology — JSF and JSTL — into the specification to join the existing JavaServer Pages and Servlet specifications. JSF is a set of APIs that enable a component-based approach to user-interface development. JSTL is a set of tag libraries that support embedding procedural logic, access to JavaBeans, SQL commands, localized formatting instructions, and XML processing in JSPs. The most recent releases of JSF, JSTL, and JSP support a unified expression language (EL) that allows these technologies to integrate more easily (see Resources).

The cornerstone of Web services support in Java EE 5 is JAX-WS 2.0, which is a follow-on to JAX-RPC 1.1. Both of these technologies let you create RESTful and SOAP-based Web services without dealing directly with the tedium of XML processing and data binding inherent to Web services. Developers are free to continue using JAX-RPC (which is still required of Java EE 5 containers), but migrating to JAX-WS is strongly recommended. Newcomers to Java Web services might as well skip JAX-RPC and head right for JAX-WS. That said, it's good to know that both of them support SOAP 1.1 over HTTP 1.1 and so are fully compatible: a JAX-WS Web services client can access a JAX-RPC Web services endpoint, and vice versa.
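The annotation-driven style mentioned above can be illustrated with a minimal JAX-WS endpoint; the class name, method, and URL are hypothetical.

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// @WebService turns this plain Java class into a SOAP endpoint; wsimport
// can later generate client artifacts from the WSDL it publishes.
@WebService
public class GreetingService {
    @WebMethod
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // Publishes the service (and its WSDL) using the JAX-WS runtime.
        Endpoint.publish("http://localhost:8080/greeting", new GreetingService());
    }
}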
The advantages of JAX-WS over JAX-RPC are compelling. JAX-WS:
• Supports the SOAP 1.2 standard (in addition to SOAP 1.1).
• Supports XML over HTTP. You can bypass SOAP if you wish. (See the article "Use XML directly over HTTP for Web services (where appropriate)" for more information.)
• Uses the Java Architecture for XML Binding (JAXB) for its data-mapping model. JAXB has complete support for XML schema and better performance (more on that in a moment).
• Introduces a dynamic programming model for both server and client. The client model supports both a message-oriented and an asynchronous approach.
• Supports Message Transmission Optimization Mechanism (MTOM), a W3C recommendation for optimizing the transmission and format of a SOAP message.
• Upgrades Web services interoperability (WS-I) support. (It supports Basic Profile 1.1; JAX-RPC supports only Basic Profile 1.0.)
• Upgrades SOAP attachment support. (It uses the SOAP with Attachments API for Java [SAAJ] 1.3; JAX-RPC supports only SAAJ 1.2.)
You can learn more about the differences by reading the article "JAX-RPC versus JAX-WS."

The wsimport tool in JAX-WS automatically handles many of the mundane details of Web service development and integrates easily into build processes in a cross-platform manner, freeing you to focus on the application logic that implements or uses a service. It generates artifacts such as services, service endpoint interfaces (SEIs), asynchronous response code, exceptions based on WSDL faults, and Java classes bound to schema types by JAXB.

JAX-WS also enables high-performing Web services. See Resources for a link to an article ("Implementing High Performance Web Services Using JAX-WS 2.0") presenting a benchmark study of equivalent Web service implementations based on the new JAX-WS stack (which uses two other Web services features in Java EE 5 — JAXB and StAX) and a JAX-RPC stack available in J2EE 1.4. The study found 40% to 1000% performance increases with JAX-WS in various functional areas under different loads.

Conclusion

Each framework has its advantages and disadvantages. The lightweight J2EE structure integrates the Struts, Spring, and Hibernate technologies, making full use of the powerful data processing of Struts, the flexible management of Spring, and the maturity of Hibernate. Based on practice, it puts forward an open-source solution suitable for small or medium-sized enterprise applications. Application systems developed on this architecture have loose interlayer coupling, a distinct structure, a short development cycle, and good maintainability. In addition, combined with commercial project development, the solution has achieved good results. The lightweight framework makes parallel development and maintenance of commercial systems convenient, and it can be carried over to the development of business systems in other industries.

Through research and practice, we can easily find that the Struts/Spring/Hibernate combination utilizes the maturity of Struts in the presentation layer, the flexibility of Spring in business management, and the convenience of Hibernate in the persistence layer; the three frameworks are integrated into a whole so that development and maintenance become more convenient and handy. This approach can also play a key role when applied to other business systems. Of course, how to optimize system performance, improve users' access speed, and strengthen the security of the system framework is work that the author still needs to do in the future.

Chinese translation: The Test Library Management System Framework Based on SSH. Application systems for small or medium-sized enterprises require excellent flexibility, security, and a high performance-price ratio. The traditional J2EE architecture cannot satisfy these needs, but an application system based on the SSH framework satisfies them better. This paper analyzes the integration theory and key technologies of SSH; through this integration a lightweight Web framework is formed which, built on the integration of the three technologies, becomes an SSH-based lightweight Web framework that plays an important role in practical applications.
(Complete Version) Software Engineering Major: Graduation Design Foreign-Language Literature Translation
June 2013

A HISTORICAL PERSPECTIVE

From the earliest days of computers, storing and manipulating data has been a major application focus. The first general-purpose DBMS was designed by Charles Bachman at General Electric in the early 1960s and was called the Integrated Data Store. It formed the basis for the network data model, which was standardized by the Conference on Data Systems Languages (CODASYL) and strongly influenced database systems through the 1960s. Bachman was the first recipient of ACM's Turing Award (the computer science equivalent of a Nobel prize) for work in the database area; he received it in 1973. In the late 1960s, IBM developed the Information Management System (IMS) DBMS, used even today in many major installations. IMS formed the basis for an alternative data representation framework called the hierarchical data model. The SABRE system for making airline reservations was jointly developed by American Airlines and IBM around the same time, and it allowed several people to access the same data through a computer network. Interestingly, today the same SABRE system is used to power popular Web-based travel services such as Travelocity!

In 1970, Edgar Codd, at IBM's San Jose Research Laboratory, proposed a new data representation framework called the relational data model. This proved to be a watershed in the development of database systems: it sparked rapid development of several DBMSs based on the relational model, along with a rich body of theoretical results that placed the field on a firm foundation. Codd won the 1981 Turing Award for his seminal work. Database systems matured as an academic discipline, and the popularity of relational DBMSs changed the commercial landscape. Their benefits were widely recognized, and the use of DBMSs for managing corporate data became standard practice.

In the 1980s, the relational model consolidated its position as the dominant DBMS paradigm, and database systems continued to gain widespread use. The SQL query language for relational databases, developed as part of IBM's System R project, is now the standard query language. SQL was standardized in the late 1980s, and the current standard, SQL-92, was adopted by the American National Standards Institute (ANSI) and International Standards Organization (ISO). Arguably, the most widely used form of concurrent programming is the concurrent execution of database programs (called transactions). Users write programs as if they are to be run by themselves, and the responsibility for running them concurrently is given to the DBMS. James Gray won the 1999 Turing Award for his contributions to transaction management in a DBMS.

In the late 1980s and the 1990s, advances were made in many areas of database systems. Considerable research was carried out into more powerful query languages and richer data models, and there was a big emphasis on supporting complex analysis of data from all parts of an enterprise. Several vendors (e.g., IBM's DB2, Oracle 8, Informix UDS) extended their systems to handle new data types and more complex queries, and specialized systems have been developed by numerous vendors for creating data warehouses, consolidating data from several databases, and carrying out specialized analysis.

An interesting phenomenon is the emergence of several enterprise resource planning (ERP) and management resource planning (MRP) packages, which add a substantial layer of application-oriented features on top of a DBMS. Widely used packages include systems from Baan, Oracle, PeopleSoft, SAP, and Siebel. These packages identify a set of common tasks (e.g., inventory management, resources planning, financial analysis) encountered by a large number of organizations and provide a general application layer to carry out these tasks.
The data is stored in a relational DBMS, and the application layer can be customized to different companies, leading to lower overall costs for the companies compared to the cost of building the application layer from scratch. Most significantly, perhaps, DBMSs have entered the Internet age: while early Web sites stored their data exclusively in operating system files, the use of a DBMS to store data that is accessed through a Web browser is becoming widespread. Queries are generated through Web-accessible forms and answers are formatted using a markup language such as HTML, in order to be easily displayed in a browser. All the database vendors are adding features to their DBMSs aimed at making them more suitable for deployment over the Internet.

Database management continues to gain importance as more and more data is brought on-line and made ever more accessible through computer networking. Today the field is being driven by exciting visions such as multimedia databases, interactive video, digital libraries, scientific projects such as the human genome mapping effort and NASA's Earth Observation System project, and the desire of companies to consolidate their decision-making processes and mine their data repositories for useful information about their businesses. Commercially, database management systems represent one of the largest and most vigorous market segments. Thus the study of database systems could prove to be richly rewarding in more ways than one!

INTRODUCTION TO PHYSICAL DATABASE DESIGN

Like all other aspects of database design, physical design must be guided by the nature of the data and its intended use. In particular, it is important to understand the typical workload that the database must support; the workload consists of a mix of queries and updates. Users also have requirements about how fast certain queries or updates must run. The workload description and users' performance requirements are the basis on which a number of decisions have to be made during physical database design.

To create a good physical database design and to tune the system for performance in response to evolving user requirements, the designer needs to understand the workings of a DBMS, especially the indexing and query processing techniques supported by the DBMS. If the database is expected to be accessed concurrently by many users, or is a distributed database, the task becomes more complicated, and other features of a DBMS come into play.

DATABASE WORKLOADS

The key to good physical design is arriving at an accurate description of the expected workload. A workload description includes the following elements:
1. A list of queries and their frequencies, as a fraction of all queries and updates.
2. A list of updates and their frequencies.
3. Performance goals for each type of query and update.
For each query in the workload, we must identify which relations are accessed, which attributes are retained (in the SELECT clause), and which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be. Similarly, for each update in the workload, we must identify which attributes have selection or join conditions expressed on them (in the WHERE clause), the relation that is updated, and, for UPDATE commands, the fields that are modified by the update.

Remember that queries and updates typically have parameters; for example, a debit or credit operation involves a particular account number. The values of these parameters determine the selectivity of selection and join conditions.

Updates benefit from a good physical design, but they are slowed by indexes on the attributes that they modify. Thus, while queries can only benefit from the presence of an index, an index may either speed up or slow down a given update. Designers should keep this trade-off in mind when creating indexes (a small sketch follows below).
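The trade-off can be made concrete with a small JDBC sketch; the Accounts table, its columns, and the in-memory HSQLDB URL are assumptions for illustration (the HSQLDB driver is assumed on the classpath).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexTradeoff {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:demo", "SA", "");
        Statement stmt = conn.createStatement();
        stmt.execute("CREATE TABLE Accounts (accno INT, branch VARCHAR(20), balance INT)");

        // An index chosen for a frequent query in the workload:
        stmt.execute("CREATE INDEX idx_branch ON Accounts (branch)");

        // This selection can use idx_branch and is sped up by it...
        stmt.executeQuery("SELECT accno FROM Accounts WHERE branch = 'Main'").close();

        // ...but every modification of the indexed attribute must also
        // maintain the index, so this update is slowed down by it:
        stmt.executeUpdate("UPDATE Accounts SET branch = 'East' WHERE accno = 7");

        conn.close();
    }
}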
NEED FOR DATABASE TUNING

Accurate, detailed workload information may be hard to come by while doing the initial design of the system. Consequently, tuning a database after it has been designed and deployed is important—we must refine the initial design in the light of actual usage patterns to obtain the best possible performance.

The distinction between database design and database tuning is somewhat arbitrary. We could consider the design process to be over once an initial conceptual schema is designed and a set of indexing and clustering decisions is made. Any subsequent changes to the conceptual schema or the indexes, say, would then be regarded as a tuning activity. Alternatively, we could consider some refinement of the conceptual schema (and physical design decisions affected by this refinement) to be part of the physical design process. Where we draw the line between design and tuning is not very important.

OVERVIEW OF DATABASE TUNING

After the initial phase of database design, actual use of the database provides a valuable source of detailed information that can be used to refine the initial design. Many of the original assumptions about the expected workload can be replaced by observed usage patterns; in general, some of the initial workload specification will be validated, and some of it will turn out to be wrong. Initial guesses about the size of data can be replaced with actual statistics from the system catalogs (although this information will keep changing as the system evolves). Careful monitoring of queries can reveal unexpected problems; for example, the optimizer may not be using some indexes as intended to produce good plans. Continued database tuning is important to get the best possible performance.

TUNING THE CONCEPTUAL SCHEMA

In the course of database design, we may realize that our current choice of relation schemas does not enable us to meet our performance objectives for the given workload with any (feasible) set of physical design choices. If so, we may have to redesign our conceptual schema (and re-examine the physical design decisions that are affected by the changes we make).

We may realize that a redesign is necessary during the initial design process or later, after the system has been in use for a while. Once a database has been designed and populated with data, changing the conceptual schema requires a significant effort in terms of mapping the contents of the relations that are affected. Nonetheless, it may sometimes be necessary to revise the conceptual schema in light of experience with the system. We now consider the issues involved in conceptual schema (re)design from the point of view of performance.

Several options must be considered while tuning the conceptual schema:
We may decide to settle for a 3NF design instead of a BCNF design.
If there are two ways to decompose a given schema into 3NF or BCNF, our choice should be guided by the workload.
Sometimes we might decide to further decompose a relation that is already in BCNF.
In other situations we might denormalize. That is, we might choose to replace a collection of relations obtained by a decomposition from a larger relation with the original (larger) relation, even though it suffers from some redundancy problems.
Alternatively, we might choose to add some fields to certain relations to speed up some important queries, even if this leads to a redundant storage of some information (and consequently, a schema that is in neither 3NF nor BCNF). A sketch of the normalized and denormalized variants follows below.
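For a concrete (hypothetical) illustration of the normalize-versus-denormalize choice, suppose the attribute rating functionally determines hourly_wage; the table and column names below are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaVariants {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:tuning", "SA", "");
        Statement stmt = conn.createStatement();

        // Normalized design: the dependency rating -> hourly_wage is pulled
        // into its own relation, eliminating redundancy at the cost of a
        // join for queries that need ssn and hourly_wage together.
        stmt.execute("CREATE TABLE Wages (rating INT PRIMARY KEY, hourly_wage INT)");
        stmt.execute("CREATE TABLE Emps (ssn VARCHAR(11) PRIMARY KEY, rating INT)");

        // Denormalized alternative: the wage is stored with each employee,
        // so the join disappears, but the wage for a given rating is
        // repeated redundantly in every matching row.
        stmt.execute("CREATE TABLE EmpsDenorm (ssn VARCHAR(11) PRIMARY KEY,"
                + " rating INT, hourly_wage INT)");

        conn.close();
    }
}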
Alternatively, we might choose to add some fields to certain relations to speed up some important queries, even if this leads to a redundant storage of some information (and, consequently, a schema that is in neither 3NF nor BCNF).
This discussion of normalization has concentrated on the technique of decomposition, which amounts to vertical partitioning of a relation. Another technique to consider is horizontal partitioning of a relation. Note that this does not mean physically partitioning the tuples of a single relation; rather, we want to create two distinct relations (possibly with different constraints and indexes on each).
Incidentally, when we redesign the conceptual schema, especially if we are tuning an existing database schema, it is worth considering whether we should create views to mask these changes from users for whom the original schema is more natural.

TUNING QUERIES AND VIEWS
If we notice that a query is running much slower than we expected, we must examine it carefully to find the problem. Some rewriting of the query, perhaps in conjunction with some index tuning, can often fix the problem. Similar tuning may be called for if queries on some view run slower than expected.
When tuning a query, the first thing to verify is that the system is using the plan that you expect it to use. It may be that the system is not finding the best plan for a variety of reasons. Some common situations that many optimizers do not handle efficiently are:
A selection condition involving null values.
Selection conditions involving arithmetic or string expressions, or conditions using the OR connective. For example, if we have the condition E.age = 2*D.age in the WHERE clause, the optimizer may correctly utilize an available index on E.age but fail to utilize an available index on D.age. Replacing the condition by E.age/2 = D.age would reverse the situation.
Inability to recognize a sophisticated plan such as an index-only scan for an aggregation query involving a GROUP BY clause.
If the optimizer is not smart enough to find the best plan (using the access methods and evaluation strategies supported by the DBMS), some systems allow users to guide the choice of a plan by providing hints to the optimizer, for example, by forcing the use of a particular index or choosing the join order and join method. A user who wishes to guide optimization in this manner should understand the capabilities and limitations of the given DBMS.

OTHER TOPICS
MOBILE DATABASES
The availability of portable computers and wireless communications affects many components of a DBMS, including the query engine, transaction manager, and recovery manager:
Users are connected through a wireless link whose bandwidth is ten times less than Ethernet and 100 times less than ATM networks. Communication costs are therefore significantly higher in proportion to I/O and CPU costs.
Users' locations are constantly changing, and mobile computers have limited battery life. Therefore, the true communication costs include connection time and battery usage in addition to bytes transferred, and they change constantly depending on location. Data is frequently replicated to minimize the cost of accessing it from different locations.
As a user moves around, data could be accessed from multiple database servers within a single transaction. The likelihood of losing connections is also much greater than in a traditional network. Centralized transaction management may therefore be impractical, especially if some data is resident at the mobile computers. We may in fact have to give up on ACID transactions and develop alternative notions of consistency for user programs.

MAIN MEMORY DATABASES
The price of main memory is now low enough that we can buy enough main memory to hold the entire database for many applications. This shift prompts a re-examination of some basic DBMS design decisions, since disk accesses no longer dominate processing time for a memory-resident database. Main memory does not survive system crashes, so we still have to ensure transaction atomicity and durability.
Log records must be written to stable storage at commit time, and this process could become a bottleneck. To minimize this problem, rather than committing each transaction as it completes, we can collect completed transactions and commit them in batches; this is called group commit. Recovery algorithms can also be optimized, since pages are rarely flushed out to make room for other pages.
A new criterion must be considered when optimizing queries, namely, the amount of space required to execute a plan. It is important to minimize this space overhead, because exceeding available physical memory would lead to swapping pages to disk (through the operating system's virtual memory mechanisms), greatly slowing down execution.
Page-oriented data structures become less important (since pages are no longer the unit of data retrieval), and clustering is not important (since the cost of accessing any region of main memory is uniform).
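To make the group-commit idea concrete, here is a minimal sketch in Java. The LogDevice interface and all class names are our own assumptions for illustration, not any particular DBMS's API; a real implementation would also flush on a timeout so that a lone transaction is not stalled.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical group-commit coordinator: transactions queue their log records
// and wait; one flush to stable storage commits the whole batch.
class GroupCommitLog {
    interface LogDevice { void flush(List<byte[]> records); } // assumed stable-storage interface

    private final LogDevice device;
    private final int batchSize;
    private final List<byte[]> pending = new ArrayList<>();
    private final List<CompletableFuture<Void>> waiters = new ArrayList<>();

    GroupCommitLog(LogDevice device, int batchSize) {
        this.device = device;
        this.batchSize = batchSize;
    }

    // Called by a transaction at commit time; completes once its records are durable.
    synchronized CompletableFuture<Void> commit(byte[] logRecord) {
        pending.add(logRecord);
        CompletableFuture<Void> done = new CompletableFuture<>();
        waiters.add(done);
        if (pending.size() >= batchSize) {          // a real system would also flush on a timer
            device.flush(new ArrayList<>(pending)); // one I/O commits many transactions
            waiters.forEach(w -> w.complete(null));
            pending.clear();
            waiters.clear();
        }
        return done;
    }
}
```

The design point is that a single write to stable storage amortizes the commit cost across the whole batch.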
Software Engineering Foreign Literature Translation (this document contains the English original and the Chinese translation)

Software engineering
Software engineering is the discipline that studies the use of engineering methods to build and maintain effective, practical, high-quality software. It involves programming languages, databases, software development tools, system platforms, standards, design patterns, and so on.
In modern society, software is used in many ways. Typical software includes email, embedded systems, human-machine interfaces, office suites, operating systems, compilers, databases, and games. Meanwhile, almost every sector applies computer software, including industry, agriculture, banking, aviation, and government. These applications promote economic and social development, raise people's working efficiency, and improve the quality of life.
Software engineers are, collectively, the people who create software applications; by role they can be divided into system analysts, software designers, system architects, programmers, testers, and so on. The term is also often used to refer collectively to programmers of various kinds.

Origin
In view of the difficulties encountered in software development, the North Atlantic Treaty Organization (NATO) organized the first conference on software engineering in 1968, at which the term "software engineering" was put forward to define the knowledge required for software development, and it was suggested that software development should be organized like similar engineering projects. Since software engineering was formally proposed in 1968, a large number of research results have accumulated and a great deal of technical practice has been carried out; through the joint efforts of academia and industry, software engineering has gradually developed into a professional discipline.

Definition
The creation and use of sound engineering principles in order to obtain reliable and economically efficient software.
The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.
The theories, methods, and tools involved in developing, managing, and updating software products.
A body of knowledge or discipline that aims to produce software of good quality, delivered on time, within budget, and satisfying users' needs.
The practical application of scientific knowledge in the design and construction of computer programs, together with the associated documentation, and their subsequent operation and maintenance.
The use of systematic production and maintenance technology and management expertise for software products, so that software can be developed and changed within limited time and at limited cost.
The discipline of the knowledge needed for teams of engineers to build large software systems.
A systematic approach to the analysis, design, implementation, and maintenance of software.
The systematic application of tools and techniques in the development of computer-based applications.

Software Engineering and Computer Science
Whether software development is, in the end, a science or an engineering discipline has been debated for a long time. In fact, software development has characteristics of both, but this does not mean that the two can be confused with each other. Many people think that software engineering is based on computer science and information science just as traditional engineering disciplines are based on physics and chemistry. In the U.S., about 40% of software engineers hold a degree in computer science; elsewhere in the world, this ratio is similar.
They do not necessarily use computer science knowledge every day, but they use software engineering knowledge every day.
For example, Peter McBreen holds that software "engineering" implies a high degree of rigor and proven processes, and is not suitable for every type and stage of software development. In his book "Software Craftsmanship: The New Imperative", McBreen puts forward the so-called "craftsmanship" argument: a key factor in the success of software development is the developers' skill, not a "manufacturing"-style software process.

Software engineering and computer programming
Software engineering exists in a wide variety of applications and in all aspects of software development. Programming typically includes an iterative process of design and coding; it is one stage of software development.
Software engineering seeks to provide guidance in all aspects of a software project, from the feasibility analysis of the software to the maintenance work after the software is completed. Software engineering holds that software development is closely related to marketing activities, such as software sales, user training, and the installation of associated hardware and software. Software engineering methodology holds that programmers should not develop independently of their team, and that programming cannot be divorced from software requirements, design, and customers' interests.
Software engineering is the embodiment of the industrialized development of computer program design.

Software crisis
Software engineering is rooted in the software crisis that arose in the 1960s, 1970s, and 1980s. At that time, much software came to a tragic end. Many software projects ran significantly over their planned schedules. Some projects led to property losses, and some software even led to casualties. At the same time, software developers found software development increasingly difficult.
The OS/360 operating system is considered a typical case. It is still used on IBM 360-series mainframes to this day. This decades-long, extremely complex software project even produced a working system that was not included in the original design. OS/360 was the first large software project, employing about 1,000 programmers. Fred Brooks, in his later masterpiece "The Mythical Man-Month", admitted that in his management of the project he made a mistake worth millions of dollars.
Property losses: software errors may result in significant property damage. The explosion of the European Ariane rocket is one of the most painful lessons.
Casualties: computer software is widely used, including in hospitals and other industries closely related to human life, so software errors may also result in injury or death.
A case studied extensively in software engineering is the Therac-25 accidents. Between June 1985 and January 1987, six known incidents in which the Therac-25 delivered excessive radiation doses led to deaths or severe radiation burns.
In industry, some embedded-system failures stop machines from operating normally and can put people in danger.

Methodology
Software engineering has many aspects, including project management, analysis, design, programming, testing, and quality control.
Software design methods can be divided into heavyweight and lightweight methods.
Heavyweight methods produce large amounts of formal documentation. Heavyweight development methodologies include the famous ISO 9000, CMM, and the Unified Process (RUP).
Lightweight development processes do not require large amounts of formal documentation. Lightweight methods include the well-known Extreme Programming (XP) and agile processes (Agile Processes).
According to the article "The New Methodology", heavyweight methods present a "defensive" posture. In software organizations that apply "heavyweight methods", project managers who have little or no involvement in programming cannot grasp the project's progress from its details; out of this "fear" they constantly have to ask programmers to write a great deal of "software development documentation". Lightweight methods present an "offensive" attitude, which is reflected in the four values that XP particularly emphasizes: communication, simplicity, feedback, and courage. Some people hold that "heavyweight methods" are suitable for large software teams (dozens of people or more), while "lightweight methods" are suitable for small teams (a few to a dozen people). Of course, the advantages and disadvantages of heavyweight and lightweight methods are much debated, and the various methods are constantly evolving.
Some methodologists think these methods should be strictly followed during development and implementation, but not everyone has the conditions to implement them. In fact, which method a software development effort uses depends on many factors and is subject to environmental constraints.

Software development process
The software development process has evolved and improved along with technology. From the early waterfall model, to the later spiral iterative development, to the recently rising agile methodologies (Agile), they reflect different eras' understanding of the development process in the software industry and different approaches to different types of projects.
Note the important distinction between the software development process and software process improvement. Terms such as ISO 15504, ISO 9000, CMM, and CMMI are elaborated within the framework of software process improvement: they provide a series of standards and policies to guide software organizations in improving the quality of their development process and their organizational capability, rather than giving a specific definition of the development process.

Developments in software engineering
"Agile development" (Agile Development) is considered an important development in software engineering. It stresses that software development should be able to respond comprehensively to possible future changes and uncertainties.
Agile development is considered a "lightweight" approach. Among lightweight approaches, the most prestigious should be "Extreme Programming" (Extreme Programming, XP for short).
Corresponding to the lightweight approaches, "heavyweight methods" also exist. Heavyweight approaches emphasize the development process as the center rather than being people-centered; examples include CMM/PSP/TSP.
Aspect-oriented programming (Aspect Oriented Programming, AOP for short) is considered another important development in software engineering in recent years.
An aspect here refers to a collection of objects and functions that together complete a piece of functionality. Related topics include generic programming (Generic Programming) and templates.
Software Engineering Graduation Project: Foreign Literature Translation
This article provides a translation of foreign literature for software engineering graduation projects (about 1,000 words) and can serve as a reference for students.
Foreign literature 1: Software Engineering Practices in Industry: A Case Study

Abstract
This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The software engineering practices were identified through a survey questionnaire and a series of interviews with the company's software development managers, software engineers, and testers. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Introduction
Software engineering is the discipline of designing, developing, testing, and maintaining software products. There are a number of software engineering practices that are used in industry to ensure that software products are of high quality, reliable, and maintainable. These practices include software development processes, software configuration management, software testing, requirements engineering, and project management. Software engineering practices have evolved over the years as a result of the growth of the software industry and the increasing demands for high-quality software products. The software industry has developed a number of software development models, such as the Capability Maturity Model Integration (CMMI), which provides a framework for software development organizations to improve their software development processes and practices.
This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The objective of the study was to identify the software engineering practices used by the company and to investigate how these practices contribute to the production of quality software.

Research Methodology
The case study was conducted with a large US software development company that produces software for aerospace and medical applications. The study was conducted over a period of six months, during which a survey questionnaire was administered to the company's software development managers, software engineers, and testers. In addition, a series of interviews were conducted with the company's software development managers, software engineers, and testers to gain a deeper understanding of the software engineering practices used by the company. The survey questionnaire and the interview questions were designed to investigate the software engineering practices used by the company in relation to software development processes, software configuration management, software testing, requirements engineering, and project management.

Findings
The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI).
The company's software development process consists of five levels of maturity, starting with an ad hoc process (Level 1) and progressing to a fully defined and optimized process (Level 5). The company has achieved Level 3 maturity in its software development process. The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The software engineering practices used by the company include:
Software Configuration Management (SCM): The company uses SCM tools to manage software code, documentation, and other artifacts. The company follows a branching and merging strategy to manage changes to the software code.
Software Testing: The company has adopted a formal testing approach that includes unit testing, integration testing, system testing, and acceptance testing. The testing process is automated where possible, and the company uses a range of testing tools.
Requirements Engineering: The company has a well-defined requirements engineering process, which includes requirements capture, analysis, specification, and validation. The company uses a range of tools, including use case modeling, to capture and analyze requirements.
Project Management: The company has a well-defined project management process that includes project planning, scheduling, monitoring, and control. The company uses a range of tools to support project management, including project management software, which is used to track project progress.

Conclusion
This paper has reported a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company uses a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Foreign literature 2: Agile Software Development: Principles, Patterns, and Practices

Abstract
Agile software development is a set of values, principles, and practices for developing software. The Agile Manifesto represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. This paper presents an overview of agile software development, including its principles, patterns, and practices. The paper also discusses the benefits and challenges of agile software development.

Introduction
Agile software development is a set of values, principles, and practices for developing software. Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change.
Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases.

Agile Software Development Principles
Agile software development is based on a set of principles. These principles are:
Customer satisfaction through early and continuous delivery of useful software.
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
Deliver working software frequently, with a preference for the shorter timescale.
Collaboration between the business stakeholders and developers throughout the project.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity, the art of maximizing the amount of work not done, is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.

Agile Software Development Patterns
Agile software development patterns are reusable solutions to common software development problems. The following are some typical agile software development patterns:
The Single Responsibility Principle (SRP)
The Open/Closed Principle (OCP)
The Liskov Substitution Principle (LSP)
The Dependency Inversion Principle (DIP)
The Interface Segregation Principle (ISP)
The Model-View-Controller (MVC) Pattern
The Observer Pattern
The Strategy Pattern
The Factory Method Pattern

Agile Software Development Practices
Agile software development practices are a set of activities and techniques used in agile software development. The following are some typical agile software development practices (a minimal test-driven development sketch follows this section):
Iterative Development
Test-Driven Development (TDD)
Continuous Integration
Refactoring
Pair Programming

Agile Software Development Benefits and Challenges
Agile software development has many benefits, including:
Increased customer satisfaction
Increased quality
Increased productivity
Increased flexibility
Increased visibility
Reduced risk
Agile software development also has some challenges, including:
Requires discipline and training
Requires an experienced team
Requires good communication
Requires a supportive management culture

Conclusion
Agile software development is a set of values, principles, and practices for developing software. Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. Agile software development has many benefits, including increased customer satisfaction, increased quality, increased productivity, increased flexibility, increased visibility, and reduced risk. It also has some challenges, including the requirement for discipline and training, an experienced team, good communication, and a supportive management culture.
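As a concrete illustration of the test-driven development practice listed above, here is a minimal sketch in Java with JUnit 4. The PriceCalculator class and its discount rule are invented for the example; the point is the TDD rhythm in which the test is written first and drives the implementation.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// TDD sketch: this test is written first and fails until the
// production class below is implemented.
public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountAtOrOverOneHundred() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.finalPrice(100.0), 0.001); // discounted
        assertEquals(50.0, calc.finalPrice(50.0), 0.001);  // unchanged
    }
}

// Minimal production code written just to make the test pass,
// then refactored in the next TDD cycle.
class PriceCalculator {
    double finalPrice(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}
```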
Software Engineering English Literature: Original Text and Translation
English original and translation. Student: 赵凡, ID 1021010639, School of Software, major: Software Engineering; advisors: 武敏, 顾晨昕. June 2014.

English original: The Use of Skinning
What is skinning? In character setup, the final step of preparation for animation is skinning: using the skinning tools to bind the character model to the skeletal system we have set up for the role. Only after this procedure can the finely made character model be rendered and turned into animation. The pose the skeleton holds during skinning is called the bind pose. After skinning, bone movement drives deformation of the skin. Sometimes, however, the deformation is inappropriate, and the bones or the skin must be adjusted accordingly; the relevant commands can restore the skeleton to its bind pose and disconnect the association between bones and skin. In Maya, bones and skin can be disconnected or reconnected at any time. There are direct skinning methods (smooth skinning and rigid skinning) and indirect ones (wrap or lattice deformation used together with smooth or rigid skinning).
In recent years, more and more 3D animation packages compete fiercely on the market, and software companies keep developing and updating them to be more user-friendly, but Maya remains the mainstream 3D animation software. To create a character with bones, flesh, and spirit is the dream of every CG digital artist. Whether a digital character has charm tests the animator's understanding of life. To give a digital character bones and flesh, the producer must have a full grasp of the character's body and its motor functions. Moreover, whether a character feels realistic depends critically on the design and production of the skin, so skilled and creative mastery of skinning in animation software is essential. Skinning is the final step of preparation for animation; after this procedure, the designed movements can be made. If the skinning work is not done well, trouble follows in animation, so skinning is very important.
Because 3D animation offers accuracy and authenticity, it is developing rapidly in our country and is now used everywhere: architecture, planning, landscape, product demonstrations, simulation, film, advertising, character animation, virtual reality, and more, which fully reflects its current importance. If we compare 3D animation to puppet animation in real life, the puppet corresponds to the Maya model, the puppeteer to the Maya animator, and the steel joints in the puppet's body to the skeletal system. Bones are not rendered in the final animation; their role is only that of a scaffold that simulates real bones so the major joints can move, rotate, and so on. When the skeleton is set up, we bind the model to it; this step is rather like mounting various external parts onto a robot. Then, through various settings and keyframes added to the bones, the bones drive the corresponding joints of the bound model. Thus, in the final animation, a stiff, static model gains vitality. Seen from the rigging point of view, the whole process may be less tedious than keyframe animation, but rigging is the core and soul of 3D animation.
Rigging plays a vital role in three-dimensional animation.
Good rigging makes animation production easier, faster, and more convenient, and allows designers to adjust a character's actions. Every step of binding affects the final animation: binding is the premise on which animation is done, and it makes the animator's work convenient. Good binding makes the animation more fluid and allows the characters to be more expressive in performance. Besides body rigging there is also facial binding, which lets characters speak and show different facial expressions. Everything in binding is done for the sake of the animation; a good binding setup is based mainly on the overall style and process of the production. Rigging is an indispensable part of three-dimensional animation.
The 3D animation production pipeline: modeling, texturing, binding, animation, rendering, effects, and compositing. Every link is connected. The model and materials determine the style of the animation; binding and animation determine its fluency; rendering, effects, and compositing determine the color and the final result.
Three-dimensional animation, also known as 3D animation, is an emerging technology. It gives a three-dimensional realism, down to even the subtle hair of animals, and this effect has been widely applied to film and television production, education, medicine, and many other areas. In movies, crashes, deformations, and fantasy scenes are all three-dimensional animation brought into real life. Designers first create a virtual scene in the 3D animation software, then build models according to proportion, set the models' motion trajectories and other parameters as required, set up the virtual camera's animation, and finally assign specific materials to the models and place the lights; the final output is rendered, generating the finished picture. DreamWorks' "Shrek" and Pixar's "Finding Nemo" achieved a visual impact that two-dimensional animation cannot match.
The animated film "Finding Nemo" made extensive use of Maya scene technology. Producing the animation of 77,000 jellyfish was one of the most formidable challenges for both the technical staff and the artists. These pink translucent jellyfish demanded patience and skill above all; one could say that with the jellyfish, animated sea creatures took a big step forward. The film's skinning technique is very good: the skinning of the characters is handled so well that each character is vivid, whether in expression or in action, and the underwater world is smooth and beautiful. Creating with Maya requires, first of all, a full understanding and knowledge of the technology: creative imagination comes first, but the use of technology has its limits. With smooth (flexible) skinning, many characters were bound smoothly for editing, and the weights of the skeleton's control points were adjusted with the weight-redistribution tools, so every detail of the clownfish is soft and realistic. Around the joints, weights should be smeared (blended) so that a joint is isolated from other influences and a movement does not drag unrelated parts along. Rigid skinning was used less; for rigidly bound objects, lattices must be created at the joint positions to help the bones articulate.
In the animated film "Finding Nemo" there is a great deal of facial animation, and good facial skinning technique underlies the facial expressions. Facial animation technology is becoming ever more capable, and the early skinning must be done well so that it does not impair the expressions later. How a film uses Maya digital technology, plays to its styling advantages, and fits industrial processes is something creative personnel need to explore: the parts produced as 3D digital characters, the 2D hand-drawn parts, and the compositing, from technical production to artistic pursuit, capturing the whole production cycle of the creation. Among the Maya techniques used in "Finding Nemo", smooth skinning appears everywhere; the clownfish faces use a great deal of smooth binding, making them more person-like, and Maya's technical advantages are applied within certain limits. Realistic three-dimensional imaging gives the mysterious underwater world depth, spatial density, and a sense of space to the fullest. The lifelike action, and the realistic density of the underwater footage that it inevitably brings, are the main goals this film explores with Maya three-dimensional animation.
Software Engineering Graduation Project: Foreign Literature Translation (presentation outline)
Choosing tools: consider the ease of use and the price of translation tools.
Citing literature: make sure the cited foreign literature comes from reliable, accurate sources. Translate accurately: keep the original meaning unchanged and the language fluent and natural. Follow standard formats: observe academic formatting requirements, including title, authors, abstract, and keywords. Organize the literature: classify and arrange the foreign literature for easy reference.
Proofread: check for grammar, spelling, and punctuation errors. Revise: adjust sentence structure and word choice to improve accuracy and fluency. Compare with the original: make sure the original meaning is conveyed accurately. Team collaboration: several people cooperate, proofreading and revising each other's work.
Techniques for translating software engineering foreign literature:
Master the specialized terminology and common expressions. Understand the context and semantics of the original text. Pay attention to its tone and rhetoric. Interpret the original meaning from the surrounding context.
Master terminology: be familiar with terms in the software engineering field to ensure accurate translation.
Keep sentence structure clear: arrange sentences reasonably so the translation is fluent and easy to understand.
Keep semantics coherent: maintain coherent meaning in the translation and avoid ambiguity or difficulty of understanding.
Intelligent editing: optimize machine-translation output intelligently to reduce manual intervention.
Cross-language information retrieval: use artificial intelligence technology to find and obtain foreign literature resources quickly.
Globalization promotes the development of cross-cultural communication.
Applications and prospects of artificial intelligence technology in cross-cultural communication.
The role of software engineering foreign literature translation in cross-cultural communication.
Challenges and opportunities of language translation in cross-cultural communication.
Applications of artificial intelligence and machine learning in software engineering.
Context understanding: the context of foreign literature may differ from Chinese, so the original context and meaning must be understood accurately and translated appropriately.
Cultural background: different countries and regions may differ in cultural background, historical tradition, and values, so cultural elements in foreign literature need appropriate explanation and adjustment.
Professional knowledge: software engineering involves a great deal of specialized knowledge; the relevant content in foreign literature must be understood deeply and translated carefully to ensure accuracy and professionalism.
I. Translation of foreign material
Java development 2.0: Sharding with Hibernate Shards: scaling out the relational database horizontally
Andrew Glover, author and developer, Beacon50
Abstract: Sharding isn't for every Web site, but it is one approach that can meet big-data needs. For some shops, sharding means being able to keep a trusted RDBMS without sacrificing data scalability or system performance. In this installment of the Java development 2.0 series, you can find out when sharding works and when it doesn't, and then get started sharding a simple application capable of handling terabytes of data.
Date: 31 August 2010. Level: Intermediate.
When a relational database tries to store terabytes of data in a single table, overall performance usually degrades. Indexing all of that data is obviously time-consuming, whether for writes or for reads. This is why NoSQL data stores are especially suited to storing large data sets; but NoSQL is a non-relational database approach. For developers who prefer the ACID-ity and entity structure of a relational database, and for projects that need that structure, sharding is an exciting alternative.
Sharding is an offshoot of database partitioning; it is not a native database technology, but happens at the application level. Among the various sharding implementations, Hibernate Shards is probably the most popular in the Java™ technology world. This nifty project lets you work almost seamlessly with a sharded data set using POJOs mapped to a logical database. When you use Hibernate Shards, you don't need to map your POJOs specially to shards; you map them just as you would with plain Hibernate against any ordinary relational database, and Hibernate Shards manages the low-level sharding tasks for you.
So far in this series, I have used a simple domain based on the analogy of races and runners to demonstrate the various data storage technologies. This month, I'll use this familiar example to introduce a practical sharding strategy and then implement it with Hibernate Shards.
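Before diving into Hibernate Shards, here is a minimal Java sketch of what application-level sharding means: routing entities to one of several physical databases by key. This is only an illustration under stated assumptions; the class, method names, and JDBC URLs are invented, and Hibernate Shards hides this routing behind its own shard-selection strategies.

```java
import java.util.List;

// Illustration only: application-level sharding routes each entity to one of
// several physical databases by its key. Names here are hypothetical.
class ShardRouter {
    private final List<String> shardJdbcUrls;

    ShardRouter(List<String> shardJdbcUrls) {
        this.shardJdbcUrls = shardJdbcUrls;
    }

    // Stable hash so the same runner always lands on the same shard.
    int shardFor(long runnerId) {
        return (int) Math.floorMod(runnerId, (long) shardJdbcUrls.size());
    }

    String jdbcUrlFor(long runnerId) {
        return shardJdbcUrls.get(shardFor(runnerId));
    }
}

// Usage sketch: three shards, each holding a slice of the Runner table.
// ShardRouter router = new ShardRouter(List.of(
//         "jdbc:mysql://db0/races", "jdbc:mysql://db1/races", "jdbc:mysql://db2/races"));
// String url = router.jdbcUrlFor(42L); // open a connection/session against this shard
```

The design point is that the database itself is unaware of the split; consistency of routing (the same key always reaching the same shard) is what the application layer, or Hibernate Shards on its behalf, must guarantee.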
Software Engineering Thesis Literature Translation (Chinese-English)
Graduation project (thesis) translated foreign text. Major: Software Engineering. Translation title (Chinese and English): Qt Creator白皮书 (Qt Creator Whitepaper). Source: Qt network.
Translated text: Qt Creator Whitepaper
Qt Creator is a complete integrated development environment (IDE) for creating applications with the Qt application framework. Qt is designed for developing applications and user interfaces once and deploying them across multiple desktop and mobile operating systems. This paper provides an introduction to Qt Creator and the features it offers Qt developers throughout the application development lifecycle.
Introduction to Qt Creator
One of Qt Creator's main advantages is that it allows a team of developers to share a project across different development platforms (Microsoft Windows®, Mac OS X®, and Linux®) with a common tool for development and debugging. Qt Creator's main goal is to meet the development needs of Qt developers who are looking for simplicity, ease of use, productivity, extensibility, and openness, while lowering the barrier to entry for newcomers to Qt. Qt Creator's key features let developers accomplish the following tasks:
Get started with Qt application development quickly and easily, with project wizards and fast access to recent projects and sessions.
Design user interfaces for Qt widget-based applications with the integrated editor, Qt Designer.
Develop applications with the advanced C++ code editor, which provides powerful features such as code completion, code snippets, refactoring, and viewing the outline of a file (that is, its symbol hierarchy).
Build, run, and deploy Qt projects that target multiple desktop and mobile platforms, such as Microsoft Windows, Mac OS X, Linux, Nokia's MeeGo, and Maemo.
Debug with the GNU and CDB debuggers, with debugging enhanced by a graphical user interface that is aware of the Qt class structure.
Use code analysis tools to check for memory-management issues in your applications.
Deploy applications to MeeGo mobile devices, and create application installation packages for Symbian and Maemo devices that can be published in the Ovi Store and through other channels.
Easily access information with the integrated, context-sensitive Qt help system.
Software Engineering Undergraduate Graduation: Foreign Literature Translation Materials
School code: 10128. Undergraduate graduation project foreign literature translation, January 2015.

The Test Library Management System of Framework Based on SSH
Application systems in small or medium-sized enterprises are characterized by greater flexibility, safety, and a high performance-price ratio. The traditional J2EE framework cannot adapt to these needs, but application systems based on SSH (Struts+Spring+Hibernate) technology can satisfy them better. This paper analyses some integration theory and key technologies of SSH and, based on this integration, constructs a lightweight WEB framework that unifies the three technologies, forming a lightweight WEB framework based on SSH that has achieved good effects in practical applications.

Introduction
The J2EE platform [27], generally used in large enterprise applications, can solve application reliability, safety, and stability well, but its weakness is its high price and long construction cycle. For small or medium enterprise applications, the alternative is a lightweight WEB system framework, among which the more commonly used approaches are based on Struts and Hibernate. With the wide application of Spring, the combination of the three technologies may be a better choice for a lightweight WEB framework. It uses a layered structure and provides a well-integrated framework for Web applications at all levels, minimizing interlayer coupling and increasing development efficiency. This framework can solve a lot of problems, with good maintainability and scalability: it addresses the separation of user interface and business logic, the separation of business logic and database operation, correct program control logic, and so on. This paper studies the technology and principles of Struts, Spring, and Hibernate, presenting a proven lightweight WEB application framework for enterprises.

Hierarchical Web Mechanism
A hierarchical Web framework includes the user presentation layer, business logic layer, data persistence layer, expansion layer, and so on; each layer has a different function, and together they complete the whole application. The whole system is divided into relatively independent, cooperating logic modules, and each module can be implemented according to a different design. This enables parallel development, rapid integration, good maintainability, and scalability.

Struts MVC Framework
To ensure reuse and efficiency in the development process, building a Web application with J2EE technology requires selecting a system framework with good performance. Only in this way can we avoid wasting lots of time on adjusting configuration and achieve efficient, rapid application development. In the course of practice, programmers have arrived at successful, proven development patterns, such as MVC and O/R mapping; many technologies, including the Struts and Hibernate frameworks, realize these patterns. However, the Struts framework only settles the separation of the view layer from the business logic and control layers; it does not provide flexible support for complex data persistence. On the contrary, the Hibernate framework offers powerful and flexible support for complex data persistence. Therefore, how to integrate the two frameworks into a flexible, low-coupling, easily maintained solution for information systems is a research task that engineering staff study constantly.
Model-View-Controller (MVC) is a popular design pattern.
It divides an interactive system into three components, each of which specializes in one task. The model contains the application data and manages the core functionality. The visual display of the model and the feedback to the users are managed by the view. The controller not only interprets the inputs from the user, but also directs the model and the view to change appropriately. MVC separates the system functionality from the system interface so as to enhance the system's scalability and maintainability. Struts is a typical MVC framework [32], and it also contains the three aforementioned components. The model level is composed of JavaBean and EJB components. The controller is realized by Action and ActionServlet, and the view layer consists of JSP files. The central controller controls the action execution: it receives a request and redirects it to the appropriate module controller. Subsequently, the module controller processes the request and returns results to the central controller using a JavaBean object, which stores any object to be presented in the view layer and includes an indication of the module views that must be presented. The central controller redirects the returned JavaBean object to the main view, which displays its information.

Spring Framework technology
Spring is a lightweight J2EE application development framework, which uses the model of Inversion of Control (IoC) to separate the actual application from the configuration and dependency rules of the application. Committed to solutions for J2EE applications at all levels, Spring does not attempt to replace existing frameworks, but rather "welds" the objects of a J2EE application at all levels together through POJO management. In addition, developers are free to choose some or all of the Spring framework, since Spring's modules are not totally interdependent.
As a major business-level detail, Spring employs the idea of delayed (dependency) injection to assemble code, for the sake of improving the scalability and flexibility of the systems that are built. Thus, systems achieve centralized business processing and a reduction of code duplication through the Spring AOP module.

Hibernate Persistent Framework
Hibernate is an open-source framework with DAO design patterns that achieves mapping (O/R mapping) between objects and a relational database.
During Web system development, the traditional approach interacts directly with the database through JDBC. However, this method involves not only a heavy workload but also complex JDBC SQL code that must be revised whenever the business logic changes slightly, so both developing and maintaining the system are inconvenient. Considering the large difference between Java's object-oriented relations and the structure of a relational database, it is necessary to introduce a direct mapping mechanism between objects and the database; this mapping should use configuration files as much as possible, so that when the business logic changes in the future, the mapping files rather than the Java source code will need modifying. Hence the O/R mapping pattern emerged, of which Hibernate is one of the most outstanding realizations.
Hibernate encapsulates JDBC in a lightweight way, letting Java programmers operate a relational database with object-oriented programming thinking. It is an implementation technology for the persistence layer. Compared with other persistence-layer technologies such as JDBC, EJB, and JDO, Hibernate is easy to grasp and more in line with object-oriented programming thinking.
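As a concrete illustration of this persistence idiom, here is a minimal Hibernate 3 style sketch. It assumes an entity class Question with a mapping file and a hibernate.cfg.xml on the classpath; these artifacts and the DAO name are our own illustration, not code from the paper.

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;
import java.util.List;

// Illustration only: assumes an entity class Question mapped via
// hibernate.cfg.xml / Question.hbm.xml, which are not shown here.
public class QuestionDao {
    private static final SessionFactory FACTORY =
            new Configuration().configure().buildSessionFactory();

    public void save(Object question) {
        Session session = FACTORY.openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.save(question);   // object in, row out: no hand-written SQL
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }

    @SuppressWarnings("unchecked")
    public List<Object> findAll() {
        Session session = FACTORY.openSession();
        try {
            // HQL is object-oriented: it queries the mapped class, not the table.
            return session.createQuery("from Question").list();
        } finally {
            session.close();
        }
    }
}
```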
Hibernate owns a query language (HQL) that is fully object-oriented. The basic structure of a Hibernate application is shown in Figure 6.1.
Hibernate is a data persistence framework, and its core technology is object/relational database mapping (ORM). Hibernate is generally considered a bridge between Java applications and the relational database, owing to providing durable data services for applications and allowing developers to use an object-oriented approach to the management and manipulation of the relational database. Furthermore, it furnishes an object-oriented query language, HQL.
Responsible for the mapping between Java classes and the relational database, Hibernate is essentially middleware providing database services. It supplies durable data services for applications by utilizing databases and several profiles, such as hibernate.properties and XML mapping files.

Web services technologies
The introduction of annotations into Java EE 5 makes it simple to create sophisticated Web service endpoints and clients with less code and a shorter learning curve than was possible with earlier Java EE versions. Annotations (first introduced in Java SE 5) are modifiers you can add to your code as metadata. They don't affect program semantics directly, but the compiler, development tools, and runtime libraries can process them to produce additional Java language source files, XML documents, or other artifacts and behavior that augment the code containing the annotations (see Resources). Later in the article, you'll see how you can easily turn a regular Java class into a Web service by adding simple annotations (a minimal example appears at the end of this paper).

Web application technologies
Java EE 5 welcomes two major pieces of front-end technology, JSF and JSTL, into the specification to join the existing JavaServer Pages and Servlet specifications. JSF is a set of APIs that enable a component-based approach to user-interface development. JSTL is a set of tag libraries that support embedding procedural logic, access to JavaBeans, SQL commands, localized formatting instructions, and XML processing in JSPs. The most recent releases of JSF, JSTL, and JSP support a unified expression language (EL) that allows these technologies to integrate more easily (see Resources).
The cornerstone of Web services support in Java EE 5 is JAX-WS 2.0, which is a follow-on to JAX-RPC 1.1. Both of these technologies let you create RESTful and SOAP-based Web services without dealing directly with the tedium of XML processing and data binding inherent to Web services. Developers are free to continue using JAX-RPC (which is still required of Java EE 5 containers), but migrating to JAX-WS is strongly recommended. Newcomers to Java Web services might as well skip JAX-RPC and head right for JAX-WS. That said, it's good to know that both of them support SOAP 1.1 over HTTP 1.1 and so are fully compatible: a JAX-WS Web services client can access a JAX-RPC Web services endpoint, and vice versa.
The advantages of JAX-WS over JAX-RPC are compelling. JAX-WS:
Supports the SOAP 1.2 standard (in addition to SOAP 1.1).
Supports XML over HTTP. You can bypass SOAP if you wish. (See the article "Use XML directly over HTTP for Web services (where appropriate)" for more information.)
Uses the Java Architecture for XML Binding (JAXB) for its data-mapping model.
JAXB has complete support for XML schema and better performance (more on that in a moment).
Introduces a dynamic programming model for both server and client. The client model supports both a message-oriented and an asynchronous approach.
Supports Message Transmission Optimization Mechanism (MTOM), a W3C recommendation for optimizing the transmission and format of a SOAP message.
Upgrades Web services interoperability (WS-I) support. (It supports Basic Profile 1.1; JAX-RPC supports only Basic Profile 1.0.)
Upgrades SOAP attachment support. (It uses the SOAP with Attachments API for Java [SAAJ] 1.3; JAX-RPC supports only SAAJ 1.2.)
You can learn more about the differences by reading the article "JAX-RPC versus JAX-WS."
The wsimport tool in JAX-WS automatically handles many of the mundane details of Web service development and integrates easily into build processes in a cross-platform manner, freeing you to focus on the application logic that implements or uses a service. It generates artifacts such as services, service endpoint interfaces (SEIs), asynchronous response code, exceptions based on WSDL faults, and Java classes bound to schema types by JAXB.
JAX-WS also enables high-performing Web services. See Resources for a link to an article ("Implementing High Performance Web Services Using JAX-WS 2.0") presenting a benchmark study of equivalent Web service implementations based on the new JAX-WS stack (which uses two other Web services features in Java EE 5, JAXB and StAX) and a JAX-RPC stack available in J2EE 1.4. The study found 40% to 1000% performance increases with JAX-WS in various functional areas under different loads.

Conclusion
Each framework has its advantages and disadvantages. The lightweight J2EE structure integrates Struts, Hibernate, and Spring technology, making full use of the powerful data processing of Struts, the flexible management of Spring, and the maturity of Hibernate. Based on practice, it puts forward an open-source solution suitable for small or medium-sized enterprise applications. Application systems developed on this architecture have loose interlayer coupling, a distinct structure, a short development cycle, and good maintainability. In addition, combined with commercial project development, the solution has achieved good effects. The lightweight framework makes parallel development and maintenance of commercial systems convenient, and it can be extended to the development of business systems in other industries.
Through research and practice, we can easily find that the Struts/Spring/Hibernate framework utilizes the maturity of Struts in the presentation layer, the flexibility of Spring in business management, and the convenience of Hibernate in the persistence layer; the three frameworks are integrated into a whole so that development and maintenance become more convenient and handy. This kind of approach will also play a key role when applied to other business systems. Of course, how to optimize system performance, enhance users' access speed, and improve the security of the system framework are all works that the author still needs to do in the future.
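Tying back to the annotation-driven endpoint model described above, here is a minimal JAX-WS sketch of turning a plain Java class into a SOAP Web service. The service name, URL, and greeting logic are invented for illustration; @WebService and Endpoint.publish are the standard javax.jws / javax.xml.ws API.

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A plain Java class becomes a SOAP endpoint with one annotation.
@WebService
public class HelloService {
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // Publishes the service; the generated WSDL is served at runtime:
        // http://localhost:8080/hello?wsdl
        Endpoint.publish("http://localhost:8080/hello", new HelloService());
    }
}
```

Running main publishes the endpoint and its WSDL at the given URL; the wsimport tool mentioned above can then generate a client from that WSDL.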
Software Engineering: Foreign Literature Translation
Artificial Immune Systems: A Novel Paradigm to Pattern Recognition

Abstract
This chapter introduces a new computational intelligence paradigm to perform pattern recognition, named Artificial Immune Systems (AIS). AIS take inspiration from the immune system in order to build novel computational tools to solve problems in a vast range of domain areas. The basic immune theories used to explain how the immune system performs pattern recognition are described, and their corresponding computational models are presented. This is followed by a survey from the literature of AIS applied to pattern recognition. The chapter is concluded with a trade-off between AIS and artificial neural networks as pattern recognition paradigms.
Keywords: Artificial Immune Systems; Negative Selection; Clonal Selection; Immune Network

1 Introduction
The vertebrate immune system (IS) is one of the most intricate bodily systems, and its complexity is sometimes compared to that of the brain. With the advances in biology and molecular genetics, the comprehension of how the immune system behaves is increasing very rapidly. The knowledge about the IS functioning has unraveled several of its main operative mechanisms. These mechanisms have demonstrated to be very interesting not only from a biological standpoint, but also under a computational perspective. Similarly to the way the nervous system inspired the development of artificial neural networks (ANN), the immune system has now led to the emergence of artificial immune systems (AIS) as a novel computational intelligence paradigm.
Artificial immune systems can be defined as abstract or metaphorical computational systems developed using ideas, theories, and components extracted from the immune system. Most AIS aim at solving complex computational or engineering problems, such as pattern recognition, elimination, and optimization. This is a crucial distinction between AIS and theoretical immune system models. While the former are devoted primarily to computing, the latter are focused on modeling the IS in order to understand its behavior, so that contributions can be made to the biological sciences. The two approaches are not mutually exclusive, however, and indeed theoretical models of the IS have contributed to the development of AIS.
This chapter is organized as follows. Section 2 describes relevant immune theories for pattern recognition and introduces their computational counterparts. In Section 3, we briefly describe how to model pattern recognition in artificial immune systems, and present a simple illustrative example. Section 4 contains a survey of AIS for pattern recognition, and Section 5 contrasts the use of AIS with the use of ANN when applied to pattern recognition tasks. The chapter is concluded in Section 6.

2 Biological and Artificial Immune Systems
All living organisms are capable of presenting some type of defense against foreign attack. The evolution of the species that resulted in the emergence of the vertebrates also led to the evolution of the immune system of these species. The vertebrate immune system is particularly interesting due to its several computational capabilities, as will be discussed throughout this section.
The immune system of vertebrates is composed of a great variety of molecules, cells, and organs spread all over the body. There is no central organ controlling the functioning of the immune system, and there are several elements in transit and in different compartments performing complementary roles.
The main task of the immune system is to survey the organism in the search for malfunctioning cells from its own body (e.g., cancer and tumour cells) and foreign disease-causing elements (e.g., viruses and bacteria). Every element that can be recognized by the immune system is called an antigen (Ag). The cells that originally belong to our body and are harmless to its functioning are termed self (or self antigens), while the disease-causing elements are named nonself (or nonself antigens). The immune system thus has to be capable of distinguishing between what is self and what is nonself; this process is called self/nonself discrimination and is performed basically through pattern recognition events.
From a pattern recognition perspective, the most appealing characteristic of the IS is the presence of receptor molecules, on the surface of immune cells, capable of recognising an almost limitless range of antigenic patterns. One can identify two major groups of immune cells, known as B-cells and T-cells. These two types of cells are rather similar, but differ in how they recognise antigens and in their functional roles. B-cells are capable of recognising antigens free in solution (e.g., in the blood stream), while T-cells require antigens to be presented by other accessory cells.
Antigenic recognition is the first pre-requisite for the immune system to be activated and to mount an immune response. The recognition has to satisfy some criteria. First, the cell receptor recognises an antigen with a certain affinity, and a binding between the receptor and the antigen occurs with a strength proportional to this affinity. If the affinity is greater than a given threshold, named the affinity threshold, then the immune system is activated. The nature of the antigen, the type of recognising cell, and the recognition site also influence the outcome of an encounter between an antigen and a cell receptor.
The human immune system contains an organ called the thymus, located behind the breastbone, which performs a crucial role in the maturation of T-cells. After T-cells are generated, they migrate into the thymus where they mature. During this maturation, all T-cells that recognise self antigens are excluded from the population of T-cells; this process is termed negative selection. If a B-cell encounters a nonself antigen with a sufficient affinity, it proliferates and differentiates into memory and effector cells; this process is named clonal selection. In contrast, if a B-cell recognises a self antigen, the result might be suppression, as proposed by the immune network theory. In the following subsections, each of these processes (negative selection, clonal selection, and network theory) will be described separately, along with their computational counterparts.

2.1 Negative Selection
The thymus is responsible for the maturation of T-cells and is protected by a blood barrier capable of efficiently excluding nonself antigens from the thymic environment. Thus, most elements found within the thymus are representative of self instead of nonself. As an outcome, the T-cells containing receptors capable of recognising the self antigens presented in the thymus are eliminated from the repertoire of T-cells through a process named negative selection.
All T-cells that leave the thymus to circulate throughout the body are said to be tolerant to self, i.e., they do not respond to self.
From an information processing perspective, negative selection presents an alternative paradigm to perform pattern recognition by storing information about the complement set (nonself) of the patterns to be recognised (self). A negative selection algorithm has been proposed in the literature with applications focused on the problem of anomaly detection, such as computer and network intrusion detection, time series prediction, image inspection and segmentation, and hardware fault tolerance. Given an appropriate problem representation (Section 3), define the set of patterns to be protected and call it the self-set (P). Based upon the negative selection algorithm, generate a set of detectors (M) that will be responsible for identifying all elements that do not belong to the self-set, i.e., the nonself elements.
After generating the set of detectors (M), the next stage of the algorithm consists in monitoring the system for the presence of nonself patterns (Fig 2(b)). In this case, assume a set P* of patterns to be protected. This set might be composed of the set P plus other new patterns, or it can be a completely novel set. For each element of the detector set, which corresponds to the nonself patterns, check if it recognises (matches) an element of P*; if yes, then a nonself pattern has been recognized and an action has to be taken. The resulting action of detecting nonself varies according to the problem under evaluation and goes beyond the pattern recognition scope of this chapter.

2.2 Clonal Selection
Complementary to the role of negative selection, clonal selection is the theory used to explain how an immune response is mounted when a nonself antigenic pattern is recognised by a B-cell. In brief, when a B-cell receptor recognises a nonself antigen with a certain affinity, it is selected to proliferate and produce antibodies in high volumes. The antibodies are soluble forms of the B-cell receptors that are released from the B-cell surface to cope with the invading nonself antigen. Antibodies bind to antigens, leading to their eventual elimination by other immune cells. Proliferation in the case of immune cells is asexual, a mitotic process: the cells divide themselves (there is no crossover). During reproduction, the B-cell progenies (clones) undergo a hypermutation process that, together with a strong selective pressure, results in B-cells with antigenic receptors presenting higher affinities with the selective antigen. This whole process of mutation and selection is known as the maturation of the immune response and is analogous to the natural selection of species. In addition to differentiating into antibody-producing cells, the activated B-cells with high antigenic affinities are selected to become memory cells with long life spans. These memory cells are pre-eminent in future responses to this same antigenic pattern, or a similar one.
Other important features of clonal selection relevant from the viewpoint of computation are:
1. An antigen selects several immune cells to proliferate. The proliferation rate of each immune cell is proportional to its affinity with the selective antigen: the higher the affinity, the higher the number of offspring generated, and vice-versa;
2.
2.2 Clonal Selection

Complementary to the role of negative selection, clonal selection is the theory used to explain how an immune response is mounted when a nonself antigenic pattern is recognised by a B-cell. In brief, when a B-cell receptor recognises a nonself antigen with a certain affinity, it is selected to proliferate and produce antibodies in high volumes. The antibodies are soluble forms of the B-cell receptors that are released from the B-cell surface to cope with the invading nonself antigen. Antibodies bind to antigens, leading to their eventual elimination by other immune cells. Proliferation in the case of immune cells is asexual, a mitotic process; the cells divide themselves (there is no crossover). During reproduction, the B-cell progenies (clones) undergo a hypermutation process that, together with a strong selective pressure, results in B-cells with antigenic receptors presenting higher affinities with the selective antigen. This whole process of mutation and selection is known as the maturation of the immune response and is analogous to the natural selection of species. In addition to differentiating into antibody-producing cells, the activated B-cells with high antigenic affinities are selected to become memory cells with long life spans. These memory cells are pre-eminent in future responses to this same antigenic pattern, or a similar one.

Other important features of clonal selection relevant from the viewpoint of computation are:

1. An antigen selects several immune cells to proliferate. The proliferation rate of each immune cell is proportional to its affinity with the selective antigen: the higher the affinity, the higher the number of offspring generated, and vice-versa;

2. In complete opposition to the proliferation rate, the mutation suffered by each immune cell during reproduction is inversely proportional to the affinity of the cell receptor with the antigen: the higher the affinity, the smaller the mutation, and vice-versa.

Some authors have argued that a genetic algorithm without crossover is a reasonable model of clonal selection. However, the standard genetic algorithm does not account for important properties such as affinity-proportional reproduction and mutation. Other authors proposed a clonal selection algorithm, named CLONALG, to fulfil these basic processes involved in clonal selection. This algorithm was initially proposed to perform pattern recognition and was then adapted to solve multi-modal optimisation tasks. Given a set of patterns to be recognised (P), the basic steps of the CLONALG algorithm are as follows (an illustrative sketch follows below):

1. Randomly initialise a population of individuals (M);
2. For each pattern of P, present it to the population M and determine its affinity (match) with each element of the population M;
3. Select n1 of the highest affinity elements of M and generate copies of these individuals proportionally to their affinity with the antigen: the higher the affinity, the higher the number of copies, and vice-versa;
4. Mutate all these copies with a rate inversely proportional to their affinity with the input pattern: the higher the affinity, the smaller the mutation rate, and vice-versa;
5. Add these mutated individuals to the population M and reselect n2 of these maturated (optimised) individuals to be kept as memories of the system;
6. Repeat Steps 2 to 5 until a certain criterion is met, such as a minimum pattern recognition or classification error.

Note that this algorithm allows the artificial immune system to become increasingly better at its task of recognising patterns (antigens). Thus, based upon an evolutionary-like behaviour, CLONALG learns to recognise patterns.
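Read as pseudocode, the six steps translate almost directly into a loop. The sketch below is an illustrative rendering under simplifying assumptions: binary strings, Hamming affinity, one antigen at a time, and ad hoc formulas for the cloning and mutation scaling. The names n1 and n2 follow the steps above, but the concrete scaling choices are inventions of this sketch, not the published CLONALG settings.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Illustrative CLONALG-style iteration for one antigen; not the published algorithm.
public class ClonalSelectionSketch {
    static final int L = 16;
    static final Random RNG = new Random();

    static int affinity(boolean[] antigen, boolean[] antibody) {
        int match = 0;
        for (int i = 0; i < L; i++) if (antigen[i] == antibody[i]) match++;
        return match;                               // higher value = better match
    }

    static boolean[] mutate(boolean[] s, double rate) {
        boolean[] clone = s.clone();
        for (int i = 0; i < L; i++) if (RNG.nextDouble() < rate) clone[i] = !clone[i];
        return clone;
    }

    static List<boolean[]> step(boolean[] antigen, List<boolean[]> population, int n1, int n2) {
        // Step 3: select the n1 highest-affinity individuals.
        population.sort(Comparator.comparingInt((boolean[] ab) -> affinity(antigen, ab)).reversed());
        List<boolean[]> matured = new ArrayList<>(population);
        for (int rank = 0; rank < n1 && rank < population.size(); rank++) {
            boolean[] selected = population.get(rank);
            int copies = n1 - rank;                 // more copies for higher affinity
            double rate = 1.0 - affinity(antigen, selected) / (double) L; // inverse to affinity
            for (int c = 0; c < copies; c++) matured.add(mutate(selected, rate)); // Step 4
        }
        // Step 5: reselect the best n2 maturated individuals as the system's memory.
        matured.sort(Comparator.comparingInt((boolean[] ab) -> affinity(antigen, ab)).reversed());
        return new ArrayList<>(matured.subList(0, Math.min(n2, matured.size())));
    }
}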
2.3 Immune Network

The immune network theory proposes that the immune system has a dynamic behaviour even in the absence of external stimuli. It suggests that immune cells and molecules are capable of recognising each other, which endows the system with an eigen-behaviour that is not dependent on foreign stimulation. Several immunologists have refuted this theory; however, its computational aspects are relevant, and it has proved to be a powerful model for computational systems.

According to the immune network theory, the receptor molecules contained in the surface of the immune cells present markers, named idiotopes, which can be recognised by receptors on other immune cells. These idiotopes are displayed in and/or around the same portions of the receptors that recognise nonself antigens. To explain the network theory, assume that a receptor (antibody) Ab1 on a B-cell recognises a nonself antigen Ag. Assume now that this same receptor Ab1 also recognises an idiotope i2 on another B-cell receptor Ab2. Keeping in mind that i2 is part of Ab2, Ab1 is capable of recognising both Ag and Ab2. Thus, Ab2 is said to be the internal image of Ag; more precisely, i2 is the internal image of Ag. The recognition of idiotopes on a cell receptor by other cell receptors leads to ever-increasing sets of connected cell receptors and molecules. Note that the network in this case is a network of affinities, different from the 'hardwired' network of the nervous system. As a result of the network recognition events, it was suggested that the recognition of a cell receptor by another cell receptor results in network suppression, whilst the recognition of an antigen by a cell receptor results in network activation and cell proliferation. The original theory did not account explicitly for the results of network activation and/or suppression, and the various artificial immune networks found in the literature model them in particular forms.

3 Modelling Pattern Recognition in AIS

Up to this point, the most relevant immune principles and their corresponding computational counterparts for pattern recognition have been presented. In order to apply these algorithms to computational problems, a limited number of other aspects of artificial immune systems, not yet covered, need to be specified. The first aspect to introduce is the most relevant representations used to model self and nonself patterns. Here the self-patterns correspond to the components of the AIS responsible for recognising the input patterns (nonself). Secondly, the mechanism for evaluating the degree of match (affinity), or degree of recognition, of an input pattern by an element of the AIS has to be discussed.

To model immune cells, molecules, and the antigenic patterns, the shape-space approach is usually adopted. Although AIS model recognition through pattern matching, given certain affinity functions to be described further, performing pattern recognition through complementarity or similarity is based more on practical aspects than on biological plausibility. The shape-space approach proposes that an attribute string s = <s1, s2, ..., sL> in an L-dimensional shape-space S (s ∈ S^L) can represent any immune cell or molecule. Each attribute of this string is supposed to represent a feature of the immune cell or molecule, such as its charge, van der Waals interactions, etc. In the development of AIS, the mapping from the attributes to their biological counterparts is usually not relevant. The type of attributes used to represent the string partially defines the shape-space under study, and is highly dependent on the problem domain. Any shape-space constructed from a finite alphabet of length k constitutes a k-ary Hamming shape-space. As an example, an attribute string built upon the set of binary elements {0,1} corresponds to a binary Hamming shape-space. One can think, in this case, of a problem of recognising a set of characters represented by matrices composed of 0's and 1's, where each element of a matrix corresponds to a pixel in the character. If the elements of s are represented by real-valued vectors, then we have a Euclidean shape-space. Most of the AIS found in the literature employ binary Hamming or Euclidean shape-spaces. Other types of shape-spaces are also possible, such as symbolic shape-spaces, which combine different (symbolic) attributes in the representation of a single string s. These are usually found in data mining applications, where the data might contain symbolic information like the age, name, etc., of a set of patterns.

Another important characteristic of artificial immune systems is that most of them are population based. This means that they are composed of a set of individuals, representing immune cells and molecules, which have to perform a given role; in our context, pattern recognition.
If we recapitulate the three immune processes reviewed (negative selection, clonal selection, and immune network), all of them rely on a population M of individuals to recognise a set P of patterns. The negative selection algorithm has to define a set of detectors for nonself patterns; clonal selection reproduces, maturates, and selects self-cells to recognise a set of nonself; and the immune network maintains a set of individuals, connected as a network, to recognise self and nonself.

Consider first the binary Hamming shape-space case, which is the most widely used. There are several expressions that can be employed to determine the degree of match, or affinity, between an element of P and an element of M. The simplest is to calculate the Hamming distance (D_H) between the two elements, as given by Eq. (1). Another approach is to search for a sequence of r contiguous bits: if the number of contiguous matches between the strings is greater than a given threshold r, then recognition is said to have occurred. As the last approach to be mentioned here, we can describe the affinity measure of Hunt, given by Eq. (2). This last method has the advantage that it favours sequences of complementary matches, thus searching for similar regions between the attribute strings (patterns).

D_H = \sum_{i=1}^{L} \delta_i, \quad \delta_i = \begin{cases} 1, & p_i \neq m_i \\ 0, & \text{otherwise} \end{cases}    (1)

D = D_H + \sum_i 2^{l_i}    (2)

where l_i is the length of the i-th sequence of matching bits longer than 2.

In the case of Euclidean shape-spaces, the Euclidean distance can be used to evaluate the affinity between any two components of the system. Other approaches, such as the Manhattan distance, may also be employed. Note that all the methods described rely basically on determining the match between strings. However, there are AIS in the literature that take into account other aspects, such as the number of patterns matched by each antibody.
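To make the binary measures concrete, here is a small sketch computing the Hamming distance of Eq. (1), Hunt's measure of Eq. (2), and the r-contiguous-bits rule. Two interpretive assumptions are baked in: 'matching' in Eq. (2) is read as complementary (differing) bits, following the complementarity discussion above, and the r-contiguous rule is read in its common form, i.e., recognition occurs when at least r consecutive positions agree.

// Affinity measures over binary attribute strings, following Eqs. (1) and (2).
public class AffinitySketch {

    // Eq. (1): Hamming distance, counting complementary (differing) positions.
    static int hamming(boolean[] p, boolean[] m) {
        int d = 0;
        for (int i = 0; i < p.length; i++) if (p[i] != m[i]) d++;
        return d;
    }

    // Eq. (2): Hunt's measure, rewarding runs of complementary bits longer than 2.
    static long hunt(boolean[] p, boolean[] m) {
        long d = hamming(p, m);
        int run = 0;
        for (int i = 0; i <= p.length; i++) {
            boolean complementary = i < p.length && p[i] != m[i];
            if (complementary) run++;
            else { if (run > 2) d += 1L << run; run = 0; }  // add 2^run for each long run
        }
        return d;
    }

    // r-contiguous-bits rule: recognition iff some window of r consecutive positions matches.
    static boolean rContiguous(boolean[] p, boolean[] m, int r) {
        int run = 0;
        for (int i = 0; i < p.length; i++) {
            run = (p[i] == m[i]) ? run + 1 : 0;
            if (run >= r) return true;
        }
        return false;
    }
}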
4 A Survey of AIS for Pattern Recognition

The applications of artificial immune systems are vast, ranging from machine learning to robotic autonomous navigation. This section reviews some of the works from the AIS literature applied to the pattern recognition domain. The rationale is to provide a guide to the literature and a brief description of the scope of applications of the algorithms. The section is divided into two parts for ease of comprehension: 1) computer security, and 2) other applications.

The problem of protecting computers (or networks of computers) from viruses, unauthorised users, etc., constitutes a rich field of research for pattern recognition systems. Due mainly to the appealing intuitive metaphor of building artificial immune systems to detect computer viruses, there has been great interest from the computer science community in this particular application. The negative and clonal selection algorithms have been widely tested on this application: the former because it is an inherent anomaly (change) detection system, constituting a particular case of a pattern recognition device; the latter, the clonal selection algorithm, because its learning capabilities complement negative selection. Other, more classical pattern recognition tasks, such as character recognition and data analysis, have also been studied within artificial immune systems.

5 AIS and ANN for Pattern Recognition

Similar to the use of artificial neural networks, performing pattern recognition with an AIS usually involves three stages: 1) defining a representation for the patterns; 2) adapting (learning or evolving) the system to identify a set of typical data; and 3) applying the system to recognise a set of new patterns (which might contain patterns used in the adaptive phase).

Referring to the three immune algorithms presented (negative selection, clonal selection, and immune network), coupled with the process of modelling pattern recognition in the immune system as described in Section 3, this section contrasts AIS and ANN with a focus on pattern recognition applications. The discussion is based on computational aspects, such as basic components, adaptation mechanisms, etc. Common neural networks for pattern recognition are considered, such as single- and multi-layer perceptrons, associative memories, and self-organising networks. All these networks are characterised by sets of units (artificial neurons); they adapt to the environment through a learning (or storage) algorithm, they can have their architectures dynamically adapted along with the weights, and they have their basic knowledge stored in the connection strengths.

Component: The basic unit of an AIS is an attribute string s (along with its connections, in network models) represented in the appropriate shape-space. This string s might correspond to an immune cell or molecule. In an ANN, the basic unit is an artificial neuron composed of an activation function, a summing junction, connection strengths, and an activation threshold. While artificial neurons are usually processing elements, attribute strings representing immune cells and molecules are information storage and processing components.

Location of the components: In immune network models, the cells and molecules usually present a dynamic behaviour that tries to mimic or counteract the environment. This way, the network elements will be located according to the environmental stimuli. Unlike the immune network models, ANN have their neurons positioned in fixed predefined locations in the network. Some neural network models also adopt fixed neighbourhood patterns for the neurons. If a network pattern of connectivity is not adopted for the AIS, each individual element will have a position in the population that might vary dynamically. Also, a metadynamic process might allow the introduction and/or elimination of particular units.

Structure: In negative and clonal AIS, the components are usually structured around matrices representing repertoires or populations of individuals. These matrices might have fixed or variable dimensions. In artificial immune networks and artificial neural networks, the components of the population are interconnected and structured around patterns of connectivity. Artificial immune networks usually have an architecture that follows the spatial distribution of the antigens represented in shape-space, while ANN usually have pre-defined architectures, with weights biased by the environment.

Memory: The attribute strings representing the repertoire(s) of immune cells and molecules, and their respective numbers, constitute most of the knowledge contained in an artificial immune system.
Furthermore, parameters like the affinity threshold can also be considered part of the memory of an AIS. In artificial immune network models, the connection strengths among units also carry endogenous and exogenous information, i.e., they quantify the interactions of the elements of the AIS with themselves and with the environment. In most cases, memory is content-addressable and distributed. In the standard (earliest) neural network models, knowledge was stored only in the connection strengths of individual neurons. In more sophisticated strategies, such as constructive and pruning algorithms, and networks with self-adaptive parameters, the final number of network layers, neurons, and connections, and the shapes of their respective activation functions, are also part of the network knowledge. The memory is usually self-associative or content-addressable, and distributed.

Adaptation: Adaptation usually refers to the alteration or adjustment in the structure or behaviour of a system so that its pattern of response to other components of the system and to the environment changes. Although both evolutionary and learning processes involve adaptation, there is a conceptual difference between them. Evolution can be seen as a change in the genetic composition of a population of individuals during successive generations; it is a result of natural selection acting on the genetic variation among individuals. In contrast, learning can be seen as a long-lasting change in behaviour as a result of previous experience. While AIS might present both types of adaptation, learning and evolution, ANN adapt basically through learning procedures.

Plasticity and diversity: Metadynamics refers basically to two processes: 1) the recruitment of new components into the system, and 2) the elimination of useless elements from the system. As consequences of metadynamics, the architecture of the system can be more appropriately adapted to the environment, and its search capability (diversity) increased. In addition, metadynamics reduces redundancy within the system by eliminating useless components. Metadynamics in the immune algorithms corresponds to a continuous insertion and elimination of the basic elements (cells/molecules) composing the system. In ANN, metadynamics is equivalent to the pruning and/or insertion of new connections, units, and layers in the network.

Interaction with other components: The interaction among cells and molecules in AIS occurs through the recognition (matching) of attribute strings by cell receptors (other attribute strings). In immune network models, the cells usually have weighted connections that allow them to interact with (recognise and be recognised by) other cells. These weights can be stimulatory or suppressive, indicating the degree of interaction with other cells. Artificial neural networks are composed of a set (or sets) of interconnected neurons whose connection strengths assume any positive or negative values, indicating an excitatory or inhibitory activation. The interaction with other neurons in the network occurs explicitly through these connection strengths, where a single neuron receives and processes inputs from the environment (or from network neurons) in the same or other layer(s). An individual neuron can also receive an input from itself.

Interaction with the environment: In pattern recognition applications, the environment is usually represented as a set of input patterns to be learnt, recognised, and/or classified.
In AIS, an attribute string represents the genetic information of the immune cells and molecules. This string is compared with the patterns received from the environment. If there is an explicit antigenic population to be recognised (a set of patterns), all or some antigens can be presented to the whole AIS or to parts of it. At the end of the learning or recognition phase, each component of the AIS might recognise some of the input patterns. The artificial neurons have connections that receive input signals from the environment. These signals are processed by neurons and compared with the information contained in the artificial neural network, such as the connection strengths. After learning, the whole ANN might (approximately) recognise the input patterns.

Threshold: Under the shape-space formalism, each component of the AIS interacts with other cells or molecules whose complements lie within a small surrounding region, characterised by a parameter named the affinity threshold. This threshold determines the degree of recognition between the immune cells and the presented input pattern. Most current models of neurons include a bias (or threshold). This threshold determines the neuron activation, i.e., it indicates how sensitive the neuron activation will be with relation to the input signal.

Robustness: Both paradigms are highly robust, due mainly to the presence of populations or networks of components. These elements (cells, molecules, and neurons) can act collectively, co-operatively, and competitively to accomplish their particular tasks. As knowledge is distributed over the many components of the system, damage or failure of individual elements might not significantly deteriorate the overall performance. Both AIS and ANN are highly flexible and noise tolerant. An interesting property of immune network models and negative selection algorithms is that they are also self-tolerant, i.e., they learn to recognise themselves. In immune network models, the cells interact with each other and usually present connection strengths quantifying these interactions. In negative selection algorithms, self-knowledge is achieved by storing information about the complement of self.

State: At each iteration, time step, or interval, the state of an AIS corresponds to the concentration of the immune cells and molecules, and/or their affinities. In the case of immune network models, the connection strengths among units are also part of the current state of the system. In artificial neural networks, the activation level of the output neurons determines the state of the system. Notice that this activation level takes into account the number of connection strengths and their respective values, the shapes of the activation functions, and the network dimension.

Control: Any immune principle, theory, or process can be used to control the types of interaction among the many components of an AIS. As examples, clonal selection can be employed to build an antibody repertoire capable of recognising a set of antigenic patterns, and negative selection can be used to define a set of antibodies (detectors) for the recognition of anomalous patterns. Differential or difference equations can be applied to control how an artificial immune network interacts with itself and the environment.
Basically, three learning paradigms can be used to train an ANN: 1) supervised, 2) unsupervised, and 3) reinforcement learning.

Generalisation capability: In the AIS case, cells and molecules capable of recognising a certain pattern can recognise not only that specific pattern, but also any structurally related pattern.
Undergraduate Thesis---Foreign Literature for Software Majors with Chinese-English Translation
Object landscapes and lifetimes

Technically, OOP is just about abstract data typing, inheritance, and polymorphism, but other issues can be at least as important. The remainder of this section will cover these issues.

One of the most important factors is the way objects are created and destroyed. Where is the data for an object and how is the lifetime of the object controlled? There are different philosophies at work here. C++ takes the approach that control of efficiency is the most important issue, so it gives the programmer a choice. For maximum run-time speed, the storage and lifetime can be determined while the program is being written, by placing the objects on the stack (these are sometimes called automatic or scoped variables) or in the static storage area. This places a priority on the speed of storage allocation and release, and control of these can be very valuable in some situations. However, you sacrifice flexibility because you must know the exact quantity, lifetime, and type of objects while you're writing the program. If you are trying to solve a more general problem, such as computer-aided design, warehouse management, or air-traffic control, this is too restrictive.

The second approach is to create objects dynamically in a pool of memory called the heap. In this approach, you don't know until run-time how many objects you need, what their lifetime is, or what their exact type is. Those are determined at the spur of the moment while the program is running. If you need a new object, you simply make it on the heap at the point that you need it. Because the storage is managed dynamically, at run-time, the amount of time required to allocate storage on the heap is significantly longer than the time to create storage on the stack. (Creating storage on the stack is often a single assembly instruction to move the stack pointer down, and another to move it back up.) The dynamic approach makes the generally logical assumption that objects tend to be complicated, so the extra overhead of finding storage and releasing that storage will not have an important impact on the creation of an object. In addition, the greater flexibility is essential to solve the general programming problem.

Java uses the second approach exclusively. Every time you want to create an object, you use the new keyword to build a dynamic instance of that object.

There's another issue, however, and that's the lifetime of an object. With languages that allow objects to be created on the stack, the compiler determines how long the object lasts and can automatically destroy it. However, if you create it on the heap, the compiler has no knowledge of its lifetime. In a language like C++, you must determine programmatically when to destroy the object, which can lead to memory leaks if you don't do it correctly (and this is a common problem in C++ programs). Java provides a feature called a garbage collector that automatically discovers when an object is no longer in use and destroys it. A garbage collector is much more convenient because it reduces the number of issues that you must track and the code you must write. More important, the garbage collector provides a much higher level of insurance against the insidious problem of memory leaks (which has brought many a C++ project to its knees).

The rest of this section looks at additional factors concerning object lifetimes and landscapes.
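Java's exclusive use of the heap can be shown in a few lines; the class names here are arbitrary examples, not from the original text:

// Every Java object is created on the heap with new; the garbage collector
// reclaims it once no reference reaches it, so there is no explicit delete.
class Soup {
    String flavour;
    Soup(String flavour) { this.flavour = flavour; }
}

public class HeapDemo {
    public static void main(String[] args) {
        Soup s = new Soup("tomato");   // dynamic instance, built on the heap at run-time
        s = new Soup("mushroom");      // the first Soup is now unreachable...
        System.out.println(s.flavour); // ...and becomes eligible for garbage collection
    }
}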
1. The singly rooted hierarchy

One of the issues in OOP that has become especially prominent since the introduction of C++ is whether all classes should ultimately be inherited from a single base class. In Java (as with virtually all other OOP languages) the answer is "yes", and the name of this ultimate base class is simply Object. It turns out that the benefits of the singly rooted hierarchy are many.

All objects in a singly rooted hierarchy have an interface in common, so they are all ultimately the same type. The alternative (provided by C++) is that you don't know that everything is the same fundamental type. From a backward-compatibility standpoint this fits the model of C better and can be thought of as less restrictive, but when you want to do full-on object-oriented programming you must then build your own hierarchy to provide the same convenience that's built into other OOP languages. And in any new class library you acquire, some other incompatible interface will be used. It requires effort (and possibly multiple inheritance) to work the new interface into your design. Is the extra "flexibility" of C++ worth it? If you need it (if you have a large investment in C), it's quite valuable. If you're starting from scratch, other alternatives such as Java can often be more productive.

All objects in a singly rooted hierarchy (such as Java provides) can be guaranteed to have certain functionality. You know you can perform certain basic operations on every object in your system. A singly rooted hierarchy, along with creating all objects on the heap, greatly simplifies argument passing (one of the more complex topics in C++).

A singly rooted hierarchy makes it much easier to implement a garbage collector (which is conveniently built into Java). The necessary support can be installed in the base class, and the garbage collector can thus send the appropriate messages to every object in the system. Without a singly rooted hierarchy and a system to manipulate an object via a reference, it is difficult to implement a garbage collector.

Since run-time type information is guaranteed to be in all objects, you'll never end up with an object whose type you cannot determine. This is especially important with system-level operations, such as exception handling, and it allows greater flexibility in programming.

2. Collection libraries and support for easy collection use

Because a container is a tool that you'll use frequently, it makes sense to have a library of containers that are built in a reusable fashion, so you can take one off the shelf and plug it into your program. Java provides such a library, which should satisfy most needs.

Downcasting vs. templates/generics

To make these containers reusable, they hold the one universal type in Java that was previously mentioned: Object. The singly rooted hierarchy means that everything is an Object, so a container that holds Objects can hold anything. This makes containers easy to reuse.

To use such a container, you simply add object references to it, and later ask for them back. But since the container holds only Objects, when you add your object reference into the container it is upcast to Object, thus losing its identity. When you fetch it back, you get an Object reference, and not a reference to the type that you put in.
So how do you turn it back into something that has the useful interface of the object that you put into the container? Here, the cast is used again, but this time you're not casting up the inheritance hierarchy to a more general type; you cast down the hierarchy to a more specific type. This manner of casting is called downcasting. With upcasting, you know, for example, that a Circle is a type of Shape, so it's safe to upcast, but you don't know that an Object is necessarily a Circle or a Shape, so it's hardly safe to downcast unless you know that's what you're dealing with.

It's not completely dangerous, however, because if you downcast to the wrong thing you'll get a run-time error called an exception, which will be described shortly. When you fetch object references from a container, though, you must have some way to remember exactly what they are so you can perform a proper downcast.

Downcasting and the run-time checks require extra time for the running program, and extra effort from the programmer. Wouldn't it make sense to somehow create the container so that it knows the types that it holds, eliminating the need for the downcast and a possible mistake? The solution is parameterized types, which are classes that the compiler can automatically customize to work with particular types. For example, with a parameterized container, the compiler could customize that container so that it would accept only Shapes and fetch only Shapes.

Parameterized types are an important part of C++, partly because C++ has no singly rooted hierarchy. In C++, the keyword that implements parameterized types is "template." Java currently has no parameterized types since it is possible for it to get by, however awkwardly, using the singly rooted hierarchy. However, a current proposal for parameterized types uses a syntax that is strikingly similar to C++ templates.
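The container problem and both remedies look like this in practice. Note that the "current proposal" mentioned in the text did become part of the language (generics, added in Java 5), so the sketch can show the old Object-based style with a downcast next to the parameterized style; the class names are illustrative.

import java.util.ArrayList;
import java.util.List;

class Shape { void draw() { System.out.println("shape"); } }
class Circle extends Shape { void roll() { System.out.println("rolling"); } }

public class ContainerDemo {
    public static void main(String[] args) {
        // Old style: the container holds Objects, so identity is lost on the way in...
        List rawList = new ArrayList();
        rawList.add(new Circle());                 // upcast to Object
        Circle c = (Circle) rawList.get(0);        // ...and must be restored by a downcast
        c.roll();                                  // a wrong downcast would throw ClassCastException

        // Parameterized type: the compiler customizes the container to hold only Shapes.
        List<Shape> shapes = new ArrayList<>();
        shapes.add(new Circle());
        shapes.get(0).draw();                      // no cast needed; mistakes caught at compile time
    }
}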
Foreign Literature Translation--"Software Engineering: A Practitioner's Approach"
Appendix

Software Engineering - A PRACTITIONER'S APPROACH
Written by Roger S. Pressman, Ph.D. (P.340-P.343)

13.3 DESIGN PRINCIPLES

Software design is both a process and a model. The design process is a sequence of steps that enable the designer to describe all aspects of the software to be built. It is important to note, however, that the design process is not simply a cookbook. Creative skill, past experience, a sense of what makes "good" software, and an overall commitment to quality are critical success factors for a competent design.

The design model is the equivalent of an architect's plans for a house. It begins by representing the totality of the thing to be built (e.g., a three-dimensional rendering of the house) and slowly refines the thing to provide guidance for constructing each detail (e.g., the plumbing layout). Similarly, the design model that is created for software provides a variety of different views of the computer software.

Basic design principles enable the software engineer to navigate the design process. Davis suggests a set of principles for software design, which have been adapted and extended in the following list:

• The design process should not suffer from "tunnel vision." A good designer should consider alternative approaches, judging each based on the requirements of the problem, the resources available to do the job, and the design concepts presented in Section 13.4.

• The design should be traceable to the analysis model. Because a single element of the design model often traces to multiple requirements, it is necessary to have a means for tracking how requirements have been satisfied by the design model.

• The design should not reinvent the wheel. Systems are constructed using a set of design patterns, many of which have likely been encountered before. These patterns should always be chosen as an alternative to reinvention. Time is short and resources are limited! Design time should be invested in representing truly new ideas and integrating those patterns that already exist.

• The design should "minimize the intellectual distance" between the software and the problem as it exists in the real world. That is, the structure of the software design should (whenever possible) mimic the structure of the problem domain.

• The design should exhibit uniformity and integration. A design is uniform if it appears that one person developed the entire thing. Rules of style and format should be defined for a design team before design work begins. A design is integrated if care is taken in defining interfaces between design components.

• The design should be structured to accommodate change. The design concepts discussed in the next section enable a design to achieve this principle.

• The design should be structured to degrade gently, even when aberrant data, events, or operating conditions are encountered. Well-designed software should never "bomb." It should be designed to accommodate unusual circumstances, and if it must terminate processing, do so in a graceful manner.

• Design is not coding, coding is not design. Even when detailed procedural designs are created for program components, the level of abstraction of the design model is higher than source code.
The only design decisions made at the coding level address the small implementation details that enable the procedural design to be coded.

• The design should be assessed for quality as it is being created, not after the fact. A variety of design concepts (Section 13.4) and design measures (Chapters 19 and 24) are available to assist the designer in assessing quality.

• The design should be reviewed to minimize conceptual (semantic) errors. There is sometimes a tendency to focus on minutiae when the design is reviewed, missing the forest for the trees. A design team should ensure that major conceptual elements of the design (omissions, ambiguity, inconsistency) have been addressed before worrying about the syntax of the design model.

When these design principles are properly applied, the software engineer creates a design that exhibits both external and internal quality factors. External quality factors are those properties of the software that can be readily observed by users (e.g., speed, reliability, correctness, usability). Internal quality factors are of importance to software engineers. They lead to a high-quality design from the technical perspective. To achieve internal quality factors, the designer must understand basic design concepts.

13.4 DESIGN CONCEPTS

A set of fundamental software design concepts has evolved over the past four decades. Although the degree of interest in each concept has varied over the years, each has stood the test of time. Each provides the software designer with a foundation from which more sophisticated design methods can be applied. Each helps the software engineer to answer the following questions:

• What criteria can be used to partition software into individual components?
• How is function or data structure detail separated from a conceptual representation of the software?
• What uniform criteria define the technical quality of a software design?

M. A. Jackson once said: "The beginning of wisdom for a [software engineer] is to recognize the difference between getting a program to work, and getting it right." Fundamental software design concepts provide the necessary framework for "getting it right."

13.4.1 Abstraction

When we consider a modular solution to any problem, many levels of abstraction can be posed. At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment. At lower levels of abstraction, a more procedural orientation is taken. Problem-oriented terminology is coupled with implementation-oriented terminology in an effort to state a solution. Finally, at the lowest level of abstraction, the solution is stated in a manner that can be directly implemented. Wasserman provides a useful definition:

The psychological notion of "abstraction" permits one to concentrate on a problem at some level of generalization without regard to irrelevant low level details; use of abstraction also permits one to work with concepts and terms that are familiar in the problem environment without having to transform them to an unfamiliar structure . . .

Each step in the software process is a refinement in the level of abstraction of the software solution. During system engineering, software is allocated as an element of a computer-based system. During software requirements analysis, the software solution is stated in terms "that are familiar in the problem environment." As we move through the design process, the level of abstraction is reduced.
Finally, the lowest level of abstraction is reached when source code is generated.

As we move through different levels of abstraction, we work to create procedural and data abstractions. A procedural abstraction is a named sequence of instructions that has a specific and limited function. An example of a procedural abstraction would be the word open for a door. Open implies a long sequence of procedural steps (e.g., walk to the door, reach out and grasp the knob, turn the knob and pull the door, step away from the moving door, etc.).

A data abstraction is a named collection of data that describes a data object (Chapter 12). In the context of the procedural abstraction open, we can define a data abstraction called door. Like any data object, the data abstraction for door would encompass a set of attributes that describe the door (e.g., door type, swing direction, opening mechanism, weight, dimensions). It follows that the procedural abstraction open would make use of information contained in the attributes of the data abstraction door.

Many modern programming languages provide mechanisms for creating abstract data types. For example, the Ada package is a programming language mechanism that provides support for both data and procedural abstraction. The original abstract data type is used as a template or generic data structure from which other data structures can be instantiated.

Control abstraction is the third form of abstraction used in software design. Like procedural and data abstraction, control abstraction implies a program control mechanism without specifying internal details. An example of a control abstraction is the synchronization semaphore used to coordinate activities in an operating system. The concept of the control abstraction is discussed briefly in Chapter 14.
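The door example maps naturally onto code. A minimal sketch, with attribute names invented for illustration: the class is the data abstraction, and open() is the procedural abstraction that operates on its attributes.

// Data abstraction "door": a named collection of attributes describing the object,
// with the procedural abstraction "open" operating on that data.
public class Door {
    String type;            // e.g., "panel", "sliding"
    String swingDirection;  // e.g., "inward"
    double weightKg;
    boolean isOpen;

    Door(String type, String swingDirection, double weightKg) {
        this.type = type;
        this.swingDirection = swingDirection;
        this.weightKg = weightKg;
    }

    // "open" names a sequence of steps without exposing them to the caller
    // (grasp knob, turn knob, pull door, step away...).
    void open() {
        isOpen = true;
        System.out.println("The " + type + " door swings " + swingDirection + ".");
    }
}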
13.4.2 Refinement

Stepwise refinement is a top-down design strategy originally proposed by Niklaus Wirth. A program is developed by successively refining levels of procedural detail. A hierarchy is developed by decomposing a macroscopic statement of function (a procedural abstraction) in a stepwise fashion until programming language statements are reached. An overview of the concept is provided by Wirth:

In each step (of the refinement), one or several instructions of the given program are decomposed into more detailed instructions. This successive decomposition or refinement of specifications terminates when all instructions are expressed in terms of any underlying computer or programming language . . . As tasks are refined, so the data may have to be refined, decomposed, or structured, and it is natural to refine the program and the data specifications in parallel. Every refinement step implies some design decisions. It is important that . . . the programmer be aware of the underlying criteria (for design decisions) and of the existence of alternative solutions . . .

The process of program refinement proposed by Wirth is analogous to the process of refinement and partitioning that is used during requirements analysis. The difference is in the level of implementation detail that is considered, not the approach. Refinement is actually a process of elaboration. We begin with a statement of function (or description of information) that is defined at a high level of abstraction. That is, the statement describes function or information conceptually but provides no information about the internal workings of the function or the internal structure of the information. Refinement causes the designer to elaborate on the original statement, providing more and more detail as each successive refinement (elaboration) occurs.

Abstraction and refinement are complementary concepts. Abstraction enables a designer to specify procedure and data and yet suppress low-level details. Refinement helps the designer to reveal low-level details as design progresses. Both concepts aid the designer in creating a complete design model as the design evolves.
Computer Software Engineering Foreign Literature Translation---The Database Development Process
Source of the English original: A. Ghatak, K. Thyagarajan, Contemporary Optics, Plenum Press, 1978.

Translation: The Database Development Process

Information systems planning based on information engineering is one source of database development projects.
Projects of this kind, which develop new databases, usually serve the organization's strategic needs, such as improving customer support, enhancing product and inventory management, or producing more accurate sales forecasts.
However, many database development projects arise in a more bottom-up fashion; for example, users of an information system may need specific data to do their jobs and request that a project be started, or information systems specialists may find that the organization needs to improve its data management and initiate a new project.
Even in the bottom-up case, an enterprise data model must be built in order to understand whether existing databases can provide the required data; if not, new databases, data entities, and attributes should be added to the organization's current data resources.
Whether driven by strategic needs or by operational information needs, each database development project usually concentrates on a single database.
Some database projects focus only on defining, designing, and implementing a database as the foundation for subsequent information systems development.
In most cases, however, the database and its associated information-processing functions are developed as part of a complete information systems development project.
1. The Systems Development Life Cycle

The traditional process that guides the management of information systems development projects is the systems development life cycle (SDLC).
The systems development life cycle is the complete set of steps by which a team of information systems specialists in an organization, including database designers and programmers, specifies, develops, maintains, and replaces information systems.
The process is likened to a waterfall because each step flows into the next adjacent one: the specification of the information system is developed piece by piece, with the output of each piece serving as the input to the next.
As the figure shows, however, these steps are not purely linear; they overlap in time (so the steps can be managed in parallel), and the process can roll back to earlier steps when previous decisions need to be reconsidered.
(So water can flow backwards in this waterfall!) Every phase of the systems development life cycle includes activities related to database development, so database management issues pervade the entire systems development process.
Note that there is no one-to-one correspondence between the phases of the systems development life cycle and the steps of database development; conceptual data modeling, for example, occurs between two SDLC phases.
Software Engineering Foreign Literature---The Software Stability Model (SSM)
Foreign Original

Unlike other systems, software systems produced over a period of time suffer from instability, so it has become essential to research and ascertain the stability of software systems, which determines other factors such as reliability and trustworthiness. Though theoretically no deterioration is expected for a software product, deterioration does occur owing to changes in the software, which involve re-engineering of the changed code. Re-engineering of the software product is not essential for small changes made in the code of the software modules.

The idea behind this research work is to bring out the basic techniques used in pattern identification of real-time computing systems by using the Software Stability Model (SSM). A case study has been built to illustrate the application of SSM to real-time computing systems that use adaptive reconfigurable controls. The "Control Software" used in the real-time computing system makes use of the properties of adaptive control [8]. Adaptive reconfigurable control finds applications in areas including real-time systems such as air traffic control systems, networked multimedia systems, command control systems, medical critical care systems, real-time operating systems (RTOS), etc. In essence, the "Control Software" is modelled as a "system", and hence the concepts of stability of a physical system (as defined in control engineering) are applied to the software used in real-time systems.

Software failure happens in many real-time systems, such as transportation systems, medical systems, defense systems, etc. Software that dynamically controls the buffering functions of a database management system, or software that uses the concept of caching for OS memory management, are typical examples of software using algorithms with embedded control and adaptation. We have considered a real-time computing system that uses adaptive control algorithms. The intuitive use of the stability concepts available in control engineering in real-time computing systems, with the support of the Software Stability Model (SSM), is the theme of this research work.

For the real-time system shown in Fig. 1, using reconfigurable control, the control laws are stored in the Controller Database, and the required "Control Law" is chosen as needed. The highlight of reconfigurable control is that the controller can be redesigned at runtime. The code-level implementation of the control law, as shown in [9], could be used in reconfigurable control in order to produce stability and performance checks on the "control software" used. Adaptive components are included in real-time systems in order to cope with changing environments.

Fig. 1. Block Diagram of a Real Time System with Adaptive Reconfigurable Control

In [3] the stability criterion for software has been stated as follows: "A system is said to be stable when little disturbances applied to the system have negligible effects on the system. In the case of a model-driven software system this implies that very small changes in the input model do not radically change the behavior of the system." Extending this concept to defining the term "controllability" in software engineering establishes the fact that software systems will perform as per the given specifications when the inputs are changed. In a similar fashion, the "observability" criterion for software systems could also be given.
A software system is said to be "observable" if it becomes possible to obtain information about the state of the system at any point of time. M. E. Fayad discusses the Software Stability Model (SSM) in [4] using EBT/BO/IO. Enduring Business Themes (EBTs) remain constant for a given system. EBTs are chosen so that the objects remain stable and make up the core of software systems. Business Objects (BOs) also remain stable, but their internal processes might change on a need basis; therefore, externally, BOs remain as stable as EBTs. Industrial Objects (IOs) are the objects that one designs in a classical object model; an Industrial Object represents a physical entity.

As defined by Fayad in SSM, the EBTs are determined by answering the question "What is this system for?", the BOs are determined by answering the question "How do the intangible conceptual themes map into more concrete objects?", and the IOs are determined by answering the question "What is the physical representation of the BOs?" [10].

It is difficult for software engineers who have had little exposure to stability concepts to identify the objects of stability in real-time domain applications. It is the control engineer who should be able to advise the software engineers in the identification of EBTs, BOs, and IOs. Domain knowledge of the application is essential to identify all three kinds of objects clearly. Stability over time, adaptability, essentiality, intuition, explicitness, commonality to the domain, tangibility, etc. are the identification criteria M. E. Fayad has suggested for identifying the EBT/BO/IO [5,7].

As a software system is controlled by the control software programs running on it, it is important to analyse the reliability of such systems when code changes happen. It is also important to identify the range over which the software system behaves stably during such code changes. This is in line with the principles involved in defining the controllability and observability criteria for systems, which serve as a strong methodology to support the stability of software systems.

Just as in any other physical system, where stability is defined over the control gain factor K in the loop transfer function of the system, there are ranges over which the control parameters of software systems also behave stably. The core idea of this research is to do an in-depth analysis of the behavior of software systems under code changes and to determine the conditions under which the control parameters make the software system stable. This in turn suggests the design of appropriate controllers using software programs in order to keep the software system within a stable region.

The real-time computing system receives inputs from external entities. Ideally, the real-time computing system is a MIMO (Multiple Input Multiple Output) system. The inputs thus received are processed by the computing system based on the parameter settings made in the real-time computing system. The algorithms used in the control software of the real-time computing system process the inputs (signals) received. Appropriate computation using real-time data, meeting the required conditions as defined by the control parameters of the real-time applications, is handled by the computing system.
The output thus obtained is fed into the transaction processing system, which monitors, for example, the database operations that take place in the backend database server. The control signals coming out of the TP system are fed into the Adaptive Control Server, which contains the master control system. The signals received from the TP system, which refers to the Model Estimator and Controller Designer databases, select the apt control values as defined by the control parameters of the particular real-time application. The reconfigurable control used in the Adaptive Control Server gives the advantage that the controller is redesigned at runtime: both the Model Estimator and the Controller Designer refer to their respective databases, and hence the adaptive nature of the server helps it act appropriately.

The various EBTs, BOs, and IOs have been identified for the real-time TP system, as shown in Fig. 3. Based on the concepts of software stability, the categorisation of the objects, viz. EBT, BO, and IO, has been done according to the degree of stability [6,10]. Computing, Monitoring, and Controlling represent the EBTs of the system. The main objective of the TP system is to perform the computation of the control parameters based on the sensor values inputted and the algorithms implemented using the control software. The other objective of the TP system is to track the transactions triggered and compare the actual values computed with the pre-defined real-time system parameter limits. Based on the comparison, the adaptive control system in turn controls the TP system, and the output is thus controlled by the values chosen from the Controller Database dynamically at run-time.

"Computing" is an enduring theme since it remains stable externally and internally as long as this system lasts. "Monitoring" is an enduring theme that represents the process of comparing the actual values computed with the pre-defined real-time system parameter limits, and it also remains stable both externally and internally. "Controlling" is an enduring theme that provides adaptive control to the TP system at run-time, and it also remains stable both externally and internally. All three EBTs identified here define the concept of the system, since these are the main aims of the TP system.

Real-time control parameters, real-time system parameter limits, and adaptive control parameters represent the BOs. Every BO identified here is externally stable and highly adaptable internally. For example, real-time control parameters are always the key control elements that are stable for a TP system, yet they can depend on the sensors, control algorithms, process controllers, etc.

Fig. 3 shows a stable model of a TP system. We now identify a domain pattern for the TP system based on the SSM, by extracting the EBTs and BOs of the stable model of the TP system. We see that the chosen EBTs, BOs, etc. based on the SSM follow from the domain of the problem. Hence software stable models for other applications in the TP domain will have these objects as their EBTs and BOs. Fig. 6 shows a domain pattern for the TP system.
As defined in [10], the various elements of the stability model for the TP system are discussed in this section.

Intent: This pattern suggests the basic structure of any Transaction Processing (TP) system.
Context: There are many online transaction processing systems / software that require the TP function.
Problem: Arrive at the stable objects which represent the basic structure of the TP system.
Forces: The pattern identified should depict the basic structure of the TP system. It has to be generic in nature so that it can be applied to any kind of TP system. It is quite challenging to arrive at a pattern which will handle different types/functions of TP systems.

How far a real-time (transaction processing) system is dependent on the input parameters defines the controllability of the TP system. How far the outputs are dependent on the varying system parameters determines the observability of the TP system. The determining control parameters of the TP system ultimately determine the EBTs, BOs, and IOs. Therefore, the decision on the control parameters decides whether the TP system is controllable as well as observable. Since controllability and observability are the key elements of stability, the determination of EBTs, BOs, and IOs likewise depends on the control parameters governing the control software being used in the adaptive reconfigurable control based transaction processing system. The TP system pattern thus arrived at gives a reliable system which is more stable.
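As a rough illustration of how the EBT/BO/IO layering described above might be expressed in code (all class and member names here are hypothetical, not taken from the paper): EBTs appear as stable interfaces, BOs as externally stable classes whose internals may change, and IOs as concrete classes tied to physical entities.

// Illustrative EBT/BO/IO layering for the TP system; names are invented for this sketch.
interface Computing { double compute(double[] sensorValues); }   // EBT: enduring theme
interface Monitoring { boolean withinLimits(double actual); }    // EBT
interface Controlling { void adjust(double error); }             // EBT

// BO: externally stable, internally adaptable business object.
class RealTimeControlParameters implements Monitoring {
    private double lowerLimit, upperLimit;                       // pre-defined parameter limits
    RealTimeControlParameters(double lo, double hi) { lowerLimit = lo; upperLimit = hi; }
    public boolean withinLimits(double actual) {
        return actual >= lowerLimit && actual <= upperLimit;
    }
}

// IO: a physical representation, e.g., a concrete sensor feeding the computation.
class TemperatureSensor {
    double read() { return 36.6; }                               // stand-in for real hardware I/O
}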
Foreign Literature Translation for Software Engineering Majors (Chinese-English)---Software Quality
Planning for Software Quality

In this chapter: defining quality in software projects; working with your organization's quality policy; creating a quality management plan; identifying how changes in time and cost will affect project quality.

When it comes to quality, you've probably heard some great clichés: Quality is planned into a project, not added through inspection (you should spend your time planning quality instead of inspecting after you have errors). It's always cheaper (and more efficient) to do a job right the first time around. Why is there always time to do work right the second time? Always underpromise and overdeliver. There sure are some catchy slogans, and clichés become clichés because they're usually accurate. In this chapter we explore what quality is, how to plan it into your project, and how to create a quality management plan.

6.1 Defining Quality

Before you can plan for quality, you must first define what quality is. Ask your customers, your project team, your management team, and even yourself what quality is, and you get a variety of answers:

What customers say: The software you create lives up to expectations, is reliable, and does some incredible things the customer doesn't expect (or even think of).

What your project team says: The work is completed as planned and as expected, with few errors and fewer surprises.

What managers say: The customer is happy and the project delivers on time and on budget.

What you may say: The project team completes its work according to its estimates, the customer is happy, and management is happy with the final costs and schedule.

Quality, for everyone concerned, is the ability of the project and the project's deliverable to satisfy the stated and implied requirements. Quality is all of the items we mention here, but it's more than just the deliverable; it's following a process, meeting specified requirements, and performing to create the best possible deliverable. Everything, from the project kickoff meeting to the final testing, affects the project quality.

6.2 Referring to the product scope

As the project manager, your primary concern is satisfying the product scope. The product scope is the description of the software the customer expects from your project. If you work primarily to satisfy the product scope, then you'll be in good shape for satisfying the customer's expectations for quality. But in order to satisfy the product scope you must first have several documents:

Product scope description document. This document defines what the customer expects from the project. What are the characteristics of the software? This description becomes more complete as you progress through the project and gather more knowledge.

Project requirements document. This document defines exactly what the project must create without being deemed a failure. What types of functionality should stakeholders be able to perform with the software? This document prioritizes the stakeholders' requirements.

Detailed design document. This document specifies how the project team will create units that meet the project requirements, which in turn will satisfy the product scope.

Metrics for acceptability. Many software projects need metrics for acceptability. These metrics include speeds, data accuracy, and metrics from user acceptability tests. You'll need to avoid vague metrics, such as good and fast.
Instead, aim to define accurate numbers and determine how the values will be captured. Satisfying the product scope will assure that the customer is happy with you and with the deliverables the project team has created. You will only satisfy the product scope if you plan how to do it. Quality is no accident.

6.3 Referring to the project scope

The project scope defines all of the work (and only the required work) to create the project deliverable. The project scope defines what will and won't be included in the project deliverable. The project scope is different from the product scope, because the product scope describes only the finished deliverable, whereas the project scope describes the work and activities needed to reach the deliverable.

You must define the project scope so that you can use it as an appropriate quality tool. The project scope draws a line in the sand when it comes to project changes. Changes, as we're sure you've experienced, can trickle into the project and cause problems with quality. Even the most innocent changes can bloom into monsters that wreck your project.

Figure 6-1 shows the project manager's approach to project changes and quality. Early in the project, during the initiation and planning stages, you can safely entertain changes to the project. After you create the project scope, however, your rule when it comes to changes should be "Just say no!" Changes to the project may affect the quality of the product. This isn't to say that changes should never come into a project; far from it. But changes to the project must be examined, weighed, and considered for their effect on time, cost, and project quality.

6.3.1 Going the extra mile

One cliché that rings true is that it's always better to underpromise and overdeliver. We've heard project managers tell us this is their approach to keep people happy. It sounds good, right? A customer asks for a piece of software that can communicate with a database through a Web form. Your project team, however, creates a piece of software that can communicate through a Web form to the customer's database, and you add lots of query combinations for each table in the database. Fantastic!

A valid argument can be made that you should never underpromise, but promise what you can deliver and live up to those promises. Technically, in project management, quality is achieved by meeting the customer's expectations: not more than they expect, and certainly not less than they expect. Surprising the customer with more than the project scope outlines can actually backfire, for the following reasons: The customer may believe the project deliverable could have been completed faster without all the extras you've included. The customer may believe the project deliverable could have been completed for fewer dollars without all the extras you've included. If the customer discovers bugs in the software, the blame may lie with the extras. The customer may not want the extras, regardless of how ingenious you believe they are. Overdelivering is not meeting expectations: you're not giving the customer what he or she asked for.

Now, having put the wet blanket on the fire of creativity, let us say this: communicate. We can't emphasize enough how important it is to tell the customer what you can do, even if it's more than what the customer has originally asked for. In software development, customers may need guidance on what each deliverable can do, and they look to you as the expert to help them make those decisions.
The product scope and the project scope support one another. If the customer changes details in the product scope, the project scope must also change. If it doesn’t, your project team will be completing a project scope that won’t create what the customer expects.

Avoiding gold-plated software

You create gold-plated software when you complete a project and the software is ready to go to the customer, but you suddenly realize that you have money to burn. If you find yourself with a hefty sum of cash remaining in the project budget, you may feel tempted to fix the situation with a lot of bling. After all, if you give the project deliverable to the customer exactly as planned, several things may happen: Your customer may be initially happy that you’ve delivered under budget; then they’ll wonder whether you cut corners or just didn’t have a clue as to the actual cost of the project. The customer may wonder why your estimate and the actual cost of the project deliverable are not in sync. The remaining budget will be returned to the customer unless your contract stipulates otherwise. Other project managers may not be happy that you’ve created a massive, unused project budget when their projects have been strapped for cash. Key stakeholders may lose confidence in your future estimates and believe them to be bloated, padded, or fudged.

This is, in case you haven’t guessed, a bad thing. The best thing to do is to deliver an accurate estimate to begin with and avoid this scenario altogether. We discuss time estimates in Chapter 8 and cost estimates in Chapter 9. For now, know that your customer’s confidence in your future estimates is always measured by your ability to provide accurate estimates at the beginning of the process.

If you find yourself in the scenario where you have a considerable amount of cash left in the project budget, the best thing to do is to give the customer an accurate assessment of what you’ve accomplished in the project and what’s left in the kitty. Don’t eat up the budget with extras, and don’t beat yourself up over it. Mistakes happen, especially to beginners, and it’s still more forgivable to be under budget than to be over budget.

So should you also present extras to the customer when you present the project’s status and the remaining budget? If the extras are value-added scope changes, we say yes. If the extras are truly gold-plated extras to earn more dollars, then we say no. Software quality is based on whether the product delivers on its promises. If the proposed changes don’t make the software better, no one needs them. What you do on your current project may influence what you get to do on future projects. Honesty now pays dividends later.

6.4 Examining quality versus grade

Quality and grade are not the same thing. Low quality is always a problem, but low grade may not be. Quality, as you know, is the ability of software to deliver on its promises. Grade is the ranking or classification we assign to things.

Consider your next flight. You expect the airplane to safely launch, fly, and land. You expect to be reasonably comfortable during the flight. You expect the behavior of the crew and fellow passengers to be relatively considerate, even if they’re a little cramped and annoyed. (You have to factor in that crying baby three rows back. It’s not the baby’s fault, after all.) Now consider where you’re seated on the airplane. Are you in first class or coach?
That’s the grade!

Within software development we also have grade and quality issues. A quick, cheap software fix may be considered low grade, but it can still be a high-quality software solution because it satisfies the scope of a simple project. On the other hand, that rinky-dink approach won’t work during the development of a program to track financial data through e-commerce solutions for an international company. During the planning process, one goal of stakeholder analysis is to determine the requirements for quality and grade.

6.5 Working with a quality policy

A quality policy isn’t a policy that’s “real good.” A quality policy is an organization-wide policy that dictates how your organization will plan, manage, and then control quality in all projects. This policy sets the expectations for your projects, and everyone else’s, for metrics of acceptability.

Quality policies fall under the big umbrella of quality assurance (QA). QA is an organization-wide program, the goal of which is to improve quality and to prevent mistakes.

So who decides what quality is and what’s bunk? You might guess and say the customer, which to some extent is true, but generally the quality policy is set by management. The quality policy can be written by the geniuses within your organization, or your organization may follow a quality system and the proven quality approaches within such systems. For example, your company might participate in any number of proprietary and nonproprietary organizations, thereby pledging to adhere to their quality policies. The following sections discuss a few of them.

Working with ISO programs

The International Organization for Standardization (ISO) is a worldwide body with 153 members that convenes in Geneva, Switzerland. The goal of the ISO is to set compatibility standards for all industries, to establish common ground, and to maintain interoperability between businesses, countries, and devices.

In case you’re wondering, the abbreviation for the International Organization for Standardization is ISO, not IOS. Because of all the different countries represented and their varying languages, the organization chose the abbreviation ISO, taken from the Greek isos, which means equal.

There are many different ISO programs, but the most popular is ISO 9000. An ISO 9000-certified organization focuses on business-to-business dealings and strives to ensure customer satisfaction. An ISO 9000-certified organization must ensure that it: establishes and meets the customer’s quality requirements; adheres to applicable regulatory requirements; achieves customer satisfaction throughout the project; and takes internal measures to continually improve performance, not just once. You can learn more about ISO programs and how your organization can participate by visiting the ISO Web site.

Getting a Total Quality Management workout

The U.S. Naval Air Systems Command originated the term Total Quality Management (TQM) as a means of describing the Japanese-style management approach to quality improvement. TQM requires that all members of an organization contribute to quality improvements in products, services, and the work culture.
The idea is that if everyone is involved in quality and works to make the total environment better, then the services and products of the organization will continue to improve. In software development, TQM means that the entire team works to make the development of the software better, the process from start to completion better, and the deliverable better as well. TQM is largely based on W. Edwards Deming’s 14 Points for Quality. Here’s how Deming’s points and TQM are specifically applicable to software development (you can find out more about W. Edwards Deming in the nearby sidebar, “W. Edwards Deming and the software project manager”):

Create constancy of purpose for improving products and services. Every developer must agree to and actively pursue quality in all of his or her software creation, testing, and development.

Adopt the new philosophy. This philosophy can’t be a fad. The software project manager has to constantly motivate the project team to work toward quality.

Cease dependence on inspection to achieve quality. Software development has a tradition of coding and inspecting, and then reacting to errors. This model is dangerous because developers begin to lean on the testing phase to catch errors, rather than striving to incorporate quality into the development phase. As a rule, quality should be planned into software design, never inspected in (see the sketch after this list).

End the practice of awarding business on price alone; instead, minimize total cost by working with a single supplier. The idea here is that a relationship will foster a commitment to quality between you and the supplier that’s a bit more substantial than an invoice and a check.

Constantly strive to improve every process for planning, production, and service. Quality planning and delivery is an iterative process.

Institute training on the job. If your software developers don’t know how to develop, they’ll certainly create some lousy software. If your team doesn’t know how to do something, you must train them.

Adopt and institute leadership. The project manager must identify how to lead and motivate the project team, or the team may lead itself and remain stagnant.

Drive out fear. Are your software developers afraid of you? If so, how can they approach you with ideas for quality improvements, news on development, and flaws they’ve identified within the software? Fear does nothing to improve quality.
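One common way to act on the third point is to state the quality bar in an executable test before the code it exercises is written. The sketch below is only an illustration of that idea, not a method prescribed by the chapter; the parse_version function and its expected behavior are invented here.

# The test comes first: it records the quality bar before any code exists.
def test_parse_version():
    assert parse_version("2.10.3") == (2, 10, 3)
    assert parse_version(" 1.0.0 ") == (1, 0, 0)  # stray whitespace tolerated

# The implementation is then written to meet the bar the test states.
def parse_version(text):
    """Parse a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in text.strip().split("."))

if __name__ == "__main__":
    test_parse_version()
    print("quality bar met")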
Software Engineering

Software engineering is the discipline that studies the use of engineering methods to build and maintain effective, practical, and high-quality software. It involves programming languages, databases, software development tools, system platforms, standards, design patterns, and so on.

In modern society, software is used in many ways. Typical software includes email, embedded systems, human-machine interfaces, office suites, operating systems, compilers, databases, and games. Meanwhile, computer software is applied in almost every sector, such as industry, agriculture, banking, aviation, and government. These applications promote economic and social development, raise working efficiency, and improve the quality of life.

Software engineers are the people who create software applications; by role, software engineers can be divided into system analysts, software designers, system architects, programmers, testers, and so on. The term “programmer” is also often used loosely to refer to the various kinds of software engineers.

Origin

In view of the difficulties encountered in software development, the North Atlantic Treaty Organization (NATO) organized the first conference on software engineering in 1968. The conference put forward the term “software engineering” to define the knowledge required for software development, and suggested that software development should be treated as an engineering activity. Since software engineering was formally proposed in 1968, a large body of research results has accumulated and a great deal of technical practice has taken place; through the joint efforts of academia and industry, software engineering has gradually developed into a professional discipline.

Definition

The establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently.

The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.

The theories, methods, and tools involved in developing, managing, and updating software products.

A body of knowledge or discipline that aims to produce software of good quality, delivered on time, within budget, and satisfying users’ needs.

The practical application of scientific knowledge in the design and construction of computer programs, in the production of the accompanying documents, and in the subsequent operation and maintenance.

The use of systematic production and maintenance techniques, together with management expertise, so that software can be developed and modified within limited time and at limited cost.

The discipline of the knowledge that teams of engineers need to build large software systems.

A systematic approach to the analysis, design, implementation, and maintenance of software.

The systematic application of tools and techniques in the development of computer-based applications.

Software engineering and computer science

Whether software development is a science or an engineering discipline has long been debated. In fact, software development has characteristics of both, but that does not mean the two can be confused with each other. Many people think that software engineering is based on computer science and information science in the same way that traditional engineering disciplines are based on physics and chemistry. In the U.S., about 40% of software engineers hold a degree in computer science; elsewhere in the world the proportion is similar.
Software engineers will not necessarily use computer science knowledge every day, but they do use software engineering knowledge every day.

For example, Peter McBreen argues that software “engineering” implies a high degree of rigor and proven processes, which is not suitable for every type of software development. In his book “Software Craftsmanship: The New Imperative”, McBreen puts forward the “craftsmanship” argument: the key factor in the success of software development is the developers’ skill, not the process used to “manufacture” the software.

Software engineering and computer programming

Software engineering is present in every aspect of the wide range of software development activities. Programming typically covers the iterative process of program design and coding; it is one stage of software development. Software engineering, by contrast, seeks to provide guidance in all aspects of a software project, from the feasibility analysis to the maintenance work after the software is completed. Software engineering holds that software development is closely related to marketing activities, such as software sales, user training, and the installation of the associated hardware and software. Software engineering methodology holds that programmers should not develop in isolation from their team, and that programming cannot be divorced from the software requirements, the design, and the customers’ interests. Software engineering is the embodiment of industrialized development in the design of computer programs.

Software crisis

Software engineering is rooted in the software crisis that arose in the 1960s, 1970s, and 1980s. At that time, many software projects came to tragic ends. The development time of many projects significantly exceeded the planned schedule; some projects caused the loss of property, and some software even led to casualties. At the same time, software developers found software development increasingly difficult.

The OS/360 operating system is considered a typical case. It is still used on IBM 360-series mainframes today. This extremely complex software project, which stretched over decades, eventually produced a working system that had not even been contemplated in the original design. OS/360 was the first large software project, employing about 1,000 programmers. Fred Brooks, in his subsequent masterpiece “The Mythical Man-Month”, admitted that in his management of the project he had made a multimillion-dollar mistake.

Property losses: software errors may result in significant property damage. The explosion of the European Ariane rocket is one of the most painful lessons.

Casualties: computer software is widely used, including in industries closely tied to human life, such as hospitals, so software errors can also result in injury or death. The Therac-25 accidents are a case studied extensively in software engineering. Between June 1985 and January 1987, six known medical accidents in which the Therac-25 delivered excessive radiation doses led to deaths or severe radiation burns. In industry, errors in some embedded systems have prevented machines from operating normally and have put people in danger.

Methodology

Software engineering covers many aspects, including project management, analysis, design, programming, testing, and quality control. Software design methods can be distinguished as heavyweight and lightweight.
Heavyweight methods produce large amounts of official documentation. Well-known heavyweight development methodologies include ISO 9000, CMM, and the Unified Process (RUP).

Lightweight development processes do not demand large numbers of official documents. Well-known lightweight methods include Extreme Programming (XP) and agile processes (Agile Processes).

According to the article “The New Methodology”, heavyweight methods present a “defensive” posture. In software organizations that apply heavyweight methods, a software project manager who takes little or no part in program design cannot grasp the project’s progress from its details; out of this “fear”, the manager keeps asking the programmers to write a great deal of development documentation. The lightweight methods, by contrast, present an “aggressive” attitude, which is especially reflected in XP’s emphasis on four values: communication, simplicity, feedback, and courage. Some people hold that heavyweight methods suit large software teams (dozens of people or more), while lightweight methods suit small teams (a few people to a dozen or so). Of course, the advantages and disadvantages of heavyweight versus lightweight methods are much debated, and methods of both kinds are constantly evolving.

Some methodologists think that development should strictly follow and fully implement these methods, but not everyone has the conditions to do so. In fact, which method is used to develop software depends on many factors and is subject to environmental constraints.

Software development process

The software development process has evolved and improved along with technology. From the early waterfall (Waterfall) model, to the later spiral iterative (Spiral) development, to the recently rising agile (Agile) methodologies, each reflects a different era’s awareness and understanding of the development process in the software industry, and each suits different types of projects.

Note the important distinction between the software development process and software process improvement. Terms such as ISO 15504, ISO 9000, CMM, and CMMI belong to the framework of software process improvement; they provide a series of standards and policies to guide software organizations in improving the quality of their development processes and the capability of the organization, and they do not define any specific development process.

Development of software engineering

“Agile development” (Agile Development) is considered an important development in software engineering. It stresses that software development should be able to respond comprehensively to possible future changes and uncertainties. Agile development is considered a “lightweight” approach, and the most prestigious of the lightweight approaches is Extreme Programming (XP). Corresponding to the lightweight approaches are the heavyweight methods, which place the development process rather than people at the center; examples include CMM/PSP/TSP.

Aspect-oriented programming (Aspect Oriented Programming, AOP) is considered another important development in software engineering in recent years.
Here an aspect refers to the collection of objects and functions that together accomplish a particular piece of functionality. In this regard, AOP is related to generic programming (Generic Programming) and templates.
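As a rough illustration (not part of the original article), a cross-cutting concern such as logging can be woven around ordinary functions instead of being scattered through them. In Python the idea can be approximated with a decorator, although dedicated AOP frameworks provide much richer ways to select join points; the transfer function below is invented for the example.

import functools

def logged(func):
    """A minimal 'aspect': logging is woven around any function it decorates."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"entering {func.__name__} args={args} kwargs={kwargs}")
        result = func(*args, **kwargs)
        print(f"leaving {func.__name__} -> {result!r}")
        return result
    return wrapper

@logged
def transfer(source_account, target_account, amount):
    # The business logic knows nothing about logging; the cross-cutting
    # concern lives in one place instead of inside every method.
    return f"moved {amount} from {source_account} to {target_account}"

transfer("A-100", "B-200", 50)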