(Complete Version) Graduation Project - Software Engineering - Foreign Literature Translation


Computer Software (Java) Graduation Project: Foreign Literature Translation

Juan del Rosal 16, Madrid, Spain (e-mail: jacobo.saenz@bec.uned.es, ldelatorre@dia.uned.es, sdormido@dia.uned.es)
Mathematics Faculty, Universidad de Murcia, Campus de Espinardo, 30071 Murcia, Spain
2.3 EjsS JavaScript Mode

The Java vulnerability problem was solved in a previous version of EjsS (5.0) by using the JavaScript programming language instead of Java. Hence, with EjsS 5.0 or later, users can develop new JavaScript-based VRLs with JavaScript knowledge alone.

When running this mode, the main structure of EjsS does not change in the eyes of the user, and building an application remains very simple.
IFAC-PapersOnLine, 2015.
Francisco Esquembre, Felix J. Garcia, Luis de la Torre, Sebastian Dormido
Computer Science and Automatics Department, Computer Science School, UNED

Software Engineering (Foreign Literature Translation)


Foreign Literature Material

1. Software Engineering

Software is the sequence of instructions, written in one or more programming languages, that comprises a computer application to automate some business function. Engineering is the use of tools and techniques in problem solving. Putting the two words together, software engineering is the systematic application of tools and techniques in the development of computer-based applications.

The software engineering process describes the steps it takes to develop the system. We begin a development project with the notion that there is a problem to be solved via automation. The process is how you get from problem recognition to a working solution. A quality process is desirable because it is more likely to lead to a quality product. The process followed by a project team during the development life cycle of an application should be orderly, goal-oriented, enjoyable, and a learning experience.

Object-oriented methodology is an approach to system lifecycle development that takes a top-down view of data objects, their allowable actions, and the underlying communication requirement to define a system architecture. The data and action components are encapsulated, that is, they are combined together to form abstract data types. Encapsulation means that if I know what data I want, I also know the allowable processes against that data. Data are designed as lattice hierarchies of relationships to ensure that top-down, hierarchic inheritance and sideways relationships are accommodated. Encapsulated objects are constrained to communicate only via messages. At a minimum, messages indicate the receiver and the action requested. Messages may be more elaborate, including the sender and the data to be acted upon. (A minimal code sketch of these ideas appears at the end of this passage.)

That we try to apply engineering discipline to software development does not mean that we have all the answers about how to build applications. On the contrary, we still build systems that are not useful and thus are not used. Part of the reason for continuing problems in application development is that we are constantly trying to hit a moving target. Both the technology and the types of applications needed by businesses are constantly changing and becoming more complex. Our ability to develop and disseminate knowledge about how to successfully build systems for new technologies and new application types seriously lags behind technological and business changes.

Another reason for continuing problems in application development is that we aren't always free to do what we like, and it is hard to change habits and cultures from the old way of doing things, as well as to get users to agree with a new sequence of events or an unfamiliar format for documentation.

You might ask, then: if many organizations don't use good software engineering practices, why should I bother learning them? There are two good answers to this question. First, if you never know the right thing to do, you have no chance of ever using it. Second, organizations will frequently accept evolutionary, small steps of change instead of revolutionary, massive change. You can learn individual techniques that can be applied without complete devotion to one way of developing systems. In this way, software engineering can speed change in organizations by demonstrating how the tools and techniques enhance the quality of both the product and the process of building a system.

2. Data Base System

1. Introduction

The development of corporate databases will be one of the most important data-processing activities for the rest of the 1970s.
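As promised in the object-oriented methodology passage above, here is a minimal, hedged Java sketch of an encapsulated abstract data type. The Account class and its fields are illustrative assumptions, not taken from the text; a method call plays the role of a message naming the receiver and the requested action.

    // A hedged sketch of encapsulation: data plus its allowable actions in one unit.
    public class Account {
        private long accountNo;  // data is hidden from other objects
        private double balance;

        public Account(long accountNo, double openingBalance) {
            this.accountNo = accountNo;
            this.balance = openingBalance;
        }

        // The only allowable processes against the data.
        public void deposit(double amount) { balance += amount; }

        public double getBalance() { return balance; }

        public static void main(String[] args) {
            Account a = new Account(1001L, 50.0);
            a.deposit(25.0);                    // a "message": receiver a, action deposit, data 25.0
            System.out.println(a.getBalance()); // prints 75.0
        }
    }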
Data will be increasingly regarded as a vital corporate resource, which must be organized so as to maximize its value. In addition to the databases within an organization, a vast new demand is growing for database services, which will collect, organize, and sell data.

The files of data which computers can use are growing at a staggering rate. The growth rate in the size of computer storage is greater than the growth in the size or power of any other component in the exploding data-processing industry. The more data the computers have access to, the greater is their potential power. In all walks of life and in all areas of industry, data banks will change what it is possible for man to do. At the end of this century, historians will look back on the coming of computer data banks and their associated facilities as a step which changed the nature of the evolution of society, perhaps eventually having a greater effect on the human condition than even the invention of the printing press.

Some of the most impressive corporate growth stories of the generation are largely attributable to the explosive growth in the need for information. The vast majority of this information is not yet computerized. However, the cost of data storage hardware is dropping more rapidly than other costs in data processing. It will become cheaper to store data on computer files than to store them on paper. Not only printed information will be stored. The computer industry is improving its capability to store line drawings, data in facsimile form, photographs, human speech, and so on. In fact, any form of information other than the most intimate communications between humans can be transmitted and stored digitally.

There are two main technology developments likely to become available in the near future. First, there are electromagnetic devices that will hold much more data than disks but have much longer access times. Second, there are solid-state technologies that will give microsecond access times, but whose capacities are smaller than those of disks. Disks themselves may be increased in capacity somewhat. For the longer-term future there are a number of new technologies currently being worked on in research labs which may replace disks and may provide very large microsecond-access-time devices. A steady stream of new storage devices is thus likely to reach the marketplace over the next five years, rapidly lowering the cost of storing data.

Given the available technologies, it is likely that on-line data bases will use two or three levels of storage: one solid-state with microsecond access time, and one electromagnetic with an access time of a fraction of a second. If two, three, or four levels of storage are used, physical storage organization will become more complex, probably with paging mechanisms to move data between the levels; solid-state storage offers the possibility of parallel search operations and associative memory.

Both the quantity of data stored and the complexity of their organization are going up by leaps and bounds. The first trillion-bit on-line stores are now in use. In a few years' time, stores of this size may be common.

A particularly important consideration in data base design is to store the data so that they can be used for a wide variety of applications and so that the way they are used can be changed quickly and easily. On computer installations prior to the data base era it has been remarkably difficult to change the way data are used.
Different programmers view the data in different ways and constantly want to modify them as new needs arise. Modification, however, can set off a chain reaction of changes to existing programs and hence can be exceedingly expensive to accomplish. Consequently, data processing has tended to become frozen into its old data structures.

To achieve the flexibility of data usage that is essential in most commercial situations, two aspects of data base design are important. First, it should be possible to interrogate and search the data base without the lengthy operation of writing programs in conventional programming languages. Second, the data should be independent of the programs which use them, so that they can be added to or restructured without the programs being changed.

The work of designing a data base is becoming increasingly difficult, especially if it is to perform in an optimal fashion. There are many different ways in which data can be structured, and different types of data need to be organized in different ways. Different data have different characteristics, which ought to affect the data organization, and different users have fundamentally different requirements. So we need a kind of data base management system (DBMS) to manage data.

Data base design using the entity-relationship model begins with a list of the entity types involved and the relationships among them. The philosophy of assuming that the designer knows what the entity types are at the outset is significantly different from the philosophy behind the normalization-based approach. The entity-relationship (E-R) approach uses entity-relationship diagrams. The E-R approach requires several steps to produce a structure that is acceptable to the particular DBMS. These steps are:

(1) Data analysis.
(2) Producing and optimizing the entity model.
(3) Logical schema development.
(4) Physical data base design process.

Developing a data base structure from user requirements is called data base design. Most practitioners agree that there are two separate phases to the data base design process: the design of a logical database structure that is processable by the data base management system (DBMS) and describes the user's view of data, and the selection of a physical structure such as the indexed sequential or direct access method of the intended DBMS.

Current data base design technology shows many residual effects of its outgrowth from single-record file design methods. File design is primarily application-program dependent, since the data have been defined and structured in terms of the individual applications that use them. The advent of the DBMS revised the emphasis in data and program design approaches.

There are many interlocking questions in the design of data-base systems, and many types of technique that one can use in answer to them; so many, in fact, that one often sees valuable approaches being overlooked in the design and vital questions not being asked.

There will soon be new storage devices, new software techniques, and new types of data bases. The details will change, but most of the principles will remain.
Therefore, the reader should concentrate on the principles.

2. Data Base System

The conception used for describing files and data bases has varied substantially in the same organization.

A data base may be defined as a collection of interrelated data stored together, with as little redundancy as possible, to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use the data, and a common and controlled approach is used in adding new data and in modifying and retrieving existing data within the data base. One system is said to contain a collection of data bases if they are entirely separate in structure.

A data base may be designed for batch processing, real-time processing, or in-line processing. A data base system involves application programs, a DBMS, and the data base itself.

One of the most important characteristics of most data bases is that they will constantly need to change and grow. Easy restructuring of the data base must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a data base can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation.

The term data independence is often quoted as being one of the main attributes of a data base. It implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, data-base organization is concerned with the representation of relationships between data items and records, as well as how and where the data are stored. A data base used for many applications can have multiple interconnections between the data items about which we may wish to record; it can describe the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes, and one attribute has a special significance in that it identifies the entity.

An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as that key used to uniquely identify one record or tuple. The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm; a small sketch of this idea follows this passage.

If the function of a data base were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. The data can be described at either the logical or the physical level, and the two descriptions are different.

The logical data base description is referred to as a schema. A schema is a chart of the types of data that are used. It gives the names of the entities and attributes, and specifies the relations between them. It is a framework into which the values of the data items can be fitted.

We must distinguish between a record type and an instance of the record. When we talk about a "personnel record", this is really a record type; there are no data values associated with it. The term schema is used to mean an overall chart of all of the data-item types and record types stored in a data base.
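A hedged Java sketch of the key-based addressing just described; the record type and key are illustrative only (Java 16+ for the record syntax). An in-memory map stands in for the index that maps a primary key to a stored record.

    import java.util.HashMap;
    import java.util.Map;

    public class PrimaryKeyIndex {
        // A record type: the employee number is the primary key.
        record Employee(int empNo, String name) {}

        public static void main(String[] args) {
            // The "index": primary key -> record, standing in for an
            // index or addressing algorithm on a storage device.
            Map<Integer, Employee> index = new HashMap<>();
            index.put(42, new Employee(42, "C. Bachman"));

            // Locating a record by its primary key needs no file scan.
            Employee e = index.get(42);
            System.out.println(e.name()); // prints: C. Bachman
        }
    }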
Many different subschemas can be derived from one schema. The schema and the subschemas are both used by the data-base management system, the primary function of which is to serve the application programs by executing their data operations.

A DBMS will usually be handling multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a "data model". The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small data bases, many data bases involve millions of bytes, and an inefficient implementation can be disastrous. We will discuss the data models in the following.

3. Three Data Models

Logical schemas are defined as data models with the underlying structure of particular database management systems superimposed on them. At the present time, there are three main underlying structures for database management systems: relational, hierarchical, and network.

The hierarchical and network structures have been used for DBMSs since the 1960s. The relational structure was introduced in the early 1970s.

In the relational model, the entities and their relationships are represented by two-dimensional tables. Every table represents an entity and is made up of rows and columns. Relationships between entities are represented by common columns containing identical values from a domain or range of possible values.

The end user is presented with a simple data model. His or her requests are formulated in terms of the information content and do not reflect any complexities due to system-oriented aspects. A relational data model is what the user sees, but it is not necessarily what will be implemented physically. The relational data model removes the details of storage structure and access strategy from the user interface. The model provides a relatively higher degree of data independence. To be able to make use of this property of the relational data model, however, the design of the relations must be complete and accurate.

Although some DBMSs based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with the required efficiency on a large scale. It appears today that technological improvements in providing faster and more reliable hardware may answer the question positively.

The hierarchical data model is based on a tree-like structure made up of nodes and branches. A node is a collection of data attributes describing the entity at that point. The highest node of the hierarchical tree structure is called the root. The nodes at succeeding lower levels are called children. A hierarchical data model always starts with a root node. Every node consists of one or more attributes describing the entity at that node. Dependent nodes can follow at the succeeding levels. The node in the preceding level becomes the parent node of the new dependent nodes. A parent node can have one child node as a dependent, or many children nodes; a brief sketch of such a node follows.
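A hedged Java sketch of a node in the hierarchical model just described; the field names are illustrative. Each node carries the attributes of its entity, points back to one parent, and owns any number of children.

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // One node of a hierarchical (tree-structured) data model.
    class HierNode {
        Map<String, Object> attributes = new LinkedHashMap<>(); // describes the entity at this node
        HierNode parent;                                        // null for the root
        List<HierNode> children = new ArrayList<>();

        HierNode addChild(HierNode child) {
            child.parent = this;  // a child is reachable only through its parent
            children.add(child);
            return child;
        }
    }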
The major advantage of the hierarchical data model is the existence of proven database management systems that use the hierarchical data model as their basic structure. There is a reduction of data dependency, but any child node is accessible only through its parent node, and the many-to-many relationship can be implemented only in a clumsy way. This often results in redundancy in the stored data.

The network data model interconnects the entities of an enterprise into a network. In the network data model a data base consists of a number of areas. An area contains records. In turn, a record may consist of fields. A set, which is a grouping of records, may reside in an area or span a number of areas. A set type is based on the owner record type and the member record type. The many-to-many relationship, which occurs quite frequently in real life, can be implemented easily. The network data model is very complex, however; the application programmer must be familiar with the logical structure of the data base.

4. Logical Design and Physical Design

Logical design of databases is mainly concerned with superimposing the constructs of the data base management system on the logical data model. There are three main models, as mentioned above: hierarchical, relational, and network.

The physical model is a framework of the database to be stored on physical devices. The model must be constructed with every regard given to the performance of the resulting database. One should carry out an analysis of the physical model with average frequencies of occurrence of the groupings of the data elements, with expected space estimates, and with respect to time estimates for retrieving and maintaining the data.

The database designer may find it necessary to have multiple entry points into a database, or to access a particular segment type with more than one key. To provide this type of access, it may be necessary to invert the segment on the keys. The physical designer must have expertise in the DBMS functions, an understanding of the characteristics of direct access devices, and knowledge of the applications.

Many data bases have links between one record and another, called pointers. A pointer is a field in one record which indicates where a second record is located on the storage devices.

Records exist on storage devices in a given physical sequence. This sequencing may be employed for some purpose. The most common purpose is that records are needed in a given sequence by certain data-processing operations, and so they are stored in that sequence. Different applications may need records in different sequences.

The most common method of ordering records is to have them in sequence by a key, that key which is most commonly used for addressing them. An index is required to find any record without a lengthy search of the file. If the data records are laid out sequentially by key, the index for that key can be much smaller than if they are nonsequential.

Hashing has been used for addressing random-access storage since it first came into existence in the mid-1950s, but nobody had the temerity to use the word hashing until 1968. Many systems analysts have avoided the use of hashing in the suspicion that it is complicated. In fact, it is simple to use and has two important advantages over indexing. First, it finds most records with only one seek; second, insertions and deletions can be handled without added complexity. A small sketch of hash-based addressing follows.
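A hedged Java sketch of the hash-addressing idea above; the bucket count and record layout are illustrative. The key is hashed straight to its home bucket (one "seek"), and inserting or deleting touches only that bucket.

    import java.util.ArrayList;
    import java.util.List;

    public class HashedFile {
        private static final int BUCKETS = 8;  // illustrative bucket count
        private final List<List<String>> buckets = new ArrayList<>();

        public HashedFile() {
            for (int i = 0; i < BUCKETS; i++) buckets.add(new ArrayList<>());
        }

        private int home(String key) {         // the hash gives the record's home bucket
            return Math.abs(key.hashCode()) % BUCKETS;
        }

        public void insert(String key)  { buckets.get(home(key)).add(key); }
        public boolean find(String key) { return buckets.get(home(key)).contains(key); } // one "seek"
        public void delete(String key)  { buckets.get(home(key)).remove(key); }
    }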
Indexing, however, can be used with a file which is sequential by prime key, and this is an overriding advantage for some batch-processing applications.

Many data-base systems also use chains to interconnect records. A chain refers to a group of records scattered within the files and interconnected by a sequence of pointers. The software that is used to retrieve the chained records will make them appear to the application programmer as a contiguous logical file.

The primary disadvantage of chained records is that many read operations are needed in order to follow lengthy chains. Sometimes this does not matter, because the records have to be read anyway. In most search operations, however, the chains have to be followed through records which would not otherwise be read. In some file organizations the chains can be contained within blocked physical records so that excessive reads do not occur.

Rings have been used in many file organizations. They are used to eliminate redundancy. When a ring or a chain is entered at a point some distance from its head, it may be desirable to obtain the information at the head quickly without stepping through all the intervening links.

5. Data Description Languages

It is necessary for both the programmers and the data administrator to be able to describe their data precisely; they do so by means of data description languages. A data description language is the means of declaring to the data-base management system what data structures will be used.

A data description language giving a logical data description should perform the following functions:

It should give a unique name to each data-item type, file type, data base, and other data subdivision.
It should identify the types of data subdivision, such as data item, segment, record, and base file.
It may define the type of encoding the program uses in the data items (binary, character, bit string, etc.).
It may define the length of the data items and the range of the values that a data item can assume.
It may specify the sequence of records in a file or the sequence of groups of records in the data base.
It may specify means of checking for errors in the data.
It may specify privacy locks for preventing unauthorized reading or modification of the data. These may operate at the data-item, segment, record, file, or data-base level and, if necessary, may be extended to the contents (values) of individual data items. The authorization may, on the other hand, be separately defined. It is more subject to change than the data structures, and changes in authorization procedures should not force changes in application programs.

A logical data description should not specify addressing, indexing, or searching techniques, or specify the placement of data on the storage units, because these topics are in the domain of physical, not logical, organization. It may give an indication of how the data will be used, or of searching requirements, so that the physical techniques can be selected optimally, but such indications should not be logically limiting.

Most DBMSs have their own languages for defining the schemas that are used. In most cases these data description languages are different from other programming languages, because other programming languages do not have the capability to define the variety of relationships that may exist in the schemas.

Software Engineering Graduation Thesis: Literature Translation with Chinese-English Comparison


Student Graduation Project (Thesis) Foreign Literature Translation
Student name:    Student ID:    Major: Software Engineering
Translation title (Chinese and English): Qt Creator Whitepaper
Translation source: Qt network
Supervisor's review signature:

Translated text:

Qt Creator Whitepaper

Qt Creator is a complete integrated development environment (IDE) for creating applications with the Qt application framework.

Qt is designed for developing applications and user interfaces once and deploying them across multiple desktop and mobile operating systems.

This paper provides an introduction to Qt Creator and to the features it offers Qt developers across the application development life cycle.

Introduction to Qt Creator

One of the main advantages of Qt Creator is that it allows a development team to share a project across different development platforms (Microsoft Windows, Mac OS X, and Linux) with common tools for development and debugging.

The main goal of Qt Creator is to meet the development needs of Qt developers who are looking for simplicity, usability, productivity, extensibility, and openness, while aiming to lower the barrier of entry for newcomers to Qt.

The key features of Qt Creator allow developers to accomplish the following tasks:

Get started with Qt application development quickly and easily, with project wizards and fast access to recent projects and sessions.

Design user interfaces for Qt widget-based applications with the integrated editor, Qt Designer.

Develop applications with the advanced C++ code editor, which provides powerful new features for completing code snippets, refactoring code, and viewing the outline of files (that is, the symbol hierarchy of a file).

Build, run, and deploy Qt projects that target multiple desktop and mobile platforms, such as Microsoft Windows, Mac OS X, Linux, Nokia's MeeGo, and Maemo.

Debug with the GNU and CDB debuggers, using a graphical user interface with added awareness of the Qt class structure.

Use code analysis tools to check for memory management issues in your applications.

Deploy applications to MeeGo mobile devices, and create application installation packages for Symbian and Maemo devices that can be published in the Ovi Store and other channels.

Easily access information with the integrated, context-sensitive Qt Help system.

Software Engineering: Foreign Literature Translation with Chinese-English Comparison


Foreign Literature Translation with Chinese-English Comparison (the document contains the English original and the Chinese translation)

Application Fundamentals

Android applications are written in the Java programming language. The compiled Java code — along with any data and resource files required by the application — is bundled by the aapt tool into an Android package, an archive file marked by an .apk suffix. This file is the vehicle for distributing the application and installing it on mobile devices; it's the file users download to their devices. All the code in a single .apk file is considered to be one application.

In many ways, each Android application lives in its own world:

1. By default, every application runs in its own Linux process. Android starts the process when any of the application's code needs to be executed, and shuts down the process when it's no longer needed and system resources are required by other applications.
2. Each process has its own virtual machine (VM), so application code runs in isolation from the code of all other applications.
3. By default, each application is assigned a unique Linux user ID. Permissions are set so that the application's files are visible only to that user and only to the application itself — although there are ways to export them to other applications as well.

It's possible to arrange for two applications to share the same user ID, in which case they will be able to see each other's files. To conserve system resources, applications with the same ID can also arrange to run in the same Linux process, sharing the same VM.

Application Components

A central feature of Android is that one application can make use of elements of other applications (provided those applications permit it). For example, if your application needs to display a scrolling list of images and another application has developed a suitable scroller and made it available to others, you can call upon that scroller to do the work, rather than develop your own. Your application doesn't incorporate the code of the other application or link to it. Rather, it simply starts up that piece of the other application when the need arises.

For this to work, the system must be able to start an application process when any part of it is needed, and instantiate the Java objects for that part. Therefore, unlike applications on most other systems, Android applications don't have a single entry point for everything in the application (no main() function, for example). Rather, they have essential components that the system can instantiate and run as needed. There are four types of components:

Activities

An activity presents a visual user interface for one focused endeavor the user can undertake. For example, an activity might present a list of menu items users can choose from, or it might display photographs along with their captions. A text messaging application might have one activity that shows a list of contacts to send messages to, a second activity to write the message to the chosen contact, and other activities to review old messages or change settings. Though they work together to form a cohesive user interface, each activity is independent of the others. Each one is implemented as a subclass of the Activity base class.

An application might consist of just one activity or, like the text messaging application just mentioned, it may contain several. What the activities are, and how many there are, depends, of course, on the application and its design. Typically, one of the activities is marked as the first one that should be presented to the user when the application is launched.
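A minimal, hedged sketch of an Activity subclass as described above; the class name and the R.layout.contact_list resource are hypothetical, and R is the resource class an Android build generates for the project.

    import android.app.Activity;
    import android.os.Bundle;

    // Hypothetical activity showing a list of contacts to message.
    public class ContactListActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // R.layout.contact_list is an assumed layout resource.
            setContentView(R.layout.contact_list);
        }
    }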
Moving from one activity to another is accomplished by having the current activity start the next one.

Each activity is given a default window to draw in. Typically, the window fills the screen, but it might be smaller than the screen and float on top of other windows. An activity can also make use of additional windows — for example, a pop-up dialog that calls for a user response in the midst of the activity, or a window that presents users with vital information when they select a particular item on-screen.

The visual content of the window is provided by a hierarchy of views — objects derived from the base View class. Each view controls a particular rectangular space within the window. Parent views contain and organize the layout of their children. Leaf views (those at the bottom of the hierarchy) draw in the rectangles they control and respond to user actions directed at that space. Thus, views are where the activity's interaction with the user takes place. For example, a view might display a small image and initiate an action when the user taps that image. Android has a number of ready-made views that you can use — including buttons, text fields, scroll bars, menu items, check boxes, and more.

A view hierarchy is placed within an activity's window by the Activity.setContentView() method. The content view is the View object at the root of the hierarchy. (See the separate User Interface document for more information on views and the hierarchy.)

Services

A service doesn't have a visual user interface, but rather runs in the background for an indefinite period of time. For example, a service might play background music as the user attends to other matters, or it might fetch data over the network or calculate something and provide the result to activities that need it. Each service extends the Service base class.

A prime example is a media player playing songs from a play list. The player application would probably have one or more activities that allow the user to choose songs and start playing them. However, the music playback itself would not be handled by an activity because users will expect the music to keep playing even after they leave the player and begin something different. To keep the music going, the media player activity could start a service to run in the background. The system would then keep the music playback service running even after the activity that started it leaves the screen.

It's possible to connect to (bind to) an ongoing service (and start the service if it's not already running). While connected, you can communicate with the service through an interface that the service exposes. For the music service, this interface might allow users to pause, rewind, stop, and restart the playback.

Like activities and the other components, services run in the main thread of the application process. So that they won't block other components or the user interface, they often spawn another thread for time-consuming tasks (like music playback). See Processes and Threads, later.

Broadcast receivers

A broadcast receiver is a component that does nothing but receive and react to broadcast announcements. Many broadcasts originate in system code — for example, announcements that the timezone has changed, that the battery is low, that a picture has been taken, or that the user changed a language preference.
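Before continuing with broadcast receivers, a minimal, hedged sketch of a Service subclass like the music service described above; the class name is hypothetical, playback logic is omitted, and the onStart() callback is the one this era of the Android API (and the text below) refers to.

    import android.app.Service;
    import android.content.Intent;
    import android.os.IBinder;

    // Hypothetical background music service.
    public class MusicService extends Service {
        @Override
        public void onStart(Intent intent, int startId) {
            super.onStart(intent, startId);
            // Begin playback here; the service runs with no user interface.
        }

        @Override
        public IBinder onBind(Intent intent) {
            return null; // no client binding in this minimal sketch
        }
    }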
Applications can also initiate broadcasts — for example, to let other applications know that some data has been downloaded to the device and is available for them to use. An application can have any number of broadcast receivers to respond to any announcements it considers important. All receivers extend the BroadcastReceiver base class.

Broadcast receivers do not display a user interface. However, they may start an activity in response to the information they receive, or they may use the NotificationManager to alert the user. Notifications can get the user's attention in various ways — flashing the backlight, vibrating the device, playing a sound, and so on. They typically place a persistent icon in the status bar, which users can open to get the message.

Content providers

A content provider makes a specific set of the application's data available to other applications. The data can be stored in the file system, in an SQLite database, or in any other manner that makes sense. The content provider extends the ContentProvider base class to implement a standard set of methods that enable other applications to retrieve and store data of the type it controls. However, applications do not call these methods directly. Rather, they use a ContentResolver object and call its methods instead. A ContentResolver can talk to any content provider; it cooperates with the provider to manage any interprocess communication that's involved. See the separate Content Providers document for more information on using content providers.

Whenever there's a request that should be handled by a particular component, Android makes sure that the application process of the component is running, starting it if necessary, and that an appropriate instance of the component is available, creating the instance if necessary.

Activating components: intents

Content providers are activated when they're targeted by a request from a ContentResolver. The other three components — activities, services, and broadcast receivers — are activated by asynchronous messages called intents. An intent is an Intent object that holds the content of the message. For activities and services, it names the action being requested and specifies the URI of the data to act on, among other things. For example, it might convey a request for an activity to present an image to the user or let the user edit some text. For broadcast receivers, the Intent object names the action being announced. For example, it might announce to interested parties that the camera button has been pressed.

There are separate methods for activating each type of component:

1. An activity is launched (or given something new to do) by passing an Intent object to Context.startActivity() or Activity.startActivityForResult(). The responding activity can look at the initial intent that caused it to be launched by calling its getIntent() method. Android calls the activity's onNewIntent() method to pass it any subsequent intents. One activity often starts the next one. If it expects a result back from the activity it's starting, it calls startActivityForResult() instead of startActivity(). For example, if it starts an activity that lets the user pick a photo, it might expect to be returned the chosen photo. The result is returned in an Intent object that's passed to the calling activity's onActivityResult() method.

2. A service is started (or new instructions are given to an ongoing service) by passing an Intent object to Context.startService().
Android calls the service's onStart() method and passes it the Intent object. Similarly, an intent can be passed to Context.bindService() to establish an ongoing connection between the calling component and a target service. The service receives the Intent object in an onBind() call. (If the service is not already running, bindService() can optionally start it.) For example, an activity might establish a connection with the music playback service mentioned earlier so that it can provide the user with the means (a user interface) for controlling the playback. The activity would call bindService() to set up that connection, and then call methods defined by the service to affect the playback. A later section, Remote procedure calls, has more details about binding to a service.

3. An application can initiate a broadcast by passing an Intent object to methods like Context.sendBroadcast(), Context.sendOrderedBroadcast(), and Context.sendStickyBroadcast() in any of their variations. Android delivers the intent to all interested broadcast receivers by calling their onReceive() methods. For more on intent messages, see the separate article, Intents and Intent Filters.

Shutting down components

A content provider is active only while it's responding to a request from a ContentResolver. And a broadcast receiver is active only while it's responding to a broadcast message. So there's no need to explicitly shut down these components. Activities, on the other hand, provide the user interface. They're in a long-running conversation with the user and may remain active, even when idle, as long as the conversation continues. Similarly, services may also remain running for a long time. So Android has methods to shut down activities and services in an orderly way:

1. An activity can be shut down by calling its finish() method. One activity can shut down another activity (one it started with startActivityForResult()) by calling finishActivity().
2. A service can be stopped by calling its stopSelf() method, or by calling Context.stopService().

Components might also be shut down by the system when they are no longer being used or when Android must reclaim memory for more active components. A later section, Component Lifecycles, discusses this possibility and its ramifications in more detail.

The manifest file

Before Android can start an application component, it must learn that the component exists. Therefore, applications declare their components in a manifest file that's bundled into the Android package, the .apk file that also holds the application's code, files, and resources.

The manifest is a structured XML file and is always named AndroidManifest.xml for all applications. It does a number of things in addition to declaring the application's components, such as naming any libraries the application needs to be linked against (besides the default Android library) and identifying any permissions the application expects to be granted.

But the principal task of the manifest is to inform Android about the application's components. For example, an activity might be declared as shown in the snippet after this passage. The name attribute of the <activity> element names the Activity subclass that implements the activity. The icon and label attributes point to resource files containing an icon and label that can be displayed to users to represent the activity.

The other components are declared in a similar way — <service> elements for services, <receiver> elements for broadcast receivers, and <provider> elements for content providers.
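A hedged reconstruction of the activity declaration referenced above, following the Android documentation from which this passage is excerpted; the package, class, and resource names are the documentation's own placeholders, and the ". . ." elisions stand for attributes and elements not shown.

    <?xml version="1.0" encoding="utf-8"?>
    <manifest . . . >
        <application . . . >
            <activity android:name="com.example.project.FreneticActivity"
                      android:icon="@drawable/small_pic.png"
                      android:label="@string/freneticLabel"
                      . . .  >
            </activity>
            . . .
        </application>
    </manifest>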
Activities, services, and content providers that are not declared in the manifest are not visible to the system and are consequently never run. However, broadcast receivers can either be declared in the manifest, or they can be created dynamically in code (as BroadcastReceiver objects) and registered with the system by calling Context.registerReceiver(). For more on how to structure a manifest file for your application, see The AndroidManifest.xml File.

Intent filters

An Intent object can explicitly name a target component. If it does, Android finds that component (based on the declarations in the manifest file) and activates it. But if a target is not explicitly named, Android must locate the best component to respond to the intent. It does so by comparing the Intent object to the intent filters of potential targets. A component's intent filters inform Android of the kinds of intents the component is able to handle. Like other essential information about the component, they're declared in the manifest file. Here's an extension of the previous example that adds two intent filters to the activity, shown in the snippet after this passage.

The first filter in the example — the combination of the action "android.intent.action.MAIN" and the category "android.intent.category.LAUNCHER" — is a common one. It marks the activity as one that should be represented in the application launcher, the screen listing applications users can launch on the device. In other words, the activity is the entry point for the application, the initial one users would see when they choose the application in the launcher. The second filter declares an action that the activity can perform on a particular type of data.

A component can have any number of intent filters, each one declaring a different set of capabilities. If it doesn't have any filters, it can be activated only by intents that explicitly name the component as the target.

For a broadcast receiver that's created and registered in code, the intent filter is instantiated directly as an IntentFilter object. All other filters are set up in the manifest. For more on intent filters, see a separate document, Intents and Intent Filters.
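A hedged reconstruction of the two-filter declaration referenced above, again following the Android documentation this passage excerpts; names are the documentation's placeholders.

    <?xml version="1.0" encoding="utf-8"?>
    <manifest . . . >
        <application . . . >
            <activity android:name="com.example.project.FreneticActivity"
                      android:icon="@drawable/small_pic.png"
                      android:label="@string/freneticLabel"
                      . . .  >
                <intent-filter . . . >
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
                <intent-filter . . . >
                    <action android:name="android.intent.action.VIEW" />
                    <data android:mimeType="video/mpeg" android:scheme="http" . . . />
                </intent-filter>
            </activity>
            . . .
        </application>
    </manifest>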

Software Engineering Undergraduate Graduation: Foreign Literature Translation Materials


Undergraduate Graduation Project Foreign Literature Translation (school code: 10128, January 2015)

The Test Library Management System Framework Based on SSH

Application systems in small or medium-sized enterprises are characterized by greater flexibility, safety, and a high performance-price ratio. The traditional J2EE framework cannot adapt to these needs, but systems built on SSH (Struts + Spring + Hibernate) technology can satisfy them better. This paper analyses some integration theory and key technologies of SSH, and according to this integration constructs a lightweight WEB framework that integrates the three kinds of technology, forming a lightweight WEB framework based on SSH that has gained good effects in practical applications.

Introduction

The J2EE platform, generally used in large enterprise applications, can solve the application's reliability, safety, and stability well, but its weakness is the high price and the long construction cycle. For small or medium enterprise applications, the replacement approach is a lightweight WEB system framework, including the more commonly used methods based on Struts and Hibernate. With the wide application of Spring, the combination of the three technologies may be a better choice for a lightweight WEB framework. It uses a layered structure and provides a good integrated framework for Web applications at all levels, minimizing interlayer coupling and increasing the efficiency of development. This framework can solve a lot of problems, with good maintainability and scalability. It achieves the separation of user interface and business logic, the separation of business logic and database operation, and correct procedure control logic. This paper studies the technology and principles of Struts, Spring, and Hibernate, presenting a proven lightweight WEB application framework for enterprises.

Hierarchical Web Mechanism

A hierarchical Web framework includes the user presentation layer, business logic layer, data persistence layer, expansion layer, and so on; each layer has a different function, and together they complete the whole application. The whole system is divided into logic modules that are relatively independent of one another, and each module can be implemented according to a different design. This enables parallel development of the system, rapid integration, good maintainability, and scalability.

Struts MVC Framework

To ensure reuse and efficiency in the development process, building a Web application with J2EE technology requires selecting a system framework with good performance. Only in this way can we avoid wasting lots of time adjusting configuration and achieve application development efficiently and quickly. So, programmers in the course of practice arrived at some successful development patterns which proved practical, such as MVC and O/R mapping; many technologies, including the Struts and Hibernate frameworks, realize these patterns. However, the Struts framework only settles the separation problem between the view layer and the business logic and control layers; it does not provide flexible support for complex data persistence. On the contrary, the Hibernate framework offers powerful and flexible support for complex data persistence. Therefore, how to integrate the two frameworks and obtain a flexible, low-coupling solution that is easy to maintain for an information system is a research task which engineering staff are studying constantly.

Model-View-Controller (MVC) is a popular design pattern.
It divides the interactive system into three components, and each of them specializes in one task. The model contains the application data and manages the core functionality. The visual display of the model and the feedback to the users are managed by the view. The controller not only interprets the inputs from the user, but also dominates the model and the view to change appropriately. MVC separates the system functionality from the system interface so as to enhance the system's scalability and maintainability. Struts is a typical MVC framework, and it contains the three aforementioned components. The model level is composed of JavaBean and EJB components. The controller is realized by Action and ActionServlet, and the view layer consists of JSP files. The central controller controls the action execution: it receives a request and redirects it to the appropriate module controller. Subsequently, the module controller processes the request and returns results to the central controller using a JavaBean object, which stores any object to be presented in the view layer by including an indication of the module views that must be presented. The central controller redirects the returned JavaBean object to the main view, which displays its information.

Spring Framework Technology

Spring is a lightweight J2EE application development framework, which uses the model of Inversion of Control (IoC) to separate the actual application from the configuration and dependency regulations of the application. Committed to solutions at all levels of a J2EE application, Spring is not attempting to replace existing frameworks, but rather to "weld" the objects of the J2EE application at all levels together through POJO management. In addition, developers are free to choose some or all of the Spring framework, since the Spring modules are not totally dependent on one another. As a major business-level detail, Spring employs the idea of delayed injection to assemble code for the sake of improving the scalability and flexibility of the built systems. Thus, the systems achieve centralized business processing and a reduction of duplicated code through the Spring AOP module.

Hibernate Persistence Framework

Hibernate is a kind of open-source framework with DAO design patterns to achieve mapping (O/R mapping) between objects and a relational database.

During Web system development, the traditional approach interacts directly with the database by JDBC. However, this method has not only a heavy workload but also complex SQL codes which need revising whenever the business logic changes slightly, so both developing and maintaining the system are inconvenient. Considering the large difference between the object-oriented relations of Java and the structure of a relational database, it is necessary to introduce a direct mapping mechanism between objects and the database. This kind of mapping should use configuration files as far as possible, so that the mapping files rather than the Java source codes will need modifying when the business logic changes in the future. Therefore, the O/R mapping pattern emerged, and Hibernate is one of the most outstanding realizations of this architecture.

It encapsulates JDBC in a lightweight way, enabling Java programmers to operate a relational database with object-oriented programming thinking. It is an implementation technology in the persistence layer. Compared to other persistence-layer technologies such as JDBC, EJB, and JDO, Hibernate is easy to grasp and more in line with object-oriented programming thinking.
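A minimal, hedged sketch of the O/R mapping idea just described, using standard Hibernate/JPA annotations; the Question entity and its fields are hypothetical examples for a test-library system, not taken from the paper.

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    // Hypothetical persistent class: one row of a QUESTION table maps to one object.
    @Entity
    public class Question {
        @Id
        @GeneratedValue
        private Long id;      // primary key column

        private String text;  // maps to a TEXT column by convention

        public Long getId() { return id; }
        public String getText() { return text; }
        public void setText(String text) { this.text = text; }
    }

Saving an instance then goes through the Session API (for example, session.save(question)) instead of hand-written JDBC and SQL, which is exactly the workload reduction the paragraph above describes.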
Hibernate owns a query language (HQL), which is fully object-oriented. The basic structure of its application is shown in Figure 6.1.

Hibernate is a data persistence framework, and its core technology is object/relational database mapping (ORM). Hibernate is generally considered a bridge between Java applications and the relational database, owing to providing durable data services for applications and allowing developers to use an object-oriented approach to the management and manipulation of the relational database. Furthermore, it furnishes an object-oriented query language, HQL.

Responsible for the mapping between Java classes and the relational database, Hibernate is essentially a middleware providing database services. It supplies durable data services for applications by utilizing databases and several profiles, such as hibernate.properties and XML mapping files.

Web Services Technologies

The introduction of annotations into Java EE 5 makes it simple to create sophisticated Web service endpoints and clients with less code and a shorter learning curve than was possible with earlier Java EE versions. Annotations — first introduced in Java SE 5 — are modifiers you can add to your code as metadata. They don't affect program semantics directly, but the compiler, development tools, and runtime libraries can process them to produce additional Java language source files, XML documents, or other artifacts and behavior that augment the code containing the annotations (see Resources). Later in the article, you'll see how you can easily turn a regular Java class into a Web service by adding simple annotations.

Web Application Technologies

Java EE 5 welcomes two major pieces of front-end technology — JSF and JSTL — into the specification to join the existing JavaServer Pages and Servlet specifications. JSF is a set of APIs that enable a component-based approach to user-interface development. JSTL is a set of tag libraries that support embedding procedural logic, access to JavaBeans, SQL commands, localized formatting instructions, and XML processing in JSPs. The most recent releases of JSF, JSTL, and JSP support a unified expression language (EL) that allows these technologies to integrate more easily (see Resources).

The cornerstone of Web services support in Java EE 5 is JAX-WS 2.0, which is a follow-on to JAX-RPC 1.1. Both of these technologies let you create RESTful and SOAP-based Web services without dealing directly with the tedium of XML processing and data binding inherent to Web services. Developers are free to continue using JAX-RPC (which is still required of Java EE 5 containers), but migrating to JAX-WS is strongly recommended. Newcomers to Java Web services might as well skip JAX-RPC and head right for JAX-WS. That said, it's good to know that both of them support SOAP 1.1 over HTTP 1.1 and so are fully compatible: a JAX-WS Web services client can access a JAX-RPC Web services endpoint, and vice versa.

The advantages of JAX-WS over JAX-RPC are compelling. JAX-WS:

• Supports the SOAP 1.2 standard (in addition to SOAP 1.1).
• Supports XML over HTTP. You can bypass SOAP if you wish. (See the article "Use XML directly over HTTP for Web services (where appropriate)" for more information.)
• Uses the Java Architecture for XML Binding (JAXB) for its data-mapping model.
JAXB has complete support for XML schema and better performance (more on that in a moment).
• Introduces a dynamic programming model for both server and client. The client model supports both a message-oriented and an asynchronous approach.
• Supports Message Transmission Optimization Mechanism (MTOM), a W3C recommendation for optimizing the transmission and format of a SOAP message.
• Upgrades Web services interoperability (WS-I) support. (It supports Basic Profile 1.1; JAX-RPC supports only Basic Profile 1.0.)
• Upgrades SOAP attachment support. (It uses the SOAP with Attachments API for Java [SAAJ] 1.3; JAX-RPC supports only SAAJ 1.2.)

You can learn more about the differences by reading the article "JAX-RPC versus JAX-WS."

The wsimport tool in JAX-WS automatically handles many of the mundane details of Web service development and integrates easily into a build process in a cross-platform manner, freeing you to focus on the application logic that implements or uses a service. It generates artifacts such as services, service endpoint interfaces (SEIs), asynchronous response code, exceptions based on WSDL faults, and Java classes bound to schema types by JAXB.

JAX-WS also enables high-performing Web services; a hedged sketch of an annotated endpoint follows this section. See Resources for a link to an article ("Implementing High Performance Web Services Using JAX-WS 2.0") presenting a benchmark study of equivalent Web service implementations based on the new JAX-WS stack (which uses two other Web services features in Java EE 5 — JAXB and StAX) and a JAX-RPC stack available in J2EE 1.4. The study found 40% to 1000% performance increases with JAX-WS in various functional areas under different loads.

Conclusion

Each framework has its advantages and disadvantages. The lightweight J2EE structure integrates Struts, Hibernate, and Spring technology, making full use of the powerful data processing function of Struts, the management flexibility of Spring, and the maturity of Hibernate. Based on practice, it puts forward an open-source solution suitable for small or medium-sized enterprise applications. Application systems developed on this architecture have loose interlayer coupling, distinct structure, short development cycles, and good maintainability. In addition, combined with commercial project development, the solution has achieved good effect. The lightweight framework makes parallel development and maintenance of commercial systems convenient, and it can be carried over to the development of business systems in other industries.

Through research and practice, we can easily find that the Struts/Spring/Hibernate framework utilizes the maturity of Struts in the presentation layer, the flexibility of Spring in business management, and the convenience of Hibernate in the persistence layer; the three kinds of framework are integrated into a whole so that development and maintenance become more convenient and handy. This kind of approach will also play a key role if applied to other business systems. Of course, how to optimize system performance, enhance the user's access speed, and improve the security of the system framework are all works that the author needs to do in the future.
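As referenced in the Web-services discussion above, a minimal, hedged sketch of a JAX-WS annotated endpoint; the class and method are hypothetical, but the annotations and the Endpoint publisher are the standard javax.jws / javax.xml.ws APIs the text discusses.

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // Hypothetical endpoint: @WebService turns a plain Java class
    // into a SOAP Web service without hand-written XML plumbing.
    @WebService
    public class QuestionBankService {
        @WebMethod
        public int countQuestions() {
            return 42; // placeholder business logic
        }

        public static void main(String[] args) {
            // Publishes the endpoint on a local URL (Java SE 6+ bundles a light HTTP server).
            Endpoint.publish("http://localhost:8080/questions", new QuestionBankService());
        }
    }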

(Complete Version) Software Engineering Major: Graduation Project Foreign Literature Translation


(June 2013)

A HISTORICAL PERSPECTIVE

From the earliest days of computers, storing and manipulating data has been a major application focus. The first general-purpose DBMS was designed by Charles Bachman at General Electric in the early 1960s and was called the Integrated Data Store. It formed the basis for the network data model, which was standardized by the Conference on Data Systems Languages (CODASYL) and strongly influenced database systems through the 1960s. Bachman was the first recipient of ACM's Turing Award (the computer science equivalent of a Nobel prize) for work in the database area; he received the award in 1973. In the late 1960s, IBM developed the Information Management System (IMS) DBMS, used even today in many major installations. IMS formed the basis for an alternative data representation framework called the hierarchical data model. The SABRE system for making airline reservations was jointly developed by American Airlines and IBM around the same time, and it allowed several people to access the same data through a computer network. Interestingly, today the same SABRE system is used to power popular Web-based travel services such as Travelocity!

In 1970, Edgar Codd, at IBM's San Jose Research Laboratory, proposed a new data representation framework called the relational data model. This proved to be a watershed in the development of database systems: it sparked rapid development of several DBMSs based on the relational model, along with a rich body of theoretical results that placed the field on a firm foundation. Codd won the 1981 Turing Award for his seminal work. Database systems matured as an academic discipline, and the popularity of relational DBMSs changed the commercial landscape. Their benefits were widely recognized, and the use of DBMSs for managing corporate data became standard practice.

In the 1980s, the relational model consolidated its position as the dominant DBMS paradigm, and database systems continued to gain widespread use. The SQL query language for relational databases, developed as part of IBM's System R project, is now the standard query language. SQL was standardized in the late 1980s, and the current standard, SQL-92, was adopted by the American National Standards Institute (ANSI) and the International Standards Organization (ISO). Arguably, the most widely used form of concurrent programming is the concurrent execution of database programs (called transactions). Users write programs as if they are to be run by themselves, and the responsibility for running them concurrently is given to the DBMS. James Gray won the 1999 Turing Award for his contributions to transaction management in a DBMS.

In the late 1980s and the 1990s, advances were made in many areas of database systems. Considerable research was carried out into more powerful query languages and richer data models, and there was a big emphasis on supporting complex analysis of data from all parts of an enterprise. Several vendors (e.g., IBM's DB2, Oracle 8, Informix UDS) extended their systems with the ability to store new data types and to ask more complex queries, and specialized systems have been developed by numerous vendors for creating data warehouses, consolidating data from several databases, and for carrying out specialized analysis.

An interesting phenomenon is the emergence of several enterprise resource planning (ERP) and management resource planning (MRP) packages, which add a substantial layer of application-oriented features on top of a DBMS. Widely used packages include systems from Baan, Oracle, PeopleSoft, SAP, and Siebel. These packages identify a set of common tasks (e.g., inventory management, resources planning, financial analysis) encountered by a large number of organizations and provide a general application layer to carry out these tasks.
The data is stored in a relational DBMS, and the application layer can be customized to different companies, leading to lower overall costs for the companies compared to the cost of building the application layer from scratch. Most significantly, perhaps, DBMSs have entered the Internet age. While the first generation of Web sites stored their data exclusively in operating system files, the use of a DBMS to store data that is accessed through a Web browser is becoming widespread. Queries are generated through Web-accessible forms and answers are formatted using a markup language such as HTML, in order to be easily displayed in a browser. All the database vendors are adding features to their DBMSs aimed at making them more suitable for deployment over the Internet. Database management continues to gain importance as more and more data is brought on-line and made ever more accessible through computer networking. Today the field is being driven by exciting visions such as multimedia databases, interactive video, digital libraries, the genome mapping effort and NASA's Earth Observation System project, and the desire of companies to consolidate their decision-making processes and mine their data repositories for useful information about their businesses. Commercially, database management systems represent one of the largest and most vigorous market segments. Thus the study of database systems could prove to be richly rewarding in more ways than one!

INTRODUCTION TO PHYSICAL DATABASE DESIGN

Like all other aspects of database design, physical design must be guided by the nature of the data and its intended use. In particular, it is important to understand the typical workload that the database must support; the workload consists of a mix of queries and updates. Users also have performance requirements about how fast certain queries or updates must run; the workload and users' performance requirements are the basis on which a number of decisions are made during physical database design.

To create a good physical database design and to tune the system for performance in response to evolving user requirements, the designer needs to understand the workings of a DBMS, especially the indexing and query processing techniques supported by the DBMS. If the database is expected to be accessed concurrently by many users, or is a distributed database, the task becomes more complicated, and other features of a DBMS come into play.

DATABASE WORKLOADS

The key to good physical design is arriving at an accurate description of the expected workload. A workload description includes the following elements:
1. A list of queries and their frequencies, as a fraction of all queries and updates.
2. A list of updates and their frequencies.
3. Performance goals for each type of query and update.

For each query in the workload, we must identify:
•Which relations are accessed.
•Which attributes are retained (in the SELECT clause).
•Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.

Similarly, for each update in the workload, we must identify:
•Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.
•The type of update and the relation being updated.
•For UPDATE commands, the fields that are modified by the update.

Remember that queries and updates typically have parameters; for example, a debit or credit operation involves a particular account number. The values of these parameters determine the selectivity of selection and join conditions.

Updates, like queries, benefit from a good physical design and the presence of indexes. On the other hand, updates are slowed down by indexes on the attributes that they modify. Thus, while queries can only benefit from the presence of an index, an index may either speed up or slow down a given update. Designers should keep this trade-off in mind when creating indexes.
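To make the workload discussion concrete, here is a hedged sketch using plain JDBC. The schema (a hypothetical Accounts table) and the in-memory H2 connection URL are illustrative assumptions, not part of the original text; any JDBC-compliant DBMS behaves the same way, and the comments tie each step back to the workload elements listed above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class WorkloadSketch {
    public static void main(String[] args) throws Exception {
        // In-memory H2 database used only so the sketch is self-contained
        // (assumes the H2 driver is on the classpath).
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:bank");
             Statement st = conn.createStatement()) {

            st.execute("CREATE TABLE Accounts (accno INT PRIMARY KEY, balance DECIMAL(12,2))");
            st.execute("INSERT INTO Accounts VALUES (42, 100.00)");

            // Suppose workload analysis shows frequent selections on balance.
            // An index on balance speeds those queries -- but the same index
            // slows every update that modifies balance (the trade-off above).
            st.execute("CREATE INDEX idx_balance ON Accounts(balance)");

            // A parameterized query: the parameter's value determines how
            // selective the WHERE condition is.
            try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT accno FROM Accounts WHERE balance > ?")) {
                ps.setBigDecimal(1, new java.math.BigDecimal("50.00"));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) System.out.println("account " + rs.getInt(1));
                }
            }
        }
    }
}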
NEED FOR DATABASE TUNING

Accurate, detailed workload information may be hard to come by while doing the initial design of the system. Consequently, tuning a database after it has been designed and deployed is important: we must refine the initial design in the light of actual usage patterns to obtain the best possible performance.

The distinction between database design and database tuning is somewhat arbitrary. We could consider the design process to be over once an initial conceptual schema is designed and a set of indexing and clustering decisions is made. Any subsequent changes to the conceptual schema or the indexes, say, would then be regarded as a tuning activity. Alternatively, we could consider some refinement of the conceptual schema (and the physical design decisions affected by this refinement) to be part of the physical design process. Where we draw the line between design and tuning is not very important.

OVERVIEW OF DATABASE TUNING

After the initial phase of database design, actual use of the database provides a valuable source of detailed information that can be used to refine the initial design. Many of the original assumptions about the expected workload can be replaced by observed usage patterns; in general, some of the initial workload specification will be validated, and some of it will turn out to be wrong. Initial guesses about the size of data can be replaced with actual statistics from the system catalogs (although this information will keep changing as the system evolves). Careful monitoring of queries can reveal unexpected problems; for example, the optimizer may not be using some indexes as intended to produce good plans. Continued database tuning is important to get the best possible performance.

TUNING THE CONCEPTUAL SCHEMA

In the course of database design, we may realize that our current choice of relation schemas does not enable us to meet our performance objectives for the given workload with any (feasible) set of physical design choices. If so, we may have to redesign our conceptual schema (and re-examine the physical design decisions that are affected by the changes we make).

We may realize that a redesign is necessary during the initial design process or later, after the system has been in use for a while. Once a database has been designed and populated with data, changing the conceptual schema requires a significant effort in terms of mapping the contents of the relations that are affected. Nonetheless, it may sometimes be necessary to revise the conceptual schema in light of experience with the system. We now consider the issues involved in conceptual schema (re)design from the point of view of performance.

Several options must be considered while tuning the conceptual schema:
•We may decide to settle for a 3NF design instead of a BCNF design.
•If there are two ways to decompose a given schema into 3NF or BCNF, our choice should be guided by the workload.
•Sometimes we might decide to further decompose a relation that is already in BCNF.
•In other situations we might denormalize. That is, we might choose to replace a collection of relations obtained by a decomposition from a larger relation with the original (larger) relation, even though it suffers from some redundancy problems.
•Alternatively, we might choose to add some fields to certain relations to speed up some important queries, even if this leads to a redundant storage of some information (and, consequently, a schema that is in neither 3NF nor BCNF).

This discussion of normalization has concentrated on the technique of decomposition, which amounts to vertical partitioning of a relation. Another technique to consider is horizontal partitioning of a relation, which would lead to two relations with identical schemas. Note that we are not talking about physically partitioning the tuples of a single relation; rather, we want to create two distinct relations (possibly with different constraints and indexes on each).

Incidentally, when we redesign the conceptual schema, especially if we are tuning an existing database schema, it is worth considering whether we should create views to mask these changes from users for whom the original schema is more natural.

TUNING QUERIES AND VIEWS

If we notice that a query is running much slower than we expected, rewriting the query, perhaps in conjunction with some index tuning, can often fix the problem. Similar tuning may be called for if queries on some view run slower than expected.

When tuning a query, the first thing to verify is that the system is using the plan that you expect it to use. It may be that the system is not finding the best plan for a variety of reasons. Some common situations that many optimizers do not handle well:
•A selection condition involving null values.
•Selection conditions involving arithmetic or string expressions, or conditions using the OR connective. For example, if we have the condition E.age = 2*D.age in the WHERE clause, the optimizer may correctly utilize an available index on E.age but fail to utilize an available index on D.age. Replacing the condition by E.age/2 = D.age would reverse the situation.
•Inability to recognize a sophisticated plan such as an index-only scan for an aggregation query involving a GROUP BY clause.

If the optimizer is not smart enough to find the best plan (using access methods and evaluation strategies supported by the DBMS), some systems allow users to guide the choice of a plan by providing hints, for example, by forcing the use of a particular index or a particular join order and join method. A user who wishes to guide optimization in this manner should understand the capabilities of the given DBMS.
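The arithmetic-expression pitfall just described can be shown as two query strings. The Emp and Dept tables here are hypothetical, and actual index usage varies by optimizer; this illustrates the rewrite, not a guaranteed behavior.

/**
 * Two formulations of the join predicate from the example above.
 */
public class PredicateRewrite {
    // The bare column E.age can be probed through an index on E.age,
    // but 2 * D.age hides D.age behind an arithmetic expression.
    static final String ORIGINAL =
        "SELECT E.name FROM Emp E, Dept D WHERE E.age = 2 * D.age";

    // Rewritten: now D.age is bare, so an index on D.age becomes usable,
    // while the index on E.age no longer is. (Careful: division may change
    // semantics for odd ages; a real rewrite must preserve meaning.)
    static final String REWRITTEN =
        "SELECT E.name FROM Emp E, Dept D WHERE E.age / 2 = D.age";
}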
OTHER TOPICS

MOBILE DATABASES

The availability of portable computers and wireless communications affects many components of a DBMS, including the query engine, transaction manager, and recovery manager.

Users are connected through a wireless link whose bandwidth is ten times less than Ethernet and 100 times less than ATM networks. Communication costs are therefore significant in proportion to I/O and CPU costs.

Users' locations are constantly changing, and mobile computers have limited battery life. Therefore, the true costs of communication include connection time and battery usage in addition to bytes transferred, and they change constantly depending on location. Data is frequently replicated to minimize the cost of accessing it from different locations.

As a user moves around, data could be accessed from multiple database servers within a single transaction. The likelihood of losing connections is also much greater than in a traditional network. Centralized transaction management may therefore be impractical, especially if some data is resident at the mobile computers. We may in fact have to give up ACID transactions and develop alternative notions of consistency for user programs.

MAIN MEMORY DATABASES

The price of main memory is now low enough that we can buy enough main memory to hold the entire database for many applications; CPUs also have increasingly large cache memories. This shift prompts a reexamination of some basic DBMS design decisions, since disk accesses no longer dominate processing time for a memory-resident database. Main memory does not survive system crashes, so we still have to implement logging and recovery to ensure atomicity and durability. Log records must be written to stable storage at commit time, and this process could become a bottleneck. To minimize this problem, rather than commit each transaction as it completes, we can collect completed transactions and commit them in batches; this is called group commit. Recovery algorithms can also be optimized, since pages rarely have to be written out to make room for other pages.

The implementation of in-memory operations must also be carefully optimized, and a new criterion must be considered while optimizing queries, namely the amount of space required to execute a plan. It is important to minimize this space overhead because exceeding available physical memory would lead to swapping pages to disk (through the operating system's virtual memory mechanisms), greatly slowing down execution.

Page-oriented data structures become less important (since pages are no longer the unit of data retrieval), and clustering is not important (since the cost of accessing any region of main memory is uniform).

(1) A Historical Perspective

From the early days of databases, storing and manipulating data has always been the major application focus.

Graduation Project (Thesis) Foreign Literature Translation (Student Copy)

Graduation Project Foreign Literature Translation
School: School of Information Science and Engineering
Major: Software Engineering
Name: XXXXX
Student ID: XXXXXXXXX
Source: Think in Java (in the original language)
Attachments: 1. Translated text; 2. Original text.

Attachment 1: Translated Text

Network Programming

Historically, network programming has tended to be difficult, complex, and extremely error-prone. Programmers had to master a great many details related to the network, sometimes even acquiring a deep understanding of the hardware. Generally, one needed to understand the different "layers" of the networking protocol, and each networking library contained a large number of functions dealing with connecting, packing, and unpacking blocks of information; shipping those blocks back and forth; handshaking; and so on. It was a painful task.

However, the concept of networking itself is not that hard. You want to get information that lives on a machine somewhere else and move it here, or vice versa. This is quite similar to reading and writing a file, except that the file lives on a remote machine, and the remote machine has the right to decide how to handle the data you request or send.

One of Java's most outstanding features is its concept of "painless networking." The low-level details of networking have been abstracted away as far as possible and are handled inside the JVM and Java's installation on the local machine. The programming model you use is that of a file; in fact, a network connection (a "socket") is wrapped in a system object, so it can be used with the same method calls as other data streams. In addition, Java's built-in multithreading is very convenient when you handle another networking problem: controlling multiple network connections at the same time.

This chapter explains Java's networking support through a series of easy-to-understand examples.
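As a small illustration of the file-like networking model described above (an illustrative sketch, not an example from the book), here is a minimal program using the standard java.net.Socket API; the host, port, and request are placeholders, and real code would add timeouts and error handling.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class StreamLikeNetworking {
    public static void main(String[] args) throws Exception {
        // A socket is wrapped in ordinary stream objects, so reading from
        // the network looks just like reading from a file.
        try (Socket socket = new Socket("example.com", 80)) {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));

            out.println("GET / HTTP/1.0");   // a minimal HTTP request
            out.println("Host: example.com");
            out.println();                    // blank line ends the headers

            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);     // the response arrives as a stream
            }
        }
    }
}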

15.1 Identifying a Machine

Of course, in order to tell one machine from another, and to make sure you are connected to the machine you want, there must be some mechanism that uniquely identifies every machine on a network. Early networks solved only the problem of providing unique names for machines within a local network. But Java works within the entire Internet, and this requires a mechanism to identify machines from all over the world. For this purpose, the concept of the IP (Internet Protocol) address was adopted. IP exists in two forms:

(1) The form everyone is most familiar with: DNS (Domain Name Service). My own domain name is . So, assuming I have a computer called Opus within my own domain, its domain name could then be .
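A small sketch of the DNS/IP identification just described, using the standard java.net.InetAddress API; the host name queried is only a placeholder, and the lookup requires network access.

import java.net.InetAddress;

public class WhoIsThatMachine {
    public static void main(String[] args) throws Exception {
        // Resolve a DNS name to its IP address.
        InetAddress addr = InetAddress.getByName("example.com");
        System.out.println("Name:    " + addr.getHostName());
        System.out.println("Address: " + addr.getHostAddress());

        // The local machine can be identified the same way.
        InetAddress local = InetAddress.getLocalHost();
        System.out.println("This machine is " + local);
    }
}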

Foreign Literature Translation --- Software and Software Engineering

Original: Software and Software Engineering ---- the emergence of software

As the decade of the 1980s began, a front-page story in Business Week magazine trumpeted the following headline: "Software: The New Driving Force." Software had come of age -- it had become a topic for management concern. During the mid-1980s, a cover story in Fortune lamented "A Growing Gap in Software," and at the close of the decade, Business Week warned managers about "the Software Trap -- Automate or Else." As the 1990s dawned, a feature story in Newsweek asked "Can We Trust Our Software?" and The Wall Street Journal related a major software company's travails with a front-page article entitled "Creating New Software Was an Agonizing Task..." These headlines, and many others like them, were a harbinger of a new understanding of the importance of computer software -- the opportunities that it offers and the dangers that it poses.

Software has now surpassed hardware as the key to the success of many computer-based systems. Whether a computer is used to run a business, control a product, or enable a system, software is the factor that differentiates. The completeness and timeliness of information provided by software (and related databases) differentiate one company from its competitors. The design and "human friendliness" of a software product differentiate it from competing products with an otherwise similar function. The intelligence and function provided by embedded software often differentiate two similar industrial or consumer products. It is software that can make the difference.

During the first three decades of the computing era, the primary challenge was to develop computer hardware that reduced the cost of processing and storing data. Throughout the decade of the 1980s, advances in microelectronics resulted in more computing power at increasingly lower cost. Today, the problem is different. The primary challenge during the 1990s is to improve the quality (and reduce the cost) of computer-based solutions -- solutions that are implemented with software. The power of a 1980s-era mainframe computer is available now on a desktop. The awesome processing and storage capabilities of modern hardware represent computing potential. Software is the mechanism that enables us to harness and tap this potential.

The context in which software has been developed is closely coupled to almost five decades of computer system evolution. Better hardware performance, smaller size, and lower cost have precipitated more sophisticated computer-based systems. We have moved from vacuum-tube processors to microelectronic devices that are capable of processing 200 million connections per second. In popular books on "the computer revolution," Osborne characterized a "new industrial revolution," Toffler called the advent of microelectronics part of "the third wave of change" in human history, and Naisbitt predicted that the transformation from an industrial society to an "information society" will have a profound impact on our lives. Feigenbaum and McCorduck suggested that information and knowledge will be the focal point for power in the twenty-first century, and Stoll argued that the "electronic community" created by networks and software is the key to knowledge interchange throughout the world.
As the 1990s began, Toffler described a "power shift" in which old power structures (governmental, educational, industrial, economic, and military) will disintegrate as computers and software lead to a "democratization of knowledge."

Figure 1-1 depicts the evolution of software within the context of computer-based system application areas. During the early years of computer system development, hardware underwent continual change while software was viewed by many as an afterthought. Computer programming was a "seat-of-the-pants" art for which few systematic methods existed. Software development was virtually unmanaged -- until schedules slipped or costs began to escalate. During this period, a batch orientation was used for most systems. Notable exceptions were interactive systems such as the early American Airlines reservation system and real-time defense-oriented systems such as SAGE. For the most part, however, hardware was dedicated to the execution of a single program that in turn was dedicated to a specific application.

Evolution of software

During the early years, general-purpose hardware became commonplace. Software, on the other hand, was custom-designed for each application and had a relatively limited distribution. Product software (i.e., programs developed to be sold to one or more customers) was in its infancy. Most software was developed and ultimately used by the same person or organization. You wrote it, you got it running, and if it failed, you fixed it. Because job mobility was low, managers could rest assured that you'd be there when bugs were encountered.

Because of this personalized software environment, design was an implicit process performed in one's head, and documentation was often nonexistent. During the early years we learned much about the implementation of computer-based systems, but relatively little about computer system engineering. In fairness, however, we must acknowledge the many outstanding computer-based systems that were developed during this era. Some of these remain in use today and provide landmark achievements that continue to justify admiration.

The second era of computer system evolution (Figure 1-1) spanned the decade from the mid-1960s to the late 1970s. Multiprogramming and multiuser systems introduced new concepts of human-machine interaction. Interactive techniques opened a new world of applications and new levels of hardware and software sophistication. Real-time systems could collect, analyze, and transform data from multiple sources, thereby controlling processes and producing output in milliseconds rather than minutes. Advances in on-line storage led to the first generation of database management systems.

The second era was also characterized by the use of product software and the advent of "software houses." Software was developed for widespread distribution in a multidisciplinary market. Programs for mainframes and minicomputers were distributed to hundreds and sometimes thousands of users. Entrepreneurs from industry, government, and academia broke away to "develop the ultimate software package" and earn a bundle of money.

As the number of computer-based systems grew, libraries of computer software began to expand. In-house development projects produced tens of thousands of program source statements. Software products purchased from the outside added hundreds of thousands of new statements. A dark cloud appeared on the horizon.
All of these programs -- all of these source statements -- had to be corrected when faults were detected, modified as user requirements changed, or adapted to new hardware that was purchased. These activities were collectively called software maintenance. Effort spent on software maintenance began to absorb resources at an alarming rate. Worse yet, the personalized nature of many programs made them virtually unmaintainable. A "software crisis" loomed on the horizon.

The third era of computer system evolution began in the mid-1970s and continues today. The distributed system -- multiple computers, each performing functions concurrently and communicating with one another -- greatly increased the complexity of computer-based systems. Global and local area networks, high-bandwidth digital communications, and increasing demands for "instantaneous" data access put heavy demands on software developers.

The third era has also been characterized by the advent and widespread use of microprocessors, personal computers, and powerful desktop workstations. The microprocessor has spawned a wide array of intelligent products -- from automobiles to microwave ovens, from industrial robots to blood serum diagnostic equipment. In many cases, software technology is being integrated into products by technical staff who understand hardware but are often novices in software development.

The personal computer has been the catalyst for the growth of many software companies. While the software companies of the second era sold hundreds or thousands of copies of their programs, the software companies of the third era sell tens and even hundreds of thousands of copies. Personal computer hardware is rapidly becoming a commodity, while software provides the differentiating characteristic. In fact, as the rate of personal computer sales growth flattened during the mid-1980s, software-product sales continued to grow. Many people in industry and at home spent more money on software than they did to purchase the computer on which the software would run.

The fourth era in computer software is just beginning. Object-oriented technologies (Chapters 8 and 12) are rapidly displacing more conventional software development approaches in many application areas. Authors such as Feigenbaum and McCorduck [FEI83] and Allman [ALL89] predict that "fifth-generation" computers, radically different computing architectures, and their related software will have a profound impact on the balance of political and industrial power throughout the world. Already, "fourth-generation" techniques for software development (discussed later in this chapter) are changing the manner in which some segments of the software community build computer programs. Expert systems and artificial intelligence software has finally moved from the laboratory into practical application for wide-ranging problems in the real world.
Artificial neural network software has opened exciting possibilities for pattern recognition and human-like information processing abilities. As we move into the fourth era, the problems associated with computer software continue to intensify:
•Hardware sophistication has outpaced our ability to build software to tap hardware's potential.
•Our ability to build new programs cannot keep pace with the demand for new programs.
•Our ability to maintain existing programs is threatened by poor design and inadequate resources.
In response to these problems, software engineering practices -- the topic to which this book is dedicated -- are being adopted throughout the industry.

An Industry Perspective

In the early days of computing, computer-based systems were developed using hardware-oriented management. Project managers focused on hardware because it was the single largest budget item for system development. To control hardware costs, managers instituted formal controls and technical standards. They demanded thorough analysis and design before something was built. They measured the process to determine where improvements could be made. Stated simply, they applied the controls, methods, and tools that we recognize as hardware engineering. Sadly, software was often little more than an afterthought.

In the early days, programming was viewed as an "art form." Few formal methods existed and fewer people used them. The programmer often learned his or her craft by trial and error. The jargon and challenges of building computer software created a mystique that few managers cared to penetrate. The software world was virtually undisciplined -- and many practitioners of the day loved it!

Today, the distribution of costs for the development of computer-based systems has changed dramatically. Software, rather than hardware, is often the largest single cost item. For the past decade managers and many technical practitioners have asked the following questions:
•Why does it take so long to get programs finished?
•Why are costs so high?
•Why can't we find all errors before we give the software to our customers?
•Why do we have difficulty in measuring progress as software is being developed?
These, and many other questions, are a manifestation of the concern about software and the manner in which it is developed -- a concern that has led to the adoption of software engineering practices.

Translation: Software and Software Engineering ---- the emergence of software

As the decade of the 1980s began, a front-page story in Business Week magazine loudly proclaimed the headline: "Software: our new driving force!" Software had ushered in an era -- it became a topic of general concern.

Software Engineering Foreign Literature Translation

Original: New Competencies for HR

What does it take to make it big in HR? What skills and expertise do you need? Since 1988, Dave Ulrich, professor of business administration at the University of Michigan, and his associates have been on a quest to provide the answers. This year, they've released an all-new 2007 Human Resource Competency Study (HRCS). The findings and interpretations lay out professional guidance for HR for at least the next few years.

"People want to know what set of skills high-achieving HR people need to perform even better," says Ulrich, co-director of the project along with Wayne Brockbank, also a professor of business at the University of Michigan.

Conducted under the auspices of the Ross School of Business at the University of Michigan and The RBL Group in Salt Lake City, with regional partners including the Society for Human Resource Management (SHRM) in North America and other institutions in Latin America, Europe, China and Australia, HRCS is the longest-running, most extensive global HR competency study in existence. "In reaching our conclusions, we've looked across more than 400 companies and are able to report with statistical accuracy what HR executives say and do," Ulrich says.

"The research continues to demonstrate the dynamic nature of the human resource management profession," says SHRM President and CEO Susan R. Meisinger, SPHR. "The findings also highlight what an exciting time it is to be in the profession. We continue to have the ability to really add value to an organization."

"HRCS is foundational work that is really important to HR as a profession," says Cynthia McCague, senior vice president of the Coca-Cola Co., who participated in the study. "They have created and continue to enhance a framework for thinking about how HR drives organizational performance."

What's New

Researchers identified six core competencies that high-performing HR professionals embody. These supersede the five competencies outlined in the 2002 HRCS -- the last study published -- reflecting the continuing evolution of the HR profession. Each competency is broken out into performance elements.

"This is the fifth round, so we can look at past models and compare where the profession is going," says Evren Esen, survey program manager at SHRM, which provided the sample of HR professionals surveyed in North America. "We can actually see the profession changing. Some core areas remain the same, but others, based on how the raters assess and perceive HR, are new." (For more information, see "The Competencies and Their Elements," at right.)

To some degree, the new competencies reflect a change in nomenclature or a shuffling of the competency deck. However, there are some key differences.

Five years ago, HR's role in managing culture was embedded within a broader competency. Now its importance merits a competency of its own. Knowledge of technology, a stand-alone competency in 2002, now appears within Business Ally. In other instances, the new competencies carry expectations that promise to change the way HR views its role. For example, the Credible Activist calls for HR to eschew neutrality and to take a stand -- to practice the craft "with an attitude."

To put the competencies in perspective, it's helpful to view them as a three-tier pyramid with Credible Activist at the pinnacle.

Credible Activist. This competency is the top indicator in predicting overall outstanding performance, suggesting that mastering it should be a priority. "You've got to be good at all of them, but, no question, [this competency] is key," Ulrich says. "But you can't be a Credible Activist without having all the other competencies. In a sense, it's the whole package."

"It's a deal breaker," agrees Dani Johnson, project manager of the Human Resource Competency Study at The RBL Group in Salt Lake City. "If you don't come to the table with it, you're done. It permeates everything you do."

The Credible Activist is at the heart of what it takes to be an effective HR leader. "The best HR people do not hold back; they step forward and advocate for their position," says Susan Harmansky, SPHR, senior director of domestic restaurant operations for HR at Papa John's International in Louisville, Ky., and former chair of the Human Resource Certification Institute. "CEOs are not waiting for HR to come in with options -- they want your recommendations; they want you to speak from your position as an expert, similar to what you see from legal or finance executives."

"You don't want to be credible without being an activist, because essentially you're worthless to the business," Johnson says. "People like you, but you have no impact. On the other hand, you don't want to be an activist without being credible. You can be dangerous in a situation like that."

Below Credible Activist on the pyramid is a cluster of three competencies: Cultural Steward, Talent Manager/Organizational Designer and Strategy Architect.

Cultural Steward. HR has always owned culture. But with Sarbanes-Oxley and other regulatory pressures, and CEOs relying more on HR to manage culture, this is the first time it has emerged as an independent competency. Of the six competencies, Cultural Steward is the second highest predictor of performance of both HR professionals and HR departments.

Talent Manager/Organizational Designer. Talent management focuses on how individuals enter, move up, across or out of the organization. Organizational design centers on the policies, practices and structure that shape how the organization works. Their linking reflects Ulrich's belief that HR may be placing too much emphasis on talent acquisition at the expense of organizational design. Talent management will not succeed in the long run without an organizational structure that supports it.

Strategy Architect. Strategy Architects are able to recognize business trends and their impact on the business, and to identify potential roadblocks and opportunities. Harmansky, who recently joined Papa John's, demonstrates how the Strategy Architect competency helps HR contribute to the overall business strategy. "In my first months here, I'm spending a lot of time traveling, going to see stores all over the country. Every time I go to a store, while my counterparts of the management team are talking about [operational aspects], I'm talking to the people who work there. I'm trying to find out what the issues are surrounding people. How do I develop them? I'm looking for my business differentiator on the people side so I can contribute to the strategy."

When Charlease Deathridge, SPHR, HR manager of McKee Foods in Stuarts Draft, Va., identified a potential roadblock to implementing a new management philosophy, she used the Strategy Architect competency. "When we were rolling out 'lean manufacturing' principles at our location, we administered an employee satisfaction survey to assess how the workers viewed the new system. The satisfaction scores were lower than ideal.
I showed [management] how a negative could become a positive, how we could use the data and follow-up surveys as a strategic tool to demonstrate progress."

Anchoring the pyramid at its base are two competencies that Ulrich describes as "table stakes -- necessary but not sufficient." Except in China, where HR is at an earlier stage in professional development and there is great emphasis on transactional activities, these competencies are looked upon as basic skills that everyone must have. There is some disappointing news here. In the United States, respondents rated significantly lower on these competencies than the respondents surveyed in other countries.

Business Ally. HR contributes to the success of a business by knowing how it makes money, who the customers are, and why they buy the company's products and services. For HR professionals to be Business Allies (and Credible Activists and Strategy Architects as well), they should be what Ulrich describes as "business literate." The mantra about understanding the business -- how it works, the financials and strategic issues -- remains as important today as it did in every iteration of the survey the past 20 years. Yet progress in this area continues to lag.

"Even these high performers don't know the business as well as they should," Ulrich says. In his travels, he gives HR audiences 10 questions to test their business literacy.

Operational Executor. These skills tend to fall into the range of HR activities characterized as transactional or "legacy." Policies need to be drafted, adapted and implemented. Employees need to be paid, relocated, hired, trained and more. Every function here is essential, but -- as with the Business Ally competency -- high-performing HR managers seem to view them as less important and score higher on the other competencies. Even some highly effective HR people may be running a risk in paying too little attention to these nuts-and-bolts activities, Ulrich observes.

Practical Tool

In conducting debriefings for people who participated in the HRCS, Ulrich observes how delighted they are at the prescriptive nature of the exercise. The individual feedback reports they receive (see "How the Study Was Done") offer them a road map, and they are highly motivated to follow it.

Anyone who has been through a 360-degree appraisal knows that criticism can be jarring. It's risky to open yourself up to others' opinions when you don't have to. Add the prospect of sharing the results with your boss and colleagues who will be rating you, and you may decide to pass. Still, it's not surprising that highly motivated people like Deathridge jumped at the chance for the free feedback.

"All of it is not good," says Deathridge. "You have to be willing to face up to it. You go home, work it out and say, 'Why am I getting this bad feedback?'"

But for Deathridge, the results mostly confirmed what she already knew. "I believe most people know where they're weak or strong. For me, it was most helpful to look at how close others' ratings of me matched with my own assessments. ... There's so much to learn about what it takes to be a genuine leader, and this study helped a lot."

Deathridge says the individual feedback report she received helped her realize the importance of taking a stand and developing her Credible Activist competency. "There was a situation where I had a line manager who wanted to discipline someone," she recalls. "In the past, I wouldn't have been able to stand up as strongly as I did. I was able to be very clear about how I felt. I told him that he had not done enough to document the performance issue, and that if he wanted to institute discipline it would have to be at the lowest level. In the past, I would have been more deferential and said, 'Let's compromise and do it at step two or three.' But I didn't do it; I spoke out strongly and held my ground."

This was the second study for Shane Smith, director of HR at Coca-Cola. "I did it for the first time in 2002. Now I'm seeing some traction in the things I've been working on. I'm pleased to see the consistency with my evaluations of my performance when compared to my raters."

What It All Means

Ulrich believes that HR professionals who would have succeeded 30, 20, even 10 years ago are not as likely to succeed today. They are expected to play new roles. To do so, they will need the new competencies.

Ulrich urges HR to reflect on the new competencies and what they reveal about the future of the HR profession. His message is direct and unforgiving. "Legacy HR work is going, and HR people who don't change with it will be gone." Still, he remains optimistic that many in HR are heeding his call. "Twenty percent of HR people will never get it; 20 percent are really top performing. The middle 60 percent are moving in the right direction," says Ulrich.

"Within that 60 percent there are HR professionals who may be at the table but are not contributing fully," he adds. "That's the group I want to talk to. ... I want to show them what they need to do to have an impact."

As a start, Ulrich recommends HR professionals consider initiating three conversations. "One is with your business leaders. Review the competencies with them and ask them if you're doing them. Next, pose the same questions to your HR team. Then, ask yourself whether you really know the business or if you're glossing on the surface." Finally, set your priorities. "Our data say: 'Get working on that Credible Activist!'"

Robert J. Grossman, a contributing editor of HR Magazine, is a lawyer and a professor of management studies at Marist College in Poughkeepsie, N.Y.

From: HR Magazine, June 2007, Robert J. Grossman

Translation: New Competencies for HR

How can one achieve greater success in the field of human resource management, and what professional knowledge and skills does it require? Since 1988, Dave Ulrich, professor of business administration at the University of Michigan, and his assistants have been researching this question.

Software Engineering Major Graduation Project Foreign Literature Translation

Software Engineering Major Graduation Project Foreign Literature Translation (1,000 words). This article translates foreign literature for software engineering graduation projects and can serve as a reference for students.

Foreign Literature 1: Software Engineering Practices in Industry: A Case Study

Abstract

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The software engineering practices were identified through a survey questionnaire and a series of interviews with the company's software development managers, software engineers, and testers. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Introduction

Software engineering is the discipline of designing, developing, testing, and maintaining software products. There are a number of software engineering practices that are used in industry to ensure that software products are of high quality, reliable, and maintainable. These practices include software development processes, software configuration management, software testing, requirements engineering, and project management. Software engineering practices have evolved over the years as a result of the growth of the software industry and the increasing demands for high-quality software products. The software industry has developed a number of software development models, such as the Capability Maturity Model Integration (CMMI), which provides a framework for software development organizations to improve their software development processes and practices.

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The objective of the study was to identify the software engineering practices used by the company and to investigate how these practices contribute to the production of quality software.

Research Methodology

The case study was conducted with a large US software development company that produces software for aerospace and medical applications. The study was conducted over a period of six months, during which a survey questionnaire was administered to the company's software development managers, software engineers, and testers. In addition, a series of interviews were conducted with the company's software development managers, software engineers, and testers to gain a deeper understanding of the software engineering practices used by the company. The survey questionnaire and the interview questions were designed to investigate the software engineering practices used by the company in relation to software development processes, software configuration management, software testing, requirements engineering, and project management.

Findings

The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI).
The company's software development process consists of five levels of maturity, starting with an ad hoc process (Level 1) and progressing to a fully defined and optimized process (Level 5). The company has achieved Level 3 maturity in its software development process. The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The software engineering practices used by the company include:

Software Configuration Management (SCM): The company uses SCM tools to manage software code, documentation, and other artifacts. The company follows a branching and merging strategy to manage changes to the software code.

Software Testing: The company has adopted a formal testing approach that includes unit testing, integration testing, system testing, and acceptance testing. The testing process is automated where possible, and the company uses a range of testing tools.

Requirements Engineering: The company has a well-defined requirements engineering process, which includes requirements capture, analysis, specification, and validation. The company uses a range of tools, including use case modeling, to capture and analyze requirements.

Project Management: The company has a well-defined project management process that includes project planning, scheduling, monitoring, and control. The company uses a range of tools to support project management, including project management software, which is used to track project progress.

Conclusion

This paper has reported a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company uses a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Foreign Literature 2: Agile Software Development: Principles, Patterns, and Practices

Abstract

Agile software development is a set of values, principles, and practices for developing software. The Agile Manifesto represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. This paper presents an overview of agile software development, including its principles, patterns, and practices. The paper also discusses the benefits and challenges of agile software development.

Introduction

Agile software development is a set of values, principles, and practices for developing software. Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change.
Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases.

Agile Software Development Principles

Agile software development is based on a set of principles. These principles are:
•Customer satisfaction through early and continuous delivery of useful software.
•Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
•Deliver working software frequently, with a preference for the shorter timescale.
•Collaboration between the business stakeholders and developers throughout the project.
•Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
•The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
•Working software is the primary measure of progress.
•Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
•Continuous attention to technical excellence and good design enhances agility.
•Simplicity -- the art of maximizing the amount of work not done -- is essential.
•The best architectures, requirements, and designs emerge from self-organizing teams.

Agile Software Development Patterns

Agile software development patterns are reusable solutions to common software development problems. The following are some typical agile software development patterns (the Strategy pattern is sketched in code after the conclusion below):
•The Single Responsibility Principle (SRP)
•The Open/Closed Principle (OCP)
•The Liskov Substitution Principle (LSP)
•The Dependency Inversion Principle (DIP)
•The Interface Segregation Principle (ISP)
•The Model-View-Controller (MVC) Pattern
•The Observer Pattern
•The Strategy Pattern
•The Factory Method Pattern

Agile Software Development Practices

Agile software development practices are a set of activities and techniques used in agile software development. The following are some typical agile software development practices:
•Iterative Development
•Test-Driven Development (TDD)
•Continuous Integration
•Refactoring
•Pair Programming

Agile Software Development Benefits and Challenges

Agile software development has many benefits, including:
•Increased customer satisfaction
•Increased quality
•Increased productivity
•Increased flexibility
•Increased visibility
•Reduced risk

Agile software development also has some challenges, including:
•It requires discipline and training
•It requires an experienced team
•It requires good communication
•It requires a supportive management culture

Conclusion

Agile software development is a set of values, principles, and practices for developing software. It is based on the Agile Manifesto, which represents the values and principles of the agile approach. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. Agile software development has many benefits, including increased customer satisfaction, quality, productivity, flexibility, and visibility, and reduced risk. It also has some challenges, including the need for discipline and training, an experienced team, good communication, and a supportive management culture.
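As a concrete illustration of one of the patterns listed above, here is a minimal Java sketch of the Strategy pattern. The pricing domain and all names are hypothetical, chosen only to show the pattern's shape: interchangeable algorithms behind one interface, selected without modifying the client.

/** Strategy: an interchangeable family of algorithms behind one interface. */
interface PricingStrategy {
    double priceFor(int units);
}

/** One concrete strategy: flat per-unit pricing. */
class FlatPricing implements PricingStrategy {
    public double priceFor(int units) { return units * 2.0; }
}

/** Another concrete strategy: a bulk discount past a threshold. */
class BulkPricing implements PricingStrategy {
    public double priceFor(int units) {
        return units <= 100 ? units * 2.0 : 100 * 2.0 + (units - 100) * 1.5;
    }
}

/** The context stays closed against change: new pricing rules need no edits here. */
class Order {
    private final PricingStrategy pricing;
    Order(PricingStrategy pricing) { this.pricing = pricing; }
    double total(int units) { return pricing.priceFor(units); }
}

public class StrategyDemo {
    public static void main(String[] args) {
        System.out.println(new Order(new FlatPricing()).total(150)); // 300.0
        System.out.println(new Order(new BulkPricing()).total(150)); // 275.0
    }
}

Each strategy can also be unit-tested in isolation, which is one reason this pattern pairs well with the test-driven development practice listed above.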

Foreign Literature Translation - Software Engineering

(Chinese text: 2,860 characters)

Software Engineering

From: /zh-cn/%E8%BD%AF%E4%BB%B6%E5%B7%A5%E7%A8%8B

Software engineering is the discipline that studies how to build and maintain effective, practical, and high-quality software with engineering methods. It involves programming languages, databases, software development tools, system platforms, standards, design patterns, and so on.

In modern society, software is used in many ways. Typical software includes email, embedded systems, human-machine interfaces, office suites, operating systems, compilers, databases, and games. Meanwhile, almost every sector applies computer software, such as industry, agriculture, banking, aviation, and government departments. These applications facilitate economic and social development, improve people's working efficiency, and improve the quality of life.

Software engineers are, collectively, the people who create software applications; accordingly, software engineers can be divided into different roles such as system analysts, software designers, system architects, programmers, and testers. "Programmer" is also often used loosely to refer to these various kinds of software engineers.

Origin

In view of the difficulties encountered in software development, the North Atlantic Treaty Organization (NATO) organized the first Conference on Software Engineering in 1968, at which "software engineering" was put forward to define the knowledge required for software development, and it was suggested that "software development should be treated like an engineering activity." Since software engineering was formally proposed in 1968, a large number of research results have accumulated and a great deal of technology has been widely practiced; through the joint efforts of academia and industry, software engineering is gradually developing into a professional discipline.

Definitions

•The creation and use of sound engineering principles in order to obtain reliable and economically efficient software.
•The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.
•The theories, methods, and tools related to developing, managing, and updating software products.
•A body of knowledge or discipline that aims to produce software of good quality, delivered on time, within budget, and meeting users' needs.
•The practical application of scientific knowledge in the design and construction of computer programs and the accompanying documentation, and in their subsequent operation and maintenance.
•The use of systematic production and maintenance technologies and management expertise related to software products, so that software can be developed and changed within limited time and at limited cost.
•The discipline of knowledge needed by teams of engineers who build large software systems.
•A systematic approach to the analysis, design, implementation, and maintenance of software.
•The systematic application of tools and techniques in the development of computer-based applications.

Software Engineering and Computer Science

Whether software development is, at bottom, a science or an engineering discipline is a question that has been debated for a long time. In fact, software development has characteristics of both, but that does not mean the two can be confused with each other. Many people believe that software engineering is based on computer science and information science, just as traditional engineering disciplines are based on physics and chemistry. In the United States, about 40% of software engineers hold a degree in computer science.
Elsewhere in the world, this ratio is similar. Software engineers will not necessarily use knowledge of computer science every day, but they use software engineering knowledge every day.

For example, Peter McBreen argues that software "engineering" implies a higher degree of rigor and proven processes, which is not suitable for all stages of all types of software development. In his book "Software Craftsmanship: The New Imperative," McBreen puts forward the so-called "craftsmanship" argument, holding that a key factor in the success of software development is the developers' skill, not the process by which software is "manufactured."

Software Engineering and Computer Programming

Software engineering is present in every aspect of developing all kinds of applications. Programming typically comprises the iterative process of program design and coding; it is one stage of software development.

Software engineering seeks to provide guidance in every aspect of a software project, from the feasibility analysis at the start until the maintenance work after the software is completed. Software engineering holds that software development is closely related to marketing activities, such as software sales, user training, and the installation of the associated hardware and software. Software engineering methodology holds that programmers should not develop in isolation from the team, and that coding cannot be divorced from the software requirements, the design, and the customers' interests.

Software engineering is the embodiment of the industrialization of computer program design.

Software Crisis

Software engineering is rooted in the software crisis that arose in the 1960s, 1970s, and 1980s. At that time, many software projects came to tragic ends. Many significantly overran their planned schedules. Some projects led to the loss of property, and some software even led to casualties. At the same time, software developers found software development increasingly difficult.

The OS/360 operating system is considered a typical case. It is still used today on machines of the IBM 360 series. This decades-long, extremely complex software project even produced a set of working systems not included in the original design. OS/360 was the first large software project, employing about 1,000 programmers. Fred Brooks, in his subsequent masterpiece "The Mythical Man-Month," admitted that he made a million-dollar mistake in his management of the project.

Property losses: software errors may result in significant property damage. The explosion of the European Ariane rocket is one of the most painful lessons.

Casualties: as computer software is widely used, including in hospitals and other industries closely tied to human life, software errors might also result in personal injury or death.

The Therac-25 accidents are a case widely cited in software engineering. Between June 1985 and January 1987, six known medical errors in which the Therac-25 delivered excessive radiation doses led to deaths or severe radiation burns.

In industry, some embedded-system faults have caused machines to malfunction, putting people's lives in danger.

Methodology

Software engineering has meaning in many aspects, including project management, analysis, design, coding, testing, and quality control. Software design methods can be divided into heavyweight and lightweight methods. Heavyweight methods produce large amounts of official documentation.
Heavyweight development methodologies include the famous ISO 9000, CMM, and the Unified Process (RUP).

Lightweight development processes do not require producing large amounts of official documentation. Lightweight methods include the well-known Extreme Programming (XP) and agile processes (Agile Processes).

According to the article "The New Methodology," heavyweight methods present a "defensive" posture. In software organizations applying heavyweight methods, because the software project manager has little or no involvement in program design and cannot grasp the project's progress from its details, a sense of "fear" develops, and programmers are constantly asked to write large amounts of "software development documentation." Lightweight methods, by contrast, present an "aggressive" attitude, reflected particularly in the four values emphasized by XP: communication, simplicity, feedback, and courage. Some people believe that heavyweight methods are suitable for large software teams (dozens of people or more), while lightweight methods suit small teams (a few to a dozen or so people). Of course, there is much debate over the advantages and disadvantages of heavyweight and lightweight methods, and the various methods are constantly evolving.

Some methodologists think that these methods should be strictly followed and faithfully implemented in development. But not everyone has the conditions to implement them. In fact, which method a development effort adopts depends on many factors and is subject to environmental constraints.

Software Development Process

The software development process has evolved and improved along with technology. From the early waterfall (Waterfall) model, to the later spiral iterative (Spiral) development, to the recently rising agile methodologies (Agile), these approaches reflect the industry's changing awareness of the development process in different eras and its understanding of the methods suited to different types of projects.

Note the important distinction between the software development process and software process improvement. Terms such as ISO 15504, ISO 9000, CMM, and CMMI are elaborated within the framework of software process improvement; they provide a series of standards and policies to guide software organizations in improving the quality of their software development processes and the capability of the organization, and they do not give a specific definition of the development process itself.

Developments in Software Engineering

"Agile development" (Agile Development) is considered an important development in software engineering. It stresses that software development should be able to respond comprehensively to possible future changes and uncertainties.

Agile development is considered a "lightweight" approach. Among lightweight approaches, the best known should be "Extreme Programming" (Extreme Programming, XP for short).

Corresponding to lightweight approaches are "heavyweight methods." Heavyweight approaches emphasize placing the development process, rather than people, at the center. Examples of heavyweight methods include CMM/PSP/TSP.

Aspect-oriented programming (Aspect Oriented Programming, AOP for short) is considered another important development in software engineering in recent years. An aspect here refers to a collection of objects and functions that together complete one piece of functionality. Related topics include generic programming (Generic Programming) and templates.
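Full AOP frameworks such as AspectJ weave aspects into existing code. As a framework-free, hedged illustration of the underlying idea (a cross-cutting concern kept in one place instead of scattered through the code), the JDK's dynamic proxies can wrap every call on an interface with logging; all names here are hypothetical.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface OrderService {
    void placeOrder(String item);
}

class OrderServiceImpl implements OrderService {
    public void placeOrder(String item) {
        System.out.println("ordering " + item);
    }
}

public class LoggingAspectDemo {
    public static void main(String[] args) {
        OrderService target = new OrderServiceImpl();

        // The "aspect": logging lives in one place and applies to every
        // method of the interface, instead of being repeated in each method.
        InvocationHandler logging = (proxy, method, params) -> {
            System.out.println("before " + method.getName());
            Object result = method.invoke(target, params);
            System.out.println("after " + method.getName());
            return result;
        };

        OrderService service = (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[] { OrderService.class },
                logging);

        service.placeOrder("book");  // prints before / ordering book / after
    }
}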
Related topics in this regard include generic programming and templates.

Software engineering. From: /zh-cn/%E8%BD%AF%E4%BB%B6%E5%B7%A5%E7%A8%8B

Software engineering is the discipline that studies how to build and maintain effective, practical and high-quality software using engineering methods.

Software Engineering - Foreign Literature Translation

The strategic role of management information systems

After studying this chapter, you will be able to:

1. Analyze the six major types of information systems in organizations.
2. Describe the relationships among the various types of information systems.
3. Understand the characteristics of a strategic information system.
4. Describe how information systems can be used at three levels of business strategy.
5. Explain the problems of establishing and maintaining strategic information systems.

Orchids Paper Company ----- returning to the right direction

Orchids Paper Co. Ltd. has been a low-cost paper manufacturer producing napkins, handkerchief paper, tissues and toilet paper for fifty years. However, in the mid-1990s, the company lost its developmental way. To take advantage of the prosperous economy of the late 1980s, management began to squeeze into the booming private-label paper market in California (where the company was then headquartered). Unfortunately, Orchids nearly went bankrupt under the dual pressure of a high-cost strategy and the debt from a leveraged buyout. At that point, its raw material and production costs exceeded the revenue from its customers. Orchids was forced to file for bankruptcy in 1992 and 1995.

Orchids' new management team, led by general manager Mike Safe and chief financial officer Jim Swagerty, decided to focus on core markets with value-seeking customers. They moved the company from California to Pryor, Oklahoma, where utility costs are low (papermaking is a resource-intensive industry) and the company's recycled paper was marketable. They adopted a low-cost strategy under which the firm's production capacity is maximized while the company emphasizes timely delivery and lets customers clearly track the progress of their orders. Orchids' target market spans from Oklahoma to Atlanta.

Before the reorganization, Orchids was well known for poor service and late delivery. The company had not implemented sound operating and reporting practices, and the finance department could not provide timely and accurate information. Orchids installed a new manufacturing resource planning system (MRP-II) and a financial system. The two management software packages, from Marion, Ohio, monitor and coordinate sales, inventory and financial data, and provide chart-based daily operating reports for the company. Workers in all departments can directly access product and order information through a central server, linked to the stored data through 25 personal computers. Finance staff can also use the system to provide timely and accurate information on operating capacity, transportation and product availability, and to answer customers' questions. As a result, finance department staff use the financial capabilities to do more controlling and customer-service work. Because employees can easily obtain the immediate and accurate information needed for order delivery, Orchids can keep operating costs low. The system also allows Orchids to run properly without bloated management bodies and with a sharply reduced total workforce. Orchids became profitable again, and its organizational and technological changes won it a place in an industry traditionally monopolized by large companies.

Orchids Paper used information systems to gain a competitive advantage by providing low-cost, high-service products. However, more important than the technological leap itself is maintaining this competitive edge. Managers need to find ways to sustain this competitive advantage for many years. Specifically, managers face the following challenges:
1. Comprehensive integration: although the different systems in a company are designed to serve different levels and different departments, more and more companies are discovering the benefits of integrated systems. Many companies are pursuing enterprise resource planning (ERP). However, it is difficult and costly to integrate systems so that different organizational levels and functions can exchange information through the technology. Managers need to determine which levels of information systems need to be integrated and how much this will cost.

2. The ability to sustain competitive advantage: the competitive advantage brought by strategic information systems does not necessarily last long enough to ensure long-term profitability. Competitors can install strategic information systems too, and competitive advantage is not always maintained, because markets change rapidly. The business and economic environment changes as well; the Internet can make some companies' competitive advantages disappear quickly. Technology and customer expectations also change. Classic strategic information systems, such as American Airlines' SABRE computer reservation system, Citibank's ATM system and Federal Express's package tracking system, benefited their owners because they were the first in their respective industries. Later, competitors deployed corresponding systems. Relying on information systems alone cannot secure a lasting business advantage. Information systems originally used for strategic decision-making often become mere survival tools (measures every company must take in order to survive in the industry), or may even inhibit an organization from making the decisions necessary for its future success.

Orchids Paper Company's experience shows how important information systems are in supporting an organization's goals and in keeping a company in a leading competitive position. In this chapter, we introduce the functions of the various information systems in an organization. Then we present the competitive issues companies face and the methods by which information systems provide competitive advantage at three different business levels.

2.1 The functions of the major information systems in an organization

Because the various departments in an organization attend to different targets and have different characteristics and levels, organizations use different kinds of information systems. No single system can provide all the information an organization requires. Figure 2-1 describes how the various kinds of information systems fit into an organization. In the figure, the organization is divided into the strategic, management, knowledge and operational levels, and then further divided into functional areas such as sales, marketing, production, finance, accounting and human resources. Information systems are set up to meet the requirements of these different organizational levels and functions.

2.2.1 Four different kinds of information systems

Four different kinds of information systems serve the different levels of an organization: operational-level systems, knowledge-level systems, management-level systems and strategic-level systems. Operational-level systems support operational managers by tracking the organization's elementary activities and transactions, such as the progress of sales, cash deposits, payroll, decisions on customer credit, and the flow of materials in a factory.
At this level, the main purpose of the systems is to answer routine questions and to track the flow of transactions, logistics and inventory through the organization. Has Mr. William's payment been received, and if not, what is the problem? To answer such questions, the information must be easily available, current and accurate. Examples of operational-level information systems include a system that records bank deposits from ATM data, and a system that records the daily hours worked by employees in a factory.

Knowledge-level information systems support the knowledge and data workers in an organization. The purpose of knowledge-level systems is to help the business discover new knowledge, integrate that knowledge into the enterprise, and help the company control the flow of documents.

Software Engineering Graduation Project - Foreign Literature Translation

- Choosing tools: consider the usability and price of translation tools.
- Citing literature: make sure the foreign literature cited comes from reliable, accurate sources.
- Accurate translation: keep the original meaning unchanged, with fluent and natural language.
- Standard formatting: follow the formatting requirements of academic papers, including title, author, abstract and keywords.
- Organising the literature: classify and organise the foreign literature for easy reference.
- Proofreading: check for grammar, spelling and punctuation errors.
- Revising: adjust sentence structure and replace wording to improve the accuracy and fluency of expression.
- Checking against the original: make sure the original meaning is conveyed accurately.
- Teamwork: several people cooperate, proofreading and revising each other's work.

Techniques for translating foreign software engineering literature

- Master the professional terminology and common expressions.
- Understand the context and semantics of the original text.
- Pay attention to the tone and rhetoric of the original text.
- Interpret the meaning of the original in combination with its context.
- Master professional terminology: be familiar with the relevant terms of the software engineering field to ensure accurate translation.
- Keep sentence structure clear: arrange sentence structures reasonably so that the translation is fluent and easy to understand.
- Keep the semantics coherent: maintain semantic coherence in the translation and avoid ambiguity or difficulty of understanding.
- Intelligent editing: intelligently optimise machine translation results to reduce manual intervention.
- Cross-language information retrieval: use artificial intelligence technology to quickly find and obtain foreign literature resources.

- Globalisation promotes the development of cross-cultural communication.
- The application and prospects of artificial intelligence technology in cross-cultural communication.
- The role of software engineering foreign literature translation in cross-cultural communication.
- Challenges and opportunities of language translation in cross-cultural communication.
- Applications of artificial intelligence and machine learning in software engineering.

- Context understanding: the context of foreign literature may differ from Chinese; the original context and meaning must be understood accurately and translated appropriately.
- Cultural background: different countries and regions may differ in cultural background, historical tradition and values; cultural elements in foreign literature need appropriate explanation and adjustment.
- Professional knowledge: the software engineering field involves a great deal of specialised knowledge; the relevant content of foreign literature must be deeply understood and translated to ensure accuracy and professionalism.

Software Engineering - Foreign Literature Translation

Artificial Immune Systems: A Novel Paradigm to Pattern Recognition

Abstract

This chapter introduces a new computational intelligence paradigm to perform pattern recognition, named Artificial Immune Systems (AIS). AIS take inspiration from the immune system in order to build novel computational tools to solve problems in a vast range of domain areas. The basic immune theories used to explain how the immune system performs pattern recognition are described and their corresponding computational models are presented. This is followed by a survey from the literature of AIS applied to pattern recognition. The chapter is concluded with a trade-off between AIS and artificial neural networks as pattern recognition paradigms.

Keywords: Artificial Immune Systems; Negative Selection; Clonal Selection; Immune Network

1 Introduction

The vertebrate immune system (IS) is one of the most intricate bodily systems, and its complexity is sometimes compared to that of the brain. With the advances in biology and molecular genetics, the comprehension of how the immune system behaves is increasing very rapidly. The knowledge about the IS functioning has unraveled several of its main operative mechanisms. These mechanisms have proved to be very interesting not only from a biological standpoint, but also under a computational perspective. Similarly to the way the nervous system inspired the development of artificial neural networks (ANN), the immune system has now led to the emergence of artificial immune systems (AIS) as a novel computational intelligence paradigm.

Artificial immune systems can be defined as abstract or metaphorical computational systems developed using ideas, theories, and components extracted from the immune system. Most AIS aim at solving complex computational or engineering problems, such as pattern recognition, elimination, and optimization. This is a crucial distinction between AIS and theoretical immune system models. While the former are devoted primarily to computing, the latter are focused on modeling the IS in order to understand its behavior, so that contributions can be made to the biological sciences. Neither use excludes the other, however, and, indeed, theoretical models of the IS have contributed to the development of AIS.

This chapter is organized as follows. Section 2 describes relevant immune theories for pattern recognition and introduces their computational counterparts. In Section 3, we briefly describe how to model pattern recognition in artificial immune systems, and present a simple illustrative example. Section 4 contains a survey of AIS for pattern recognition, and Section 5 contrasts the use of AIS with the use of ANN when applied to pattern recognition tasks. The chapter is concluded in Section 6.

2 Biological and Artificial Immune Systems

All living organisms are capable of presenting some type of defense against foreign attack. The evolution of the species that resulted in the emergence of the vertebrates also led to the evolution of the immune system of these species. The vertebrate immune system is particularly interesting due to its several computational capabilities, as will be discussed throughout this section.

The immune system of vertebrates is composed of a great variety of molecules, cells, and organs spread all over the body. There is no central organ controlling the functioning of the immune system, and there are several elements in transit and in different compartments performing complementary roles.
The main task of the immune system is to survey the organism in the search for malfunctioning cells from its own body (e.g., cancer and tumour cells) and foreign disease-causing elements (e.g., viruses and bacteria). Every element that can be recognized by the immune system is called an antigen (Ag). The cells that originally belong to our body and are harmless to its functioning are termed self (or self antigens), while the disease-causing elements are named nonself (or nonself antigens). The immune system thus has to be capable of distinguishing between what is self and what is nonself; this process, called self/nonself discrimination, is performed basically through pattern recognition events.

From a pattern recognition perspective, the most appealing characteristic of the IS is the presence of receptor molecules, on the surface of immune cells, capable of recognising an almost limitless range of antigenic patterns. One can identify two major groups of immune cells, known as B-cells and T-cells. These two types of cells are rather similar, but differ in how they recognise antigens and in their functional roles. B-cells are capable of recognising antigens free in solution (e.g., in the blood stream), while T-cells require antigens to be presented by other accessory cells.

Antigenic recognition is the first pre-requisite for the immune system to be activated and to mount an immune response. The recognition has to satisfy some criteria. First, the cell receptor recognises an antigen with a certain affinity, and a binding between the receptor and the antigen occurs with a strength proportional to this affinity. If the affinity is greater than a given threshold, named the affinity threshold, then the immune system is activated. The nature of the antigen, the type of recognising cell, and the recognition site also influence the outcome of an encounter between an antigen and a cell receptor.

The human immune system contains an organ called the thymus, located behind the breastbone, which performs a crucial role in the maturation of T-cells. After T-cells are generated, they migrate into the thymus, where they mature. During this maturation, all T-cells that recognise self antigens are excluded from the population of T-cells, a process termed negative selection. If a B-cell encounters a nonself antigen with a sufficient affinity, it proliferates and differentiates into memory and effector cells, a process named clonal selection. In contrast, if a B-cell recognises a self antigen, the result might be suppression, as proposed by the immune network theory. In the following subsections, each of these processes (negative selection, clonal selection, and network theory) will be described separately, along with their computational algorithm counterparts.

2.1 Negative Selection

The thymus is responsible for the maturation of T-cells, and is protected by a blood barrier capable of efficiently excluding nonself antigens from the thymic environment. Thus, most elements found within the thymus are representative of self instead of nonself. As an outcome, the T-cells containing receptors capable of recognising these self antigens presented in the thymus are eliminated from the repertoire of T-cells through a process named negative selection.
All T-cells that leave the thymus to circulate throughout the body are said to be tolerant to self, i.e., they do not respond to self.

From an information processing perspective, negative selection presents an alternative paradigm to perform pattern recognition by storing information about the complement set (nonself) of the patterns to be recognised (self). A negative selection algorithm has been proposed in the literature, with applications focused on the problem of anomaly detection, such as computer and network intrusion detection, time series prediction, image inspection and segmentation, and hardware fault tolerance. Given an appropriate problem representation (Section 3), define the set of patterns to be protected and call it the self-set (P). Based upon the negative selection algorithm, generate a set of detectors (M) that will be responsible for identifying all elements that do not belong to the self-set, i.e., the nonself elements.

After generating the set of detectors (M), the next stage of the algorithm consists in monitoring the system for the presence of nonself patterns (Fig. 2(b)). In this case, assume a set P* of patterns to be protected. This set might be composed of the set P plus other new patterns, or it can be a completely novel set. For every element of the detector set, which corresponds to the nonself patterns, check if it recognises (matches) an element of P*; if it does, then a nonself pattern has been recognized and an action has to be taken. The action resulting from the detection of nonself varies according to the problem under evaluation and extrapolates the pattern recognition scope of this chapter.
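As a rough sketch of the censoring and monitoring stages just described, the following minimal Java program generates random binary detectors, discards any that match a self string, and then flags monitored strings matched by a surviving detector. The string length, the detector count and the exact-match rule are illustrative assumptions (practical implementations usually use partial matching, such as the r-contiguous-bits rule described later).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Minimal negative-selection sketch: detectors are random bit strings
// that survive censoring against the self-set and then monitor new data.
public class NegativeSelection {
    static final int LEN = 8;            // illustrative string length
    static final Random RNG = new Random(42);

    static String randomBits() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < LEN; i++) sb.append(RNG.nextBoolean() ? '1' : '0');
        return sb.toString();
    }

    // Censoring stage: keep only candidate detectors that match no self pattern.
    static List<String> generateDetectors(Set<String> self, int count) {
        List<String> detectors = new ArrayList<>();
        while (detectors.size() < count) {
            String candidate = randomBits();
            if (!self.contains(candidate)) detectors.add(candidate);
        }
        return detectors;
    }

    // Monitoring stage: any string matched by a detector is flagged as nonself.
    static boolean isNonself(List<String> detectors, String pattern) {
        return detectors.contains(pattern);
    }

    public static void main(String[] args) {
        Set<String> self = Set.of("00000000", "11111111");   // toy self-set
        List<String> detectors = generateDetectors(self, 20);
        System.out.println("10110001 nonself? " + isNonself(detectors, "10110001"));
    }
}
```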
2.2 Clonal Selection

Complementary to the role of negative selection, clonal selection is the theory used to explain how an immune response is mounted when a nonself antigenic pattern is recognised by a B-cell. In brief, when a B-cell receptor recognises a nonself antigen with a certain affinity, it is selected to proliferate and produce antibodies in high volumes. The antibodies are soluble forms of the B-cell receptors that are released from the B-cell surface to cope with the invading nonself antigen. Antibodies bind to antigens, leading to their eventual elimination by other immune cells. Proliferation in the case of immune cells is asexual, a mitotic process; the cells divide themselves (there is no crossover). During reproduction, the B-cell progenies (clones) undergo a hypermutation process that, together with a strong selective pressure, results in B-cells with antigenic receptors presenting higher affinities with the selective antigen. This whole process of mutation and selection is known as the maturation of the immune response and is analogous to the natural selection of species. In addition to differentiating into antibody-producing cells, the activated B-cells with high antigenic affinities are selected to become memory cells with long life spans. These memory cells are pre-eminent in future responses to this same antigenic pattern, or a similar one.

Other important features of clonal selection, relevant from the viewpoint of computation, are:

1. An antigen selects several immune cells to proliferate. The proliferation rate of each immune cell is proportional to its affinity with the selective antigen: the higher the affinity, the higher the number of offspring generated, and vice-versa;

2. In complete opposition to the proliferation rate, the mutation suffered by each immune cell during reproduction is inversely proportional to the affinity of the cell receptor with the antigen: the higher the affinity, the smaller the mutation, and vice-versa.

Some authors have argued that a genetic algorithm without crossover is a reasonable model of clonal selection. However, the standard genetic algorithm does not account for important properties such as affinity-proportional reproduction and mutation. Other authors proposed a clonal selection algorithm, named CLONALG, to fulfil these basic processes involved in clonal selection. This algorithm was initially proposed to perform pattern recognition and then adapted to solve multi-modal optimisation tasks. Given a set of patterns to be recognised (P), the basic steps of the CLONALG algorithm are as follows:

1. Randomly initialise a population of individuals (M);
2. For each pattern of P, present it to the population M and determine its affinity (match) with each element of the population M;
3. Select n1 of the highest affinity elements of M and generate copies of these individuals proportionally to their affinity with the antigen. The higher the affinity, the higher the number of copies, and vice-versa;
4. Mutate all these copies with a rate inversely proportional to their affinity with the input pattern: the higher the affinity, the smaller the mutation rate, and vice-versa;
5. Add these mutated individuals to the population M and reselect n2 of these maturated (optimised) individuals to be kept as memories of the system;
6. Repeat Steps 2 to 5 until a certain criterion is met, such as a minimum pattern recognition or classification error.

Note that this algorithm allows the artificial immune system to become increasingly better at its task of recognising patterns (antigens). Thus, based upon an evolution-like behaviour, CLONALG learns to recognise patterns.
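The following sketch is one possible reading of the CLONALG loop above for binary patterns and a single antigen. The population size, the value of n1, the cloning factor and the mutation schedule are all illustrative assumptions, not values from the chapter.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// One possible reading of the CLONALG loop: affinity = number of matching
// bits; cloning grows with affinity, mutation shrinks with it.
public class Clonalg {
    static final int LEN = 16, POP = 30, N1 = 5, GENS = 100;
    static final Random RNG = new Random(1);

    static int affinity(boolean[] a, boolean[] b) {
        int m = 0;
        for (int i = 0; i < LEN; i++) if (a[i] == b[i]) m++;
        return m;
    }

    static boolean[] randomCell() {
        boolean[] c = new boolean[LEN];
        for (int i = 0; i < LEN; i++) c[i] = RNG.nextBoolean();
        return c;
    }

    static boolean[] mutate(boolean[] c, double rate) {
        boolean[] m = c.clone();
        for (int i = 0; i < LEN; i++) if (RNG.nextDouble() < rate) m[i] = !m[i];
        return m;
    }

    public static void main(String[] args) {
        boolean[] antigen = randomCell();                  // pattern to learn
        List<boolean[]> pop = new ArrayList<>();
        for (int i = 0; i < POP; i++) pop.add(randomCell());

        for (int g = 0; g < GENS; g++) {
            // Step 3: keep the n1 best cells by affinity with the antigen.
            pop.sort((a, b) -> affinity(b, antigen) - affinity(a, antigen));
            List<boolean[]> next = new ArrayList<>(pop.subList(0, N1));
            for (boolean[] cell : pop.subList(0, N1)) {
                int aff = affinity(cell, antigen);
                int clones = 1 + aff / 4;                  // more clones for higher affinity
                double rate = (LEN - aff) / (double) LEN;  // less mutation for higher affinity
                for (int c = 0; c < clones; c++) next.add(mutate(cell, rate));
            }
            while (next.size() < POP) next.add(randomCell()); // metadynamics: fresh cells
            pop = next;
        }
        pop.sort((a, b) -> affinity(b, antigen) - affinity(a, antigen));
        System.out.println("best affinity: " + affinity(pop.get(0), antigen) + "/" + LEN);
    }
}
```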
2.3 Immune Network

The immune network theory proposes that the immune system has a dynamic behaviour even in the absence of external stimuli. It is suggested that the immune cells and molecules are capable of recognising each other, which endows the system with an eigen-behaviour that is not dependent on foreign stimulation. Several immunologists have refuted this theory; however, its computational aspects are relevant, and it has proved itself to be a powerful model for computational systems.

According to the immune network theory, the receptor molecules contained on the surface of the immune cells present markers, named idiotopes, which can be recognized by receptors on other immune cells. These idiotopes are displayed in and/or around the same portions of the receptors that recognise nonself antigens. To explain the network theory, assume that a receptor (antibody) Ab1 on a B-cell recognises a nonself antigen Ag. Assume now that this same receptor Ab1 also recognises an idiotope i2 on another B-cell receptor Ab2. Keeping track of the fact that i2 is part of Ab2, Ab1 is capable of recognising both Ag and Ab2. Thus, Ab2 is said to be the internal image of Ag; more precisely, i2 is the internal image of Ag. The recognition of idiotopes on a cell receptor by other cell receptors leads to ever-increasing sets of connected cell receptors and molecules. Note that the network in this case is a network of affinities, different from the "hardwired" network of the nervous system.

As a result of the network recognition events, it was suggested that the recognition of a cell receptor by another cell receptor results in network suppression, whilst the recognition of an antigen by a cell receptor results in network activation and cell proliferation. The original theory did not account explicitly for the results of network activation and/or suppression, and the various artificial immune networks found in the literature model them in their own particular forms.

3 Modelling Pattern Recognition in AIS

Up to this point, the most relevant immune principles and their corresponding computational counterparts to perform pattern recognition have been presented. In order to apply these algorithms to computational problems, a limited number of other aspects of artificial immune systems, not as yet covered, need to be specified. The first aspect to introduce is the most relevant representations to be applied to model self and nonself patterns. Here the self patterns correspond to the components of the AIS responsible for recognising the input patterns (nonself). Secondly, the mechanism for evaluating the degree of match (affinity), or degree of recognition, of an input pattern by an element of the AIS has to be discussed.

To model immune cells, molecules, and the antigenic patterns, the shape-space approach is usually adopted. Although AIS model recognition through pattern matching, given certain affinity functions to be described further, performing pattern recognition through complementarity or similarity is based more on practical aspects than on biological plausibility. The shape-space approach proposes that an attribute string s = ⟨s1, s2, …, sL⟩ in an L-dimensional shape-space S (s ∈ S^L) can represent any immune cell or molecule. Each attribute of this string is supposed to represent a feature of the immune cell or molecule, such as its charge, van der Waals interactions, etc. In the development of AIS, the mapping from the attributes to their biological counterparts is usually not relevant. The type of attributes used to represent the string partially defines the shape-space under study, and is highly dependent on the problem domain. Any shape-space constructed from a finite alphabet of length k constitutes a k-ary Hamming shape-space. As an example, an attribute string built upon the set of binary elements {0,1} corresponds to a binary Hamming shape-space. One can think, in this case, of the problem of recognising a set of characters represented by matrices composed of 0's and 1's, where each element of a matrix corresponds to a pixel in the character. If the elements of s are represented by real-valued vectors, then we have a Euclidean shape-space. Most of the AIS found in the literature employ binary Hamming or Euclidean shape-spaces. Other types of shape-spaces are also possible, such as symbolic shape-spaces, which combine different (symbolic) attributes in the representation of a single string s. These are usually found in data mining applications, where the data might contain symbolic information like the age, name, etc., of a set of patterns.

Another important characteristic of artificial immune systems is that most of them are population based. This means that they are composed of a set of individuals, representing immune cells and molecules, which have to perform a given role; in our context, pattern recognition.
If we recapitulate the three immune processes reviewed (negative selection, clonal selection, and the immune network), all of them rely on a population M of individuals to recognise a set P of patterns. The negative selection algorithm has to define a set of detectors for nonself patterns; clonal selection reproduces, maturates, and selects self cells to recognise a set of nonself; and the immune network maintains a set of individuals, connected as a network, to recognise self and nonself.

Consider first the binary Hamming shape-space case, which is the most widely used. There are several expressions that can be employed to determine the degree of match, or affinity, between an element of P and an element of M. The simplest case is to calculate the Hamming distance (D_H) between these two elements, as given by Eq. (1). Another approach is to search for a sequence of r contiguous bits: if the number of r-contiguous matches between the strings is greater than a given threshold, then recognition is said to have occurred. As the last approach to be mentioned here, we can describe the affinity measure of Hunt, given by Eq. (2). This last method has the advantage that it favours sequences of complementary matches, thus searching for similar regions between the attribute strings (patterns).

D_H = \sum_{i=1}^{L} \delta_i, \quad \delta_i = \begin{cases} 1 & \text{if } p_i \neq m_i \\ 0 & \text{otherwise} \end{cases}   (1)

D = D_H + \sum_i 2^{l_i}   (2)

where l_i is the length of the i-th sequence of matching bits longer than 2.

In the case of Euclidean shape-spaces, the Euclidean distance can be used to evaluate the affinity between any two components of the system. Other approaches, such as the Manhattan distance, may also be employed. Note that all the methods described rely basically on determining the match between strings. However, there are AIS in the literature that take into account other aspects, such as the number of patterns matched by each antibody.
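A small Java sketch of the two measures in Eqs. (1) and (2) as reconstructed above: the Hamming distance over equal-length bit strings, and Hunt's measure, which adds 2^l for every run of l matching bits with l greater than 2. Since Eq. (2) is itself a reconstruction from a garbled source, treat the code as illustrative.

```java
// Affinity measures over equal-length bit strings, following the reading
// of Eqs. (1) and (2) given in the text above.
public class Affinity {

    // Eq. (1): Hamming distance, the count of positions where p and m differ.
    static int hamming(String p, String m) {
        int d = 0;
        for (int i = 0; i < p.length(); i++)
            if (p.charAt(i) != m.charAt(i)) d++;
        return d;
    }

    // Eq. (2): Hunt's measure, the Hamming distance plus 2^l for every run
    // of l matching bits with l > 2, rewarding long similar regions.
    static int hunt(String p, String m) {
        int d = hamming(p, m);
        int run = 0;
        for (int i = 0; i <= p.length(); i++) {
            boolean match = i < p.length() && p.charAt(i) == m.charAt(i);
            if (match) {
                run++;
            } else {
                if (run > 2) d += 1 << run;   // add 2^run for the finished run
                run = 0;
            }
        }
        return d;
    }

    public static void main(String[] args) {
        String p = "1011011100", m = "1011001101";
        System.out.println("Hamming: " + hamming(p, m));
        System.out.println("Hunt:    " + hunt(p, m));
    }
}
```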
4 A Survey of AIS for Pattern Recognition

The applications of artificial immune systems are vast, ranging from machine learning to robotic autonomous navigation. This section reviews some of the works from the AIS literature applied to the pattern recognition domain. The rationale is to provide a guide to the literature and a brief description of the scope of applications of the algorithms. The section is divided into two parts for ease of comprehension: 1) computer security, and 2) other applications. The problem of protecting computers (or networks of computers) from viruses, unauthorised users, etc., constitutes a rich field of research for pattern recognition systems. Due mainly to the appealing intuitive metaphor of building artificial immune systems to detect computer viruses, there has been great interest from the computer science community in this particular application. The use of the negative and clonal selection algorithms has been widely tested on this application: the former because it is an inherent anomaly (change) detection system, constituting a particular case of a pattern recognition device; the latter, the clonal selection algorithm, because it has been used in conjunction with negative selection due to its learning capabilities. Other more classical pattern recognition tasks, such as character recognition and data analysis, have also been studied within artificial immune systems.

5 AIS and ANN for Pattern Recognition

Similar to the use of artificial neural networks, performing pattern recognition with an AIS usually involves three stages: 1) defining a representation for the patterns; 2) adapting (learning or evolving) the system to identify a set of typical data; and 3) applying the system to recognise a set of new patterns (which might contain patterns used in the adaptive phase).

Referring to the three immune algorithms presented (negative selection, clonal selection, and immune network), coupled with the process of modelling pattern recognition in the immune system as described in Section 3, this section contrasts AIS and ANN with a focus on pattern recognition applications. The discussion is based on computational aspects, such as basic components, adaptation mechanisms, etc. Common neural networks for pattern recognition are considered, such as single- and multi-layer perceptrons, associative memories, and self-organising networks. All these networks are characterised by sets of units (artificial neurons); they adapt to the environment through a learning (or storage) algorithm, they can have their architectures dynamically adapted along with the weights, and they have their basic knowledge stored in the connection strengths.

Components: The basic unit of an AIS is an attribute string s (along with its connections in network models) represented in the appropriate shape-space. This string s might correspond to an immune cell or molecule. In an ANN, the basic unit is an artificial neuron composed of an activation function, a summing junction, connection strengths, and an activation threshold. While artificial neurons are usually processing elements, attribute strings representing immune cells and molecules are information storage and processing components.

Location of the components: In immune network models, the cells and molecules usually present a dynamic behaviour that tries to mimic or counteract the environment. This way, the network elements will be located according to the environmental stimuli. Unlike the immune network models, ANN have their neurons positioned in fixed, predefined locations in the network. Some neural network models also adopt fixed neighbourhood patterns for the neurons. If a network pattern of connectivity is not adopted for the AIS, each individual element will have a position in the population that might vary dynamically. Also, a metadynamic process might allow the introduction and/or elimination of particular units.

Structure: In negative and clonal selection AIS, the components are usually structured around matrices representing repertoires or populations of individuals. These matrices might have fixed or variable dimensions. In artificial immune networks and artificial neural networks, the components of the population are interconnected and structured around patterns of connectivity. Artificial immune networks usually have an architecture that follows the spatial distribution of the antigens represented in shape-space, while ANN usually have pre-defined architectures, with weights biased by the environment.

Memory: The attribute strings representing the repertoire(s) of immune cells and molecules, and their respective numbers, constitute most of the knowledge contained in an artificial immune system.
Furthermore, parameters like the affinity threshold can also be considered part of the memory of an AIS. In artificial immune network models, the connection strengths among units also carry endogenous and exogenous information, i.e., they quantify the interactions of the elements of the AIS themselves and also with the environment. In most cases, memory is content-addressable and distributed. In the standard (earliest) neural network models, knowledge was stored only in the connection strengths of individual neurons. In more sophisticated strategies, such as constructive and pruning algorithms, and networks with self-adaptive parameters, the final number of network layers, neurons, and connections, and the shapes of their respective activation functions, are also part of the network knowledge. The memory is usually self-associative or content-addressable, and distributed.

Adaptation: Adaptation usually refers to the alteration or adjustment in the structure or behaviour of a system so that its pattern of response to other components of the system and to the environment changes. Although both evolutionary and learning processes involve adaptation, there is a conceptual difference between them. Evolution can be seen as a change in the genetic composition of a population of individuals during successive generations. It is a result of natural selection acting on the genetic variation among individuals. In contrast, learning can be seen as a long-lasting change in behaviour as a result of previous experience. While AIS might present both types of adaptation, learning and evolution, ANN adapt basically through learning procedures.

Plasticity and diversity: Metadynamics refers basically to two processes: 1) the recruitment of new components into the system, and 2) the elimination of useless elements from the system. As consequences of metadynamics, the architecture of the system can be more appropriately adapted to the environment, and its search capability (diversity) increased. In addition, metadynamics reduces redundancy within the system by eliminating useless components. Metadynamics in the immune algorithms corresponds to a continuous insertion and elimination of the basic elements (cells/molecules) composing the system. In ANN, metadynamics is equivalent to the pruning and/or insertion of new connections, units, and layers in the network.

Interaction with other components: The interaction among cells and molecules in AIS occurs through the recognition (matching) of attribute strings by cell receptors (other attribute strings). In immune network models, the cells usually have weighted connections that allow them to interact with (recognise and be recognised by) other cells. These weights can be stimulatory or suppressive, indicating the degree of interaction with other cells. Artificial neural networks are composed of a set (or sets) of interconnected neurons whose connection strengths assume any positive or negative values, indicating an excitatory or inhibitory activation. The interaction with other neurons in the network occurs explicitly through these connection strengths, where a single neuron receives and processes inputs from the environment (or from network neurons) in the same or other layer(s). An individual neuron can also receive an input from itself.

Interaction with the environment: In pattern recognition applications, the environment is usually represented as a set of input patterns to be learnt, recognised, and/or classified.
In AIS, an attribute string represents the genetic information of the immune cells and molecules. This string is compared with the patterns received from the environment. If there is an explicit antigenic population to be recognised (a set of patterns), all or some antigens can be presented to the whole AIS or to parts of it. At the end of the learning or recognition phase, each component of the AIS might recognise some of the input patterns. The artificial neurons have connections that receive input signals from the environment. These signals are processed by neurons and compared with the information contained in the artificial neural network, such as the connection strengths. After learning, the whole ANN might (approximately) recognise the input patterns.

Threshold: Under the shape-space formalism, each component of the AIS interacts with other cells or molecules whose complements lie within a small surrounding region, characterised by a parameter named the affinity threshold. This threshold determines the degree of recognition between the immune cells and the presented input pattern. Most current models of neurons include a bias (or threshold). This threshold determines the neuron activation, i.e., it indicates how sensitive the neuron activation will be with relation to the input signal.

Robustness: Both paradigms are highly robust, due mainly to the presence of populations or networks of components. These elements (cells, molecules, and neurons) can act collectively, co-operatively, and competitively to accomplish their particular tasks. As knowledge is distributed over the many components of the system, damage or failure to individual elements might not significantly deteriorate the overall performance. Both AIS and ANN are highly flexible and noise tolerant. An interesting property of immune network models and negative selection algorithms is that they are also self-tolerant, i.e., they learn to recognise themselves. In immune network models, the cells interact with each other and usually present connection strengths quantifying these interactions. In negative selection algorithms, self-knowledge is achieved by storing information about the complement of self.

State: At each iteration, time step or interval, the state of an AIS corresponds to the concentration of the immune cells and molecules, and/or their affinities. In the case of immune network models, the connection strengths among units are also part of the current state of the system. In artificial neural networks, the activation level of the output neurons determines the state of the system. Notice that this activation level of the output neurons takes into account the connection strengths and their respective values, the shape of the activation functions, and the network dimension.

Control: Any immune principle, theory or process can be used to control the types of interaction among the many components of an AIS. As examples, clonal selection can be employed to build an antibody repertoire capable of recognising a set of antigenic patterns, and negative selection can be used to define a set of antibodies (detectors) for the recognition of anomalous patterns. Differential or difference equations can be applied to control how an artificial immune network will interact with itself and the environment.
Basically, three learning paradigms can be used to train an ANN: 1) supervised, 2) unsupervised, and 3) reinforcement learning.

Generalisation capability: In the AIS case, cells and molecules capable of recognising a certain pattern can recognise not only this specific pattern, but also any structurally related pattern.

University Graduation Thesis --- Chinese-English Translation of Foreign Literature for Software Majors

Object landscapes and lifetimes

Technically, OOP is just about abstract data typing, inheritance, and polymorphism, but other issues can be at least as important. The remainder of this section will cover these issues.

One of the most important factors is the way objects are created and destroyed. Where is the data for an object and how is the lifetime of the object controlled? There are different philosophies at work here. C++ takes the approach that control of efficiency is the most important issue, so it gives the programmer a choice. For maximum run-time speed, the storage and lifetime can be determined while the program is being written, by placing the objects on the stack (these are sometimes called automatic or scoped variables) or in the static storage area. This places a priority on the speed of storage allocation and release, and control of these can be very valuable in some situations. However, you sacrifice flexibility because you must know the exact quantity, lifetime, and type of objects while you're writing the program. If you are trying to solve a more general problem such as computer-aided design, warehouse management, or air-traffic control, this is too restrictive.

The second approach is to create objects dynamically in a pool of memory called the heap. In this approach, you don't know until run-time how many objects you need, what their lifetime is, or what their exact type is. Those are determined at the spur of the moment while the program is running. If you need a new object, you simply make it on the heap at the point that you need it. Because the storage is managed dynamically, at run-time, the amount of time required to allocate storage on the heap is significantly longer than the time to create storage on the stack. (Creating storage on the stack is often a single assembly instruction to move the stack pointer down, and another to move it back up.) The dynamic approach makes the generally logical assumption that objects tend to be complicated, so the extra overhead of finding storage and releasing that storage will not have an important impact on the creation of an object. In addition, the greater flexibility is essential to solve the general programming problem.

Java uses the second approach exclusively. Every time you want to create an object, you use the new keyword to build a dynamic instance of that object.

There's another issue, however, and that's the lifetime of an object. With languages that allow objects to be created on the stack, the compiler determines how long the object lasts and can automatically destroy it. However, if you create it on the heap, the compiler has no knowledge of its lifetime. In a language like C++, you must determine programmatically when to destroy the object, which can lead to memory leaks if you don't do it correctly (and this is a common problem in C++ programs). Java provides a feature called a garbage collector that automatically discovers when an object is no longer in use and destroys it. A garbage collector is much more convenient because it reduces the number of issues that you must track and the code you must write. More important, the garbage collector provides a much higher level of insurance against the insidious problem of memory leaks (which has brought many a C++ project to its knees).

The rest of this section looks at additional factors concerning object lifetimes and landscapes.
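A minimal sketch of the heap-based approach just described; the class and field names are invented for illustration.

```java
// Minimal sketch: every Java object lives on the heap and is created
// with new; the garbage collector reclaims it once it is unreachable.
public class Lifetime {
    static class Account {
        String owner;
        Account(String owner) { this.owner = owner; }
    }

    public static void main(String[] args) {
        Account a = new Account("Alice");   // allocated on the heap at run-time
        a = new Account("Bob");             // the "Alice" object is now unreachable;
                                            // the garbage collector may reclaim it
        System.out.println(a.owner);        // prints "Bob"
    }
}
```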
1. The singly rooted hierarchy

One of the issues in OOP that has become especially prominent since the introduction of C++ is whether all classes should ultimately be inherited from a single base class. In Java (as with virtually all other OOP languages) the answer is "yes", and the name of this ultimate base class is simply Object. It turns out that the benefits of the singly rooted hierarchy are many.

All objects in a singly rooted hierarchy have an interface in common, so they are all ultimately the same type. The alternative (provided by C++) is that you don't know that everything is the same fundamental type. From a backward-compatibility standpoint this fits the model of C better and can be thought of as less restrictive, but when you want to do full-on object-oriented programming you must then build your own hierarchy to provide the same convenience that's built into other OOP languages. And in any class library you acquire, some other incompatible interface will be used. It requires effort (and possibly multiple inheritance) to work the new interface into your design. Is the extra "flexibility" of C++ worth it? If you need it (if you have a large investment in C), it's quite valuable. If you're starting from scratch, other alternatives such as Java can often be more productive.

All objects in a singly rooted hierarchy (such as Java provides) can be guaranteed to have certain functionality. You know you can perform certain basic operations on every object in your system. A singly rooted hierarchy, along with creating all objects on the heap, greatly simplifies argument passing (one of the more complex topics in C++).

A singly rooted hierarchy makes it much easier to implement a garbage collector (which is conveniently built into Java). The necessary support can be installed in the base class, and the garbage collector can thus send the appropriate messages to every object in the system. Without a singly rooted hierarchy and a system to manipulate an object via a reference, it is difficult to implement a garbage collector.

Since run-time type information is guaranteed to be in all objects, you'll never end up with an object whose type you cannot determine. This is especially important with system level operations, such as exception handling, and to allow greater flexibility in programming.
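A short illustration of the singly rooted hierarchy: any instance, whatever its class, can be treated as an Object and asked for its run-time type. The Circle class here is an arbitrary example.

```java
// Everything ultimately extends Object, so run-time type information
// and the basic Object operations are available on any instance.
public class SingleRoot {
    static class Circle {}   // implicitly extends Object

    public static void main(String[] args) {
        Object o = new Circle();                    // upcast always succeeds
        System.out.println(o.getClass().getName()); // run-time type information
        System.out.println(o.equals(o));            // basic Object operation
    }
}
```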
2. Collection libraries and support for easy collection use

Because a container is a tool that you'll use frequently, it makes sense to have a library of containers that are built in a reusable fashion, so you can take one off the shelf and plug it into your program. Java provides such a library, which should satisfy most needs.

Downcasting vs. templates/generics

To make these containers reusable, they hold the one universal type in Java that was previously mentioned: Object. The singly rooted hierarchy means that everything is an Object, so a container that holds Objects can hold anything. This makes containers easy to reuse.

To use such a container, you simply add object references to it, and later ask for them back. But, since the container holds only Objects, when you add your object reference into the container it is upcast to Object, thus losing its identity. When you fetch it back, you get an Object reference, and not a reference to the type that you put in. So how do you turn it back into something that has the useful interface of the object that you put into the container?

Here, the cast is used again, but this time you're not casting up the inheritance hierarchy to a more general type; you cast down the hierarchy to a more specific type. This manner of casting is called downcasting. With upcasting, you know, for example, that a Circle is a type of Shape, so it's safe to upcast, but you don't know that an Object is necessarily a Circle or a Shape, so it's hardly safe to downcast unless you know that's what you're dealing with.

It's not completely dangerous, however, because if you downcast to the wrong thing you'll get a run-time error called an exception, which will be described shortly. When you fetch object references from a container, though, you must have some way to remember exactly what they are so you can perform a proper downcast.

Downcasting and the run-time checks require extra time for the running program, and extra effort from the programmer. Wouldn't it make sense to somehow create the container so that it knows the types that it holds, eliminating the need for the downcast and a possible mistake? The solution is parameterized types, which are classes that the compiler can automatically customize to work with particular types. For example, with a parameterized container, the compiler could customize that container so that it would accept only Shapes and fetch only Shapes.

Parameterized types are an important part of C++, partly because C++ has no singly rooted hierarchy. In C++, the keyword that implements parameterized types is "template". Java currently has no parameterized types, since it is possible for it to get by (however awkwardly) using the singly rooted hierarchy. However, a current proposal for parameterized types uses a syntax that is strikingly similar to C++ templates.
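A small sketch of the upcast/downcast round trip described above, using a raw (pre-generics) java.util.ArrayList as the container.

```java
import java.util.ArrayList;

// Pre-generics container use: elements go in as Object and must be
// downcast on the way out; a wrong downcast raises ClassCastException.
public class Downcast {
    public static void main(String[] args) {
        ArrayList list = new ArrayList();   // raw container of Object
        list.add("a string");               // upcast to Object on insertion

        String s = (String) list.get(0);    // downcast restores the identity
        System.out.println(s.length());     // the String interface is usable again

        // Integer i = (Integer) list.get(0);  // would fail at run time
        //                                     // with a ClassCastException
    }
}
```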

Software Engineering Graduation Thesis - English Translation

This paper analyses the mechanisms of Hibernate and Struts and proposes a J2EE application development strategy based on Hibernate and Struts. In this strategy, the model layer is implemented with Hibernate, while the view and controller are implemented with the Struts framework. This greatly reduces the coupling of the code and improves the development efficiency of the system.

Keywords: Hibernate, Struts, MVC, persistence layer

1 Introduction

With the gradual maturing of Java technology, the J2EE platform, as a standard platform for building enterprise-class applications, has developed substantially. Many application systems have been developed with the technologies covered by the J2EE specification, including Enterprise JavaBeans (EJB), Java Servlets, JavaServer Pages (JSP) and the Java Message Service (JMS). However, some problems have appeared in the traditional J2EE application development process: (1) the mismatch between the data model and the logic model: the databases in use today are basically relational, while Java is essentially an object-oriented language, and storing and reading objects through SQL and JDBC lowers programming efficiency and system maintainability; (2) traditional J2EE applications mostly adopt a heavyweight framework based on EJB, which is suitable for developing large enterprise applications, but developing and debugging with an EJB container wastes a great deal of time. To reduce the coupling of the code and improve development efficiency, this paper proposes a J2EE application development strategy based on the Struts and Hibernate frameworks.

2 The persistence layer and Hibernate

Hibernate is a persistence-layer framework, a tool that implements object/relational mapping (O/R mapping). It puts a lightweight object wrapper around JDBC so that programmers can operate on the database in object-oriented terms. It provides not only the mapping from Java classes to data tables, but also mechanisms for querying and recovering data. Compared with operating the database through JDBC and SQL, using Hibernate greatly improves implementation efficiency. The Hibernate framework uses configuration files to define the mapping between Java objects and data tables, and at a deeper level interprets the relations between data tables as relations between Java objects, such as inheritance and containment. By describing complicated relational algorithms in terms of objects through HQL statements, queries on the data are simplified to a large extent and development is speeded up. Hibernate has a simple but expressive API for issuing queries against the objects that the database rows represent. To create or modify these objects, the program just interacts with them and then tells Hibernate to save them. As a result, a great deal of business logic that wraps persistence operations no longer requires writing tedious JDBC statements, and the data persistence layer is simplified to the greatest possible extent.
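As a rough sketch of the persistence code this makes possible (the User class, its mapping files and the DAO shape are assumptions for illustration; the calls follow the classic Hibernate Session API and should be checked against the version in use):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

// Sketch of classic Hibernate usage: the User class and its mapping are
// assumed to be declared in hibernate.cfg.xml / User.hbm.xml.
public class UserDao {
    private static final SessionFactory FACTORY =
            new Configuration().configure().buildSessionFactory();

    // Persist a mapped object; Hibernate issues the INSERT itself.
    public void save(User user) {
        Session session = FACTORY.openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.save(user);
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }

    // HQL queries objects, not tables: "User" here is the class name.
    public User findByName(Session session, String name) {
        return (User) session
                .createQuery("from User u where u.name = :name")
                .setParameter("name", name)
                .uniqueResult();
    }
}
```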
3 Using Struts to implement the MVC architecture

MVC (Model-View-Controller) was proposed by Trygve Reenskaug and first applied in the Smalltalk-80 environment; it is the foundation on which many interactive interface systems are built. To cope with the variability of interface design, MVC decomposes an interactive system into three parts: model, view and controller.

The Model is an abstraction, independent of outward presentation and form, of the problem logic the software deals with. It encapsulates the core data, the problem logic and the computational relations, independent of the concrete interface presentation and I/O operations.

The View presents the model's data, logical relations and state to the user in a particular display form. It obtains the information to display from the model; for the same information there can be several different display forms, or views.

The Controller handles the user's interactive operations on the software. Its job is to control the propagation of any change in the model and to ensure the correspondence between the user interface and the model. It accepts user input, feeds the input back to the model, and thereby exercises control over the computational model; it is what makes the model and the views work in coordination. Usually one view corresponds to one controller.

Separating the model, the views and the controllers allows one model to have many display views. If the user changes the model's data through the controller of a certain view, all the other views that depend on these data should reflect the change. Therefore, whenever any data change occurs, the controller notifies all the views of the change, causing the displays to be refreshed. This is in fact a change-propagation mechanism for the model.

The Struts framework was first released as a constituent part of the Apache Jakarta project. It inherits all the characteristics of MVC and makes the corresponding changes and extensions according to the characteristics of J2EE. Struts combines the JSP, Java Servlet, JavaBean and tag-library technologies well.
In Struts, the controller role in MVC is played by the ActionServlet. The ActionServlet is a general-purpose control component. It provides the single entry point for handling all HTTP requests sent to Struts, and it intercepts and dispatches these requests to the corresponding Action classes (all of which are subclasses of Action). The control component is also responsible for filling the ActionForm (FormBean) with the corresponding request parameters and passing it to the Action class (ActionBean). The Action class accesses the core business logic, that is, it accesses JavaBeans or calls EJBs. Finally, the Action class passes control to the subsequent JSP files, and the JSP files generate the views. All this control logic is configured in the struts-config.xml file. In the Struts framework, the views are mainly generated from JSP pages; Struts provides a rich JSP tag library, which helps separate presentation logic from program logic. The model exists in the form of one or more JavaBeans. In Struts there are mainly three kinds of beans: Action, ActionForm, and EJB or JavaBean.

The Struts framework gives no concrete definition of how the model layer is to be implemented; in actual development, the model layer is usually closely tied to the business logic and has to operate on the underlying data. The following describes a development strategy that brings Hibernate into the model layer of the Struts framework and uses it for data wrapping and mapping, providing support for persistence.

4 Developing J2EE applications with Hibernate and Struts

4.1 System architecture

Figure 3 shows the architecture of a system developed according to the Hibernate and Struts strategy.

4.2 Development practice

The following shows, through development practice, how the above system architecture is used concretely, taking the user registration process, which is very common in J2EE applications, as an example. The registration process is straightforward: the user enters registration information on the registration page login.jsp, and the system verifies the information; if it is correct, registration succeeds, otherwise a corresponding error message is shown. In development, Eclipse was used as the development environment together with MyEclipse, a third-party plug-in that provides good control and support for Struts and Hibernate; Tomcat was used as the web server, and MySQL was chosen as the database. Hibernate is configured first: the automatically generated hibernate.cfg.xml is modified to configure the various database connection parameters and to define the data mapping files. Because the connection pool that comes with Hibernate is intended mainly for testing and its performance is not very good, it can be switched via JNDI to use Tomcat's connection pool.
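The controller piece of the registration example might look roughly like the following classic Struts 1 Action. RegisterForm, UserDao and the forward names are assumptions for illustration, while the execute signature follows the standard Struts 1 contract.

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

// The ActionServlet fills RegisterForm from the request and hands it to
// this Action, which calls the Hibernate-backed model and forwards to a
// JSP view. RegisterForm, UserDao, User and the forward names are all
// invented for this sketch.
public class RegisterAction extends Action {
    @Override
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {
        RegisterForm rf = (RegisterForm) form;
        UserDao dao = new UserDao();

        if (dao.exists(rf.getUsername())) {
            request.setAttribute("error", "user already exists");
            return mapping.findForward("failure");   // back to login.jsp
        }
        dao.save(new User(rf.getUsername(), rf.getPassword()));
        return mapping.findForward("success");       // registration done
    }
}
```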


Dalian University of Technology, undergraduate foreign-literature translation

Data mining for customer service support

School (Department): School of Software    Major: Software Engineering    Student: XXX    Student ID: xxx    Supervisor: XXX    Date of completion:

Dalian University of Technology

Data mining for customer service support

Abstract

In the traditional customer service support systems of manufacturing environments, a customer service database usually contains two forms of service information: (1) unstructured customer service reports that record machine faults and repair methods; (2) structured data on sales, employees, and customers generated by routine administrative operations. This study examines how data mining techniques can be applied to extract useful knowledge from such a database in support of two customer service activities: decision support and machine fault analysis. A data mining process based on the tool DBMiner is investigated for mining the structured administrative data for decision support. In addition, a data mining technique that combines neural networks, case-based reasoning, and rule-based reasoning is proposed; it makes it possible to mine the unstructured customer service records for machine fault analysis. The proposed technique has been implemented to support advanced fault analysis over the World Wide Web.

Keywords: data mining, knowledge discovery in databases, customer service support, decision support, machine fault diagnosis

1 Introduction

Customer service support is becoming an integral part of most manufacturing companies, at home and abroad, that produce expensive machinery and electronic equipment.

Many companies have a service department that provides installation, inspection, and repair services for customers worldwide. Although most companies have engineers to handle routine maintenance and minor faults, expert advice often has to be sought from the manufacturer for more complex maintenance and repair work. To keep customers satisfied, their requests must be answered immediately. Hotline service centers are therefore set up to help answer the common problems that customers encounter.

The service center receives reports of faulty machines, or enquiries, from customers over the telephone. When a problem is raised, a service engineer on the hotline suggests a series of checkpoints to the customer, based on past experience. These are extracted from the customer service database, which holds service records close or similar to the problem at hand. If this solves the problem, the customer can work through the remaining suggestions and confirm the outcome with the service center. If the problem persists, the center dispatches a service engineer, at the customer's request, to repair the machine on site. For the repair, the service engineer has to gather the past records of the customer's machine, the relevant manuals, and any parts that may be needed. This process is quite inconvenient.

At the end of each service session, a customer service report is written to record the new problem together with the repair actions or suggestions that corrected it. The database serves dissemination purposes and maintains a common knowledge base; the service center keeps the customer service reports in it. Besides maintaining this knowledge base of common problems and their repair methods, the customer service database also stores data on sales, employees, customers, and service reports. These data are used not only for routine administrative operations; they can also help the company make decisions on job assignment, promotion of service engineers, marketing, production, and the maintenance of different machine models. The customer service database thus acts as a repository of important information and knowledge that can be exploited to help the customer service department support its own activities.

The aim of this study is to discuss how data mining techniques can be applied to extract knowledge from the customer service database in support of two activities: decision support and machine fault analysis. The project is a collaborative effort between a diversified company and the School of Applied Science, Nanyang Technological University, Singapore. The company manufactures and supplies equipment, used mainly in the electronics industry, and installs it both internally and externally.

In a traditional help-desk service center, service engineers provide customer support over long-distance telephone calls. Such support is inefficient and ineffective: turnaround times are long, costs are high, and service quality is poor. With the advent of Internet technology, it has become possible to provide customer service support on the World Wide Web. This paper presents a Web-based intelligent fault diagnosis system, called WebService, which delivers customer service support over the Internet. In this support system, a hybrid approach based on case-based reasoning (CBR) and artificial neural networks (ANN) is applied to the intelligent diagnosis of machine faults. Rather than using conventional CBR techniques for indexing, retrieval, and adaptation, the hybrid CBR-ANN approach extracts knowledge from the service records in the customer service database and then uses that knowledge in the retrieval phase to recall similar service records.

2 Data mining

Data mining, also known as knowledge discovery in databases, is a rapidly emerging field. The technology is motivated by the need for new techniques to help analyze, understand, and even visualize the huge amounts of stored data gathered from business and scientific applications. It is the process of discovering interesting knowledge, such as patterns, associations, changes, anomalies, and significant structures, from large amounts of data stored in databases, data warehouses, or other information repositories. It can help companies make better decisions and stay competitive in the marketplace.

The major data mining functions that have been developed in the business and research communities include summarization, association, classification, prediction, and clustering. These functions can be implemented using a variety of technologies, such as database-oriented techniques, machine learning, and statistical techniques. In recent years, numerous data mining applications and prototypes have been developed for a variety of domains, including marketing, banking, finance, manufacturing, and health care. Data mining has also been applied to other kinds of data, such as temporal, spatial, telecommunications, web, and multimedia data. In general, the data mining process, and the techniques and functions to be applied, depend very much on the application domain and the kind of data available.

3 Customer service support

Service records are currently specified and stored in the customer service database. Each service record consists of the customer account information and the service details. It carries two kinds of information: the fault condition and the checkpoint information. The former holds the service engineer's description of the machine fault, while the latter suggests the actions or services to be applied to repair the machine, based on the fault condition reported by the customer. The checkpoint information comprises the name of a checkpoint group and the description of the checkpoint itself, together with a priority and an optional help document. The checkpoint group name subdivides a series of checkpoints into groups; each group is associated with a priority that determines the order in which it is applied, and the help document gives visual guidance on how to carry out the checks in the group. The fault condition and checkpoint information of a service record are illustrated in Fig. 2.
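As a hedged illustration of this record layout, the following Java sketch models a service record; the class and field names are assumptions made for illustration, not the paper's actual schema.

import java.util.ArrayList;
import java.util.List;

// One entry of the service-record knowledge base: customer account data
// plus the service details (fault condition and checkpoint information).
public class ServiceRecord {
    private String customerAccount;   // customer account information
    private String faultCondition;    // engineer's textual fault description
    private List<Checkpoint> checkpoints = new ArrayList<Checkpoint>();
    public List<Checkpoint> getCheckpoints() { return checkpoints; }
}

class Checkpoint {
    String groupName;      // subdivides the checkpoints into groups
    int priority;          // decides the order in which groups are applied
    String description;    // the action or service suggested for the repair
    String helpDocument;   // optional visual guidance; may be null
}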

In addition, the customer service database stores data related to sales, customers, and employees. Six main tables in the database account for this information. Two of them, the machine-fault table and the checkpoint table, store the knowledge base of common machine faults and the checkpoints that address them; these hold unstructured, textual data. The remaining four tables store information about customers, employees, sales, and maintenance, and these four hold only structured data. There are more than 70,000 service records, and since each fault condition has several checkpoints, there are more than 50,000 checkpoints. Information on more than 4,000 employees, 500 customers, 300 different machine models, and 10,000 sales transactions is stored as well.

3.1 Mining structured data

A list of popular data mining tools, available commercially or in the public domain, is given on the KDNuggets web site. These tools can be used to mine the structured data on sales and maintenance, and the profiles of employees and customers, in the customer service database. It is interesting to note that a large number of the tools support multiple approaches, i.e., more than one data mining technique. For example, Darwin, from Thinking Machines Corporation, supports neural networks, decision trees, the k-means algorithm, and case-based reasoning for the classification, prediction, and clustering functions. Some tools are intended for only one specific data mining function. This provides a certain flexibility: users can choose different data mining tools for their problem domains to achieve the best results. The choice of a data mining tool must be based on the application domain and its characteristics. Some applications need only one data mining function; others may need more than one.

In this study, DBMiner was chosen. The system was developed by the DBMiner research group at the advanced database systems research laboratory of Simon Fraser University, Canada. It combines data warehousing, on-line analytical processing (OLAP), and data mining techniques to support the discovery of various kinds of knowledge at multiple conceptual levels from large relational databases. DBMiner supports most of the major data mining functions and is implemented with a number of advanced data mining techniques. In addition, it provides multidimensional data views and interacts with standard data sources through the ODBC interface.

3.2 Mining unstructured data

Although DBMiner is an excellent data mining tool for large databases of structured data, it is not suitable for extracting information from the textual data in the customer service database. Since the knowledge of common faults and the corrective actions they call for is stored in textual form as fault conditions and checkpoints, new techniques are needed to extract this information from the database for machine fault analysis. This is known as text mining.

In the past, case-based reasoning has been applied successfully to fault analysis for customer service support. A CBR system relies on building up a large repository of diagnostic cases, which avoids the substantial effort of extracting and encoding expert domain knowledge. It is regarded as one of the most appropriate techniques for machine fault analysis, because it draws on the experience gained in past problem solving, in the tradition of artificial intelligence. However, the applicability of a CBR system depends critically on its adaptability, and on the case structures and the algorithms used for retrieval from large case databases. Most CBR systems use nearest-neighbour algorithms to retrieve cases from an indexed case database; this is inefficient, especially for large databases. Other CBR systems use hierarchical indexing, such as decision trees. Although this allows efficient retrieval, building a hierarchical index requires expert knowledge at the design stage of the project.

Neural network approaches offer an effective learning capability when concrete examples are available. A neural network can be trained with or without supervision, depending on the training method. It performs retrieval by nearest matching, since it stores weight vectors for the input patterns in the form of code-book or exemplar vectors. Matching is based on a competitive process that determines the output unit best matching the input vector, much like the nearest-neighbour rule.
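A minimal sketch of this competitive, winner-take-all matching follows; how fault descriptions are encoded as numeric input vectors is left open here, since the paper does not specify it at this point.

// Winner-take-all matching over stored weight (code-book/exemplar) vectors:
// the output unit whose weight vector lies closest to the input wins, which
// behaves like a nearest-neighbour rule over a compact set of exemplars.
public class CompetitiveRetriever {
    private final double[][] weights;   // one weight vector per output unit

    public CompetitiveRetriever(double[][] weights) {
        this.weights = weights;
    }

    public int bestMatch(double[] input) {
        int winner = -1;
        double best = Double.POSITIVE_INFINITY;
        for (int i = 0; i < weights.length; i++) {
            double dist = 0.0;
            for (int j = 0; j < input.length; j++) {
                double diff = input[j] - weights[i][j];
                dist += diff * diff;   // squared Euclidean distance
            }
            if (dist < best) {        // this unit wins the competition so far
                best = dist;
                winner = i;
            }
        }
        return winner;                // index of the best-matching unit
    }
}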

However, because a neural network generalizes over the information it is trained on, the search space is greatly reduced. By contrast, to achieve accurate retrieval a CBR system needs to store all the cases in its case database; a CBR system that stores only representative cases for the sake of efficient retrieval loses accuracy and the ability to learn. Neural networks are therefore well suited to the retrieval and indexing tasks.

Here, a data mining technique that combines case-based reasoning, neural networks, and rule-based reasoning is defined, with the latter two operating as parts of the CBR cycle framework. Instead of the nearest-neighbour technique of conventional CBR systems, the neural network is applied to index and retrieve the most relevant service records from the user's fault description, while rule-based reasoning guides the reuse of the checkpoint solutions.
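A hedged sketch of how these pieces could fit together inside the CBR cycle is given below. It reuses the hypothetical ServiceRecord, Checkpoint, and CompetitiveRetriever types from the earlier sketches, assumes one exemplar vector per stored record, and substitutes a simple priority-ordering rule for the paper's rule-based reuse step, whose actual rules are not given here.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class FaultDiagnosis {
    private final CompetitiveRetriever retriever;  // ANN index over the records
    private final List<ServiceRecord> records;     // the underlying case base

    public FaultDiagnosis(CompetitiveRetriever retriever,
                          List<ServiceRecord> records) {
        this.retriever = retriever;
        this.records = records;
    }

    // Retrieve: the trained network, rather than a nearest-neighbour scan of
    // the whole case base, picks the most relevant service record.
    // Reuse: a simple rule orders the recalled checkpoints for the customer.
    public List<Checkpoint> suggestCheckpoints(double[] encodedFault) {
        ServiceRecord match = records.get(retriever.bestMatch(encodedFault));
        List<Checkpoint> plan = new ArrayList<Checkpoint>(match.getCheckpoints());
        Collections.sort(plan, new Comparator<Checkpoint>() {
            public int compare(Checkpoint a, Checkpoint b) {
                // Ascending: a lower number is assumed to mean higher priority.
                return a.priority - b.priority;
            }
        });
        return plan;
    }
}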

4 Data mining for decision support

Information such as the best-selling machines, the customers of a particular machine, comparisons of sales volumes across machines, and the performance of individual service engineers is what management needs most.

4.1 The data mining process

Data mining is the process of extracting hidden information from large databases, focusing on the interesting patterns that can serve as useful information. It consists of seven steps.

4.1.1 Establishing the mining goals

A number of mining goals were identified. Marketing: identify the machine models with low sales and find the causes, then increase sales by improving the design and durability of these models; send direct mailings about machine models to the target customers likely to be interested in them. Customer support: anticipate the service support customers may need on the basis of their machine models; the essence of this problem is the choice of geographic location. Resource management: assign jobs to service engineers according to their expertise and experience, and promote service engineers according to their performance.

4.1.2 Data selection

This step determines the data sets, or data samples, on which the mining will be performed. There are many tables in the database, but not all of them are suitable for data mining because they are not large enough. After a preliminary study, we found that the structured tables EMPLOYEE and CUSTOMER are not well suited to data mining, whereas MACHINE and SERVICE_REPORT are well suited to it.
