Software Engineering Graduation Thesis: Translated Literature (Chinese-English)


Software Engineering (Translated Foreign Literature)


Foreign Literature 1: Software Engineering

Software is the sequence of instructions, in one or more programming languages, that comprises a computer application to automate some business function. Engineering is the use of tools and techniques in problem solving. Putting the two words together, software engineering is the systematic application of tools and techniques in the development of computer-based applications.

The software engineering process describes the steps it takes to develop the system. We begin a development project with the notion that there is a problem to be solved via automation. The process is how you get from problem recognition to a working solution. A quality process is desirable because it is more likely to lead to a quality product. The process followed by a project team during the development life cycle of an application should be orderly, goal-oriented, enjoyable, and a learning experience.

Object-oriented methodology is an approach to system life-cycle development that takes a top-down view of data objects, their allowable actions, and the underlying communication requirement to define a system architecture. The data and action components are encapsulated, that is, they are combined to form abstract data types. Encapsulation means that if I know what data I want, I also know the allowable processes against that data. Data are designed as lattice hierarchies of relationships to ensure that top-down, hierarchic inheritance and sideways relationships are accommodated. Encapsulated objects are constrained to communicate only via messages. At a minimum, messages indicate the receiver and the action requested. Messages may be more elaborate, including the sender and data to be acted upon.

That we try to apply engineering discipline to software development does not mean that we have all the answers about how to build applications. On the contrary, we still build systems that are not useful and thus are not used.
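The encapsulation and message ideas described above can be illustrated with a minimal sketch. This is a hypothetical Python example, not code from the source: the `Account` class and its `receive` interface are invented names, chosen only to show data combined with its allowable actions and accessed purely through messages.

```python
# A minimal sketch of encapsulation: data and its allowable actions are
# combined into one abstract data type, and other objects interact with
# it only by sending messages naming the receiver, the action, and the
# data to be acted upon.

class Account:
    """Encapsulated object: the balance is hidden; only messages change it."""

    def __init__(self, balance):
        self._balance = balance  # data hidden behind the message interface

    def receive(self, action, amount=0):
        """A message indicates the action requested and carries the data."""
        if action == "deposit":
            self._balance += amount
        elif action == "withdraw" and amount <= self._balance:
            self._balance -= amount
        elif action == "query":
            return self._balance
        return None


acct = Account(100)
acct.receive("deposit", 50)
acct.receive("withdraw", 30)
print(acct.receive("query"))  # 120
```

Because the balance is reachable only through `receive`, knowing what data you want also tells you the allowable processes against it, which is the point the text makes about encapsulation.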
Part of the reason for continuing problems in application development is that we are constantly trying to hit a moving target. Both the technology and the types of applications needed by businesses are constantly changing and becoming more complex. Our ability to develop and disseminate knowledge about how to successfully build systems for new technologies and new application types seriously lags behind technological and business changes.

Another reason for continuing problems in application development is that we aren't always free to do what we like, and it is hard to change habits and cultures from the old way of doing things, as well as to get users to agree to a new sequence of events or an unfamiliar format for documentation.

You might ask, then: if many organizations don't use good software engineering practices, why should I bother learning them? There are two good answers to this question. First, if you never know the right thing to do, you have no chance of ever using it. Second, organizations will frequently accept evolutionary, small steps of change instead of revolutionary, massive change. You can learn individual techniques that can be applied without complete devotion to one way of developing systems. In this way, software engineers can speed change in their organizations by demonstrating how the tools and techniques enhance the quality of both the product and the process of building a system.

2. Data Base System

1. Introduction

The development of corporate databases will be one of the most important data-processing activities for the rest of the 1970s. Data will be increasingly regarded as a vital corporate resource, which must be organized so as to maximize their value. In addition to the databases within an organization, a vast new demand is growing for database services, which will collect, organize, and sell data.

The files of data which computers can use are growing at a staggering rate.
The growth rate in the size of computer storage is greater than the growth in the size or power of any other component in the exploding data-processing industry. The more data the computers have access to, the greater is their potential power. In all walks of life and in all areas of industry, data banks will change the range of what it is possible for man to do. At the end of this century, historians will look back on the coming of computer data banks and their associated facilities as a step which changed the nature of the evolution of society, perhaps eventually having a greater effect on the human condition than even the invention of the printing press.

Some of the most impressive corporate growth stories of the generation are largely attributable to the explosive growth in the need for information. The vast majority of this information is not yet computerized. However, the cost of data-storage hardware is dropping more rapidly than other costs in data processing. It will become cheaper to store data on computer files than to store them on paper. Not only printed information will be stored. The computer industry is improving its capability to store line drawings, data in facsimile form, photographs, human speech, etc. In fact, any form of information other than the most intimate communications between humans can be transmitted and stored digitally.

Two main technology developments are likely to become available in the near future. First, there are electromagnetic devices that will hold much more data than disks but have much longer access times. Second, there are solid-state technologies that will give microsecond access times but capacities smaller than disks. Disks themselves may be increased in capacity somewhat. For the longer-term future there are a number of new technologies, currently being worked on in research labs, which may replace disks and may provide very large microsecond-access-time devices.
A steady stream of new storage devices is thus likely to reach the marketplace over the next 5 years, rapidly lowering the cost of storing data. Given the available technologies, it is likely that on-line data bases will use two or three levels of storage: one solid-state with microsecond access times, and one electromagnetic with access times of a fraction of a second. If two, three, or four levels of storage are used, physical storage organization will become more complex, probably with paging mechanisms to move data between the levels; solid-state storage offers the possibility of parallel search operations and associative memory.

Both the quantity of data stored and the complexity of their organization are going up by leaps and bounds. The first trillion-bit on-line stores are now in use. In a few years' time, stores of this size may be common.

A particularly important consideration in data base design is to store the data so that they can be used for a wide variety of applications and so that the way they are used can be changed quickly and easily. On computer installations prior to the data base era it was remarkably difficult to change the way data are used. Different programmers view the data in different ways and constantly want to modify them as new needs arise. Modification, however, can set off a chain reaction of changes to existing programs and hence can be exceedingly expensive to accomplish. Consequently, data processing has tended to become frozen into its old data structures.

To achieve the flexibility of data usage that is essential in most commercial situations, two aspects of data base design are important. First, it should be possible to interrogate and search the data base without the lengthy operation of writing programs in conventional programming languages.
Second, the data should be independent of the programs which use them, so that they can be added to or restructured without the programs being changed.

The work of designing a data base is becoming increasingly difficult, especially if it is to perform in an optimal fashion. There are many different ways in which data can be structured, and different types of data need to be organized in different ways. Different data have different characteristics, which ought to affect the data organization, and different users have fundamentally different requirements. So we need a kind of data base management system (DBMS) to manage data.

Data base design using the entity-relationship model begins with a list of the entity types involved and the relationships among them. The philosophy of assuming that the designer knows what the entity types are at the outset is significantly different from the philosophy behind the normalization-based approach. The entity-relationship (E-R) approach uses entity-relationship diagrams. The E-R approach requires several steps to produce a structure that is acceptable to the particular DBMS. These steps are:

(1) Data analysis.
(2) Producing and optimizing the entity model.
(3) Logical schema development.
(4) Physical data base design process.

Developing a data base structure from user requirements is called data base design. Most practitioners agree that there are two separate phases to the data base design process: the design of a logical data base structure that is processable by the data base management system (DBMS) and describes the user's view of data, and the selection of a physical structure, such as the indexed sequential or direct access method, of the intended DBMS.

Current data base design technology shows many residual effects of its outgrowth from single-record file design methods. File design is primarily application-program dependent, since the data have been defined and structured in terms of the individual applications that use them.
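The logical schema development step of the E-R approach described above can be sketched in code. This is a hypothetical Python illustration, not part of any real DBMS: the entity names, the `logical_schema` function, and the convention of turning a many-to-many relationship into a linking table are all invented for the example.

```python
# Hypothetical sketch: from an entity model (entity types plus the
# relationships among them) to a logical schema of table definitions.

entities = {
    "Student": ["student_id", "name"],
    "Course": ["course_id", "title"],
}
relationships = [("Student", "Course", "many-to-many", "enrolls")]


def logical_schema(entities, relationships):
    """Turn each entity type into a table; a many-to-many relationship
    becomes a linking table carrying the keys of both entities."""
    tables = {name: list(attrs) for name, attrs in entities.items()}
    for left, right, kind, name in relationships:
        if kind == "many-to-many":
            tables[name] = [entities[left][0], entities[right][0]]
    return tables


schema = logical_schema(entities, relationships)
print(schema["enrolls"])  # ['student_id', 'course_id']
```

The physical design step, which the text treats separately, would then decide how these tables are placed on storage, something this logical sketch deliberately says nothing about.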
The advent of DBMS revised the emphasis in data and program design approaches. There are many interlocking questions in the design of data-base systems and many types of technique that one can use in answer to them; so many, in fact, that one often sees valuable approaches being overlooked in the design and vital questions not being asked.

There will soon be new storage devices, new software techniques, and new types of data bases. The details will change, but most of the principles will remain. Therefore, the reader should concentrate on the principles.

2. Data Base System

The conceptions used for describing files and data bases have varied substantially, even within the same organization. A data base may be defined as a collection of interrelated data stored together, with as little redundancy as possible, to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use the data, and a common and controlled approach is used in adding new data and in modifying and retrieving existing data within the data base. A system is said to contain a collection of data bases if they are entirely separate in structure.

A data base may be designed for batch processing, real-time processing, or in-line processing. A data base system involves application programs, a DBMS, and a data base.

One of the most important characteristics of most data bases is that they will constantly need to change and grow. Easy restructuring of the data base must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a data base can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation.

The term data independence is often quoted as being one of the main attributes of a data base.
It implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, data-base organization is concerned with the representation of relationships between data items and records, as well as with how and where the data are stored. A data base used for many applications can have multiple interconnections between the data items about which we may wish to record; it can describe the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes, and one attribute has a special significance in that it identifies the entity.

An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as the key used to uniquely identify one record or tuple. The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm.

If the function of a data base were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. The data can be described at either a logical or a physical level. The logical data base description is referred to as a schema. A schema is a chart of the types of data that are used. It gives the names of the entities and attributes, and specifies the relations between them. It is a framework into which the values of the data items can be fitted.

We must distinguish between a record type and an instance of the record.
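The record type/instance distinction and the role of the primary key can be sketched as follows. This is a hypothetical Python illustration, not from the source; the attribute names are invented.

```python
# A record type names the attributes; instances are values fitted into
# that framework. The primary key (employee_no here) uniquely identifies
# one tuple, and an index on it lets the computer locate a record
# without searching the whole file.

personnel_type = ("employee_no", "name", "department")  # record type

records = [
    (1001, "Ada", "Research"),  # instances (tuples) of the type
    (1002, "Grace", "Systems"),
]

# index from primary-key value to the record it identifies
index = {rec[0]: rec for rec in records}

print(index[1002])  # (1002, 'Grace', 'Systems')
```

Because the primary key is unique, the index maps each key value to exactly one tuple, which is what makes direct addressing by key possible.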
When we talk about a "personnel record", this is really a record type; there are no data values associated with it. The term schema is used to mean an overall chart of all of the data-item types and record types stored in a data base. Many different subschemas can be derived from one schema. The schema and the subschemas are both used by the data-base management system, the primary function of which is to serve the application programs by executing their data operations.

A DBMS will usually be handling multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a "data model". The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small data bases, many data bases involve millions of bytes, and an inefficient implementation can be disastrous. We will discuss the data models in the following.

3. Three Data Models

Logical schemas are defined as data models with the underlying structure of particular database management systems superimposed on them. At the present time, there are three main underlying structures for database management systems:

Relational
Hierarchical
Network

The hierarchical and network structures have been used for DBMS since the 1960s. The relational structure was introduced in the early 1970s. In the relational model, the entities and their relationships are represented by two-dimensional tables.
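The relational idea of representing relationships through common columns can be sketched in a few lines. This is a hypothetical Python illustration (lists of dictionaries standing in for tables), not how any real relational DBMS is implemented.

```python
# Entities as two-dimensional tables; the relationship between them is
# represented by a common column (dept_no) containing identical values.

departments = [
    {"dept_no": 1, "dept_name": "Research"},
    {"dept_no": 2, "dept_name": "Systems"},
]
employees = [
    {"emp_no": 1001, "name": "Ada", "dept_no": 1},
    {"emp_no": 1002, "name": "Grace", "dept_no": 2},
]


def join(left, right, column):
    """Pair rows whose values in the common column are identical."""
    return [dict(l, **r) for l in left for r in right if l[column] == r[column]]


rows = join(employees, departments, "dept_no")
print(rows[0]["dept_name"])  # Research
```

Notice that the request is phrased purely in terms of information content (match on `dept_no`); nothing in it says where or how the rows are stored, which is the data independence the text attributes to the relational model.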
Every table represents an entity and is made up of rows and columns. Relationships between entities are represented by common columns containing identical values from a domain or range of possible values.

The end user is presented with a simple data model. His or her requests are formulated in terms of the information content and do not reflect any complexities due to system-oriented aspects. A relational data model is what the user sees, but it is not necessarily what will be implemented physically. The relational data model removes the details of storage structure and access strategy from the user interface. The model provides a relatively high degree of data independence. To be able to make use of this property of the relational data model, however, the design of the relations must be complete and accurate.

Although some DBMS based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with the required efficiency on a large scale. It appears today that technological improvements in providing faster and more reliable hardware may answer the question positively.

The hierarchical data model is based on a tree-like structure made up of nodes and branches. A node is a collection of data attributes describing the entity at that point. The highest node of the hierarchical tree structure is called the root. The nodes at succeeding lower levels are called children. A hierarchical data model always starts with a root node. Every node consists of one or more attributes describing the entity at that node. Dependent nodes can follow at the succeeding levels. The node at the preceding level becomes the parent node of the new dependent nodes. A parent node can have one child node as a dependent, or many children nodes. The major advantage of the hierarchical data model is the existence of proven database management systems that use the hierarchical data model as the basic structure.
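The hierarchical model's defining constraint, that every node is reachable only through its parent, can be sketched as a tree walk. This is a hypothetical Python illustration with invented node names, not code from any hierarchical DBMS.

```python
# A hierarchical data base starts at a root node; each dependent (child)
# node is reachable only by descending from its parent.

tree = {
    "node": "company",  # root
    "children": [
        {"node": "research", "children": [
            {"node": "databases", "children": []},
        ]},
        {"node": "systems", "children": []},
    ],
}


def find(node, name, path=()):
    """Descend from the root, recording the chain of parents that must
    be traversed to reach the target node."""
    path = path + (node["node"],)
    if node["node"] == name:
        return path
    for child in node["children"]:
        hit = find(child, name, path)
        if hit:
            return hit
    return None


print(find(tree, "databases"))  # ('company', 'research', 'databases')
```

The returned path makes the model's weakness visible: there is no way to reach `databases` except through `company` and `research`, which is why many-to-many relationships are clumsy in a pure hierarchy.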
There is a reduction of data dependency, but any child node is accessible only through its parent node, and the many-to-many relationship can be implemented only in a clumsy way. This often results in redundancy in the stored data.

The network data model interconnects the entities of an enterprise into a network. In the network data model a data base consists of a number of areas. An area contains records. In turn, a record may consist of fields. A set, which is a grouping of records, may reside in an area or span a number of areas. A set type is based on an owner record type and a member record type. The many-to-many relationship, which occurs quite frequently in real life, can be implemented easily. However, the network data model is very complex, and the application programmer must be familiar with the logical structure of the data base.

4. Logical Design and Physical Design

Logical design of databases is mainly concerned with superimposing the constructs of the data base management system on the logical data model. There are three main models, mentioned above: hierarchical, relational, and network.

The physical model is a framework of the database to be stored on physical devices. The model must be constructed with every regard given to the performance of the resulting database. One should carry out an analysis of the physical model with average frequencies of occurrence of the groupings of the data elements, with expected space estimates, and with respect to time estimates for retrieving and maintaining the data.

The database designer may find it necessary to have multiple entry points into a database, or to access a particular segment type with more than one key. To provide this type of access, it may be necessary to invert the segment on the keys.
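Accessing one record type by more than one key, as just described, can be sketched with a primary index plus an inverted (secondary) index. This is a hypothetical Python illustration with invented record contents, not DBMS internals.

```python
# Multiple entry points into the same records: a unique primary index on
# emp_no, and an inverted index on dept that maps each value to every
# record carrying it.

records = [
    {"emp_no": 1001, "name": "Ada", "dept": "Research"},
    {"emp_no": 1002, "name": "Grace", "dept": "Systems"},
    {"emp_no": 1003, "name": "Edsger", "dept": "Research"},
]

primary = {r["emp_no"]: r for r in records}  # primary key: one record

inverted = {}  # secondary key: possibly many records per value
for r in records:
    inverted.setdefault(r["dept"], []).append(r["emp_no"])

print(primary[1002]["name"])  # Grace
print(inverted["Research"])   # [1001, 1003]
```

The inverted index is exactly the extra structure the physical designer pays for (in space and maintenance time) to buy the second entry point.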
The physical designer must have expertise in the functions of the DBMS, an understanding of the characteristics of direct-access devices, and knowledge of the applications.

Many data bases have links between one record and another, called pointers. A pointer is a field in one record which indicates where a second record is located on the storage devices. Records exist on storage devices in a given physical sequence. This sequencing may be employed for some purpose. The most common purpose is that records are needed in a given sequence by certain data-processing operations, and so they are stored in that sequence. Different applications may need records in different sequences.

The most common method of ordering records is to have them in sequence by a key, the key which is most commonly used for addressing them. An index is required to find any record without a lengthy search of the file. If the data records are laid out sequentially by key, the index for that key can be much smaller than if they are nonsequential.

Hashing has been used for addressing random-access storages since they first came into existence in the mid-1950s, but nobody had the temerity to use the word hashing until 1968. Many systems analysts have avoided the use of hashing in the suspicion that it is complicated. In fact, it is simple to use and has two important advantages over indexing. First, it finds most records with only one seek; second, insertions and deletions can be handled without added complexity. Indexing, however, can be used with a file which is sequential by prime key, and this is an overriding advantage for some batch-processing applications.

Many data-base systems also use chains to interconnect records. A chain refers to a group of records scattered within the files and interconnected by a sequence of pointers.
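The hashing and chaining ideas above combine naturally: the key is transformed into a bucket address, and records that collide in the same bucket are chained together. This is a hypothetical Python sketch (lists stand in for the chained records on a device), not a real storage implementation.

```python
# Hashing: transform the key into a bucket address so most records are
# found with one "seek"; colliding records are chained within the bucket
# and the chain is followed to find the right one.

BUCKETS = 8


def store(table, key, value):
    """Append the record to the chain at the key's hashed address."""
    table[hash(key) % BUCKETS].append((key, value))


def fetch(table, key):
    """One seek to the bucket, then follow the (usually short) chain."""
    for k, v in table[hash(key) % BUCKETS]:
        if k == key:
            return v
    return None


table = [[] for _ in range(BUCKETS)]
store(table, 1001, "Ada")
store(table, 1002, "Grace")
print(fetch(table, 1002))  # Grace
```

Insertion is a single append and deletion a single removal from one chain, which is the "no added complexity" advantage over index maintenance that the text mentions; the price is that the records are no longer stored in key sequence.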
The software that is used to retrieve the chained records will make them appear to the application programmer as a contiguous logical file.

The primary disadvantage of chained records is that many read operations are needed in order to follow lengthy chains. Sometimes this does not matter, because the records have to be read anyway. In most search operations, however, the chains have to be followed through records which would not otherwise be read. In some file organizations the chains can be contained within blocked physical records so that excessive reads do not occur.

Rings have been used in many file organizations. They are used to eliminate redundancy. When a ring or a chain is entered at a point some distance from its head, it may be desirable to obtain the information at the head quickly without stepping through all the intervening links.

5. Data Description Languages

It is necessary for both the programmers and the data administrator to be able to describe their data precisely; they do so by means of data description languages.
A data description language is the means of declaring to the data-base management system what data structures will be used. A data description language giving a logical data description should perform the following functions:

It should give a unique name to each data-item type, file type, data base, and other data subdivision.
It should identify the types of data subdivision, such as data item, segment, record, and base file.
It may define the type of encoding the program uses in the data items (binary, character, bit string, etc.).
It may define the length of the data items and the range of values that a data item can assume.
It may specify the sequence of records in a file, or the sequence of groups of records in the data base.
It may specify means of checking for errors in the data.
It may specify privacy locks for preventing unauthorized reading or modification of the data. These may operate at the data-item, segment, record, file, or data-base level, and if necessary may be extended to the contents (values) of individual data items. The authorization may, on the other hand, be separately defined; it is more subject to change than the data structures, and changes in authorization procedures should not force changes in application programs.

A logical data description should not specify addressing, indexing, or searching techniques, or specify the placement of data on the storage units, because these topics are in the domain of physical, not logical, organization. It may give an indication of how the data will be used or of searching requirements, so that the physical technique can be selected optimally, but such indications should not be logically limiting.

Most DBMS have their own languages for defining the schemas that are used.
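The functions listed above can be sketched as a declaration plus one checking routine. This is a hypothetical Python illustration of a logical data description, not the syntax of any real data description language; the file, record, and item names are invented.

```python
# A logical data description: unique names, encodings, lengths, value
# ranges, and a privacy lock, with no addressing or placement detail
# (those belong to the physical, not logical, organization).

schema = {
    "file": "PERSONNEL",
    "records": {
        "EMPLOYEE": {
            "EMP-NO": {"encoding": "binary", "length": 4,
                       "range": (1, 99999)},            # allowed values
            "NAME":   {"encoding": "character", "length": 30},
            "SALARY": {"encoding": "binary", "length": 4,
                       "privacy_lock": "PAYROLL-ONLY"},  # item-level lock
        }
    },
}


def check(record_type, item, value):
    """Error checking: verify a value against the declared range."""
    spec = schema["records"][record_type][item]
    lo, hi = spec.get("range", (None, None))
    return lo is None or lo <= value <= hi


print(check("EMPLOYEE", "EMP-NO", 1001))  # True
```

Note that nothing in the declaration says where `PERSONNEL` sits on a storage unit or how it is indexed, so the physical technique can change without touching this description or the programs that rely on it.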
In most cases these data description languages are different from ordinary programming languages, because ordinary programming languages do not have the capability to define the variety of relationships that may exist in the schemas.

Appendix B: Translated Text

1. Software Engineering

Software is a sequence of instructions, written in one or more programming languages, that enables a computer application to automate some business function.

Software Engineering: Translated Foreign Literature with Chinese-English Comparison


Chinese-English Translation (the document contains the English original and the Chinese translation)

Application Fundamentals

Android applications are written in the Java programming language. The compiled Java code — along with any data and resource files required by the application — is bundled by the aapt tool into an Android package, an archive file marked by an .apk suffix. This file is the vehicle for distributing the application and installing it on mobile devices; it's the file users download to their devices. All the code in a single .apk file is considered to be one application.

In many ways, each Android application lives in its own world:

1. By default, every application runs in its own Linux process. Android starts the process when any of the application's code needs to be executed, and shuts down the process when it's no longer needed and system resources are required by other applications.
2. Each process has its own virtual machine (VM), so application code runs in isolation from the code of all other applications.
3. By default, each application is assigned a unique Linux user ID. Permissions are set so that the application's files are visible only to that user and only to the application itself — although there are ways to export them to other applications as well.

It's possible to arrange for two applications to share the same user ID, in which case they will be able to see each other's files. To conserve system resources, applications with the same ID can also arrange to run in the same Linux process, sharing the same VM.

Application Components

A central feature of Android is that one application can make use of elements of other applications (provided those applications permit it). For example, if your application needs to display a scrolling list of images and another application has developed a suitable scroller and made it available to others, you can call upon that scroller to do the work, rather than develop your own. Your application doesn't incorporate the code of the other application or link to it.
Rather, it simply starts up that piece of the other application when the need arises.

For this to work, the system must be able to start an application process when any part of it is needed, and instantiate the Java objects for that part. Therefore, unlike applications on most other systems, Android applications don't have a single entry point for everything in the application (no main() function, for example). Rather, they have essential components that the system can instantiate and run as needed. There are four types of components:

Activities

An activity presents a visual user interface for one focused endeavor the user can undertake. For example, an activity might present a list of menu items users can choose from, or it might display photographs along with their captions. A text messaging application might have one activity that shows a list of contacts to send messages to, a second activity to write the message to the chosen contact, and other activities to review old messages or change settings. Though they work together to form a cohesive user interface, each activity is independent of the others. Each one is implemented as a subclass of the Activity base class.

An application might consist of just one activity or, like the text messaging application just mentioned, it may contain several. What the activities are, and how many there are depends, of course, on the application and its design. Typically, one of the activities is marked as the first one that should be presented to the user when the application is launched. Moving from one activity to another is accomplished by having the current activity start the next one.

Each activity is given a default window to draw in. Typically, the window fills the screen, but it might be smaller than the screen and float on top of other windows.
An activity can also make use of additional windows — for example, a pop-up dialog that calls for a user response in the midst of the activity, or a window that presents users with vital information when they select a particular item on-screen.

The visual content of the window is provided by a hierarchy of views — objects derived from the base View class. Each view controls a particular rectangular space within the window. Parent views contain and organize the layout of their children. Leaf views (those at the bottom of the hierarchy) draw in the rectangles they control and respond to user actions directed at that space. Thus, views are where the activity's interaction with the user takes place. For example, a view might display a small image and initiate an action when the user taps that image. Android has a number of ready-made views that you can use — including buttons, text fields, scroll bars, menu items, check boxes, and more.

A view hierarchy is placed within an activity's window by the Activity.setContentView() method. The content view is the View object at the root of the hierarchy. (See the separate User Interface document for more information on views and the hierarchy.)

Services

A service doesn't have a visual user interface, but rather runs in the background for an indefinite period of time. For example, a service might play background music as the user attends to other matters, or it might fetch data over the network or calculate something and provide the result to activities that need it. Each service extends the Service base class.

A prime example is a media player playing songs from a play list. The player application would probably have one or more activities that allow the user to choose songs and start playing them. However, the music playback itself would not be handled by an activity, because users will expect the music to keep playing even after they leave the player and begin something different.
To keep the music going, the media player activity could start a service to run in the background. The system would then keep the music playback service running even after the activity that started it leaves the screen.

It's possible to connect to (bind to) an ongoing service (and start the service if it's not already running). While connected, you can communicate with the service through an interface that the service exposes. For the music service, this interface might allow users to pause, rewind, stop, and restart the playback.

Like activities and the other components, services run in the main thread of the application process. So that they won't block other components or the user interface, they often spawn another thread for time-consuming tasks (like music playback). See Processes and Threads, later.

Broadcast receivers

A broadcast receiver is a component that does nothing but receive and react to broadcast announcements. Many broadcasts originate in system code — for example, announcements that the timezone has changed, that the battery is low, that a picture has been taken, or that the user changed a language preference. Applications can also initiate broadcasts — for example, to let other applications know that some data has been downloaded to the device and is available for them to use.

An application can have any number of broadcast receivers to respond to any announcements it considers important. All receivers extend the BroadcastReceiver base class.

Broadcast receivers do not display a user interface. However, they may start an activity in response to the information they receive, or they may use the NotificationManager to alert the user. Notifications can get the user's attention in various ways — flashing the backlight, vibrating the device, playing a sound, and so on.
They typically place a persistent icon in the status bar, which users can open to get the message.

Content providers

A content provider makes a specific set of the application's data available to other applications. The data can be stored in the file system, in an SQLite database, or in any other manner that makes sense. The content provider extends the ContentProvider base class to implement a standard set of methods that enable other applications to retrieve and store data of the type it controls. However, applications do not call these methods directly. Rather, they use a ContentResolver object and call its methods instead. A ContentResolver can talk to any content provider; it cooperates with the provider to manage any interprocess communication that's involved. See the separate Content Providers document for more information on using content providers.

Whenever there's a request that should be handled by a particular component, Android makes sure that the application process of the component is running, starting it if necessary, and that an appropriate instance of the component is available, creating the instance if necessary.

Activating components: intents

Content providers are activated when they're targeted by a request from a ContentResolver. The other three components — activities, services, and broadcast receivers — are activated by asynchronous messages called intents. An intent is an Intent object that holds the content of the message. For activities and services, it names the action being requested and specifies the URI of the data to act on, among other things. For example, it might convey a request for an activity to present an image to the user or let the user edit some text. For broadcast receivers, the Intent object names the action being announced. For example, it might announce to interested parties that the camera button has been pressed.

There are separate methods for activating each type of component:
1. An activity is launched (or given something new to do) by passing an Intent object to Context.startActivity() or Activity.startActivityForResult(). The responding activity can look at the initial intent that caused it to be launched by calling its getIntent() method. Android calls the activity's onNewIntent() method to pass it any subsequent intents. One activity often starts the next one. If it expects a result back from the activity it's starting, it calls startActivityForResult() instead of startActivity(). For example, if it starts an activity that lets the user pick a photo, it might expect to be returned the chosen photo. The result is returned in an Intent object that's passed to the calling activity's onActivityResult() method.

2. A service is started (or new instructions are given to an ongoing service) by passing an Intent object to Context.startService(). Android calls the service's onStart() method and passes it the Intent object. Similarly, an intent can be passed to Context.bindService() to establish an ongoing connection between the calling component and a target service. The service receives the Intent object in an onBind() call. (If the service is not already running, bindService() can optionally start it.) For example, an activity might establish a connection with the music playback service mentioned earlier so that it can provide the user with the means (a user interface) for controlling the playback. The activity would call bindService() to set up that connection, and then call methods defined by the service to affect the playback. A later section, Remote procedure calls, has more details about binding to a service.

3. An application can initiate a broadcast by passing an Intent object to methods like Context.sendBroadcast(), Context.sendOrderedBroadcast(), and Context.sendStickyBroadcast() in any of their variations. Android delivers the intent to all interested broadcast receivers by calling their onReceive() methods.
For more on intent messages, see the separate article, Intents and Intent Filters.

Shutting down components

A content provider is active only while it's responding to a request from a ContentResolver. And a broadcast receiver is active only while it's responding to a broadcast message. So there's no need to explicitly shut down these components.

Activities, on the other hand, provide the user interface. They're in a long-running conversation with the user and may remain active, even when idle, as long as the conversation continues. Similarly, services may also remain running for a long time. So Android has methods to shut down activities and services in an orderly way:

1. An activity can be shut down by calling its finish() method. One activity can shut down another activity (one it started with startActivityForResult()) by calling finishActivity().

2. A service can be stopped by calling its stopSelf() method, or by calling Context.stopService().

Components might also be shut down by the system when they are no longer being used or when Android must reclaim memory for more active components. A later section, Component Lifecycles, discusses this possibility and its ramifications in more detail.

The manifest file

Before Android can start an application component, it must learn that the component exists. Therefore, applications declare their components in a manifest file that's bundled into the Android package, the .apk file that also holds the application's code, files, and resources.

The manifest is a structured XML file and is always named AndroidManifest.xml for all applications. It does a number of things in addition to declaring the application's components, such as naming any libraries the application needs to be linked against (besides the default Android library) and identifying any permissions the application expects to be granted.

But the principal task of the manifest is to inform Android about the application's components.
For example, an activity might be declared as follows:

The name attribute of the <activity> element names the Activity subclass that implements the activity. The icon and label attributes point to resource files containing an icon and label that can be displayed to users to represent the activity.

The other components are declared in a similar way — <service> elements for services, <receiver> elements for broadcast receivers, and <provider> elements for content providers. Activities, services, and content providers that are not declared in the manifest are not visible to the system and are consequently never run. However, broadcast receivers can either be declared in the manifest, or they can be created dynamically in code (as BroadcastReceiver objects) and registered with the system by calling Context.registerReceiver().

For more on how to structure a manifest file for your application, see The Android Manifest.xml File.

Intent filters

An Intent object can explicitly name a target component. If it does, Android finds that component (based on the declarations in the manifest file) and activates it. But if a target is not explicitly named, Android must locate the best component to respond to the intent. It does so by comparing the Intent object to the intent filters of potential targets. A component's intent filters inform Android of the kinds of intents the component is able to handle. Like other essential information about the component, they're declared in the manifest file. Here's an extension of the previous example that adds two intent filters to the activity:

The first filter in the example — the combination of the action "android.intent.action.MAIN" and the category "android.intent.category.LAUNCHER" — is a common one. It marks the activity as one that should be represented in the application launcher, the screen listing applications users can launch on the device.
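The manifest snippets the text refers to did not survive in this copy. A sketch of an activity declaration with the two intent filters discussed above might look like the following; the package, class, action, and resource names (com.example.project, FreneticActivity, BOUNCE) are illustrative assumptions modeled on the standard documentation example:

```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.project">
    <application>
        <activity android:name="com.example.project.FreneticActivity"
                  android:icon="@drawable/small_pic"
                  android:label="@string/freneticLabel">
            <!-- First filter: marks the activity as a launcher entry point -->
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
            <!-- Second filter: an action the activity can perform on a data type -->
            <intent-filter>
                <action android:name="com.example.project.BOUNCE" />
                <data android:mimeType="image/jpeg" />
                <category android:name="android.intent.category.DEFAULT" />
            </intent-filter>
        </activity>
    </application>
</manifest>
```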
In other words, the activity is the entry point for the application, the initial one users would see when they choose the application in the launcher. The second filter declares an action that the activity can perform on a particular type of data.

A component can have any number of intent filters, each one declaring a different set of capabilities. If it doesn't have any filters, it can be activated only by intents that explicitly name the component as the target.

For a broadcast receiver that's created and registered in code, the intent filter is instantiated directly as an IntentFilter object. All other filters are set up in the manifest. For more on intent filters, see a separate document, Intents and Intent Filters.

Application Fundamentals (Android Developers)

Android applications are written in the Java programming language.

Translated Text


Chengdu University of Technology
Undergraduate Graduation Design (Thesis) Translated Text
Figure 1 shows the different types of components contained in the FriendTracker and FriendViewer applications.

…the components.

There is no limit on the number of components of each type in an application, but as a convention, one component is given the same name as the application.

Commonly, this is an activity, as in FriendViewer…

Figure 2 shows the FriendTracker and FriendViewer applications…

…part of the Android distribution.

In each case, one component initiates communication with another. For simplicity, we refer to this as inter-component communication (ICC).

In many ways, as in Unix-based systems, ICC functions identically regardless of whether the target is in the same application or a different one, the applicable security rules aside.

The available ICC actions depend on the target component. Each type of component supports interactions specific to its type — for example, when…

As shown in Figure 3, Android protects applications and data through two enforcement mechanisms working together, one of which operates at the ICC level.

ICC mediation defines the core security architecture and is the focus of this article, but it relies on the guarantees provided by the underlying system.

Software Application Foreign Literature Translation (Chinese-English)


(This document contains the English original and the Chinese translation.)

Original text:

The Design and Implementation of Single Sign-on Based on Hybrid Architecture

Abstract—To solve the problem of users repeatedly logging on to various applications that are based on a hybrid architecture and reside in different domains, a single sign-on architecture is proposed. On the basis of analyzing the advantages and disadvantages of existing single sign-on models, combined with key technologies such as Web Service, Applet, and reverse proxy, two core problems are resolved: single sign-on across applications that mix B/S and C/S structures, and cross-domain single sign-on. Meanwhile, the security and performance of this architecture are well protected, since reverse proxy and related encryption technologies are adopted. The results show that this architecture performs well and is widely applicable, and it will soon be put to practical use.

Index Terms—single sign-on, web service, cross domain, reverse proxy, B/S, C/S

INTRODUCTION

In the information society, people enjoy huge benefits from progress, but at the same time they face the test of information security. As the number of systems a user must log in to increases, the user needs to set many user names and passwords, which are easily confused, increasing the possibility of error. Most users therefore use the same user name and password everywhere, which increases the possibility that the authentication information is illegally intercepted or destroyed, and security is reduced accordingly. For administrators, more systems mean more corresponding user databases and database privileges, which increases management complexity. A single sign-on system is proposed as a solution to this problem. Using single sign-on, we can establish a unified identity authentication system and a unified rights management system.
It not only improves system efficiency and safety, but is also user-friendly and reduces the burden on administrators.

TABLE 1. Comparison of single sign-on implementation models

Broker Model. Implementability: requires a large transformation of the old systems. Manageability: enables centralized management.

Agent Model. Implementability: a new agent must be added for each old system; transplantation is relatively simple. Manageability: more difficult to control.

Agent and Broker Model. Implementability: transplantation is simple, with limited transformation of the old systems. Manageability: enables centralized management.

Gateway Model. Implementability: needs a dedicated gateway to access the various applications. Manageability: easy to manage, but the databases between different gateways need to be synchronized.

Token Model. Implementability: implementation is relatively simple. Manageability: new components must be added, increasing the management burden.

Single sign-on means that when a user needs to access services provided by different applications in a distributed environment, the user signs on only once in that environment; there is no need to sign on again to the various application systems [1]. There are now many products and solutions that implement SSO, such as Microsoft Passport and IBM WebSphere Portal Server. Although these SSO products handle the single sign-on function well, most of them are complex and inflexible. Currently, the typical models for achieving SSO include the broker model, agent model, agent and broker model, gateway model, and token model [2]. Table 1 analyzes the implementability and manageability of these models. Based on the above comparison, the agent and broker model offers both centralized management and fewer revisions to the original application service procedures, so I adopt the agent and broker model as the basis for this design. In order to integrate information and applications well with in-depth B/S-mode application software, the concept of the enterprise portal emerged, offering a good way to solve this problem.
An enterprise portal provides business users with access to information and applications: a single integrated access point that completes or assists a variety of interactive behaviors. The corresponding portal system software provides services for developing, deploying, and managing portal applications. Enterprise information portals involve the portal itself, content management, data integration, single sign-on, and much else.

ARCHITECTURE OF THE WEB-SERVICE-BASED SINGLE SIGN-ON SYSTEM FOR A HYBRID STRUCTURE

The system consists of multiple trust domains. Each trust domain has many application servers of B/S architecture and, in addition, application servers of C/S architecture. All the applications are bound together through a unified portal to achieve the functionality of single sign-on. You can see that this architecture is based on the agent and broker model: the unified portal plays the broker role, and the various applications play the agent role. The B/S applications have the client side of the SSO Agent installed, the unified portal has the server side of the SSO Agent installed, and the two interact through these agents. In addition, in Fig 1, the authentication server externally provides an LDAP authentication interface. The token-authentication Web Service server provides the interfaces for adding, deleting, editing, and querying single sign-on tokens, while the permission Web Service server provides the appropriate authority information to each system, achieving unified authority management for the application systems accessed through the unified portal.

The system supports cross-domain access; that is, users of domain D1 can access applications in domain D2, and users of domain D2 can access applications in domain D1.
At the same time, the system supports single sign-on between applications of different structures; that is, after accessing application A of B/S structure, the user can access application E of C/S structure without having to re-enter the user name and password, and likewise can access application A after application E without re-entering login information.

The whole structure of single sign-on is shown in Fig 1.

Figure 1: The Structure of Single Sign-on

A. The login process

The whole single sign-on process is shown in Fig 2. The specific steps are described below:

1) The user logs in through the client browser to access application A. The SSO Client of system A intercepts the request and redirects the URL to the landing page of the Unified Portal System.

2) The user enters the user name and password, and the Unified Portal System submits them to the authentication server for authentication. If the information is correct, the Unified Portal System automatically generates a note (token), saves the note and the user's role ID locally, and calls the create-note interface of the Web Service to insert the information.

3) The Unified Portal System returns a page listing the application resources to the user. The user clicks any one application system (e.g., system A). The SSO Client side of application system A reads the note information and calls the query-note interface of the Web Service. If it is consistent and within the time limit, it obtains the role information of the user in application system A and logs in to application system A. At the same time, it calls the update-note interface of the note-certification Web Service to update the login time of the current note.
It then calls the interface of the user-rights Web Service to get this user's permission information for the corresponding application system.

4) If the user finishes accessing application system A, exits, and clicks on the link of application system B, the system implementation is the same as step 3).

5) If the user has completed all the required application accesses and needs to log off, the system mainly calls the delete-note interface to destroy the corresponding note information.

Figure 2: The whole process of Single Sign-on

B. The solution of cross-domain problems

Traditional implementations of single sign-on systems generally use a cookie as the client-side storage for notes, but because of restrictions on the cookie's own properties, it is effective only for hosts under the same domain, and a distributed application system cannot always guarantee that all hosts are under the same domain. The current system does not store the note information on the client side but places it directly in the link parameters of the various applications. Note verification is completed by the SSO Client of the application calling the corresponding interface of the Web Service.

Software services are provided on the Web through the Simple Object Access Protocol (SOAP), described with a WSDL file, and registered via UDDI [3]. As shown in Fig 3, after the user finds the WSDL description document of an application through UDDI, he can call one or more operations of the Web services that the application provides through SOAP.
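The note lifecycle described in the login and cross-domain flows above — create on login, query on each access, update the login time, delete on log-off — can be sketched as a plain-Java, in-memory stand-in for the token Web Service. All names and the timeout field are assumptions, not the paper's actual interface:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory sketch of the note (token) Web Service interfaces:
// create-note, query-note, update-note, delete-note. The real system exposes
// these as SOAP operations backed by shared storage.
public class NoteService {
    // note id -> time of last successful login/use (milliseconds)
    private final Map<String, Long> notes = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    public NoteService(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // create-note: called by the unified portal after a successful login
    public void createNote(String noteId, long now) {
        notes.put(noteId, now);
    }

    // query-note: called by each application's SSO Client; a note is valid
    // only if it exists and its last use is within the time limit
    public boolean queryNote(String noteId, long now) {
        Long last = notes.get(noteId);
        return last != null && now - last <= timeoutMillis;
    }

    // update-note: refresh the login time on each successful access
    public void updateNote(String noteId, long now) {
        notes.replace(noteId, now);
    }

    // delete-note: called on log-off to destroy the note
    public void deleteNote(String noteId) {
        notes.remove(noteId);
    }
}
```

Because every SSO Client in every trust domain queries the same service rather than a per-domain cookie, the same note validates the user across domains, which is the point of the design above.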
The biggest characteristic of Web Service is that it is cross-platform: whether the application is of B/S or C/S structure, and whether it is implemented with J2EE or .NET, it can access a Web Service as long as it is given the Web Service server's IP and interface name.

The following is this system's process for achieving cross-domain access:

1) The user logs in to the Unified Portal system successfully.

2) The user accesses application system A within trusted domain D1, completes the access, and then exits the application.

3) The user clicks the URL of application system B, within trusted domain D2, in the resource list of the Unified Portal.

4) The SSO Client of application B intercepts the request, gets the note behind the URL, and calls the query-note interface of the Web Service.

5) The query interface of the Web Service returns the validity information of this note to the SSO Client.

6) The SSO Client redirects to application system B, and the user accesses application B.

Figure 3: Web Service Structure

C. The solution of single sign-on between B/S and C/S structures

As we know, the implementation principles of applications differ considerably between B/S and C/S structures. In this system, applications of B/S structure are accessed by clicking the URL on the application-resource-list page of the Unified Portal. Because of browser security restrictions, the page does not allow users to directly call local exe files, so an indirect way is needed to call C/S architecture applications. This article uses an Applet to call local exe files, implemented as below:

For all C/S structures, create a common Agent. This Agent's role is an interceptor, meaning that browsers must be used for access after the C/S applications join the Unified Portal system. (Please note: the original B/S architecture and C/S structure do not use the same authentication method.
For C/S applications to access the unified portal framework and achieve single sign-on, unified authentication management is needed, and the amount of modification should be kept to a minimum. The implementation of this system is to create, for all applications accessed through the unified portal, an authentication code that requires no user name and password, with login performed on the certified landing page of the unified portal system. When a user logs into the unified portal system successfully through a browser, he can then access any application, including applications of both B/S architecture and C/S structure. To ensure the security of the C/S application framework, when the user opens an application directly by clicking its desktop shortcut, the original authentication is still used.)

Applications of C/S architecture all use the same Applet URL. The parameters received by this common Applet include the note, the application name, and the unified login name and password. When a user has not performed the login operation before, the first visit to a C/S application is intercepted and redirected to the login page of the Unified Portal system for sign-on. If the user logged in before, then when he visits a C/S application, the Agent calls the note-validation interface of the Web Service to validate the note that was transferred. If the validation is successful, the Applet object is downloaded to the user's local machine for execution. In order to transform the original applications as little as possible, the method of this article is to open the login window of the corresponding application through the Applet.
Below is the code (the directory prefix is garbled in this copy; "c:\\" is an assumed reconstruction, and the checked IOException must be handled):

    // Opens the login window of the target C/S application by launching its exe.
    public void openExe(String appName) {
        Runtime rn = Runtime.getRuntime();
        Process p = null;
        try {
            p = rn.exec("c:\\" + appName + ".exe");  // path prefix assumed
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

After opening the login window of the application, the operation steps of this Applet are as follows:

1) The Applet calls the underlying Windows API through JNI to get the handles of the login window's user-name input box, password input box, and login button.

2) It locates the user-name input box and sends the unified login name, then locates the password input box and sends the password. (The password information is arbitrary; in order to distinguish this from a user clicking a shortcut to log into the system directly, a code also needs to be sent indicating that the unified portal accesses the system without password authentication.) It then locates the login button and sends the click event.

3) Finally, the Applet minimizes the IE window, and the related windows of the application are brought to the foreground.

These are the implementation steps of C/S architecture application single sign-on. Application code that has not been changed at all joins the Unified Portal system in a loosely coupled way. It should be explained that, owing to the Applet JVM security restrictions, the Applet cannot directly call native Windows dlls in the user's local System32 directory. The method is first to use C or C++ to write the class that gets the corresponding input boxes and button of the login window, and to generate a JNIWindowUtil.dll file (JNIWindowUtil is a user-defined dll name), placing the dll in the same directory as the Applet. When the Applet is downloaded to the client side, the dll is also downloaded to the user's local System32 directory at the same time. The Applet also needs to execute the statement System.loadLibrary("JNIWindowUtil"). After completing the above steps, JNI can really be used inside the Applet to achieve the corresponding functions.

D.
Authentication server

The old systems' user authentication information is usually stored in a database, but this architecture uses LDAP to store user information. LDAP, short for Lightweight Directory Access Protocol, is a simplified form of the standard directory access protocol. It also defines the way data is organized; it is the de facto standard directory service based on the TCP/IP protocol, with distributed information access and data manipulation functions. LDAP uses a distributed directory information tree structure. It can organize and manage various users' information effectively and provide safe and efficient directory access.

Compared with a database, LDAP targets applications with far more read operations than write operations, whereas a database is designed to support a large number of writes. LDAP supports relatively simple transactions, while a database is built to handle a large number of varied transactions. Cross-domain queries mainly read data, the frequency of modification is very low, and cross-domain access transactions do not require a large load; so in comparison with a database, LDAP is the ideal choice: more effective and simpler. This framework is applied to a large bank; the bank's systems can belong to different regions, and users may come from different regions. In order to achieve distributed management, three management levels are used, named respectively the bank headquarters, provincial branches, and city branches, as shown in Fig 4:

Figure 4: LDAP Authentication Structure

Directory replication and directory referral are the most important technologies in the LDAP protocol. It can be seen from the figure that the LDAP server data of the provincial and city branches are copied from the level above, but this is not a simple copy of all information: only the data relevant to their own region is copied.
Because the users of a particular application system mostly belong to the same region, this implementation can greatly simplify the management of directory services and improve the efficiency of information retrieval. When a user from outside the region uses the system, his user information cannot be retrieved from the region's LDAP server, and the LDAP servers of other regions need to be queried; therefore upward referral queries are used: the provincial branch's server is searched first, and the query is referred further up to the bank headquarters' server until the appropriate user information is found.

For management within a regional city branch, the LDAP directory replication model of Single Master / Multi Slave is used. When a directory user queries directory information, the Master LDAP Server and the Slave LDAP Servers (there can be more than one Slave server) can both provide directory services, depending on which directory server the directory user sends the request to. When the user requests an update of directory information, in order to ensure that the Master LDAP Server and the Slave LDAP Servers hold the same directory content, the directory information needs to be replicated; this is achieved through directory replication of the LDAP Replica server data. When the number of directory users increases or system performance needs to be improved, simply adding Slave LDAP servers to the system is immediately effective in improving performance, and the whole directory service system can be well load balanced.

E. Permissions Web Server

Access control technology began in the computer era of shared data. Previously, people used computers mainly to submit run-code they had written or to run their own profile data. Users did not share much data, and there was no need to control access to data.
When computers came to offer users shared data, the subject of access control naturally came to the fore.

Currently, the widely used access control models adopt or reference the role-based access control model (Role-Based Access Control, RBAC) that arose in the early 1990s. The RBAC model's success lies in inserting the concept of a "role" between subject and object, effectively decoupling subjects from the corresponding objects (permissions) and adapting well to the instability of the association between subject and object.

The RBAC model includes four basic elements, namely users (U), roles (R), sessions (S), and permissions (P); derived models also include constraints (C). The basic idea is to assign access rights to roles, and then assign the roles to users. In one session, users gain access rights through roles. The relationships between the elements are: a user can have multiple roles, and a role can be granted to multiple users; a role can have multiple permissions, and a permission can be granted to multiple roles; a user can have multiple sessions, but a session is bound to only one user; a session can have multiple roles, and a role can be shared by multiple sessions at the same time. Constraints are specific restrictions that act on these relationships, as shown in Fig 5. This system uses this mature permission access control model.

Rights management not only protects the safety of the system but also facilitates administration. Currently, most systems use code reuse and database-structure reuse, with the rights management module integrated into the business systems. Such a framework has the following shortcomings:

1) Once the permission system is modified, the maintenance costs are very high. This is the general shortcoming of code reuse and database-structure reuse.
Once the permission system is revised, the code and database structure must be updated in all business systems, while ensuring that existing data can transition smoothly. Some processes may require manual intervention, which is a "painful" thing for developers and maintenance personnel.

2) It does not facilitate management of permission data. One must enter the permission management module of each business system to manage the corresponding rights. The operation is complex and not intuitive.

3) For different architectures and different software operating environments, different permission systems must be developed and maintained. For example, B/S and C/S architecture systems must each develop their own rights management system.

This paper argues that the most common functions of the permission system can be abstracted from the business systems to form an independent system, a "unified rights system". The business system retains only rights queries, reading of the system's common data, and fine-grained rights control specific to that system (such as menus, buttons, links, and so on), as shown in Fig 1.

How is unified rights management achieved? This paper argues that there are two implementations: one is to use Web services to provide the rights data; the other is to use a Mobile Agent to provide the permission data. However, the second has higher running and maintenance costs and is more difficult to implement than Web services. So this architecture uses Web services to provide the authority data of the various systems in a unified way.

A business system uses the Web services client interface to query permission data and obtain the system's shared data. The client is just a port; the specific implementation code is placed in the "unified rights system". These client interfaces are introduced to the business system as a package.
If the client interfaces are kept unchanged, modification and upgrade of the unified rights system will not affect the business systems. Users and permissions are managed uniformly through the Web pages of the "unified rights system", achieving the user's single sign-on. The biggest advantage of Web services is the integration of data between heterogeneous systems. This breaks the restrictions of B/S and C/S structures; there is no difference between the Windows and Linux platforms.

SYSTEM SECURITY ANALYSIS

1) Interception of user name and password. The system uses the SSL protocol for authentication of the user login and for sending the user name and password to the Applet objects, ensuring the confidentiality and integrity of the information during transmission. Meanwhile, because the key is hard to obtain and time-limited, man-in-the-middle attacks on the transmitted information can be effectively prevented.

2) Replay attacks. Many systems use timestamps to avoid replay attacks. However, this approach requires the computer clocks of the communicating parties to be synchronized, which is difficult to achieve; moreover, if the clocks of the two connected sides occasionally fall out of synchronization, correct information may be mistakenly discarded as replayed information, while incorrect replayed information may be accepted as the latest. Because of this, the system instead agrees on a simple function F between the query interface provided by the Web Service and the SSO Client or Agent of each application system. The function's parameter value is a random string X.
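The agreed function F over the random string X can be sketched as a keyed hash. The paper does not specify F, so HMAC-MD5 under a pre-shared secret, and all names below, are illustrative assumptions:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical sketch of the challenge-response agreement between the
// Web Service server and each SSO Client/Agent. Both sides hold the same
// secret and compute F(X) independently; a fresh random X per call keeps
// a captured response from being replayed.
public class NoteChallenge {
    private final SecretKeySpec key;

    public NoteChallenge(byte[] sharedSecret) {
        this.key = new SecretKeySpec(sharedSecret, "HmacMD5");
    }

    // F(X): the value the server returns on successful note validation.
    public String f(String x) {
        try {
            Mac mac = Mac.getInstance("HmacMD5");
            mac.init(key);
            return Base64.getEncoder()
                    .encodeToString(mac.doFinal(x.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Client side: generate a fresh random challenge for each validation call.
    public static String newChallenge() {
        byte[] buf = new byte[16];
        new SecureRandom().nextBytes(buf);
        return Base64.getEncoder().encodeToString(buf);
    }
}
```

The client sends X together with the note; the server answers with F(X) only when the note is valid, and the client accepts the redirect only if the returned value matches its own F(X).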
The whole process of note validation is shown in Fig 6:

a) When the user accesses application system A, the SSO Client of system A intercepts the request and calls the query interface provided by the Web Service; the input parameters are a random string X and the corresponding note.

b) The Web Service server receives system A's call and compares the received note with the note information in the Session queue. If the queue contains the note, it returns the value of F(X) to indicate that validation succeeded. If not, it returns 'failed' to indicate that validation failed.

c) The SSO Client of application A receives the return information of the Web Service server and compares the return value with its own F(X). If the two are the same, it redirects to system A; otherwise the visit is not allowed.

The random string differs for each interaction with the Web Service server, so replay attacks can be limited very well.

3) Use of reverse proxy technology. A reverse proxy server stands in for N identical application servers. External visitors to the application know only the reverse proxy server and cannot see the multiple application servers behind it. This improves the security of the application system.

Through the above analysis, this system can provide users with a good, safe Web environment.

SYSTEM PERFORMANCE ANALYSIS

First, apart from using SSL encryption for the transmission of the user name and password, the interactions between the other servers, and between users and servers, are transmitted over the HTTP protocol. The SSL encryption and decryption process requires a lot of system cost and severely reduces machine performance, so this protocol should not be used to transmit too much data.
Since the data to be encrypted is small (only a userID value, the note), the performance of using MD5 is quite satisfactory.

Second, when a user accesses any application system in a domain, he is redirected to the Unified Portal system for identity authentication, or to the Web Service server for note validation; the user needs to sign on only the first time he is authenticated. When the visitor volume is large, switching to a new application system can easily be interrupted, which appears as a single sign-on failure. This phenomenon has two causes: the server load is too large, or the network bandwidth is insufficient. The excessive server load can be resolved with a server cluster. A cluster is made up of multiple servers and, as a unified resource, provides a single system service to the outside. In this system, besides using reverse proxy technology to improve the security of access to the applications, the more important capability of the reverse proxy is that it helps implement the load-balancing side of cluster technology. The overall structure of the reverse proxy is shown in Fig. 7: the reverse proxy server R, in addition to providing a cache for the application servers A1, A2 and A3 behind it, provides the corresponding interface for implementing the load-balancing algorithm. That is, by scanning the CPU, memory and I/O conditions of servers A1, A2 and A3, it can distribute each arriving request to the server with the best current performance.

Using LoadRunner 8.1, stress tests were run on the system before and after introducing the reverse proxy. The results are shown in Fig. 8: at the beginning, when the number of concurrent users is small, performance with and without the reverse proxy is similar.
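The dispatch decision described for reverse proxy R can be sketched as follows. The load score here is a hypothetical weighted sum of CPU, memory and I/O utilization; the weights and the Backend class are assumptions for illustration, since the paper does not give the actual algorithm.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of least-load dispatch: pick the backend (A1, A2, A3) with the
// lightest current load.
public class LeastLoadDispatch {
    static class Backend {
        final String name;
        double cpu, mem, io;   // utilization in [0, 1], sampled from the server
        Backend(String name, double cpu, double mem, double io) {
            this.name = name; this.cpu = cpu; this.mem = mem; this.io = io;
        }
        // Assumed weighting of the three conditions the proxy scans.
        double load() { return 0.5 * cpu + 0.3 * mem + 0.2 * io; }
    }

    // Distribute the arriving request to the best-performing server.
    static Backend choose(List<Backend> backends) {
        Backend best = backends.get(0);
        for (Backend b : backends)
            if (b.load() < best.load()) best = b;
        return best;
    }

    public static void main(String[] args) {
        List<Backend> pool = new ArrayList<>();
        pool.add(new Backend("A1", 0.90, 0.70, 0.40));
        pool.add(new Backend("A2", 0.20, 0.30, 0.10));
        pool.add(new Backend("A3", 0.60, 0.50, 0.80));
        System.out.println(choose(pool).name);   // A2 carries the least load
    }
}
```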
But as the number of concurrent users gradually increases, the performance difference between the two becomes more and more evident: with 100 concurrent users, the response time of the system using the reverse proxy is almost twice as fast as that of the system without it.

The Web Service server needs to store the note information, so a Web Service server cluster must pay attention to the following problem: the different servers of the cluster use different JVMs, and an object in one JVM cannot be accessed directly by another JVM. There are two ways to resolve this:

1) Put the object in the Session, and configure the cluster for Session replication.

2) Use Memcache: put the object in Memcache and have every server fetch it from there. This is equivalent to opening a public memory area that everyone can access.

Furthermore, the business systems frequently request rights information through the Web services, which places higher requirements on system performance. Two measures were taken to improve performance:

1) The authority data server receives requests using a time-sharing pattern. If authority data were always computed in real time, the server's limited resources would prevent timely responses and slow the system down. A "time-sharing pattern for authority data" solves this problem: when the system data changes (for example, a new operation is authorized to a role), the system automatically determines the affected users, recalculates the relevant authority data, and saves it to a designated field of the database. When a business system requests data, only the simple action of "reading the specified data from the designated database field" is executed, which greatly speeds up the system's response.

2) A cache structure was designed, because relying solely on the time-sharing model is not enough.
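The "time-sharing pattern for authority data" described in measure 1) can be sketched as follows. The role/user model and the in-memory map standing in for the designated database field are illustrative assumptions; the point is that recomputation happens on authorization changes, while the request path is a plain read.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: permissions are recomputed when an authorization changes,
// not on every request.
public class AuthorityCache {
    private final Map<String, Set<String>> roleOps = new HashMap<>();     // role -> operations
    private final Map<String, Set<String>> userRoles = new HashMap<>();   // user -> roles
    private final Map<String, Set<String>> precomputed = new HashMap<>(); // "designated field"

    void assignRole(String user, String role) {
        userRoles.computeIfAbsent(user, u -> new HashSet<>()).add(role);
        recompute(user);
    }

    // Called when an operation is newly authorized to a role: only the
    // affected users are recomputed.
    void authorize(String role, String op) {
        roleOps.computeIfAbsent(role, r -> new HashSet<>()).add(op);
        for (Map.Entry<String, Set<String>> e : userRoles.entrySet())
            if (e.getValue().contains(role)) recompute(e.getKey());
    }

    private void recompute(String user) {
        Set<String> ops = new HashSet<>();
        for (String role : userRoles.getOrDefault(user, Set.of()))
            ops.addAll(roleOps.getOrDefault(role, Set.of()));
        precomputed.put(user, ops);
    }

    // The request path is a simple read of the precomputed data.
    Set<String> rightsOf(String user) {
        return precomputed.getOrDefault(user, Set.of());
    }

    public static void main(String[] args) {
        AuthorityCache c = new AuthorityCache();
        c.assignRole("alice", "editor");
        c.authorize("editor", "doc:write");
        System.out.println(c.rightsOf("alice"));   // contains doc:write
    }
}
```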

Software Engineering Foreign Literature Translation


Xi'an University of Posts and Telecommunications, Graduation Design (Thesis) Foreign Literature Translation. School: School of Computer Science; Major: Software Engineering; Class: Software 0601; Student name: ; Supervisor name: ; Title: Associate Professor; Duration: March 8, 2010 to June 11, 2010.

Classes

One of the most compelling features about Java is code reuse. But to be revolutionary, you've got to be able to do a lot more than copy code and change it. That's the approach used in procedural languages like C, and it hasn't worked very well. Like everything in Java, the solution revolves around the class. You reuse code by creating new classes, but instead of creating them from scratch, you use existing classes that someone has already built and debugged. The trick is to use the classes without soiling the existing code.

➢ Initializing the base class

Since there are now two classes involved (the base class and the derived class) instead of just one, it can be a bit confusing to try to imagine the resulting object produced by a derived class. From the outside, it looks like the new class has the same interface as the base class and maybe some additional methods and fields. But inheritance doesn't just copy the interface of the base class. When you create an object of the derived class, it contains within it a subobject of the base class. This subobject is the same as if you had created an object of the base class by itself; it's just that, from the outside, the subobject of the base class is wrapped within the derived-class object.

Of course, it's essential that the base-class subobject be initialized correctly, and there's only one way to guarantee this: perform the initialization in the constructor by calling the base-class constructor, which has all the appropriate knowledge and privileges to perform the base-class initialization. Java automatically inserts calls to the base-class constructor in the derived-class constructor.

➢ Guaranteeing proper cleanup

Java doesn't have the C++ concept of a destructor, a method that is automatically called when an object is destroyed.
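The constructor chaining described under "Initializing the base class" can be sketched as follows; the Art/Drawing/Cartoon names are illustrative. Creating a Cartoon automatically runs the Art constructor first, then Drawing, then Cartoon.

```java
// Minimal sketch of automatic base-class constructor calls.
class Art {
    Art() { Trace.log.append("Art "); }
}
class Drawing extends Art {
    Drawing() { Trace.log.append("Drawing "); }
}
public class Cartoon extends Drawing {
    Cartoon() { Trace.log.append("Cartoon"); }

    public static void main(String[] args) {
        new Cartoon();                  // runs Art(), Drawing(), Cartoon() in order
        System.out.println(Trace.log);  // Art Drawing Cartoon
    }
}
// Shared log so the construction order is observable.
class Trace {
    static final StringBuilder log = new StringBuilder();
}
```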
The reason is probably that in Java the practice is simply to forget about objects rather than to destroy them, allowing the garbage collector to reclaim the memory as necessary. Often this is fine, but there are times when your class might perform some activities during its lifetime that require cleanup. As mentioned in Chapter 4, you can't know when the garbage collector will be called, or whether it will be called at all. So if you want something cleaned up for a class, you must explicitly write a special method to do it, and make sure that the client programmer knows that they must call this method.

Note that in your cleanup method you must also pay attention to the calling order for the base-class and member-object cleanup methods, in case one subobject depends on another. In general, you should follow the same form that a C++ compiler imposes on its destructors: first perform all of the cleanup work specific to your class, in the reverse order of creation (in general, this requires that base-class elements still be viable), and then call the base-class cleanup method, as demonstrated here.

➢ Name hiding

If a Java base class has a method name that's overloaded several times, redefining that method name in the derived class will not hide any of the base-class versions (unlike C++). Thus overloading works regardless of whether the method was defined at this level or in a base class. That said, it's far more common to override methods of the same name, using exactly the same signature and return type as in the base class; anything else can be confusing (which is why C++ disallows it, to prevent you from making what is probably a mistake).

➢ Choosing composition vs. inheritance

Both composition and inheritance allow you to place subobjects inside your new class (composition does this explicitly; with inheritance it's implicit).
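The cleanup ordering described under "Guaranteeing proper cleanup" can be sketched as follows; the class names and the dispose() method name are illustrative. Class-specific cleanup runs first, in reverse order of creation, and the base-class cleanup runs last.

```java
// Sketch of cleanup ordering: members in reverse creation order, then base.
class Component {
    private final String name;
    Component(String name) { this.name = name; }
    void dispose() { CleanupDemo.order.append(name).append(" "); }
}
class Base {
    void dispose() { CleanupDemo.order.append("Base"); }
}
public class CleanupDemo extends Base {
    static final StringBuilder order = new StringBuilder();
    private final Component first = new Component("first");
    private final Component second = new Component("second");

    @Override
    void dispose() {
        second.dispose();   // reverse order of creation
        first.dispose();
        super.dispose();    // base-class cleanup last
    }

    public static void main(String[] args) {
        new CleanupDemo().dispose();
        System.out.println(order);   // second first Base
    }
}
```

Because the client programmer must call dispose() explicitly, documenting this obligation is part of the class's contract, exactly as the text warns.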
You might wonder about the difference between the two, and when to choose one over the other.

Composition is generally used when you want the features of an existing class inside your new class, but not its interface. That is, you embed an object so that you can use it to implement functionality in your new class, but the user of your new class sees the interface you've defined for the new class rather than the interface of the embedded object. For this effect, you embed private objects of existing classes inside your new class.

Sometimes it makes sense to allow the class user to directly access the composition of your new class; that is, to make the member objects public. The member objects use implementation hiding themselves, so this is a safe thing to do. When the user knows you're assembling a bunch of parts, it makes the interface easier to understand.

When you inherit, you take an existing class and make a special version of it. In general, this means that you're taking a general-purpose class and specializing it for a particular need.

➢ The final keyword

Java's final keyword has slightly different meanings depending on the context, but in general it says "This cannot be changed." You might want to prevent changes for two reasons: design or efficiency. Because these two reasons are quite different, it's possible to misuse the final keyword. The following sections discuss the three places where final can be used: for data, methods, and classes.

➢ Final data

Many programming languages have a way to tell the compiler that a piece of data is "constant." A constant is useful for two reasons:

It can be a compile-time constant that won't ever change.

It can be a value initialized at run time that you don't want changed.

In the case of a compile-time constant, the compiler is allowed to "fold" the constant value into any calculations in which it's used; that is, the calculation can be performed at compile time, eliminating some run-time overhead.
In Java, these sorts of constants must be primitives and are expressed with the final keyword. A value must be given at the time of definition of such a constant. A field that is both static and final has only one piece of storage that cannot be changed.

When final is used with object references rather than primitives, the meaning gets a bit confusing. With a primitive, final makes the value a constant; with an object reference, final makes the reference a constant. Once the reference is initialized to an object, it can never be changed to point to another object. However, the object itself can be modified; Java does not provide a way to make any arbitrary object a constant. (You can, however, write your class so that objects have the effect of being constant.) This restriction includes arrays, which are also objects.

➢ Final methods

There are two reasons for final methods. The first is to put a "lock" on the method to prevent any inheriting class from changing its meaning. This is done for design reasons when you want to make sure that a method's behavior is retained during inheritance and cannot be overridden.

The second reason for final methods is efficiency. If you make a method final, you are allowing the compiler to turn any calls to that method into inline calls. When the compiler sees a final method call, it can (at its discretion) skip the normal approach of inserting code to perform the method-call mechanism (push arguments on the stack, hop over to the method code and execute it, hop back and clean off the stack arguments, and deal with the return value) and instead replace the method call with a copy of the actual code in the method body. This eliminates the overhead of the method call. Of course, if a method is big, then your code begins to bloat, and you probably won't see any performance gains from inlining, since any improvements will be dwarfed by the amount of time spent inside the method.
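The distinction drawn under "Final data" between a final primitive and a final reference can be seen in a short sketch; the field names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// final with primitives versus references: the reference cannot be
// rebound, but the object it points to can still be modified.
public class FinalData {
    static final int COMPILE_TIME = 9 * 4;          // folded at compile time
    final List<String> names = new ArrayList<>();   // constant reference

    public static void main(String[] args) {
        FinalData fd = new FinalData();
        fd.names.add("first");    // legal: the object itself is still mutable
        fd.names.add("second");
        // fd.names = new ArrayList<>();   // illegal: cannot rebind a final reference
        System.out.println(COMPILE_TIME + " " + fd.names.size());  // 36 2
    }
}
```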
It is implied that the Java compiler is able to detect these situations and choose wisely whether to inline a final method. However, it's best to let the compiler and JVM handle efficiency issues and make a method final only if you want to explicitly prevent overriding.

➢ Final classes

When you say that an entire class is final (by preceding its definition with the final keyword), you state that you don't want to inherit from this class or allow anyone else to do so. In other words, for some reason the design of your class is such that there is never a need to make any changes, or for safety or security reasons you don't want subclassing.

Note that the fields of a final class can be final or not, as you choose; the same rules apply to final fields regardless of whether the class is defined as final. However, because it prevents inheritance, all methods in a final class are implicitly final, since there's no way to override them. You can add the final specifier to a method in a final class, but it doesn't add any meaning.

➢ Summary

Both inheritance and composition allow you to create a new type from existing types. Typically, however, composition reuses existing types as part of the underlying implementation of the new type, and inheritance reuses the interface. Since the derived class has the base-class interface, it can be upcast to the base, which is critical for polymorphism, as you'll see in the next chapter.

Despite the strong emphasis on inheritance in object-oriented programming, when you start a design you should generally prefer composition during the first cut and use inheritance only when it is clearly necessary. Composition tends to be more flexible. In addition, by using the added artifice of inheritance with your member type, you can change the exact type, and thus the behavior, of those member objects at run time.
Therefore, you can change the behavior of the composed object at run time. When designing a system, your goal is to find or create a set of classes in which each class has a specific use and is neither too big (encompassing so much functionality that it's unwieldy to reuse) nor annoyingly small (usable neither by itself nor without adding functionality).

Classes (Chinese translation, truncated in the source): "One of the most compelling features about Java is code reuse or regeneration."
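The run-time flexibility of composition noted in the summary can be sketched as follows; the Greeter/Formal/Casual names are illustrative. Swapping the member object changes the composed object's behavior at run time, something inheritance fixes at compile time.

```java
// Composition lets the member object, and thus the behavior, be swapped
// at run time.
interface Greeter {
    String greet(String name);
}
class Formal implements Greeter {
    public String greet(String name) { return "Good day, " + name; }
}
class Casual implements Greeter {
    public String greet(String name) { return "Hi " + name; }
}
public class Host {
    private Greeter greeter = new Formal();      // composed member

    void setGreeter(Greeter g) { greeter = g; }  // behavior change at run time
    String welcome(String name) { return greeter.greet(name); }

    public static void main(String[] args) {
        Host host = new Host();
        System.out.println(host.welcome("Ada"));  // Good day, Ada
        host.setGreeter(new Casual());
        System.out.println(host.welcome("Ada"));  // Hi Ada
    }
}
```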

Software Engineering Graduation Design Foreign Literature Translation


Undergraduate Graduation Design Foreign Literature Translation. English title: Software Database: An Object-Oriented Perspective. Chinese title: 软件数据库的面向对象的视角. Student name: Song Lanlan; School: School of Information Engineering; Department: Department of Software Engineering; Major: Software Engineering; Class: Software 09-1; Supervisor: Guan Yuxin, Lecturer. June 2013, Inner Mongolia University of Technology.

A HISTORICAL PERSPECTIVE

From the earliest days of computers, storing and manipulating data have been a major application focus. The first general-purpose DBMS was designed by Charles Bachman at General Electric in the early 1960s and was called the Integrated Data Store. It formed the basis for the network data model, which was standardized by the Conference on Data Systems Languages (CODASYL) and strongly influenced database systems through the 1960s. Bachman was the first recipient of ACM's Turing Award (the computer science equivalent of a Nobel prize) for work in the database area; he received the award in 1973.

In the late 1960s, IBM developed the Information Management System (IMS) DBMS, used even today in many major installations. IMS formed the basis for an alternative data representation framework called the hierarchical data model. The SABRE system for making airline reservations was jointly developed by American Airlines and IBM around the same time, and it allowed several people to access the same data through a computer network. Interestingly, today the same SABRE system is used to power popular Web-based travel services such as Travelocity!

In 1970, Edgar Codd, at IBM's San Jose Research Laboratory, proposed a new data representation framework called the relational data model. This proved to be a watershed in the development of database systems: it sparked the rapid development of several DBMSs based on the relational model, along with a rich body of theoretical results that placed the field on a firm foundation. Codd won the 1981 Turing Award for his seminal work. Database systems matured as an academic discipline, and the popularity of relational DBMSs changed the commercial landscape.
Their benefits were widely recognized, and the use of DBMSs for managing corporate data became standard practice. In the 1980s, the relational model consolidated its position as the dominant DBMS paradigm, and database systems continued to gain widespread use. The SQL query language for relational databases, developed as part of IBM's System R project, is now the standard query language. SQL was standardized in the late 1980s, and the current standard, SQL-92, was adopted by the American National Standards Institute (ANSI) and the International Standards Organization (ISO).

Arguably, the most widely used form of concurrent programming is the concurrent execution of database programs (called transactions). Users write programs as if they are to be run by themselves, and the responsibility for running them concurrently is given to the DBMS. James Gray won the 1999 Turing Award for his contributions to the field of transaction management in a DBMS.

In the late 1980s and the 1990s, advances were made in many areas of database systems. Considerable research has been carried out into more powerful query languages and richer data models, and there has been a big emphasis on supporting complex analysis of data from all parts of an enterprise. Several vendors (e.g., IBM's DB2, Oracle 8, Informix UDS) have extended their systems with the ability to store new data types such as images and text, and with the ability to ask more complex queries. Specialized systems have been developed by numerous vendors for creating data warehouses, consolidating data from several databases, and carrying out specialized analysis.

An interesting phenomenon is the emergence of several enterprise resource planning (ERP) and management resource planning (MRP) packages, which add a substantial layer of application-oriented features on top of a DBMS. Widely used packages include systems from Baan, Oracle, PeopleSoft, SAP, and Siebel.
These packages identify a set of common tasks (e.g., inventory management, human resources planning, financial analysis) encountered by a large number of organizations and provide a general application layer to carry out these tasks. The data is stored in a relational DBMS, and the application layer can be customized to different companies, leading to lower overall costs for the companies compared with the cost of building the application layer from scratch.

Most significantly, perhaps, DBMSs have entered the Internet Age. While the first generation of Web sites stored their data exclusively in operating system files, the use of a DBMS to store data that is accessed through a Web browser is becoming widespread. Queries are generated through Web-accessible forms, and answers are formatted using a markup language such as HTML in order to be easily displayed in a browser. All the database vendors are adding features to their DBMSs aimed at making them more suitable for deployment over the Internet.

Database management continues to gain importance as more and more data is brought on-line and made ever more accessible through computer networking. Today the field is being driven by exciting visions such as multimedia databases, interactive video, digital libraries, a host of scientific projects such as the human genome mapping effort and NASA's Earth Observation System project, and the desire of companies to consolidate their decision-making processes and mine their data repositories for useful information about their businesses. Commercially, database management systems represent one of the largest and most vigorous market segments. Thus the study of database systems could prove to be richly rewarding in more ways than one!

INTRODUCTION TO PHYSICAL DATABASE DESIGN

Like all other aspects of database design, physical design must be guided by the nature of the data and its intended use.
In particular, it is important to understand the typical workload that the database must support; the workload consists of a mix of queries and updates. Users also have certain requirements about how fast certain queries or updates must run or how many transactions must be processed per second. The workload description and users' performance requirements are the basis on which a number of decisions have to be made during physical database design.

To create a good physical database design and to tune the system for performance in response to evolving user requirements, the designer needs to understand the workings of a DBMS, especially the indexing and query processing techniques supported by the DBMS. If the database is expected to be accessed concurrently by many users, or is a distributed database, the task becomes more complicated, and other features of a DBMS come into play.

DATABASE WORKLOADS

The key to good physical design is arriving at an accurate description of the expected workload. A workload description includes the following elements:

1. A list of queries and their frequencies, as a fraction of all queries and updates.
2. A list of updates and their frequencies.
3. Performance goals for each type of query and update.

For each query in the workload, we must identify:

Which relations are accessed.
Which attributes are retained (in the SELECT clause).
Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.
Similarly, for each update in the workload, we must identify:

Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.
The type of update (INSERT, DELETE, or UPDATE) and the updated relation.
For UPDATE commands, the fields that are modified by the update.

Remember that queries and updates typically have parameters; for example, a debit or credit operation involves a particular account number. The values of these parameters determine the selectivity of selection and join conditions.

Updates have a query component that is used to find the target tuples. This component can benefit from a good physical design and the presence of indexes. On the other hand, updates typically require additional work to maintain indexes on the attributes that they modify. Thus, while queries can only benefit from the presence of an index, an index may either speed up or slow down a given update. Designers should keep this trade-off in mind when creating indexes.

NEED FOR DATABASE TUNING

Accurate, detailed workload information may be hard to come by while doing the initial design of the system. Consequently, tuning a database after it has been designed and deployed is important: we must refine the initial design in the light of actual usage patterns to obtain the best possible performance.

The distinction between database design and database tuning is somewhat arbitrary. We could consider the design process to be over once an initial conceptual schema is designed and a set of indexing and clustering decisions is made; any subsequent changes to the conceptual schema or the indexes would then be regarded as a tuning activity.
Alternatively, we could consider some refinement of the conceptual schema (and the physical design decisions affected by this refinement) to be part of the physical design process. Where we draw the line between design and tuning is not very important.

OVERVIEW OF DATABASE TUNING

After the initial phase of database design, actual use of the database provides a valuable source of detailed information that can be used to refine the initial design. Many of the original assumptions about the expected workload can be replaced by observed usage patterns; in general, some of the initial workload specification will be validated, and some of it will turn out to be wrong. Initial guesses about the size of data can be replaced with actual statistics from the system catalogs (although this information will keep changing as the system evolves). Careful monitoring of queries can reveal unexpected problems; for example, the optimizer may not be using some indexes as intended to produce good plans. Continued database tuning is important to get the best possible performance.

TUNING THE CONCEPTUAL SCHEMA

In the course of database design, we may realize that our current choice of relation schemas does not enable us to meet our performance objectives for the given workload with any (feasible) set of physical design choices. If so, we may have to redesign our conceptual schema (and re-examine the physical design decisions that are affected by the changes we make). We may realize that a redesign is necessary during the initial design process, or later, after the system has been in use for a while. Once a database has been designed and populated with data, changing the conceptual schema requires a significant effort in terms of mapping the contents of the relations that are affected. Nonetheless, it may sometimes be necessary to revise the conceptual schema in light of experience with the system.
We now consider the issues involved in conceptual schema (re)design from the point of view of performance. Several options must be considered while tuning the conceptual schema:

We may decide to settle for a 3NF design instead of a BCNF design.
If there are two ways to decompose a given schema into 3NF or BCNF, our choice should be guided by the workload.
Sometimes we might decide to further decompose a relation that is already in BCNF.
In other situations we might denormalize. That is, we might choose to replace a collection of relations obtained by a decomposition from a larger relation with the original (larger) relation, even though it suffers from some redundancy problems. Alternatively, we might choose to add some fields to certain relations to speed up some important queries, even if this leads to a redundant storage of some information (and consequently a schema that is in neither 3NF nor BCNF).

This discussion of normalization has concentrated on the technique of decomposition, which amounts to vertical partitioning of a relation. Another technique to consider is horizontal partitioning of a relation, which would lead to our having two relations with identical schemas. Note that we are not talking about physically partitioning the tuples of a single relation; rather, we want to create two distinct relations (possibly with different constraints and indexes on each).

Incidentally, when we redesign the conceptual schema, especially if we are tuning an existing database schema, it is worth considering whether we should create views to mask these changes from users for whom the original schema is more natural.

TUNING QUERIES AND VIEWS

If we notice that a query is running much slower than we expected, we have to examine the query carefully to find the problem. Some rewriting of the query, perhaps in conjunction with some index tuning, can often fix the problem.
Similar tuning may be called for if queries on some view run slower than expected.

When tuning a query, the first thing to verify is that the system is using the plan that you expect it to use. It may be that the system is not finding the best plan for a variety of reasons. Some common situations that are not handled efficiently by many optimizers follow:

A selection condition involving null values.
Selection conditions involving arithmetic or string expressions, or conditions using the OR connective. For example, if we have a condition E.age = 2*D.age in the WHERE clause, the optimizer may correctly utilize an available index on E.age but fail to utilize an available index on D.age. Replacing the condition by E.age/2 = D.age would reverse the situation.
Inability to recognize a sophisticated plan such as an index-only scan for an aggregation query involving a GROUP BY clause.

If the optimizer is not smart enough to find the best plan (using the access methods and evaluation strategies supported by the DBMS), some systems allow users to guide the choice of a plan by providing hints to the optimizer; for example, users might be able to force the use of a particular index or choose the join order and join method. A user who wishes to guide optimization in this manner should have a thorough understanding of both optimization and the capabilities of the given DBMS.

(8) OTHER TOPICS

MOBILE DATABASES

The availability of portable computers and wireless communications has created a new breed of nomadic database users. At one level these users are simply accessing a database through a network, which is similar to distributed DBMSs.
At another level, the network as well as the data and user characteristics now have several novel properties, which affect basic assumptions in many components of a DBMS, including the query engine, transaction manager, and recovery manager:

Users are connected through a wireless link whose bandwidth is ten times less than Ethernet and 100 times less than ATM networks. Communication costs are therefore significantly higher in proportion to I/O and CPU costs.
Users' locations are constantly changing, and mobile computers have a limited battery life. Therefore, the true communication costs reflect connection time and battery usage in addition to bytes transferred, and they change constantly depending on location. Data is frequently replicated to minimize the cost of accessing it from different locations.
As a user moves around, data could be accessed from multiple database servers within a single transaction. The likelihood of losing connections is also much greater than in a traditional network. Centralized transaction management may therefore be impractical, especially if some data is resident at the mobile computers. We may in fact have to give up on ACID transactions and develop alternative notions of consistency for user programs.

MAIN MEMORY DATABASES

The price of main memory is now low enough that we can buy enough main memory to hold the entire database for many applications; with 64-bit addressing, modern CPUs also have very large address spaces. Some commercial systems now have several gigabytes of main memory. This shift prompts a re-examination of some basic DBMS design decisions, since disk accesses no longer dominate processing time for a memory-resident database:

Main memory does not survive system crashes, and so we still have to implement logging and recovery to ensure transaction atomicity and durability. Log records must be written to stable storage at commit time, and this process could become a bottleneck.
To minimize this problem, rather than commit each transaction as it completes, we can collect completed transactions and commit them in batches; this is called group commit. Recovery algorithms can also be optimized, since pages rarely have to be written out to make room for other pages.
The implementation of in-memory operations has to be optimized carefully, since disk accesses are no longer the limiting factor for performance.
A new criterion must be considered while optimizing queries, namely the amount of space required to execute a plan. It is important to minimize this space overhead, because exceeding available physical memory would lead to swapping pages to disk (through the operating system's virtual memory mechanisms), greatly slowing down execution.
Page-oriented data structures become less important (since pages are no longer the unit of data retrieval), and clustering is not important (since the cost of accessing any region of main memory is uniform).

(1) A Historical Perspective (Chinese translation, truncated in the source): "From the earliest days of databases, storing and manipulating data have been a major application focus."
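The group-commit idea described above can be sketched as follows. The batching structure and the flush counter, which stands in for the expensive stable-storage write, are illustrative assumptions; a real DBMS would force actual log pages to disk.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of group commit: completed transactions are collected and their
// log records are forced to stable storage in one batch instead of one
// write per transaction.
public class GroupCommit {
    int flushes = 0;                        // forced log writes so far
    private final int batchSize;
    private final List<String> pending = new ArrayList<>();

    GroupCommit(int batchSize) { this.batchSize = batchSize; }

    void commit(String txnLogRecord) {
        pending.add(txnLogRecord);
        if (pending.size() >= batchSize) flush();
    }

    void flush() {
        if (pending.isEmpty()) return;
        flushes++;                          // one stable-storage write per batch
        pending.clear();
    }

    public static void main(String[] args) {
        GroupCommit log = new GroupCommit(10);
        for (int i = 0; i < 100; i++) log.commit("txn-" + i);
        log.flush();                        // force any tail batch
        System.out.println(log.flushes);    // 10 batched writes, not 100
    }
}
```

The trade-off is added commit latency for the transactions that wait in a batch, in exchange for far fewer forced writes.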

Software Engineering Undergraduate Graduation Foreign Literature Translation Materials


School code: 10128. Undergraduate Graduation Design Foreign Literature Translation, January 2015.

The Test Library Management System of Framework Based on SSH

The application systems of small or medium-sized enterprises are characterized by greater flexibility, safety, and a high performance-price ratio. The traditional J2EE framework cannot adapt to these needs, but application systems based on SSH (Struts + Spring + Hibernate) technology can satisfy them better. This paper analyses some integration theory and key technologies of SSH and, based on this integration, constructs a lightweight WEB framework that combines the three technologies, forming a lightweight SSH-based WEB framework that has gained good effects in practical applications.

Introduction

The J2EE platform [27], generally used in large enterprise applications, can solve application reliability, safety, and stability well, but its weaknesses are its high price and long construction cycle. For small or medium enterprise applications, the alternative is a lightweight WEB system framework, including the commonly used approaches based on Struts and Hibernate. With the wide application of Spring, the combination of the three technologies may be a better choice for a lightweight WEB framework. It uses a layered structure and provides a good integrated framework for Web applications at all levels, minimizing inter-layer coupling and increasing development efficiency. This framework can solve many problems and has good maintainability and scalability: it achieves the separation of user interface from business logic, the separation of business logic from database operation, correct procedure control logic, and so on.
This paper studies the technology and principles of Struts, Spring and Hibernate, presenting a proven lightweight WEB application framework for enterprise.

Hierarchical Web Mechanism

A hierarchical Web framework includes the user presentation layer, business logic layer, data persistence layer, expansion layer, etc.; each layer serves a different function, and together they complete the whole application. The whole system is divided into different logic modules that are relatively independent of each other, and each module can be implemented according to a different design. This realizes parallel development of the system, rapid integration, good maintainability and scalability.

Struts MVC Framework

To ensure reuse and efficiency of the development process, adopting J2EE technology to build a Web application requires selecting a system framework with good performance. Only in this way can we avoid wasting a lot of time on adjusting configuration, and achieve application development efficiently and quickly. So, programmers in the course of practice arrived at some successful development patterns which proved practical, such as MVC and O/R mapping, etc.; many technologies, including the Struts and Hibernate frameworks, realized these patterns. However, the Struts framework only settled the separation problem between the view layer and the business logic and control layers, and did not provide flexible support for complex data-saving processes. On the contrary, the Hibernate framework offers powerful and flexible support for complex data-saving processes. Therefore, how to integrate the two frameworks and obtain a flexible, low-coupling solution that is easy to maintain for an information system is a research task which engineering staff are studying constantly.

Model-View-Controller (MVC) is a popular design pattern. It divides an interactive system into three components, each of which specializes in one task. The model contains the application data and manages the core functionality.
The visual display of the model and the feedback to the users are managed by the view. The controller not only interprets the inputs from the user, but also directs the model and the view to change appropriately. MVC separates the system functionality from the system interface so as to enhance the system's scalability and maintainability. Struts is a typical MVC framework[32], and it also contains the three aforementioned components. The model level is composed of JavaBean and EJB components. The controller is realized by Action and ActionServlet, and the view layer consists of JSP files. The central controller controls the action execution: it receives a request and redirects this request to the appropriate module controller. Subsequently, the module controller processes the request and returns results to the central controller using a JavaBean object, which stores any object to be presented in the view layer, including an indication of the module views that must be presented. The central controller redirects the returned JavaBean object to the main view that displays its information.

Spring Framework Technology

Spring is a lightweight J2EE application development framework, which uses the model of Inversion of Control (IoC) to separate the actual application from its configuration and dependency rules. Committed to J2EE application solutions at all levels, Spring is not attempting to replace the existing frameworks, but rather "welds" the objects of the J2EE application at all levels together through POJO management. In addition, developers are free to choose some or all of the Spring framework, since Spring modules are not totally dependent on each other. As a major business-level detail, Spring employs the idea of delayed (dependency) injection to assemble code for the sake of improving the scalability and flexibility of the built systems.
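The MVC division of labor described above can be reduced to a small sketch. This is a hand-rolled illustration of the pattern, not the Struts API; all class names (Model, View, Controller, MvcDemo) are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled MVC sketch (illustrative only, not the Struts framework).
class Model {
    private final List<String> items = new ArrayList<>();
    void add(String item) { items.add(item); }   // core data management
    List<String> items() { return items; }
}

class View {
    // Renders the model's state; knows nothing about input handling.
    String render(Model model) { return "Items: " + model.items(); }
}

class Controller {
    private final Model model;
    private final View view;
    Controller(Model model, View view) { this.model = model; this.view = view; }
    // Interprets a user "request", then updates model and view appropriately.
    String handle(String input) {
        model.add(input);
        return view.render(model);
    }
}

public class MvcDemo {
    public static void main(String[] args) {
        Controller c = new Controller(new Model(), new View());
        System.out.println(c.handle("first"));   // Items: [first]
        System.out.println(c.handle("second"));  // Items: [first, second]
    }
}
```

Because the view never touches input handling and the model never touches rendering, each component can be replaced (for example, swapping the view for a JSP page) without disturbing the others.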
Thus, the systems achieve centralized business processing and a reduction of code duplication through the Spring AOP module.

Hibernate Persistence Framework

Hibernate is a kind of open-source framework with DAO design patterns to achieve mapping (O/R Mapping) between objects and a relational database.

During Web system development, the traditional approach interacts with the database directly through JDBC. However, this method not only has a heavy workload but also produces complex JDBC SQL code which needs revising whenever the business logic changes slightly. So both developing and maintaining the system are inconvenient. Considering the large difference between the object-oriented relations of Java and the structure of a relational database, it is necessary to introduce a direct mapping mechanism between objects and the database; this kind of mapping should use configuration files as much as possible, so that the mapping files, rather than the Java source code, will need modifying when the business logic changes in the future. Therefore, the O/R mapping pattern emerged, of which Hibernate is one of the most outstanding architectural realizations.

It encapsulates JDBC in a lightweight way, letting Java programmers operate a relational database with object-oriented programming thinking. It is an implementation technology in the persistence layer. Compared to other persistence-layer technologies such as JDBC, EJB and JDO, Hibernate is easy to grasp and more in line with object-oriented programming thinking. Hibernate has its own query language (HQL), which is fully object-oriented. The basic structure of its application is shown in figure 6.1.

Hibernate is a data persistence framework, and the core technology is object/relational database mapping (ORM).
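The Inversion of Control idea mentioned for Spring above can be illustrated without the container itself. The sketch below hand-wires a dependency through a constructor; the names (MessageDao, InMemoryMessageDao, MessageService) are invented, and in a real Spring application the container would perform this wiring from configuration rather than by hand.

```java
// Hand-rolled constructor injection illustrating IoC (a sketch of the
// principle, not the Spring container itself).
interface MessageDao {                        // data-access abstraction
    String load();
}

class InMemoryMessageDao implements MessageDao {
    public String load() { return "hello"; }
}

class MessageService {
    private final MessageDao dao;
    // The dependency is supplied from outside ("inverted" control),
    // so the service never constructs its own collaborators.
    MessageService(MessageDao dao) { this.dao = dao; }
    String fetch() { return dao.load().toUpperCase(); }
}

public class IocDemo {
    public static void main(String[] args) {
        // The assembler (normally Spring's container, driven by XML or
        // annotations) wires the object graph; here we do it by hand.
        MessageService service = new MessageService(new InMemoryMessageDao());
        System.out.println(service.fetch());  // HELLO
    }
}
```

Swapping InMemoryMessageDao for, say, a Hibernate-backed implementation requires no change to MessageService, which is exactly the low interlayer coupling the paper attributes to Spring.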
Hibernate is generally considered a bridge between Java applications and the relational database, owing to providing durable data services for applications and allowing developers to use an object-oriented approach to the management and manipulation of the relational database. Furthermore, it furnishes an object-oriented query language, HQL.

Responsible for the mapping between Java classes and the relational database, Hibernate is essentially middleware providing database services. It supplies durable data services for applications by utilizing databases and several profiles, such as hibernate.properties and XML mapping files.

Web Services Technologies

The introduction of annotations into Java EE 5 makes it simple to create sophisticated Web service endpoints and clients with less code and a shorter learning curve than was possible with earlier Java EE versions. Annotations — first introduced in Java SE 5 — are modifiers you can add to your code as metadata. They don't affect program semantics directly, but the compiler, development tools, and runtime libraries can process them to produce additional Java language source files, XML documents, or other artifacts and behavior that augment the code containing the annotations (see Resources). Later in the article, you'll see how you can easily turn a regular Java class into a Web service by adding simple annotations.

Web Application Technologies

Java EE 5 welcomes two major pieces of front-end technology — JSF and JSTL — into the specification to join the existing JavaServer Pages and Servlet specifications. JSF is a set of APIs that enable a component-based approach to user-interface development. JSTL is a set of tag libraries that support embedding procedural logic, access to JavaBeans, SQL commands, localized formatting instructions, and XML processing in JSPs.
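The annotation mechanism described under "Web Services Technologies" above can be shown with a plain-Java sketch. The @Endpoint annotation below is a made-up stand-in for a real Java EE annotation such as @WebService; it only demonstrates how tools and runtimes read metadata from a class via reflection.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// A made-up annotation standing in for a Java EE annotation like
// javax.jws.@WebService (the real one carries more members).
@Retention(RetentionPolicy.RUNTIME)
@interface Endpoint {
    String name();
}

// Attaching the annotation adds metadata without changing the class's
// own semantics.
@Endpoint(name = "QuoteService")
class QuoteService {
    String quote() { return "42"; }
}

public class AnnotationDemo {
    public static void main(String[] args) {
        // Development tools and runtime libraries discover such metadata
        // through reflection, as sketched here.
        Endpoint meta = QuoteService.class.getAnnotation(Endpoint.class);
        System.out.println(meta.name());  // QuoteService
    }
}
```

A JAX-WS runtime does essentially this at deployment time: it scans a class for Web service annotations and generates the endpoint plumbing from that metadata.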
The most recent releases of JSF, JSTL, and JSP support a unified expression language (EL) that allows these technologies to integrate more easily (see Resources).

The cornerstone of Web services support in Java EE 5 is JAX-WS 2.0, which is a follow-on to JAX-RPC 1.1. Both of these technologies let you create RESTful and SOAP-based Web services without dealing directly with the tedium of XML processing and data binding inherent to Web services. Developers are free to continue using JAX-RPC (which is still required of Java EE 5 containers), but migrating to JAX-WS is strongly recommended. Newcomers to Java Web services might as well skip JAX-RPC and head right for JAX-WS. That said, it's good to know that both of them support SOAP 1.1 over HTTP 1.1 and so are fully compatible: a JAX-WS Web services client can access a JAX-RPC Web services endpoint, and vice versa.

The advantages of JAX-WS over JAX-RPC are compelling. JAX-WS:

•Supports the SOAP 1.2 standard (in addition to SOAP 1.1).
•Supports XML over HTTP. You can bypass SOAP if you wish. (See the article "Use XML directly over HTTP for Web services (where appropriate)" for more information.)
•Uses the Java Architecture for XML Binding (JAXB) for its data-mapping model. JAXB has complete support for XML schema and better performance (more on that in a moment).
•Introduces a dynamic programming model for both server and client. The client model supports both a message-oriented and an asynchronous approach.
•Supports Message Transmission Optimization Mechanism (MTOM), a W3C recommendation for optimizing the transmission and format of a SOAP message.
•Upgrades Web services interoperability (WS-I) support. (It supports Basic Profile 1.1; JAX-RPC supports only Basic Profile 1.0.)
•Upgrades SOAP attachment support.
(It uses the SOAP with Attachments API for Java [SAAJ] 1.3; JAX-RPC supports only SAAJ 1.2.)

You can learn more about the differences by reading the article "JAX-RPC versus JAX-WS."

The wsimport tool in JAX-WS automatically handles many of the mundane details of Web service development and integrates easily into a build process in a cross-platform manner, freeing you to focus on the application logic that implements or uses a service. It generates artifacts such as services, service endpoint interfaces (SEIs), asynchronous response code, exceptions based on WSDL faults, and Java classes bound to schema types by JAXB.

JAX-WS also enables high-performing Web services. See Resources for a link to an article ("Implementing High Performance Web Services Using JAX-WS 2.0") presenting a benchmark study of equivalent Web service implementations based on the new JAX-WS stack (which uses two other Web services features in Java EE 5 — JAXB and StAX) and a JAX-RPC stack available in J2EE 1.4. The study found 40% to 1000% performance increases with JAX-WS in various functional areas under different loads.

Conclusion

Each framework has its advantages and disadvantages. The lightweight J2EE structure integrates the Struts, Hibernate and Spring technologies, making full use of the powerful data processing function of Struts, the flexible management of Spring and the maturity of Hibernate. According to practice, it puts forward an open-source solution suitable for small or medium-sized enterprise applications. An application system developed on this architecture has loose interlayer coupling, a distinct structure, a short development cycle, and good maintainability. In addition, combined with commercial project development, the solution has achieved good effect.
The lightweight framework makes parallel development and maintenance convenient for commercial systems, and can be carried forward to the development of business systems in other industries.

Through research and practice, we can easily find that the Struts/Spring/Hibernate framework utilizes the maturity of Struts in the presentation layer, the flexibility of Spring's business management, and the convenience of Hibernate in the persistence layer; the three kinds of framework are integrated into a whole so that development and maintenance become more convenient and handy. This kind of approach will also play a key role if applied to other business systems. Of course, how to optimize system performance, enhance the user's access speed, and improve the security of the system framework: all of these works remain for the author to do in the future.

基于SSH框架实现的试题库管理系统

小型或者中型企业的应用系统具有非常好的灵活性、安全性以及高性价比,传统的J2EE架构满足不了这些需求,但是基于SSH框架实现的应用系统更好的满足了这样的需求,这篇文章分析了关于SSH的一体化理论和关键技术,通过这些集成形成了轻量级Web框架,在已经集成三种技术的基础上,伴随形成了基于SSH的轻量级Web框架,并且在实际应用中有着重要作用。

(完整版)软件工程专业_毕业设计外文文献翻译_


二〇一三年六月

A HISTORICAL PERSPECTIVE

From the earliest days of computers, storing and manipulating data has been a major application focus. The first general-purpose DBMS was designed by Charles Bachman at General Electric in the early 1960s and was called the Integrated Data Store. It formed the basis for the network data model, which was standardized by the Conference on Data Systems Languages (CODASYL) and strongly influenced database systems through the 1960s. Bachman was the first recipient of ACM's Turing Award (the computer science equivalent of a Nobel prize) for work in the database area; he received the award in 1973. In the late 1960s, IBM developed the Information Management System (IMS) DBMS, used even today in many major installations. IMS formed the basis for an alternative data representation framework called the hierarchical data model. The SABRE system for making airline reservations was jointly developed by American Airlines and IBM around the same time, and it allowed several people to access the same data through a computer network. Interestingly, today the same SABRE system is used to power popular Web-based travel services such as Travelocity!

In 1970, Edgar Codd, at IBM's San Jose Research Laboratory, proposed a new data representation framework called the relational data model. This proved to be a watershed in the development of database systems: it sparked rapid development of several DBMSs based on the relational model, along with a rich body of theoretical results that placed the field on a firm foundation. Codd won the 1981 Turing Award for his seminal work. Database systems matured as an academic discipline, and the popularity of relational DBMSs changed the commercial landscape. Their benefits were widely recognized, and the use of DBMSs for managing corporate data became standard practice.

In the 1980s, the relational model consolidated its position as the dominant DBMS paradigm, and database systems continued to gain widespread use. The SQL query language for relational databases, developed as part of IBM's System R project, is now the standard query language.
SQL was standardized in the late 1980s, and the current standard, SQL-92, was adopted by the American National Standards Institute (ANSI) and International Standards Organization (ISO). Arguably, the most widely used form of concurrent programming is the concurrent execution of database programs (called transactions). Users write programs as if they are to be run by themselves, and the responsibility for running them concurrently is given to the DBMS. James Gray won the 1999 Turing Award for his contributions to transaction management in a DBMS.

In the late 1980s and the 1990s, advances were made in many areas of database systems. Considerable research was carried out into more powerful query languages and richer data models, and there was a big emphasis on supporting complex analysis of data from all parts of an enterprise. Several vendors (e.g., IBM's DB2, Oracle 8, Informix UDS) extended their systems to store new data types and support more complex queries, and specialized systems were developed by numerous vendors for creating data warehouses, consolidating data from several databases, and for carrying out specialized analysis.

An interesting phenomenon is the emergence of several enterprise resource planning (ERP) and management resource planning (MRP) packages, which add a substantial layer of application-oriented features on top of a DBMS. Widely used packages include systems from Baan, Oracle, PeopleSoft, SAP, and Siebel. These packages identify a set of common tasks (e.g., inventory management, resource planning, financial analysis) encountered by a large number of organizations and provide a general application layer to carry out these tasks. The data is stored in a relational DBMS, and the application layer can be customized to different companies, leading to lower overall costs for the companies, compared to the cost of building the application layer from scratch. Most significantly, perhaps, DBMSs have entered the Internet age: while the first generation of Web sites stored their data exclusively in operating system files, the use of a DBMS to store data that is accessed through a Web browser is becoming widespread.
Queries are generated through Web-accessible forms and answers are formatted using a markup language such as HTML, in order to be easily displayed in a browser. All the database vendors are adding features to their DBMSs aimed at making them more suitable for deployment over the Internet. Database management continues to gain importance as more and more data is brought on-line and made ever more accessible through computer networking. Today the field is being driven by exciting visions such as multimedia databases, interactive video, digital libraries, a genome mapping effort and NASA's Earth Observation System project, and the desire of companies to consolidate their decision-making processes and mine their data repositories for useful information about their businesses. Commercially, database management systems represent one of the largest and most vigorous market segments. Thus the study of database systems could prove to be richly rewarding in more ways than one!

INTRODUCTION TO PHYSICAL DATABASE DESIGN

Like all other aspects of database design, physical design must be guided by the nature of the data and its intended use. In particular, it is important to understand the typical workload that the database must support; the workload consists of a mix of queries and updates. Users also have requirements about how fast certain queries or updates must run, or how many transactions must be processed per second. The workload description and users' performance requirements are the basis on which a number of decisions have to be made during physical database design.

To create a good physical database design and to tune the system for performance in response to evolving user requirements, the designer needs to understand the workings of a DBMS, especially the indexing and query processing techniques supported by the DBMS. If the database is expected to be accessed concurrently by many users, or is a distributed database, the task becomes more complicated, and other features of a DBMS come into play.

DATABASE WORKLOADS

The key to good physical design is arriving at an accurate description of the expected workload.
A workload description includes the following elements:

1. A list of queries and their frequencies, as a fraction of all queries and updates.
2. A list of updates and their frequencies.
3. Performance goals for each type of query and update.

For each query in the workload, we must identify:

Which relations are accessed.
Which attributes are retained (in the SELECT clause).
Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.

Similarly, for each update in the workload, we must identify:

Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.
For UPDATE commands, the fields that are modified by the update.

Remember that queries and updates typically have parameters; for example, a debit or credit operation involves a particular account number. The values of these parameters determine the selectivity of selection and join conditions.

Updates benefit from a good physical design and the presence of indexes; on the other hand, updates are slowed by indexes on the attributes that they modify. Thus, while queries can only benefit from the presence of an index, an index may either speed up or slow down a given update. Designers should keep this trade-off in mind when creating indexes.

NEED FOR DATABASE TUNING

Accurate, detailed workload information may be hard to come by while doing the initial design of the system. Consequently, tuning a database after it has been designed and deployed is important: we must refine the initial design in the light of actual usage patterns to obtain the best possible performance.

The distinction between database design and database tuning is somewhat arbitrary. We could consider the design process to be over once an initial conceptual schema is designed and a set of indexing and clustering decisions is made. Any subsequent changes to the conceptual schema or the indexes, say, would then be regarded as a tuning activity.
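The index trade-off noted above can be illustrated with a toy in-memory "index". This sketch is not DBMS code; the class and field names are invented. Every update pays a little extra work to maintain the index, while equality lookups avoid scanning all rows.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of the index trade-off: a hash "index" makes equality
// lookups fast but must be maintained on every update.
public class IndexDemo {
    static List<String[]> accounts = new ArrayList<>();         // rows: (accountNo, owner)
    static Map<String, String[]> byAccountNo = new HashMap<>(); // the "index"

    static void insert(String no, String owner) {
        String[] row = {no, owner};
        accounts.add(row);
        byAccountNo.put(no, row);   // extra work: every update also maintains the index
    }

    static String[] lookup(String no) {
        return byAccountNo.get(no); // the query benefits: no scan over all rows
    }

    public static void main(String[] args) {
        insert("A-101", "alice");
        insert("A-102", "bob");
        System.out.println(lookup("A-102")[1]);  // bob
    }
}
```

In the same spirit, a real DBMS index speeds up the selections and joins in the workload description while adding maintenance cost to every INSERT, DELETE, or UPDATE on the indexed attributes.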
Alternatively, we could consider some refinement of the conceptual schema (and the physical design decisions affected by this refinement) to be part of the physical design process. Where we draw the line between design and tuning is not very important.

OVERVIEW OF DATABASE TUNING

After the initial phase of database design, actual use of the database provides a valuable source of detailed information that can be used to refine the initial design. Many of the original assumptions about the expected workload can be replaced by observed usage patterns; in general, some of the initial workload specification will be validated, and some of it will turn out to be wrong. Initial guesses about the size of data can be replaced with actual statistics from the system catalogs (although this information will keep changing as the system evolves). Careful monitoring of queries can reveal unexpected problems; for example, the optimizer may not be using some indexes as intended to produce good plans. Continued database tuning is important to get the best possible performance.

TUNING THE CONCEPTUAL SCHEMA

In the course of database design, we may realize that our current choice of relation schemas does not enable us to meet our performance objectives for the given workload with any (feasible) set of physical design choices. If so, we may have to redesign our conceptual schema (and re-examine the physical design decisions that are affected by the changes that we make).

We may realize that a redesign is necessary during the initial design process or later, after the system has been in use for a while. Once a database has been designed and populated with data, changing the conceptual schema requires a significant effort in terms of mapping the contents of the relations that are affected. Nonetheless, it may sometimes be necessary to revise the conceptual schema in light of experience with the system.
We now consider the issues involved in conceptual schema (re)design from the point of view of performance.

Several options must be considered while tuning the conceptual schema:

We may decide to settle for a 3NF design instead of a BCNF design.
If there are two ways to decompose a given schema into 3NF or BCNF, our choice should be guided by the workload.
Sometimes we might decide to further decompose a relation that is already in BCNF.
In other situations we might denormalize. That is, we might choose to replace a collection of relations obtained by a decomposition from a larger relation with the original (larger) relation, even though it suffers from some redundancy problems. Alternatively, we might choose to add some fields to certain relations to speed up some important queries, even if this leads to a redundant storage of some information (and consequently, a schema that is in neither 3NF nor BCNF).

This discussion of normalization concentrated on the technique of decomposition, which amounts to vertical partitioning of a relation. Another technique to consider is horizontal partitioning of a relation, which would lead to our having two relations with identical schemas. We do not want to replace the original relation outright; rather, we want to create two distinct relations (possibly with different constraints and indexes on each).

Incidentally, when we redesign the conceptual schema, especially if we are tuning an existing database schema, it is worth considering whether we should create views to mask these changes from users for whom the original schema is more natural.

TUNING QUERIES AND VIEWS

If we notice that a query is running much slower than we expected, rewriting the query, perhaps in conjunction with some index tuning, can often fix the problem. Similar tuning may be called for if queries on some view run slower than expected.

When tuning a query, the first thing to verify is that the system is using the plan that you expect it to use. It may be that the system is not finding the best plan for a variety of reasons.
Some common situations that are not handled efficiently by many optimizers include:

A selection condition involving null values.
Selection conditions involving arithmetic or string expressions, or conditions using the OR connective. For example, if we have the condition E.age = 2*D.age in the WHERE clause, the optimizer may correctly utilize an available index on E.age but fail to utilize an available index on D.age. Replacing the condition by E.age/2 = D.age would reverse the situation.
Inability to recognize a sophisticated plan such as an index-only scan for an aggregation query involving a GROUP BY clause.

If the optimizer is not smart enough to find the best plan (using the access methods and evaluation strategies supported by the DBMS), some systems allow users to guide the choice of a plan by providing hints, for example forcing the use of a particular index or choosing the join order and join method. A user who wishes to guide optimization in this manner should understand the capabilities of the given DBMS.

(8) OTHER TOPICS

MOBILE DATABASES

The availability of portable computers and wireless communications requires us to rethink many components of a DBMS, including the query engine, transaction manager, and recovery manager.

Users are connected through a wireless link whose bandwidth is ten times less than Ethernet and 100 times less than ATM networks. Communication costs are therefore significantly higher in proportion to IO and CPU costs.

Users' locations are constantly changing, and mobile computers have limited battery life. The true cost of communication therefore includes connection time and battery usage in addition to bytes transferred, and it changes constantly depending on location. Data is frequently replicated to minimize the cost of accessing it from different locations.

As a user moves around, data could be accessed from multiple database servers within a single transaction. The likelihood of losing connections is also much greater than in a traditional network. Centralized transaction management may therefore be impractical, especially if some data is resident at the mobile computers.
We may in fact have to give up ACID transactions and develop alternative notions of consistency for user programs.

MAIN MEMORY DATABASES

The price of main memory is now low enough that we can buy enough main memory to hold the entire database for many applications. This shift prompts a re-examination of some basic DBMS design decisions, since disk accesses no longer dominate processing time for a memory-resident database:

Main memory does not survive system crashes, and so we still have to ensure transaction atomicity and durability. Log records must be written to stable storage at commit time, and this process could become a bottleneck. To minimize this problem, rather than commit each transaction as it completes, we can collect completed transactions and commit them in batches; this is called group commit. Recovery algorithms can also be optimized since pages rarely have to be written out to make room for other pages.

The implementation of in-memory operations has to be optimized carefully since disk accesses are no longer the limiting factor for performance. A new criterion must be considered while optimizing queries, namely the amount of space required to execute a plan. It is important to minimize the space overhead because exceeding available physical memory would lead to swapping pages to disk (through the operating system's virtual memory mechanisms), greatly slowing down execution.

Page-oriented data structures become less important (since pages are no longer the unit of data retrieval), and clustering is not important (since the cost of accessing any region of main memory is uniform).

(一)从历史的角度回顾

从数据库的早期开始,存储和操纵数据就一直是主要的应用焦点。
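Group commit, described in the main-memory discussion above, can be sketched in a few lines. The class names and the batch size below are invented for illustration; a real DBMS would force the batched log records to stable storage rather than merely counting writes.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of group commit: completed transactions are collected and their
// log records are forced to stable storage in one batch, amortizing the
// expensive forced write across several transactions.
public class GroupCommitDemo {
    static final int BATCH = 3;
    static List<String> pending = new ArrayList<>();
    static int forcedWrites = 0;           // counts (simulated) stable-storage writes

    static void complete(String txn) {
        pending.add(txn);
        if (pending.size() == BATCH) {
            forcedWrites++;                // one forced log write commits the batch
            pending.clear();
        }
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 6; i++) complete("T" + i);
        // 6 transactions commit with only 2 forced writes instead of 6.
        System.out.println(forcedWrites);  // 2
    }
}
```

The trade-off is latency: a transaction may wait briefly for its batch to fill, which is why real systems also flush a partially filled batch after a timeout.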

毕业设计(论文)外文资料翻译(学生用)


毕业设计外文资料翻译
学院: 信息科学与工程学院
专业: 软件工程
姓名: XXXXX
学号: XXXXXXXXX
外文出处: Think In Java
附件: 1.外文资料翻译译文;2.外文原文。

附件1:外文资料翻译译文网络编程历史上的网络编程都倾向于困难、复杂,而且极易出错。

程序员必须掌握与网络有关的大量细节,有时甚至要对硬件有深刻的认识。

一般地,我们需要理解连网协议中不同的“层”(Layer)。

而且对于每个连网库,一般都包含了数量众多的函数,分别涉及信息块的连接、打包和拆包;这些块的来回运输;以及握手等等。

这是一项令人痛苦的工作。

但是,连网本身的概念并不是很难。

我们想获得位于其他地方某台机器上的信息,并把它们移到这儿;或者相反。

这与读写文件非常相似,只是文件存在于远程机器上,而且远程机器有权决定如何处理我们请求或者发送的数据。

Java最出色的一个地方就是它的“无痛苦连网”概念。

有关连网的基层细节已被尽可能地提取出去,并隐藏在JVM以及Java的本机安装系统里进行控制。

我们使用的编程模型是一个文件的模型;事实上,网络连接(一个“套接字”)已被封装到系统对象里,所以可象对其他数据流那样采用同样的方法调用。

除此以外,在我们处理另一个连网问题——同时控制多个网络连接——的时候,Java内建的多线程机制也是十分方便的。

本章将用一系列易懂的例子解释Java的连网支持。

15.1 机器的标识

当然,为了分辨来自别处的一台机器,以及为了保证自己连接的是希望的那台机器,必须有一种机制能独一无二地标识出网络内的每台机器。

早期网络只解决了如何在本地网络环境中为机器提供唯一的名字。

但Java面向的是整个因特网,这要求用一种机制对来自世界各地的机器进行标识。

为达到这个目的,我们采用了IP(互联网地址)的概念。

IP以两种形式存在着:(1) 大家最熟悉的DNS(域名服务)形式。

我自己的域名是。

所以假定我在自己的域内有一台名为Opus的计算机,它的域名就可以是。
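下面用一个简短的示例演示这种机器标识机制(此示例为译文补充的示意,并非《Think in Java》原书代码):Java 的 java.net.InetAddress 可以把主机名或点分十进制地址解析为 IP 地址对象。

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// 示意:用 InetAddress 解析机器标识(IP 地址)。
// 这里解析回送地址 127.0.0.1,无需访问外部网络。
public class LookupDemo {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress addr = InetAddress.getByName("127.0.0.1");
        System.out.println(addr.getHostAddress());  // 127.0.0.1
    }
}
```

把参数换成一个 DNS 域名(例如某台名为 Opus 的主机的完整域名)时,getByName 会通过域名服务把名字解析成对应的 IP 地址。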

软件工程外文翻译文献


软件工程外文翻译文献(文档含中英文对照即英文原文和中文翻译)Software engineeringSoftware engineering is the study of the use of engineering methods to build and maintain effective, practical and high-quality software disciplines. It involves the programming language, database, software development tools, system platform, standards, design patterns and so on.In modern society, the software used in many ways. Typical software such as email, embedded systems, human-machine interface, office packages, operating systems, compilers, databases, games. Meanwhile, almost all the various sectors of computer software applications, such as industry, agriculture, banking, aviation and government departments. These applications facilitate the economic and social development,improve people's working efficiency, while improving the quality of life. Software engineers is to create software applications of people collectively, according to which software engineers can be divided into different areas of system analysts, software designers, system architects, programmers, testers and so on. It is also often used to refer to a variety of software engineers, programmers.OriginIn view of difficulties encountered in software development, North Atlantic Treaty Organization (NATO) in 1968 organized the first Conference on Software Engineering, and will be presented at the "software engineering" to define the knowledge required for software development, and suggested that "software development the activities of similar projects should be. 
" Software Engineering has formally proposed since 1968, this time to accumulate a large number of research results, widely lot of technical practice, academia and industry through the joint efforts of software engineering is gradually developing into a professional discipline.Definitioncreation and use of sound engineering principles in order to obtain reliable and economically efficient software.application of systematic, follow the principle can be measured approach to development, operation and maintenance of software; that is to beapplied to software engineering.The development, management and updating software products related to theories, methods and tools.A knowledge or discipline (discipline), aims to produce good quality, punctual delivery, within budget and meet users need software.the practical application of scientific knowledge in the design, build computer programs, and the accompanying documents produced, and the subsequent operation and maintenance.Use systematic production and maintenance of software products related to technology and management expertise to enable software development and changes in the limited time and under cost.Construction team of engineers developed the knowledge of large software systems disciplines.the software analysis, design, implementation and maintenance of a systematic method.the systematic application of tools and techniques in the development of computer-based applications.Software Engineering and Computer ScienceSoftware development in the end is a science or an engineering, this is a question to be debated for a long time. In fact, both the two characteristics of software development. But this does not mean that they can be confused with each other. Many people think that softwareengineering, computer science and information science-based as in the traditional sense of the physical and chemical engineering as. In the U.S., about 40% of software engineers with a degree in computer science. 
Elsewhere in the world, this ratio is similar. Software engineers will not necessarily use computer science knowledge every day, but they use software engineering knowledge every day.

For example, Peter McBreen holds that software "engineering" implies a higher degree of rigor and proven processes, and is not suitable for all types of software development. In his book "Software Craftsmanship: The New Imperative", Peter McBreen put forward the so-called "craftsmanship" argument, considering that a key factor in the success of software development is the developers' skills, not a process for "manufacturing" software.

Software Engineering and Computer Programming

Software engineering exists in all aspects of software development, across a wide variety of applications. Programming typically includes an iterative process of design and coding; it is one stage of software development.

Software engineering seeks to provide guidance in all aspects of a software project, from the feasibility analysis until the maintenance work after the software is completed. Software engineering holds that software development is closely related to marketing activities, such as software sales, user training, and the installation of associated hardware and software. Software engineering methodology holds that programmers should not develop independently of the team, and that programming cannot be divorced from software requirements, design, and customer interests.

Software engineering is the embodiment of the industrialized development of computer programming.

Software Crisis

Software engineering is rooted in the software crisis that arose in the 1960s, 1970s and 1980s. At that time, many software projects came to a tragic end. Many software development efforts ran significantly beyond their planned schedules. Some projects led to the loss of property, and some software even led to casualties.
Meanwhile, software developers found software increasingly difficult to develop. The OS/360 operating system is considered a typical case; it remained in use for decades on the IBM System/360 series of mainframes. Even this extremely complex project did not deliver the working system included in its original design. OS/360 was the first very large software project, employing about 1,000 programmers. Fred Brooks later admitted in his classic "The Mythical Man-Month" that he made a multimillion-dollar mistake in managing the project.

Property losses: software errors can cause significant property damage. The explosion of the European Ariane rocket is one of the most painful lessons.

Casualties: computer software is widely used, including in industries closely tied to human life such as hospitals, so software errors can also lead to injury or death. A case cited extensively in software engineering is the Therac-25 accident: between June 1985 and January 1987, six known medical errors caused the Therac-25 to deliver radiation overdoses, leading to deaths or severe radiation burns. In industry, failures of some embedded systems have caused machines to malfunction and endangered their operators.

Methodology

Software engineering covers many aspects, including project management, analysis, design, programming, testing, and quality control. Software design methods can be distinguished as heavyweight and lightweight. Heavyweight methods produce large amounts of formal documentation; well-known heavyweight development methodologies include ISO 9000, CMM, and the Unified Process (RUP). Lightweight processes do not demand large amounts of formal documentation.
Lightweight methods include the well-known Extreme Programming (XP) and the agile processes. According to the article "The New Methodology", heavyweight methods embody a "defensive" posture: in organizations applying them, a project manager who has little or no involvement in program design cannot grasp the project's progress from its details, develops a "fear" of losing control, and keeps asking the programmers to write large amounts of development documentation. Lightweight methods embody an "offensive" attitude, reflected especially in the four values emphasized by XP: communication, simplicity, feedback, and courage. Some hold that heavyweight methods suit large software teams (dozens of people or more) while lightweight methods suit small teams (a few to a dozen people). The relative merits of the two approaches are of course much debated, and methods of both kinds continue to evolve.

Some methodologists believe that development should strictly follow and implement these methods, but others lack the conditions to implement them. In practice, which method is used to develop software depends on many factors and is subject to environmental constraints.

Software development process

The software development process has evolved and improved along with technology.
From the early waterfall model, through the later spiral iterative development, to the recently rising agile methods, the development processes in use reflect how different eras of the software industry understood development and how they matched methods to different types of project.

Note the important distinction between a software development process and software process improvement. Terms such as ISO 15504, ISO 9000, CMM and CMMI belong to the framework of software process improvement: they provide a series of standards and policies to guide software organizations in improving the quality of their development processes and their organizational capability, rather than defining a specific development process.

Developments in software engineering

"Agile development" is considered an important development in software engineering. It stresses that software development should be able to respond comprehensively to possible future changes and uncertainties. Agile development is considered a "lightweight" approach, and the most prominent lightweight method is Extreme Programming (XP). Corresponding to the lightweight methods are the "heavyweight" methods, which emphasize the development process as the center rather than people; examples include CMM, PSP and TSP.

Aspect-oriented programming (AOP) is considered another important development in software engineering in recent years. An aspect is a collection of objects and functions that together accomplish one concern; related topics include generic programming and templates.

Software engineering (translation)

Software engineering is the discipline that studies how to build and maintain effective, practical and high-quality software using engineering methods.

Software Engineering: Foreign Literature Translation


Dalian Jiaotong University, class of 2012, undergraduate graduation design (thesis), foreign literature translation

Original: New Competencies for HR

What does it take to make it big in HR? What skills and expertise do you need? Since 1988, Dave Ulrich, professor of business administration at the University of Michigan, and his associates have been on a quest to provide the answers. This year, they've released an all-new 2007 Human Resource Competency Study (HRCS). The findings and interpretations lay out professional guidance for HR for at least the next few years.

"People want to know what set of skills high-achieving HR people need to perform even better," says Ulrich, co-director of the project along with Wayne Brockbank, also a professor of business at the University of Michigan.

Conducted under the auspices of the Ross School of Business at the University of Michigan and The RBL Group in Salt Lake City, with regional partners including the Society for Human Resource Management (SHRM) in North America and other institutions in Latin America, Europe, China and Australia, HRCS is the longest-running, most extensive global HR competency study in existence. "In reaching our conclusions, we've looked across more than 400 companies and are able to report with statistical accuracy what HR executives say and do," Ulrich says.

"The research continues to demonstrate the dynamic nature of the human resource management profession," says SHRM President and CEO Susan R. Meisinger, SPHR. "The findings also highlight what an exciting time it is to be in the profession. We continue to have the ability to really add value to an organization."

"HRCS is foundational work that is really important to HR as a profession," says Cynthia McCague, senior vice president of the Coca-Cola Co., who participated in the study. "They have created and continue to enhance a framework for thinking about how HR drives organizational performance."

What's New

Researchers identified six core competencies that high-performing HR professionals embody.
These supersede the five competencies outlined in the 2002 HRCS—the last study published—reflecting the continuing evolution of the HR profession. Each competency is broken out into performance elements.

"This is the fifth round, so we can look at past models and compare where the profession is going," says Evren Esen, survey program manager at SHRM, which provided the sample of HR professionals surveyed in North America. "We can actually see the profession changing. Some core areas remain the same, but others, based on how the raters assess and perceive HR, are new." (For more information, see "The Competencies and Their Elements," at right.)

To some degree, the new competencies reflect a change in nomenclature or a shuffling of the competency deck. However, there are some key differences. Five years ago, HR's role in managing culture was embedded within a broader competency. Now its importance merits a competency of its own. Knowledge of technology, a stand-alone competency in 2002, now appears within Business Ally. In other instances, the new competencies carry expectations that promise to change the way HR views its role. For example, the Credible Activist calls for HR to eschew neutrality and to take a stand—to practice the craft "with an attitude."

To put the competencies in perspective, it's helpful to view them as a three-tier pyramid with Credible Activist at the pinnacle.

Credible Activist. This competency is the top indicator in predicting overall outstanding performance, suggesting that mastering it should be a priority. "You've got to be good at all of them, but, no question, [this competency] is key," Ulrich says. "But you can't be a Credible Activist without having all the other competencies. In a sense, it's the whole package."

"It's a deal breaker," agrees Dani Johnson, project manager of the Human Resource Competency Study at The RBL Group in Salt Lake City. "If you don't come to the table with it, you're done.
It permeates everything you do."

The Credible Activist is at the heart of what it takes to be an effective HR leader. "The best HR people do not hold back; they step forward and advocate for their position," says Susan Harmansky, SPHR, senior director of domestic restaurant operations for HR at Papa John's International in Louisville, Ky., and former chair of the Human Resource Certification Institute. "CEOs are not waiting for HR to come in with options—they want your recommendations; they want you to speak from your position as an expert, similar to what you see from legal or finance executives."

"You don't want to be credible without being an activist, because essentially you're worthless to the business," Johnson says. "People like you, but you have no impact. On the other hand, you don't want to be an activist without being credible. You can be dangerous in a situation like that."

Below Credible Activist on the pyramid is a cluster of three competencies: Cultural Steward, Talent Manager/Organizational Designer and Strategy Architect.

Cultural Steward. HR has always owned culture. But with Sarbanes-Oxley and other regulatory pressures, and CEOs relying more on HR to manage culture, this is the first time it has emerged as an independent competency. Of the six competencies, Cultural Steward is the second highest predictor of performance of both HR professionals and HR departments.

Talent Manager/Organizational Designer. Talent management focuses on how individuals enter, move up, across or out of the organization. Organizational design centers on the policies, practices and structure that shape how the organization works. Their linking reflects Ulrich's belief that HR may be placing too much emphasis on talent acquisition at the expense of organizational design. Talent management will not succeed in the long run without an organizational structure that supports it.

Strategy Architect.
Strategy Architects are able to recognize business trends and their impact on the business, and to identify potential roadblocks and opportunities. Harmansky, who recently joined Papa John's, demonstrates how the Strategy Architect competency helps HR contribute to the overall business strategy. "In my first months here, I'm spending a lot of time traveling, going to see stores all over the country. Every time I go to a store, while my counterparts of the management team are talking about [operational aspects], I'm talking to the people who work there. I'm trying to find out what the issues are surrounding people. How do I develop them? I'm looking for my business differentiator on the people side so I can contribute to the strategy."

When Charlease Deathridge, SPHR, HR manager of McKee Foods in Stuarts Draft, Va., identified a potential roadblock to implementing a new management philosophy, she used the Strategy Architect competency. "When we were rolling out 'lean manufacturing' principles at our location, we administered an employee satisfaction survey to assess how the workers viewed the new system. The satisfaction scores were lower than ideal. I showed [management] how a negative could become a positive, how we could use the data and follow-up surveys as a strategic tool to demonstrate progress."

Anchoring the pyramid at its base are two competencies that Ulrich describes as "table stakes—necessary but not sufficient." Except in China, where HR is at an earlier stage in professional development and there is great emphasis on transactional activities, these competencies are looked upon as basic skills that everyone must have. There is some disappointing news here. In the United States, respondents rated significantly lower on these competencies than the respondents surveyed in other countries.

Business Ally. HR contributes to the success of a business by knowing how it makes money, who the customers are, and why they buy the company's products and services.
For HR professionals to be Business Allies (and Credible Activists and Strategy Architects as well), they should be what Ulrich describes as "business literate." The mantra about understanding the business—how it works, the financials and strategic issues—remains as important today as it did in every iteration of the survey the past 20 years. Yet progress in this area continues to lag.

"Even these high performers don't know the business as well as they should," Ulrich says. In his travels, he gives HR audiences 10 questions to test their business literacy.

Operational Executor. These skills tend to fall into the range of HR activities characterized as transactional or "legacy." Policies need to be drafted, adapted and implemented. Employees need to be paid, relocated, hired, trained and more. Every function here is essential, but—as with the Business Ally competency—high-performing HR managers seem to view them as less important and score higher on the other competencies. Even some highly effective HR people may be running a risk in paying too little attention to these nuts-and-bolts activities, Ulrich observes.

Practical Tool

In conducting debriefings for people who participated in the HRCS, Ulrich observes how delighted they are at the prescriptive nature of the exercise. The individual feedback reports they receive (see "How the Study Was Done") offer them a road map, and they are highly motivated to follow it.

Anyone who has been through a 360-degree appraisal knows that criticism can be jarring. It's risky to open yourself up to others' opinions when you don't have to. Add the prospect of sharing the results with your boss and colleagues who will be rating you, and you may decide to pass. Still, it's not surprising that highly motivated people like Deathridge jumped at the chance for the free feedback.

"All of it is not good," says Deathridge. "You have to be willing to face up to it.
You go home, work it out and say, 'Why am I getting this bad feedback?'"

But for Deathridge, the results mostly confirmed what she already knew. "I believe most people know where they're weak or strong. For me, it was most helpful to look at how close others' ratings of me matched with my own assessments. ... There's so much to learn about what it takes to be a genuine leader, and this study helped a lot."

Deathridge says the individual feedback report she received helped her realize the importance of taking a stand and developing her Credible Activist competency. "There was a situation where I had a line manager who wanted to discipline someone," she recalls. "In the past, I wouldn't have been able to stand up as strongly as I did. I was able to be very clear about how I felt. I told him that he had not done enough to document the performance issue, and that if he wanted to institute discipline it would have to be at the lowest level. In the past, I would have been more deferential and said, 'Let's compromise and do it at step two or three.' But I didn't do it; I spoke out strongly and held my ground."

This was the second study for Shane Smith, director of HR at Coca-Cola. "I did it for the first time in 2002. Now I'm seeing some traction in the things I've been working on. I'm pleased to see the consistency with my evaluations of my performance when compared to my raters."

What It All Means

Ulrich believes that HR professionals who would have succeeded 30, 20, even 10 years ago are not as likely to succeed today. They are expected to play new roles. To do so, they will need the new competencies. Ulrich urges HR to reflect on the new competencies and what they reveal about the future of the HR profession. His message is direct and unforgiving. "Legacy HR work is going, and HR people who don't change with it will be gone." Still, he remains optimistic that many in HR are heeding his call.
"Twenty percent of HR people will never get it; 20 percent are really top performing. The middle 60 percent are moving in the right direction," says Ulrich. "Within that 60 percent there are HR professionals who may be at the table but are not contributing fully," he adds. "That's the group I want to talk to. ... I want to show them what they need to do to have an impact."

As a start, Ulrich recommends HR professionals consider initiating three conversations. "One is with your business leaders. Review the competencies with them and ask them if you're doing them. Next, pose the same questions to your HR team. Then, ask yourself whether you really know the business or if you're glossing on the surface." Finally, set your priorities. "Our data say: 'Get working on that Credible Activist!'"

Robert J. Grossman, a contributing editor of HR Magazine, is a lawyer and a professor of management studies at Marist College in Poughkeepsie, N.Y.

From: HR Magazine, 2007-06, Robert J. Grossman

Translation: New Competencies for HR

What does it take to achieve greater success in the field of human resource management, and what professional knowledge and skills are needed? Beginning in 1988, Dave Ulrich, professor of business management at the University of Michigan, and his assistants have been researching this subject.

Software Engineering Major, Graduation Design: Foreign Literature Translation


This document (about 1,000 words) translates foreign literature for the software engineering graduation design and can serve as a reference for interested students.

Foreign Literature 1: Software Engineering Practices in Industry: A Case Study

Abstract

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The software engineering practices were identified through a survey questionnaire and a series of interviews with the company's software development managers, software engineers, and testers. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Introduction

Software engineering is the discipline of designing, developing, testing, and maintaining software products. A number of software engineering practices are used in industry to ensure that software products are of high quality, reliable, and maintainable. These practices include software development processes, software configuration management, software testing, requirements engineering, and project management. Software engineering practices have evolved over the years as a result of the growth of the software industry and the increasing demand for high-quality software products.
The software industry has developed a number of software development models, such as the Capability Maturity Model Integration (CMMI), which provides a framework for software development organizations to improve their software development processes and practices.

This paper reports a case study conducted with a large US software development company that produces software for aerospace and medical applications. The objective of the study was to identify the software engineering practices used by the company and to investigate how these practices contribute to the production of quality software.

Research Methodology

The case study was conducted over a period of six months, during which a survey questionnaire was administered to the company's software development managers, software engineers, and testers. In addition, a series of interviews was conducted with the same groups to gain a deeper understanding of the software engineering practices used by the company. The survey questionnaire and the interview questions were designed to investigate the company's practices in software development processes, software configuration management, software testing, requirements engineering, and project management.

Findings

The research found that the company has a well-defined software development process based on the Capability Maturity Model Integration (CMMI). CMMI describes five levels of process maturity, from an ad hoc process (Level 1) to a fully defined and optimized process (Level 5). The company has achieved Level 3 maturity in its software development process.
The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. These practices include:

Software Configuration Management (SCM): The company uses SCM tools to manage software code, documentation, and other artifacts, and follows a branching and merging strategy to manage changes to the software code.

Software Testing: The company has adopted a formal testing approach that includes unit testing, integration testing, system testing, and acceptance testing. The testing process is automated where possible, and the company uses a range of testing tools.

Requirements Engineering: The company has a well-defined requirements engineering process, which includes requirements capture, analysis, specification, and validation. The company uses a range of tools, including use case modeling, to capture and analyze requirements.

Project Management: The company has a well-defined project management process that includes project planning, scheduling, monitoring, and control. The company uses a range of tools to support project management, including project management software used to track project progress.

Conclusion

This paper has reported a case study of software engineering practices at a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The company has a well-defined, CMMI-based software development process and uses a set of software engineering practices that ensure quality, reliability, and maintainability of the software products.
The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Foreign Literature 2: Agile Software Development: Principles, Patterns, and Practices

Abstract

Agile software development is a set of values, principles, and practices for developing software. The Agile Manifesto represents the values and principles of the agile approach; it emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile practices include iterative development, test-driven development, continuous integration, and frequent releases. This paper presents an overview of agile software development, including its principles, patterns, and practices, and discusses the benefits and challenges of the approach.

Introduction

Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach: individuals and interactions, working software, customer collaboration, and responding to change. Agile practices include iterative development, test-driven development, continuous integration, and frequent releases.

Agile Software Development Principles

Agile software development is based on the following principles:

- Customer satisfaction through early and continuous delivery of useful software.
- Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
- Deliver working software frequently, with a preference for the shorter timescale.
- Collaboration between the business stakeholders and developers throughout the project.
- Build projects around motivated individuals.
Give them the environment and support they need, and trust them to get the job done.
- The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
- Working software is the primary measure of progress.
- Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
- Continuous attention to technical excellence and good design enhances agility.
- Simplicity – the art of maximizing the amount of work not done – is essential.
- The best architectures, requirements, and designs emerge from self-organizing teams.

Agile Software Development Patterns

Agile software development patterns are reusable solutions to common software development problems. Typical principles and patterns include:

- The Single Responsibility Principle (SRP)
- The Open/Closed Principle (OCP)
- The Liskov Substitution Principle (LSP)
- The Dependency Inversion Principle (DIP)
- The Interface Segregation Principle (ISP)
- The Model-View-Controller (MVC) pattern
- The Observer pattern
- The Strategy pattern
- The Factory Method pattern

Agile Software Development Practices

Agile software development practices are a set of activities and techniques used in agile development. Typical practices include:

- Iterative development
- Test-driven development (TDD)
- Continuous integration
- Refactoring
- Pair programming

Agile Software Development Benefits and Challenges

Agile software development has many benefits, including increased customer satisfaction, quality, productivity, flexibility, and visibility, and reduced risk. It also has some challenges: it requires discipline and training, an experienced team, good communication, and a supportive management culture.

Conclusion

Agile software development is a set of values, principles, and practices for developing software.
Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. Agile practices include iterative development, test-driven development, continuous integration, and frequent releases. Agile development offers many benefits, including increased customer satisfaction, quality, productivity, flexibility, and visibility, and reduced risk. It also presents challenges: it requires discipline and training, an experienced team, good communication, and a supportive management culture.
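Two of the items listed above, the Strategy pattern and test-driven development, can be illustrated together in a few lines. The following is a minimal sketch, not taken from the paper; the `ShippingCalculator` example and its rate functions are hypothetical names invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# A strategy is any callable mapping an order weight (kg) to a price.
Strategy = Callable[[float], float]

def flat_rate(weight: float) -> float:
    """Charge a fixed price regardless of weight."""
    return 5.0

def per_kg_rate(weight: float) -> float:
    """Charge proportionally to weight."""
    return 1.5 * weight

@dataclass
class ShippingCalculator:
    """Context object: delegates the pricing decision to a pluggable strategy."""
    strategy: Strategy

    def cost(self, weight: float) -> float:
        return self.strategy(weight)

# TDD style: pin the expected behavior down with tests before relying on it.
assert ShippingCalculator(flat_rate).cost(10.0) == 5.0
assert ShippingCalculator(per_kg_rate).cost(10.0) == 15.0

calc = ShippingCalculator(flat_rate)
calc.strategy = per_kg_rate  # the algorithm can be swapped at runtime
print(calc.cost(10.0))       # -> 15.0
```

The point of the pattern is the Open/Closed Principle in action: new pricing algorithms can be added without modifying `ShippingCalculator` itself.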

Software Engineering English Literature: Original Text and Translation


English Literature Original and Translation. Student name: Zhao Fan. Student ID: 1021010639. School: School of Software. Major: Software Engineering. Supervisors: Wu Min, Gu Chenxin. June 2014.

Original English text: The Use of Skinning

What is a skin? In character setup, the final step in preparing for animation is skinning: using the skinning tools to bind the character model to the skeleton system that has been set up for it. Only after this step can the finished character model be driven and rendered into animation. The position the bones occupy during binding is called the bind pose. After binding, moving the bones deforms the skin; when the deformation is inappropriate, the bones or the skin must be adjusted, and the relevant commands can be used to restore the bones to the bind pose and disconnect the association between bone and skin. In Maya, bones and skin can be disconnected and reconnected at any time. Skinning methods divide into direct skinning (smooth or rigid binding) and indirect skinning (lattice or wrap deformers used together with smooth or rigid binding).

In recent years more and more 3D animation packages have competed on the market, and software companies continually develop and update their products to make them more usable, but Maya remains the mainstream 3D animation software. Creating a character with bone, flesh and spirit is the dream of every CG digital artist. Whether a digital character has charm tests the animator's understanding of life. To give a character bone and flesh, the producer must fully grasp the character's body and motor functions. Moreover, whether a character feels real depends critically on the design and production of its skin, so skilled technical and creative mastery of the skinning tools in animation software is essential.
Skinning is the final preparation step before animation; once it is done, the designed movements can be produced. If the skinning work is not done well, trouble follows in the animation stage, so skinning is very important.

Because of its accuracy and realism, three-dimensional animation is developing rapidly, and it is now used everywhere: architecture, planning, landscape, product demonstration, simulation, film animation, advertising, character animation, virtual reality, and more, which fully reflects its current importance. If three-dimensional animation is compared to puppet animation in real life, then the model in Maya corresponds to the puppet, the Maya animator corresponds to the puppet performer, and the steel joints in the puppet's body correspond to the skeleton system. The bones are never rendered in the final animation; they act only as a scaffold that simulates real bones, letting the major joints move and rotate. Once the skeleton is set up, the model is bound to it; this step is like mounting the various external parts onto a robot. Then, through further setup, keyframe animation is added to the bones, which drive the corresponding joints of the bound model. Thus, in the final animation, a stiff static model comes to life. The whole rigging process may seem more tedious than keyframe animation, but rigging is the core and soul of three-dimensional animation.

Rigging plays a vital role in three-dimensional animation. Good rigging makes animation production easier, faster, and more convenient, allowing designers to adjust a character's actions.
Every step of binding affects the final animation. Binding is the premise of animation work: it exists to make the animator's job convenient, and a good bind makes the motion more fluid and lets the character give a more lifelike performance. Besides body rigging there is facial binding, which lets a character speak and show different expressions. Everything in binding is done for the sake of the animation to come, and a good setup mainly follows the style and workflow of the whole production. Rigging is an indispensable part of 3D animation.

The 3D production pipeline runs: modeling, texturing, binding, animation, rendering, effects, compositing, and every stage is linked. The model and materials decide the animation's style; binding and animation decide how fluid the motion is; rendering, effects, and compositing decide the color and the final result.

Three-dimensional animation, also known as 3D animation, is an emerging technology. It conveys a convincing sense of solidity and realism, down to the subtle fur of an animal, and this effect has been widely applied to film and television production, education, and medicine. Film crashes, transformations, and fantasy scenes that cannot exist in real life are all achieved with 3D animation. The designer first builds a virtual scene in the 3D software, creates models to scale, sets the models' motion trajectories and the parameters of the virtual camera as required, then assigns specific materials to the models, places the lights, and finally renders the output to generate the finished frames. DreamWorks' "Shrek" and Pixar's "Finding Nemo" achieved a visual impact that 2D animation cannot match. The animated film "Finding Nemo" made extensive use of Maya's scene technology.
Producing 77,000 animated jellyfish was one of the most formidable challenges for the technical staff and artists alike. These pink, translucent jellyfish demanded patience and skill above all; one can say that with them, animated sea creatures took a big step forward, and the skinning technology served them very well. The skinning of the film's characters is excellent: each one is vivid, and whether in expression or in action the motion is smooth, which is what makes the underwater world so beautiful. Creating with Maya first requires a full understanding and knowledge of the tool: one imagines creative freedom first, but the technology in use has its limits. Smooth binding was used to bind many characters cleanly for editing; the weights on the control points of the skeleton-bound model then had to be adjusted with the weight-redistribution tools so that every detail of the clownfish is soft and realistic. Around a joint the weights must be smoothed out so the joint is isolated from unrelated influences and the motion does not drag neighboring areas along. Rigid binding was used less; rigidly bound objects need lattices created at the joints to help the bones drive the motion. The film also contains a great deal of facial animation, and good facial skinning is needed before expressions can be animated; facial-animation technology grows more capable all the time, and all of it rests on skinning done well early so that it does not impair the expressions later. How a film is created with Maya's digital technology, and how its strengths in styling and in the industrial pipeline are brought into play, is something creative staff must keep exploring: the 3D character work comes from Maya, other parts are hand-drawn in 2D, and several parts are brought together in compositing, capturing the whole production cycle from both the technical and the artistic angle.
Among the Maya techniques used in "Finding Nemo", smooth skinning appears everywhere; the clownfish faces carry a great deal of smooth binding, which makes the characters more human, even though Maya's technical advantages are applied within certain limits. Realistic 3D imaging gives the animation spatial depth and density, a sense of space, and brings out the mystery of the underwater world to the fullest. Because the action is lifelike, it inevitably brings realistic density to the underwater footage, and exploring this is the film's main goal for Maya 3D animation.

Translation: The Use of Skinning

What is skinning? In character setup, skinning is the final step in the preparation made for animation.

Computer science, software engineering: translated foreign literature


I. Translated material: Java development 2.0: Sharding with Hibernate Shards, scaling out relational databases. Andrew Glover, author and developer, Beacon50. Summary: Sharding isn't for every web site, but it is one approach that can meet the demands of big data.

For some shops, sharding means keeping a trusted RDBMS without sacrificing data scalability or system performance.

In this installment of the Java development 2.0 series, you can find out when sharding works and when it doesn't, and then get started sharding a simple application capable of handling terabytes of data.

Date: 31 Aug 2010. Level: Intermediate. PDF: A4 and Letter (64KB, 15 pages). Get Adobe® Reader®. When a relational database attempts to store terabytes of data in a single table, overall performance usually degrades.

Indexing all that data is obviously time-consuming, for reads and potentially for writes as well.

NoSQL data stores are especially suited to storing large data sets, but NoSQL is a non-relational database approach.

For developers who prefer the ACID-ity and defined-entity structure of a relational database, and for projects that require that structure, sharding is an exciting alternative.

Sharding, an offshoot of database partitioning, is not a native database technology: it happens at the application level.

Among the various sharding implementations, Hibernate Shards is probably the most popular in the Java™ technology world.

This nifty project lets you work with sharded data sets almost seamlessly, using POJOs mapped to logical databases.

When you use Hibernate Shards, you do not have to map your POJOs specially to shards.

You map them just as you would with the Hibernate approach for any common relational database.

Hibernate Shards manages the low-level sharding tasks for you.

So far in this series, I have used a simple domain built on the analogy of races and racers to illustrate the various data-storage technologies.

This month, I'll use that familiar example to introduce a practical sharding strategy, and then implement it in Hibernate Shards.
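Since sharding happens at the application level, the essential mechanism is deterministic routing: every entity ID must always resolve to the same shard. The plain-Java sketch below illustrates a simple modulo-on-ID strategy with in-memory maps standing in for the physical databases; the class and method names are hypothetical, and Hibernate Shards performs the equivalent dispatch for you behind its session facade.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of application-level shard routing with a modulo strategy.
public class ShardRouter {
    private final List<Map<Long, String>> shards = new ArrayList<>();

    public ShardRouter(int shardCount) {
        for (int i = 0; i < shardCount; i++) {
            shards.add(new HashMap<>()); // stand-in for one physical database
        }
    }

    // Deterministic routing: the same id always maps to the same shard.
    public int shardFor(long id) {
        return (int) (id % shards.size());
    }

    public void save(long id, String entity) {
        shards.get(shardFor(id)).put(id, entity);
    }

    public String load(long id) {
        return shards.get(shardFor(id)).get(id);
    }
}
```

The modulo scheme is the simplest possible choice; it spreads rows evenly but makes adding shards later expensive, which is why real deployments often prefer range- or directory-based strategies.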



Graduation project (thesis) translated text. Student name:    Student ID:    Major: software engineering. Title (Chinese and English): Qt Creator Whitepaper. Source: Qt network. Supervisor's review signature:    Translated text: Qt Creator Whitepaper. Qt Creator is a complete integrated development environment (IDE) for creating applications with the Qt application framework.

Qt is designed for developing applications and user interfaces once and deploying them across multiple desktop and mobile operating systems.

This paper gives an introduction to Qt Creator and the features it offers Qt developers throughout the application development life cycle.

Introduction to Qt Creator. One of Qt Creator's main advantages is that it allows a team of developers to share a project across different development platforms (Microsoft Windows®, Mac OS X®, and Linux®) with a common tool for development and debugging.

Qt Creator's main goal is to meet the development needs of Qt developers who are looking for simplicity, usability, productivity, extensibility, and openness, while aiming to lower the barrier of entry for newcomers to Qt.

Qt Creator's key features let developers accomplish the following tasks:
• Get started with Qt application development quickly and easily, with project wizards and fast access to recent projects and sessions.

• Design the user interfaces of Qt widget-based applications with the integrated editor, Qt Designer.

• Develop applications with the advanced C++ code editor, which provides powerful features such as code completion, code snippets, refactoring, and viewing a file's outline (that is, a symbolic hierarchy of the file).

• Build, run, and deploy Qt projects that target multiple desktop and mobile platforms, such as Microsoft Windows, Mac OS X, Linux, Nokia's MeeGo, and Maemo.

• Debug with the GNU and CDB debuggers through a graphical front end that understands the structure of Qt classes.

• Use code-analysis tools to check for memory-management issues in your application.

• Deploy applications to MeeGo mobile devices, and create application installation packages for Symbian and Maemo devices that can be published in the Ovi Store and through other channels.

• Easily access information with the integrated, context-sensitive Qt help system.

Translated foreign literature for the software engineering undergraduate thesis


School code: 10128. Student ID:    Undergraduate graduation project, foreign literature translation, January 2015.

The Test Library Management System Framework Based on SSH

Application systems in small and medium-sized enterprises are characterized by greater flexibility, safety, and a high performance-price ratio. The traditional J2EE framework cannot adapt to these needs, but applications built on SSH (Struts + Spring + Hibernate) technology can satisfy them better. This paper analyses the integration theory and key technologies of SSH and, based on that integration, constructs a lightweight web framework that combines the three technologies; the resulting SSH-based lightweight web framework has produced good effects in practical applications.

Introduction

The J2EE platform, generally used in large enterprise applications, can solve the problems of reliability, safety, and stability well, but its weaknesses are its high price and long construction cycle. For small and medium enterprise applications, the alternative is a lightweight web framework, most commonly built on Struts and Hibernate. With the wide adoption of Spring, the combination of the three technologies may be a better choice for a lightweight web framework. It uses a layered structure and provides a well-integrated framework for web applications at every level, minimizing inter-layer coupling and increasing development efficiency. This framework can solve many problems and offers good maintainability and scalability: separation of the user interface from business logic, separation of business logic from database operations, correct program control logic, and so on.
This paper studies the technology and principles of Struts, Spring, and Hibernate, presenting a proven lightweight web application framework for enterprises.

Hierarchical Web Mechanism

A hierarchical web framework includes a user presentation layer, a business logic layer, a data persistence layer, an extension layer, and so on; each layer has a different function, and together they complete the whole application. The system is divided into relatively independent, cooperating logic modules, and each module can be implemented according to its own design. This enables parallel development, rapid integration, good maintainability, and scalability.

Struts MVC Framework

To ensure reuse and efficiency in the development process, building a web application with J2EE technology requires choosing a system framework with good performance. Only in this way can we avoid wasting time on adjusting configuration and achieve application development efficiently and quickly. In practice, programmers have arrived at successful, proven development patterns such as MVC and O/R mapping, and many technologies, including the Struts and Hibernate frameworks, realize these patterns. However, the Struts framework only settles the separation of the view layer from the business logic and control layers; it does not provide flexible support for complex data persistence. The Hibernate framework, by contrast, offers powerful and flexible support for exactly that. Therefore, how to integrate the two frameworks into a flexible, low-coupling solution that is easy to maintain is a task that engineers study constantly.

Model-View-Controller (MVC) is a popular design pattern. It divides an interactive system into three components, each specializing in one task. The model contains the application data and manages the core functionality.
The visual display of the model and the feedback to the users are managed by the view. The controller not only interprets the inputs from the user but also directs the model and the view to change appropriately. MVC separates the system functionality from the system interface so as to enhance system scalability and maintainability. Struts is a typical MVC framework, and it contains the three aforementioned components: the model level is composed of JavaBean and EJB components, the controller is realized by Action and ActionServlet, and the view layer consists of JSP files. The central controller controls action execution: it receives a request and redirects it to the appropriate module controller. The module controller processes the request and returns results to the central controller in a JavaBean object, which stores anything to be presented in the view layer, including an indication of the module view that must be presented. The central controller redirects the returned JavaBean object to the main view, which displays its information.

Spring Framework Technology

Spring is a lightweight J2EE application development framework that uses the model of Inversion of Control (IoC) to separate the actual application from its configuration and dependency rules. Committed to providing solutions at every level of a J2EE application, Spring does not attempt to replace existing frameworks, but rather "welds" the objects of a J2EE application at all levels together through POJO management. In addition, developers are free to adopt some or all of Spring, since the Spring modules are not totally interdependent. As a major business-level detail, Spring employs dependency injection to assemble code, for the sake of improving the scalability and flexibility of the systems that are built.
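The inversion-of-control idea can be shown without Spring itself: a class declares the abstraction it depends on, and the dependency is handed in from outside rather than constructed internally. The following plain-Java sketch uses hypothetical names; Spring would supply the dependency from configuration instead of the caller wiring it by hand.

```java
// Minimal dependency-injection sketch (names are illustrative only).
public class InjectionDemo {
    public interface MessageDao {            // abstraction the service depends on
        String find(int id);
    }

    public static class InMemoryDao implements MessageDao {  // one concrete choice
        public String find(int id) { return "row-" + id; }
    }

    public static class MessageService {
        private final MessageDao dao;
        // A container such as Spring would supply this dependency from
        // configuration; here the caller injects it through the constructor.
        public MessageService(MessageDao dao) { this.dao = dao; }
        public String fetch(int id) { return dao.find(id); }
    }
}
```

Because MessageService never names a concrete DAO, a test double or a Hibernate-backed implementation can be swapped in without touching the service code, which is exactly the decoupling the paper attributes to IoC.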
Thus, systems achieve centralized business processing and greater code reuse through the Spring AOP module.

Hibernate Persistence Framework

Hibernate is an open-source framework that follows the DAO design pattern to achieve mapping (O/R mapping) between objects and a relational database. In traditional web development, the application interacts with the database directly through JDBC. This approach, however, brings not only a heavy workload but also complex JDBC and SQL code that must be revised whenever the business logic changes even slightly, so both development and maintenance are inconvenient. Considering the large difference between Java's object-oriented relations and the structure of a relational database, it is necessary to introduce a direct mapping mechanism between objects and the database; this mapping should live in configuration files as far as possible, so that when the business logic changes in the future only the mapping files, rather than the Java source code, need modifying. The O/R mapping pattern emerged for this reason, and Hibernate is one of its most outstanding realizations. It wraps JDBC in a lightweight encapsulation, letting Java programmers operate on a relational database with object-oriented thinking. It is an implementation technology for the persistence layer. Compared with other persistence-layer technologies such as JDBC, EJB, and JDO, Hibernate is easy to grasp and more in line with object-oriented programming. Hibernate has its own fully object-oriented query language (HQL). The basic structure of a Hibernate application is shown in Figure 6.1. Hibernate is a data persistence framework whose core technology is object/relational database mapping (ORM).
Hibernate is generally considered a bridge between Java applications and the relational database: it provides durable data services for applications and allows developers to use an object-oriented approach to the management and manipulation of the relational database, and it furnishes an object-oriented query language, HQL. Responsible for the mapping between Java classes and the relational database, Hibernate is essentially middleware providing database services. It supplies durable data services to applications by utilizing the database and several configuration files, such as hibernate.properties and the XML mapping files.

Web Services Technologies

The introduction of annotations into Java EE 5 makes it simple to create sophisticated Web service endpoints and clients with less code and a shorter learning curve than was possible with earlier Java EE versions. Annotations, first introduced in Java SE 5, are modifiers you can add to your code as metadata. They don't affect program semantics directly, but the compiler, development tools, and runtime libraries can process them to produce additional Java language source files, XML documents, or other artifacts and behavior that augment the code containing the annotations (see Resources). Later in the article, you'll see how you can easily turn a regular Java class into a Web service by adding simple annotations.

Web Application Technologies

Java EE 5 welcomes two major pieces of front-end technology, JSF and JSTL, into the specification to join the existing JavaServer Pages and Servlet specifications. JSF is a set of APIs that enable a component-based approach to user-interface development. JSTL is a set of tag libraries that support embedding procedural logic, access to JavaBeans, SQL commands, localized formatting instructions, and XML processing in JSPs.
The most recent releases of JSF, JSTL, and JSP support a unified expression language (EL) that allows these technologies to integrate more easily (see Resources).

The cornerstone of Web services support in Java EE 5 is JAX-WS 2.0, a follow-on to JAX-RPC 1.1. Both of these technologies let you create RESTful and SOAP-based Web services without dealing directly with the tedium of XML processing and data binding inherent to Web services. Developers are free to continue using JAX-RPC (which is still required of Java EE 5 containers), but migrating to JAX-WS is strongly recommended. Newcomers to Java Web services might as well skip JAX-RPC and head right for JAX-WS. That said, it's good to know that both of them support SOAP 1.1 over HTTP 1.1 and so are fully compatible: a JAX-WS Web services client can access a JAX-RPC Web services endpoint, and vice versa.

The advantages of JAX-WS over JAX-RPC are compelling. JAX-WS:
• Supports the SOAP 1.2 standard (in addition to SOAP 1.1).
• Supports XML over HTTP. You can bypass SOAP if you wish. (See the article "Use XML directly over HTTP for Web services (where appropriate)" for more information.)
• Uses the Java Architecture for XML Binding (JAXB) for its data-mapping model. JAXB has complete support for XML schema and better performance (more on that in a moment).
• Introduces a dynamic programming model for both server and client. The client model supports both a message-oriented and an asynchronous approach.
• Supports Message Transmission Optimization Mechanism (MTOM), a W3C recommendation for optimizing the transmission and format of a SOAP message.
• Upgrades Web services interoperability (WS-I) support. (It supports Basic Profile 1.1; JAX-RPC supports only Basic Profile 1.0.)
• Upgrades SOAP attachment support.
(It uses the SOAP with Attachments API for Java [SAAJ] 1.3; JAX-RPC supports only SAAJ 1.2.)

You can learn more about the differences by reading the article "JAX-RPC versus JAX-WS."

The wsimport tool in JAX-WS automatically handles many of the mundane details of Web service development and integrates easily into build processes in a cross-platform manner, freeing you to focus on the application logic that implements or uses a service. It generates artifacts such as services, service endpoint interfaces (SEIs), asynchronous response code, exceptions based on WSDL faults, and Java classes bound to schema types by JAXB.

JAX-WS also enables high-performing Web services. See Resources for a link to an article ("Implementing High Performance Web Services Using JAX-WS 2.0") presenting a benchmark study of equivalent Web service implementations based on the new JAX-WS stack (which uses two other Web services features in Java EE 5, JAXB and StAX) and a JAX-RPC stack available in J2EE 1.4. The study found 40% to 1000% performance increases with JAX-WS in various functional areas under different loads.

Conclusion

Each framework has its advantages and disadvantages. The lightweight J2EE architecture integrates the Struts, Hibernate, and Spring technologies, making full use of Struts's powerful data-processing capability, the flexibility of Spring's management, and the maturity of Hibernate. Based on practice, it puts forward an open-source solution suitable for small and medium-sized enterprise applications. Application systems developed on this architecture have loose inter-layer coupling, a distinct structure, a short development cycle, and good maintainability. In addition, combined with commercial project development, the solution has achieved good effects.
The lightweight framework makes parallel development and maintenance convenient for commercial systems and can be carried over to the development of business systems in other industries. Through research and practice, we can easily find that the Struts/Spring/Hibernate framework utilizes the maturity of Struts in the presentation layer, the flexibility of Spring's business management, and the convenience of Hibernate in the persistence layer; integrating the three frameworks into a whole makes development and maintenance more convenient and handy, and this approach will also play a key role when applied to other business systems. Of course, how to optimize system performance, enhance users' access speed, and improve the security of the framework is work the author still needs to do in the future.

The Test Library Management System Based on the SSH Framework (translated abstract): Application systems in small and medium-sized enterprises have very good flexibility, security, and cost-effectiveness. The traditional J2EE architecture cannot meet these needs, but an application system implemented on the SSH framework satisfies them better. This paper analyses the integration theory and key technologies of SSH, from which a lightweight web framework is formed; on the basis of integrating the three technologies, the SSH-based lightweight web framework takes shape and plays an important role in practical applications.
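The layering the paper describes can be made concrete with a small DAO sketch: business code talks to a persistence-layer interface, and the storage technology behind it can change freely. The in-memory implementation below is a stand-in with hypothetical names; a Hibernate-backed implementation would map the entity to a table and issue HQL, but callers would not change.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// DAO-pattern sketch of the persistence-layer separation used in SSH.
public class QuestionDaoDemo {
    public static class Question {                 // a persistent entity
        public final int id;
        public final String text;
        public Question(int id, String text) { this.id = id; this.text = text; }
    }

    public interface QuestionDao {                 // the persistence-layer contract
        void save(Question q);
        Optional<Question> byId(int id);
    }

    // In-memory stand-in; swapping in a Hibernate implementation would
    // leave every caller of the QuestionDao interface untouched.
    public static class InMemoryQuestionDao implements QuestionDao {
        private final Map<Integer, Question> table = new HashMap<>();
        public void save(Question q) { table.put(q.id, q); }
        public Optional<Question> byId(int id) {
            return Optional.ofNullable(table.get(id));
        }
    }
}
```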

University graduation thesis: software major, foreign literature with Chinese-English translation


Software major graduation thesis: foreign literature with Chinese-English translation

Object landscapes and lifetimes

Technically, OOP is just about abstract data typing, inheritance, and polymorphism, but other issues can be at least as important. The remainder of this section will cover these issues.

One of the most important factors is the way objects are created and destroyed. Where is the data for an object, and how is the lifetime of the object controlled? There are different philosophies at work here. C++ takes the approach that control of efficiency is the most important issue, so it gives the programmer a choice. For maximum run-time speed, the storage and lifetime can be determined while the program is being written, by placing the objects on the stack (these are sometimes called automatic or scoped variables) or in the static storage area. This places a priority on the speed of storage allocation and release, and control of these can be very valuable in some situations. However, you sacrifice flexibility because you must know the exact quantity, lifetime, and type of objects while you're writing the program. If you are trying to solve a more general problem such as computer-aided design, warehouse management, or air-traffic control, this is too restrictive.

The second approach is to create objects dynamically in a pool of memory called the heap. In this approach, you don't know until run time how many objects you need, what their lifetime is, or what their exact type is. Those are determined at the spur of the moment while the program is running. If you need a new object, you simply make it on the heap at the point that you need it. Because the storage is managed dynamically, at run time, the amount of time required to allocate storage on the heap is significantly longer than the time to create storage on the stack. (Creating storage on the stack is often a single assembly instruction to move the stack pointer down, and another to move it back up.)
The dynamic approach makes the generally logical assumption that objects tend to be complicated, so the extra overhead of finding storage and releasing that storage will not have an important impact on the creation of an object. In addition, the greater flexibility is essential to solve the general programming problem. Java uses the second approach exclusively: every time you want to create an object, you use the new keyword to build a dynamic instance of that object.

There's another issue, however, and that's the lifetime of an object. With languages that allow objects to be created on the stack, the compiler determines how long the object lasts and can automatically destroy it. However, if you create it on the heap the compiler has no knowledge of its lifetime. In a language like C++, you must determine programmatically when to destroy the object, which can lead to memory leaks if you don't do it correctly (and this is a common problem in C++ programs). Java provides a feature called a garbage collector that automatically discovers when an object is no longer in use and destroys it. A garbage collector is much more convenient because it reduces the number of issues that you must track and the code you must write. More important, the garbage collector provides a much higher level of insurance against the insidious problem of memory leaks (which has brought many a C++ project to its knees).

The rest of this section looks at additional factors concerning object lifetimes and landscapes.

1. The singly rooted hierarchy

One of the issues in OOP that has become especially prominent since the introduction of C++ is whether all classes should ultimately be inherited from a single base class. In Java (as with virtually all other OOP languages) the answer is "yes," and the name of this ultimate base class is simply Object.
It turns out that the benefits of the singly rooted hierarchy are many. All objects in a singly rooted hierarchy have an interface in common, so they are all ultimately the same type. The alternative (provided by C++) is that you don't know that everything is the same fundamental type. From a backward-compatibility standpoint this fits the model of C better and can be thought of as less restrictive, but when you want to do full-on object-oriented programming you must then build your own hierarchy to provide the same convenience that's built into other OOP languages. And in any new class library you acquire, some other incompatible interface will be used. It requires effort (and possibly multiple inheritance) to work the new interface into your design. Is the extra "flexibility" of C++ worth it? If you need it, if you have a large investment in C, it's quite valuable. If you're starting from scratch, other alternatives such as Java can often be more productive.

All objects in a singly rooted hierarchy (such as Java provides) can be guaranteed to have certain functionality. You know you can perform certain basic operations on every object in your system. A singly rooted hierarchy, along with creating all objects on the heap, greatly simplifies argument passing (one of the more complex topics in C++).

A singly rooted hierarchy makes it much easier to implement a garbage collector (which is conveniently built into Java). The necessary support can be installed in the base class, and the garbage collector can thus send the appropriate messages to every object in the system. Without a singly rooted hierarchy and a system to manipulate an object via a reference, it is difficult to implement a garbage collector.

Since run-time type information is guaranteed to be in all objects, you'll never end up with an object whose type you cannot determine.
This is especially important with system-level operations such as exception handling, and to allow greater flexibility in programming.

2. Collection libraries and support for easy collection use

Because a container is a tool that you'll use frequently, it makes sense to have a library of containers that are built in a reusable fashion, so you can take one off the shelf and plug it into your program. Java provides such a library, which should satisfy most needs.

Downcasting vs. templates/generics

To make these containers reusable, they hold the one universal type in Java that was previously mentioned: Object. The singly rooted hierarchy means that everything is an Object, so a container that holds Objects can hold anything. This makes containers easy to reuse.

To use such a container, you simply add object references to it, and later ask for them back. But since the container holds only Objects, when you add your object reference into the container it is upcast to Object, thus losing its identity. When you fetch it back, you get an Object reference, and not a reference to the type that you put in. So how do you turn it back into something that has the useful interface of the object that you put into the container?

Here, the cast is used again, but this time you're not casting up the inheritance hierarchy to a more general type; you cast down the hierarchy to a more specific type. This manner of casting is called downcasting. With upcasting, you know, for example, that a Circle is a type of Shape, so it's safe to upcast, but you don't know that an Object is necessarily a Circle or a Shape, so it's hardly safe to downcast unless you know that's what you're dealing with. It's not completely dangerous, however, because if you downcast to the wrong thing you'll get a run-time error called an exception, which will be described shortly.
When you fetch object references from a container, though, you must have some way to remember exactly what they are so you can perform a proper downcast.

Downcasting and the run-time checks require extra time for the running program, and extra effort from the programmer. Wouldn't it make sense to somehow create the container so that it knows the types that it holds, eliminating the need for the downcast and a possible mistake? The solution is parameterized types, which are classes that the compiler can automatically customize to work with particular types. For example, with a parameterized container, the compiler could customize that container so that it would accept only Shapes and fetch only Shapes.

Parameterized types are an important part of C++, partly because C++ has no singly rooted hierarchy. In C++, the keyword that implements parameterized types is "template." Java currently has no parameterized types, since it is possible to get by, however awkwardly, using the singly rooted hierarchy. However, a current proposal for parameterized types uses a syntax that is strikingly similar to C++ templates.
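The proposal mentioned above did in fact become Java generics in Java 5. The sketch below contrasts the two styles the text describes: a raw container that loses the element's identity and forces a run-time-checked downcast, and a parameterized container that the compiler checks for you (class names are illustrative only).

```java
import java.util.ArrayList;
import java.util.List;

public class ContainerDemo {
    public static class Shape { public String name() { return "shape"; } }
    public static class Circle extends Shape { public String name() { return "circle"; } }

    // Pre-generics style: the container holds Object, so retrieval
    // needs a downcast that is only checked at run time (a wrong cast
    // would throw ClassCastException).
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static String fetchWithDowncast() {
        List raw = new ArrayList();        // raw list, as before Java 5
        raw.add(new Circle());             // upcast to Object on insertion
        Shape s = (Shape) raw.get(0);      // downcast to recover the interface
        return s.name();
    }

    // Parameterized type: the compiler customizes the container to the
    // element type, so no cast (and no possible cast error) is needed.
    public static String fetchWithGenerics() {
        List<Circle> circles = new ArrayList<>();
        circles.add(new Circle());
        return circles.get(0).name();
    }
}
```

Both methods return the same result; the difference is where the type error would surface: at run time for the raw container, at compile time for the generic one.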

Software engineering graduation thesis: English translation


This text analyses the mechanisms of Hibernate and Struts and puts forward a J2EE application development strategy based on them. In this strategy, the model layer is realized with Hibernate, while the view and controller are realized with the Struts framework. This greatly lowers the coupling of the code and raises the development efficiency of the system. Keywords: Hibernate, Struts, MVC, persistence layer.

1 Preface

As Java technology gradually matured and was perfected into a standard platform for enterprise-class applications, the J2EE platform developed substantially. The J2EE specification draws on several technologies, including Enterprise JavaBeans (EJB), Java Servlets (Servlet), JavaServer Pages (JSP), and Java Message Service (JMS), and many application systems have been developed with them. However, some problems appeared in the traditional J2EE development process: 1) the contradiction between the data model and the logic model. The databases in use today are basically all relational, but Java is essentially an object-oriented language; using SQL and JDBC for database operations when storing and reading objects lowers programming efficiency and system maintainability. 2) Traditional J2EE applications mostly adopt a heavyweight framework based on EJB. This kind of framework is suitable for developing large enterprise applications, but developing and debugging with an EJB container wastes a great deal of time. To lower code coupling and raise development efficiency, this text puts forward a J2EE application development strategy based on the Struts and Hibernate frameworks.
2 The data persistence layer and Hibernate

Hibernate is a persistence-layer framework, a tool that realizes object/relational mapping (O/R mapping). It puts a lightweight wrapper around JDBC so that programmers can operate on the database with object-oriented thinking. It not only provides the mapping from Java objects to data tables, but also supplies mechanisms for querying and restoring data. Compared with using JDBC and SQL to operate the database, Hibernate greatly raises implementation efficiency. The Hibernate framework defines the mapping between Java objects and data tables in configuration files, and at a deeper level interprets the relations between data tables as Java object relations such as inheritance and containment. By using HQL statements, complicated relational algorithms can be described in terms of objects, which to a large extent simplifies data queries and speeds development. Hibernate offers a simple but expressive API for issuing queries against the objects that represent the database. To create or modify these objects, the program interacts with them and then tells Hibernate to save them. In this way, a great deal of the persistence work buried in the business logic no longer requires trivial JDBC statements, and the persistence layer is simplified to the greatest possible extent.

3 Using Struts to realize the MVC structure

MVC (Model-View-Controller) was put forward by Trygve Reenskaug and first applied in the Smalltalk-80 environment; it is the foundation on which many interactive systems with user interfaces are built. To accommodate varying interface designs, MVC decomposes an interactive system into three parts: model, view, and
3 Implementing the MVC architecture with Struts. MVC (Model-View-Controller), proposed by Trygve Reenskaug and first applied in the Smalltalk-80 environment, is the foundation on which many interactive systems with user interfaces are built. To accommodate changing interface requirements, MVC decomposes an interactive system into three parts: model, view, and controller. The Model is an abstraction of the problem logic, independent of its outward presentation: it encapsulates the core data, the problem logic, and the computational relations, independent of the concrete interface presentation and I/O operations. The View presents the model's data, logical relations, and state to the user in a particular display form. It obtains display information from the model, and the same information may have several different display forms, or views. The Controller handles the user's interaction with the software. Its job is to control the propagation of any change in the model and to ensure correspondence between the user interface and the model; it accepts user input, feeds that input back to the model, and thereby controls the computation of the model, coordinating the work of the model and the views. Usually one view corresponds to one controller. Separating model, view, and controller allows one model to have several views. If the user changes the model's data through one view's controller, all other views that depend on those data should reflect the change. Therefore, whenever a data change occurs, the controller notifies all views, causing the displays to be updated. This is in effect a change-propagation mechanism for the model.
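The change-propagation mechanism described above can be sketched in a few lines of plain Java. This is a generic, hedged illustration of the MVC pattern, not code from the paper; all class and method names are invented:

```java
import java.util.ArrayList;
import java.util.List;

// Model: encapsulates the data and notifies every registered view on change.
class Model {
    private final List<Runnable> listeners = new ArrayList<>();
    private int value;

    int getValue() { return value; }

    void setValue(int v) {
        value = v;
        // Change propagation: every dependent view is told to refresh.
        for (Runnable listener : listeners) listener.run();
    }

    void addListener(Runnable listener) { listeners.add(listener); }
}

// View: displays the model; here it simply records the last value it showed.
class View {
    private final Model model;
    int lastShown = -1;

    View(Model model) {
        this.model = model;
        model.addListener(this::refresh);
    }

    void refresh() { lastShown = model.getValue(); }
}

// Controller: accepts "user input" and updates the model.
class Controller {
    private final Model model;

    Controller(Model model) { this.model = model; }

    void onUserInput(int v) { model.setValue(v); }
}
```

When the controller changes the model, every view attached to that model refreshes, which is exactly the notification scheme the paragraph describes.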
The Struts framework was first released as a component of the Apache Jakarta project. It inherits the characteristics of MVC, adapts and extends them according to the characteristics of J2EE, and combines technologies such as JSP, Java Servlet, JavaBean, and Taglib. In Struts, the controller role of MVC is played by ActionServlet, a general-purpose control module that provides the entry point for all HTTP requests sent to Struts. It intercepts these requests and dispatches them to the corresponding Action classes (all of which are subclasses of Action). The control module is also responsible for filling the ActionForm (FormBean) with the corresponding request parameters and passing it to the Action class (ActionBean). The Action class accesses the core business logic, which in turn accesses JavaBeans or calls EJBs. Finally, the Action class passes control to a subsequent JSP file, from which the view is generated. All of this control logic is configured in the struts-config.xml file. In the Struts framework, the view is produced mainly by JSP pages; Struts provides a rich JSP tag library, which helps separate presentation logic from program logic. The model exists as one or more JavaBeans. In Struts there are mainly three kinds of beans: Action, ActionForm, and EJB or JavaBean. The Struts framework does not prescribe a concrete implementation of the model layer; in actual development, the model layer is usually closely connected with the business logic and must operate on the underlying data. The following introduces a development strategy that brings Hibernate into the model layer of the Struts framework, using it for data encapsulation and mapping to provide persistence support.
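As a hedged sketch of the struts-config.xml wiring just described (the paths, form-bean name, and class names are invented for a login flow, not taken from the paper), the configuration might look like:

```xml
<!-- struts-config.xml sketch: a form bean plus an action mapping.
     All names here are illustrative assumptions. -->
<struts-config>
  <form-beans>
    <form-bean name="loginForm" type="com.example.LoginForm"/>
  </form-beans>
  <action-mappings>
    <action path="/login" type="com.example.LoginAction"
            name="loginForm" scope="request">
      <forward name="success" path="/welcome.jsp"/>
      <forward name="failure" path="/login.jsp"/>
    </action>
  </action-mappings>
</struts-config>
```

ActionServlet uses this file to fill loginForm from the request parameters, invoke LoginAction, and forward to the JSP that renders the view.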
4 Developing a J2EE application with Hibernate and Struts. 4.1 System architecture. Figure 3 shows the system architecture of the development strategy based on Hibernate and Struts. 4.2 Development practice. Combining this with development practice, the user-registration process, which is very common in J2EE applications, serves as an example to illustrate concretely how the above system architecture is used. The registration process is straightforward: the user enters registration information on the login page login.jsp, and the system verifies it; if the information is correct, registration succeeds, otherwise the corresponding error message is shown. During development, Eclipse is used as the development environment, together with MyEclipse, a third-party plug-in that provides better control and support for Struts and Hibernate; Tomcat is used as the web server and MySQL as the database. First Hibernate is configured: the automatically generated hibernate.cfg.xml is modified to set the various database connection parameters and to declare the data mapping files. Because the connection pool bundled with Hibernate is intended mainly for testing and does not perform well, it can be replaced, via JNDI, with Tomcat's connection pool. 本文分析了Hibernate和Struts的机制,提出了一种基于Hibernate和Struts的J2EE应用开发策略。
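The hibernate.cfg.xml changes described above could look roughly like the following fragment; the JNDI datasource name, dialect, and mapping resource are illustrative assumptions, not values from the paper:

```xml
<!-- hibernate.cfg.xml sketch: switch from Hibernate's built-in
     (test-oriented) connection pool to Tomcat's JNDI-bound pool,
     and declare the mapping files. Names are illustrative. -->
<hibernate-configuration>
  <session-factory>
    <property name="connection.datasource">java:comp/env/jdbc/RegisterDB</property>
    <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
    <mapping resource="com/example/User.hbm.xml"/>
  </session-factory>
</hibernate-configuration>
```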

软件工程专业软件质量中英文资料外文翻译文献


Planning for Software QualityIn this chapter: defining quality in software projects, working with your organization's quality policy, creating a quality management plan, identifying how changes in time and cost will affect project quality. When it comes to quality, you've probably heard some great clichés: quality is planned into a project, not added through inspection (you should spend your time planning quality instead of inspecting after you have errors). It's always cheaper (and more efficient) to do a job right the first time around. Why is there always time to do work right the second time? Always underpromise and overdeliver. There sure are some catchy slogans, and clichés become clichés because they're usually accurate. In this chapter we explore what quality is, how to plan it into your project, and how to create a quality management plan. 6.1 Defining Quality. Before you can plan for quality, you must first define what quality is. Ask your customers, your project team, your management team, and even yourself what quality is and you get a variety of answers. What customers say: the software you create lives up to expectations, is reliable, and does some incredible things the customer doesn't expect (or even think of). What your project team says: the work is completed as planned and as expected, with few errors and fewer surprises. What managers say: the customer is happy and the project delivers on time and on budget. What you may say: the project team completes its work according to its estimates, the customer is happy, and management is happy with the final costs and schedule. Quality, for everyone concerned, is the ability of the project and the project's deliverable to satisfy the stated and implied requirements. Quality is all of the items we mention here, but it's more than just the deliverable; it's following a process, meeting specified requirements, and performing to create the best possible deliverable.
Everything, from the project kickoff meeting to the final testing, affects the project quality. 6.2 Referring to the product scope. As the project manager, your primary concern is satisfying the product scope. The product scope is the description of the software the customer expects from your project. If you work primarily to satisfy the product scope, then you'll be in good shape with satisfying the customer's expectations for quality. But in order to satisfy the product scope you must first have several documents. Product scope description document: this document defines what the customer expects from the project. What are the characteristics of the software? This description becomes more complete as you progress through the project and gather more knowledge. Project requirements document: this document defines exactly what the project must create without being deemed a failure. What types of functionality should stakeholders be able to perform with the software? This document prioritizes the stakeholders' requirements. Detailed design document: this document specifies how the project team will create units that meet the project requirements, which in turn will satisfy the product scope. Metrics for acceptability: many software projects need metrics for acceptability. These metrics include speeds, data accuracy, and metrics from user acceptability tests. You'll need to avoid vague metrics, such as good and fast. Instead, aim to define accurate numbers and determine how the values will be captured. Satisfying the product scope will assure that the customer is happy with you and with the deliverables the project team has created. You will only satisfy the product scope if you plan how to do it. Quality is no accident. 6.3 Referring to the project scope. The project scope defines all of the work (and only the required work) to create the project deliverable. The project scope defines what will and won't be included in the project deliverable.
Project scope is different from the product scope, because the product scope describes only the finished deliverable, whereas the project scope describes the work and activities needed to reach the deliverable. You must define the project scope so that you can use it as an appropriate quality tool. The project scope draws a line in the sand when it comes to project changes. Changes, as we're sure you've experienced, can trickle into the project and cause problems with quality. Even the most innocent changes can bloom into monsters that wreck your project. Figure 6-1 shows the project manager's approach to project changes and quality. Early in the project, during the initiation and planning stages, you can safely entertain changes to the project. After you create the project scope, however, your rule when it comes to changes should be "Just say no!" Changes to the project may affect the quality of the product. This isn't to say that changes should never come into a project; far from it. But changes to the project must be examined, weighed, and considered for their effect on time, cost, and project quality. 6.3.1 Going the extra mile. One cliché that rings true is that it's always better to underpromise and overdeliver. We've heard project managers tell us this is their approach to keeping people happy. It sounds good, right? A customer asks for a piece of software that can communicate with a database through a Web form. Your project team, however, creates a piece of software that can communicate through a Web form to the customer's database, and you add lots of query combinations for each table in the database. Fantastic! A valid argument can be made that you should never underpromise, but promise what you can deliver and live up to those promises.
Technically, in project management, quality is achieved by meeting the customer's expectations: not more than they expect, and certainly not less than they expect. Surprising the customer with more than the project scope outlines can actually backfire, for the following reasons: The customer may believe the project deliverable could have been completed faster without all the extras you've included. The customer may believe the project deliverable could have been completed for fewer dollars without all the extras you've included. If the customer discovers bugs in the software, the blame may lie with the extras. The customer may not want the extras, regardless of how ingenious you believe they are. Overdelivering is not meeting expectations: you're not giving the customer what he or she asked for. Now, having put the wet blanket on the fire of creativity, let us say this: communicate. We can't emphasize enough how important it is to tell the customer what you can do, even if it's more than what the customer originally asked for. In software development, customers may need guidance on what each deliverable can do, and they look to you as the expert to help them make those decisions. But notice this process is done before the project execution begins, not during the implementation phase. The product scope and the project scope support one another. If the customer changes details in the product scope, the project scope will also change. If not, then your project team will be completing a project scope that won't create what the customer expects. Avoiding gold-plated software. You create gold-plated software when you complete a project, and the software is ready to go to the customer, but you suddenly realize that you have money to burn. If you find yourself with a hefty sum of cash remaining in the project budget, you may feel tempted to fix the situation with a lot of bling.
After all, if you give the project deliverable to the customer exactly as planned, several things may happen: Your customer may be initially happy that you've delivered under budget. Then they'll wonder whether you cut corners or just didn't have a clue as to the actual cost of the project. The customer may wonder why your estimate and the actual cost of the project deliverable are not in sync. The remaining budget will be returned to the customer unless your contract stipulates otherwise. Other project managers may not be happy that you've created a massive, unused project budget when their projects have been strapped for cash. Key stakeholders may lose confidence in your future estimates and believe them to be bloated, padded, or fudged. This is, in case you haven't guessed, a bad thing. The best thing to do is to deliver an accurate estimate to begin with and avoid this scenario altogether. We discuss time estimates in Chapter 8 and cost estimates in Chapter 9. For now, know that your customer's confidence in future estimates is always measured by your ability to provide accurate estimates at the beginning of the process. If you find yourself in the scenario where you have a considerable amount of cash left in the project budget, the best thing to do is to give the customer an accurate assessment of what you've accomplished in the project and what's left in the kitty. Don't eat up the budget with extras, and don't beat yourself up over it. Mistakes happen, especially to beginners, and it's still more forgivable to be under budget than it is to be over budget. So should you also present extras to the customer when you present the project's status and the remaining budget? If the extras are value-added scope changes, we say yes. If the extras are truly gold-plated extras to earn more dollars, then we say no. Software quality is based on whether the product delivers on its promises.
If the proposed changes don't make the software better, no one needs them. What you do on your current project may influence what you get to do on future projects. Honesty now pays dividends later. 6.4 Examining quality versus grade. Quality and grade are not the same thing. Low quality is always a problem, but low grade may not be. Quality, as you know, is the ability of software to deliver on its promises. Grade is the ranking or classification we assign to things. Consider your next flight. You expect the airplane to safely launch, fly, and land. And you expect to be reasonably comfortable during the flight. You expect the behavior of the crew and fellow passengers to be relatively considerate, even if they're a little cramped and annoyed. (You have to factor in that crying baby three rows back. It's not the baby's fault, after all.) Now consider where you're seated on the airplane. Are you in first class or coach? That's the grade! Within software development we also have grade and quality issues. A quick, cheap software fix may be considered low grade, but it can still be a high-quality software solution because it satisfies the scope of the simple project. On the other hand, the rinky-dink approach won't work during the development of a program to track financial data through e-commerce solutions for an international company. During the planning process, one goal of stakeholder analysis is to determine the requirements for quality and grade. 6.5 Working with a quality policy. A quality policy isn't a policy that's "real good." A quality policy is an organization-wide policy that dictates how your organization will plan, manage, and then control quality in all projects. This policy sets the expectations for your projects, and everyone else's, for metrics of acceptability. Quality policies fall under the big umbrella of quality assurance.
QA is an organization-wide program, the goal of which is to improve quality and to prevent mistakes. So who decides what quality is and what's bunk? You might guess and say the customer, which to some extent is true, but generally the quality policy is set by management. The quality policy can be written by the geniuses within your organization, or your organization may follow a quality system and the proven quality approaches within these systems. For example, your company might participate in any number of proprietary and nonproprietary organizations, thereby pledging to adhere to their quality policies. The following sections discuss a few of them. Working with ISO programs. The International Organization for Standardization (ISO) is a worldwide body with 153 members that convenes in Geneva, Switzerland. The goal of the ISO is to set compatibility standards for all industries, to establish common ground, and to maintain interoperability between businesses, countries, and devices. In case you're wondering, the abbreviation for the International Organization for Standardization is ISO, not IOS. Because of all the different countries represented and their varying languages, they decided to use the abbreviation ISO, taken from the Greek isos, which means equal. There are many different ISO programs, but the most popular is ISO 9000. An ISO 9000-certified organization focuses on business-to-business dealings and strives to ensure customer satisfaction. An ISO 9000-certified organization must ensure that it: establishes and meets the customer's quality requirements; adheres to applicable regulatory requirements; achieves customer satisfaction throughout the project; and takes internal measures to continually improve performance, not just once. You can learn more about ISO programs and how your organization can participate by visiting their Web site: . Visit these Web sites for more assistance with quality management: Getting a Total Quality Management workout. The U.S.
Naval Air Systems Command originated the term Total Quality Management as a means of describing the Japanese-style management approach to quality improvement. TQM requires that all members of an organization contribute to quality improvements in products, services, and the work culture. The idea is that if everyone is involved in quality and works to make the total environment better, then the services and products of the organization will continue to improve. In software development, TQM means that the entire team works to make the development of the software better, the process from start to completion better, and the deliverable better as well. TQM is largely based on W. Edwards Deming's 14 Points for Quality. Here's how Deming's 14 points and TQM are specifically applicable to software development (you can find out more about W. Edwards Deming in the nearby sidebar, "W. Edwards Deming and the software project manager"): Create constancy of purpose for improving products and services. Every developer must agree to and actively pursue quality in all of his or her software creation, testing, and development. Adopt the new philosophy. This philosophy can't be a fad. The software project manager has to constantly motivate the project team to work towards quality. Cease dependence on inspection to achieve quality. Software development has a tradition of coding and inspecting, and then reacting to errors. This model is dangerous because developers begin to lean on the testing phase to catch errors, rather than striving to incorporate quality into the development phase. As a rule, quality should be planned into software design, never inspected in. End the practice of awarding business on price alone; instead, minimize total cost by working with a single supplier.
The idea here is that a relationship will foster a commitment to quality between you and the supplier that's a bit more substantial than an invoice and a check. Constantly strive to improve every process for planning, production, and service. Quality planning and delivery is an iterative process. Institute training on the job. If your software developers don't know how to develop, they'll certainly create some lousy software. If your team doesn't know how to do something, you must train them. Adopt and institute leadership. The project manager must identify how to lead and motivate the project team, or the team may lead itself and remain stagnant. Drive out fear. Are your software developers afraid of you? If so, how can they approach you with ideas for quality improvements, news on development, and flaws they've identified within the software? Fear does nothing to improve quality. 为软件质量制定计划在这一章中我们将讨论:在软件工程中为质量下定义,遵循你所在团队的质量方针,创造一个质量管理计划,弄清楚时间和成本上的变化会对软件质量有何影响。


软件工程毕业论文文献翻译中英文对照
学生毕业设计(论文)外文译文
学生姓名:    学号:    专业名称:软件工程
译文标题(中英文):Qt Creator白皮书(Qt Creator Whitepaper)
译文出处:Qt network
指导教师审阅签名:
外文译文正文:Qt Creator白皮书。Qt Creator是一个完整的集成开发环境(IDE),用于创建基于Qt应用程序框架的应用。

Qt是专为应用程序和用户界面,一次开发和部署跨多个桌面和移动操作系统。

本文提供了一个推出的Qt Creator和提供Qt开发人员在应用开发生命周期的特点。

Qt Creator的简介Qt Creator的主要优点之一是它允许一个开发团队共享一个项目不同的开发平台(微软Windows?的Mac OS X?和Linux?)共同为开发和调试工具。

Qt Creator的主要目标是满足Qt开发人员正在寻找简单,易用性,生产力,可扩展性和开放的发展需要,而旨在降低进入新来乍到Qt的屏障。

Qt Creator 的主要功能,让开发商完成以下任务: , 快速,轻松地开始使用Qt应用开发项目向导,快速访问最近的项目和会议。

, 设计Qt物件为基础的应用与集成的编辑器的用户界面,Qt Designer中。

, 开发与应用的先进的C + +代码编辑器,提供新的强大的功能完成的代码片段,重构代码,查看文件的轮廓(即,象征着一个文件层次)。

, 建立,运行和部署Qt项目,目标多个桌面和移动平台,如微软Windows,Mac OS X中,Linux的,诺基亚的MeeGo,和Maemo。

, GNU和CDB使用Qt类结构的认识,增加了图形用户界面的调试器的调试。

, 使用代码分析工具,以检查你的应用程序中的内存管理问题。

, 应用程序部署到移动设备的MeeGo,为Symbian和Maemo设备创建应用程序安装包,可以在Ovi商店和其他渠道发布的。

, 轻松地访问信息集成的上下文敏感的Qt帮助系统。

Qt Creator是Qt Quick的一部分,它允许设计人员和开发人员创造一种直观的,现代的外观,流畅的用户,正越来越多地用于手机,媒体播放器,机顶盒和其他便携设备的接口。

Qt Creator促进了设计师与开发人员之间的协作。

设计师在可视化的环境中工作,而开发人员的工作,在一个全功能的IDE和Qt Creator支持往返迭代从设计,代码,测试,和背部的设计。

支持的操作系统可用于Microsoft Windows,Mac OS X中,Linux的Qt Creator的安装包。

Qt Creator的,可以在其他平台上运行,但需要公开可用的源代码的编译。

从源代码建立和运行的Qt Creator可能需要您的计算机上单独安装的Qt。

使用Qt Creator:当您启动Qt Creator时,它会打开欢迎模式,您可以在其中打开教程和示例项目,或启动项目向导来创建自己的项目。

Qt Creator的满足其设计目标简单,易于使用,和生产力,依靠模式的概念。

这些模式使用户界面适应手头不同的应用开发任务。

开发人员可以使用的模式选择或键盘快捷键切换到Qt Creator的模式。

每种模式都有自己的看法,只显示需要执行一个给定的任务,只提供最相关的功能和功能有关的信息。

因此,广大的Qt Creator的窗口区域,一直致力于以实际应用的发展任务。

创建项目:为了能够构建和运行应用程序,Qt Creator需要与编译器相同的信息。

该信息被指定在项目建设和运行设置。

设立一个新的Qt Creator项目通过向导的帮助,引导开发商循序渐进的方式,通过项目创建过程中的步骤。

在第一步中,开发商从类别中选择项目类型:Qt 的C + +项目,Qt Quick的项目,或其他项目。

接下来,开发人员可以选择一个项目的位置,并为它指定的设置。

全新的Qt GUI应用程序项目向导。

当向导的各个步骤完成后,Qt Creator会按照向导中的定义,自动生成带有所需头文件、源文件、用户界面描述文件和项目文件的项目。

向导不但帮助新用户快速上手,也使更有经验的用户能够简化创建新项目的工作流程。

方便的用户界面使得它更容易确保项目正确的配置和依赖性开始。

用户界面设计Qt Creator的提供了一个完全集成的可视化编辑器,Qt设计师。

Qt Designer是从Qt部件的图形用户界面设计和构建工具。

用户可以撰写和定制部件或对话和测试使用不同的风格和决议。

使用Qt Designer创建的部件和窗体借助Qt的信号和槽机制与程序代码无缝集成,用户可以轻松地为图形元素指定行为。
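作为概念性示意(并非真实的Qt API;在真实的Qt中,这一工作由QObject::connect与moc生成的代码完成),信号与槽机制可以用标准C++的std::function粗略模拟如下,其中的类名与信号均为假设:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// 一个极简的"信号"示意:信号保存已连接的槽(可调用对象)列表,
// 发射信号时依次调用它们。这只是概念模型,不是Qt的实现。
template <typename... Args>
class Signal {
public:
    using Slot = std::function<void(Args...)>;

    void connect(Slot slot) { slots_.push_back(std::move(slot)); }

    void emitSignal(Args... args) {
        for (auto& slot : slots_) slot(args...);
    }

private:
    std::vector<Slot> slots_;
};

// 假设的"按钮":点击时发射携带标签文本的clicked信号。
struct Button {
    Signal<const std::string&> clicked;

    void click(const std::string& label) { clicked.emitSignal(label); }
};
```

调用方只需把任意可调用对象连接到信号上,行为就与图形元素解耦,这正是上文"为图形元素指定行为"所描述的思路。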

在Qt Designer中设置的所有属性,可以在代码中动态改变。

此外,如部件的推广和定制插件的功能,允许用户使用自己的小部件,使用Qt Designer。

Qt Designer是用于编辑用户界面文件。

它提出了一个直观的拖放式界面,组成新的用户界面的用户。

使用Qt Designer设计的用户接口功能齐全,可以立即预览,以确保设计为目的。

有没有必要重新编译整个项目来测试一个新的设计。

下图显示了集成的Qt Designer中被用来编辑一个简单的形式。

编码写作,编辑和导航的源代码是在应用程序开发的核心任务。

因此,在代码编辑器是Qt Creator的关键部件之一。

在编辑模式下的代码编辑器,可以用来编写代码。

代码编辑器提供了一个功能,可以帮助开发商保持可读性和编码风格:, 语法高亮关键字,符号和C + +文件中的宏。

此外,通用的突出支持其他类型的文件。

, 元素,属性,ID和代码片断的代码完成。

这也是支持在当前项目的开发自己的类。

, 检查代码语法和编辑时标记错误(红色波浪下划线),使得它不需要使用汇编作为一个简单的方式找到拼写错误和语法错误。

, 自动缩进源代码的布局。

, 折叠和展开的源代码(代码折叠)的功能的能力。

, 定位导航工具,以便快速访问文件,符号,层次结构,以及其他信息。

, 支持代码重构,以提高内部质量或您的应用程序,它的性能和可扩展性,代码的可读性和可维护性,以及简化代码结构。

除了这些功能,代码编辑器中有其他有用的功能,例如:, 打字时,突出窗口中的匹配字符串的增量搜索。

高级搜索允许你搜索从当前打开的项目或文件系统上的文件。

此外,您可以搜索符号,当你想重构代码。

, 行号和当前行高亮。

, 容易的注释和代码注释。

, 方法定义和函数声明之间的快速切换。

, 书签更容易在导航的代码。

代码编辑器支持不同的键盘快捷键,更快的编辑。

这是可能的工作,而无需使用鼠标,使开发人员能够保持他们的手在键盘上,工作更富有成效。

构建多个目标:Qt Creator为在桌面环境(Windows、Linux和Mac OS)和移动设备(Symbian、MeeGo、Maemo)上构建和运行Qt应用程序提供支持。

Qt Creator的,允许开发人员指定每个开发平台的建立为独立设置和快速切换之间构建目标。

默认情况下,阴影生成用于继续建设,从源头分开的具体文件。

开发人员可以创建不同版本的项目文件,以保持平台相关的代码分开。

开发人员可以使用qmake的作用域(scope),根据qmake运行所在的平台来选择要处理的文件。
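上文所说的qmake作用域在.pro文件中大致如下;文件名仅为假设示例,并非白皮书内容:

```
# myapp.pro 片段:根据运行 qmake 的平台选择要编译的源文件
SOURCES += common.cpp

win32 {
    SOURCES += platform_win.cpp
}
macx {
    SOURCES += platform_mac.cpp
}
unix:!macx {
    SOURCES += platform_linux.cpp
}
```

其中win32、macx、unix均为qmake内置的平台作用域。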

以及qmake的,Qt的构建自己的工具提供支持,Qt Creator的还带有支持的CMake [],一种流行的替代。

CMake是一个跨平台的配置和构建工具的Mac OS X,微软Windows,Linux和工具支持的其他平台上的本地编译器工具链。

然而,只支持建立在Qt Creator的移动应用系统是qmake。

Qt Creator也支持通用的项目,其中开发人员使用不受支持的构建系统,或不想联想到与他们的项目建设系统。

在这样的情况下,Qt Creator的工作作为一个代码编辑器,生成设置可以手动指定的项目。

调试Qt Creator是整合与外部本地调试符号的GNU调试器(GDB),微软控制台调试器(CDB)和内部的JavaScript调试器。

下图显示了Qt的造物主在调试模式下面的代码编辑器的调试工具窗格。

在调试模式下,开发人员可以执行常见的调试任务,包括以下内容:, 中断程序执行。

, 通过程序的行由行或指令由指令步骤。

, 设置断点。

, 检查调用堆栈的内容。

, 审查和修改调试程序的寄存器和存储器内容。

, 检查和修改寄存器和存储器内容的局部和全局变量。

, 检查加载的共享库的列表。

, 创建调试程序的当前状态的快照,并重新审视它们。
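上述调试任务在Qt Creator中通过图形界面完成;其底层的gdb命令大致对应如下(文件名与变量名为假设示例,仅作说明):

```
(gdb) break main.cpp:42       # 设置断点
(gdb) run                     # 启动程序
(gdb) next                    # 逐行单步执行
(gdb) stepi                   # 逐指令单步执行
(gdb) backtrace               # 检查调用堆栈
(gdb) print someVariable      # 检查局部或全局变量
(gdb) info sharedlibrary      # 查看已加载的共享库
```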

分析代码设备上的可用内存是有限的,你应该仔细地使用它。

Qt Creator的集成内存泄漏检测和分析函数执行Valgrind的代码分析工具。

您必须单独下载并安装Valgrind工具,才能在Qt Creator中使用它们。
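在Qt Creator之外,Valgrind也可以直接从命令行运行;下面的命令仅为示意(./myapp为假设的可执行文件名):

```
# 使用 Memcheck(Valgrind 默认工具)检测内存泄漏
valgrind --leak-check=full ./myapp

# 使用 Callgrind 剖析函数执行
valgrind --tool=callgrind ./myapp
```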

QML的事件探查器安装的Qt Creator的一部分。

它允许您配置您的Qt的快速应用。

使用版本控制系统成立了项目建议的方式是使用一个版本控制系统。

只有项目源文件应存放。

构建系统的Qt Creator生成的文件不应该被保存。

其他方法是可能的,但我们不推荐使用网络资源,例如。

Qt Creator的支持版本控制系统,融入工作环境的使用。

支持的系统包括巴扎,CVS,GIT中,水银,Perforce的,和Subversion。

配置很简单,位于一起的版本控制功能位于“ 工具”子菜单中的特定版本控制系统的通用设置。

每个系统的输出显示在版本控制输出窗格。

还提供了一些系统显示提交和管理信息库的用户界面元素。

获得帮助不时,开发人员可能需要一定的阶级,功能,或其他部分的Qt API的进一步信息。

所有Qt文档和例子是通过Qt的帮助插件的Qt Creator。

要查看文档,用于帮助模式,在窗口的最重要的是致力于帮助文本。

虽然在编辑模式下工作的源代码,开发人员可以访问上下文敏感的帮助文本光标移动到Qt的类或函数,然后按F1键。

该文件将显示在面板上的代码编辑器的右侧,它也可以添加外部文件的Qt Creator,补充或替换现有的文件。

总结Qt Creator的提供Qt应用程序创建一个完整的开发环境。

它是一个轻量级的工具上的Qt开发,生产力和可用性的需求,严格重点。

主要特点是先进的C + +代码编辑器和调试的图形用户界面的C + +函数。

集成的Qt Designer,Qt的帮助,并快速导航定位工具,使Qt Creator的Qt应用开发的理想环境。

Qt Creator以模式为中心的工作方式,通过只呈现与手头任务相关的用户界面功能,帮助开发人员专注于开发任务。

支持跨平台,建立系统和版本控制软件,确保Qt Creator的,可以完全集成到开发团队的工作环境。

此外,在与开发商密切合作,创造流畅的用户界面的Qt快速工具允许UI设计师加入我们的团队。

外文译文原文:Qt Creator Whitepaper

Qt Creator is a complete integrated development environment (IDE) for creating applications with the Qt application framework. Qt is designed for developing applications and user interfaces once and deploying them across several desktop and mobile operating systems. This paper provides an introduction to Qt Creator and the features it provides to Qt developers during the application development life-cycle.

Introduction to Qt Creator
One of the major advantages of Qt Creator is that it allows a team of developers to share a project across different development platforms (Microsoft Windows, Mac OS X, and Linux) with a common tool for development and debugging. The main goal for Qt Creator is meeting the development needs of Qt developers who are looking for simplicity, usability, productivity, extendibility and openness, while aiming to lower the barrier of entry for newcomers to Qt. The key features of Qt Creator allow developers to accomplish the following tasks:
- Get started with Qt application development quickly and easily with project wizards, and quickly access recent projects and sessions.
- Design Qt widget-based application user interfaces with the integrated editor, Qt Designer.
- Develop applications with the advanced C++ code editor that provides new powerful features for completing code snippets, refactoring code, and viewing the outline of files (that is, the symbol hierarchy of a file).
- Build, run, and deploy Qt projects that target multiple desktop and mobile platforms, such as Microsoft Windows, Mac OS X, Linux, Symbian, MeeGo, and Maemo.
- Debug with the GNU and CDB debuggers using a graphical user interface with increased awareness of Qt class structures.
- Use code analysis tools to check for memory management issues in your applications.
- Deploy applications to mobile devices and create application installation packages for Symbian, MeeGo, and Maemo devices that can be published in the Ovi Store and other channels.
- Easily access information with the integrated context-sensitive Qt Help system.

Qt Creator is part of Qt Quick, which allows designers and developers to create the kind of intuitive, modern-looking, fluid user interfaces that are increasingly used on mobile phones, media players, set-top boxes and other portable devices. Qt Creator enables collaboration between designers and developers.

Supported Operating Systems
Qt Creator installation packages are available for Microsoft Windows, Mac OS X, and Linux. Qt Creator can be run on other platforms, but that requires the compilation of the publicly available source code. Building and running Qt Creator from source code may require a separate installation of Qt on your computer.

Working with Qt Creator
When you start Qt Creator, it opens to the Welcome mode, where you can open tutorials and example projects or start the project wizard to create your own projects. Qt Creator meets its design goals of simplicity, ease-of-use, and productivity by relying on the concept of modes. These adapt the user interface to the different application development tasks at hand. Developers can use the mode selector or keyboard shortcuts to switch to a Qt Creator mode. Each mode has its own view that shows only the information required for performing a given task and provides only the most relevant features and functions related to it.
As a result, the majority of the Qt Creator window area is always dedicated to actual application development tasks.

Creating Projects
To be able to build and run applications, Qt Creator needs the same information as a compiler would need. This information is specified in the project build and run settings. When the steps have been completed, Qt Creator automatically generates the project with required headers, source files, user interface descriptions and project files, as defined by the wizard. Not only does the wizard help new users get up and running quickly, it also enables more experienced users to streamline their workflow for the creation of new projects. The convenient user interface makes it easier to ensure that a project begins with the correct configuration and dependencies.

Designing User Interfaces
Qt Creator provides a fully integrated visual editor, Qt Designer. Qt Designer is a tool for designing and building graphical user interfaces from Qt widgets. Users can compose and customize widgets or dialogs and test them using different styles and resolutions. Widgets and forms created with Qt Designer are integrated seamlessly with programmed code, using the Qt signals and slots mechanism, which lets users easily assign behavior to graphical elements. All properties set in Qt Designer can be changed dynamically within the code. Furthermore, features such as widget promotion and custom plugins allow users to use their own widgets with Qt Designer. Qt Designer is used for editing user interface files. It presents users with an intuitive drag-and-drop interface for composing new user interfaces. The user interfaces that are designed with Qt Designer are fully functional and can be previewed immediately to ensure that the design is as intended. There is no need to recompile the entire project to test out a new design.

Coding
Writing, editing and navigating in source code are core tasks in application development.
Therefore, the code editor is one of the key components of Qt Creator. The code editor can be used in the Edit mode to write code. The code editor offers a number of features that help developers maintain readability and coding style:
- Syntax highlighting for keywords, symbols, and macros in C++ files. In addition, generic highlighting is supported for other types of files.
- Code completion for elements, properties, ids and code snippets. This is also supported for developers' own classes in the current project.
- Checking code syntax and marking errors (with wavy underlining in red) while editing, making it unnecessary to use compilation simply as a way to find typos and syntax errors.
- Auto-indentation for source code layout.
- The ability to collapse and expand functions in the source code (code folding).
- The Locator navigation tool for quick access to files, symbols, hierarchy, and other information.
- Support for refactoring code to improve the internal quality of your application, its performance and extendibility, and code readability and maintainability, as well as to simplify code structure.

In addition to these features, the code editor has other useful features, such as:
- Incremental search that highlights the matching strings in the window while typing. Advanced search allows you to search from currently open projects or files on the file system. In addition, you can search for symbols when you want to refactor code.
- Line numbering and current line highlighting.
- Easy commenting and uncommenting of code.
- Quick switching between method definition and function declaration.
- Bookmarks for easier navigation in the code.

The code editor supports different keyboard shortcuts for faster editing.
It is possible to work without using the mouse at all, allowing developers to keep their hands on the keyboard and work more productively.

Building for Multiple Targets

Qt Creator supports building and running Qt applications for desktop environments (Windows, Linux, and Mac OS) and mobile devices (Symbian, MeeGo, Maemo). It allows developers to specify separate build settings for each development platform and to quickly switch between build targets. By default, shadow builds are used to keep the build-specific files separate from the source. Developers can create separate versions of project files to keep platform-dependent code separate, and can use qmake scopes to select which file to process depending on the platform qmake is run on. As well as supporting qmake, Qt's own build tool, Qt Creator also supports CMake, a popular alternative. CMake is a cross-platform configuration and build tool that works with the native compiler toolchains on Microsoft Windows, Mac OS X, Linux and other supported platforms. However, the only supported build system for mobile applications in Qt Creator is qmake. Qt Creator also supports generic projects, where developers either use an unsupported build system or do not want to associate a build system with their project at all. In such cases, Qt Creator works as a code editor, and build settings can be specified manually for the project.

Debugging

Qt Creator integrates several native debuggers: the GNU Symbolic Debugger (gdb) and the Microsoft Console Debugger (CDB), as well as an internal JavaScript debugger.
The following figure shows Qt Creator in Debug mode with the debugging tools pane below the code editor. In Debug mode, developers can perform common debugging tasks, including the following:

- Interrupt program execution.
- Step through the program line-by-line or instruction-by-instruction.
- Set breakpoints.
- Examine call stack contents.
- Examine and modify registers and memory contents of the debugged program.
- Examine and modify the values of local and global variables.
- Examine the list of loaded shared libraries.
- Create snapshots of the current state of the debugged program and re-examine them later.

Qt Creator displays the raw information provided by the native debuggers in a clear and concise manner. This simplifies the debugging process as much as possible without losing the power of the native debuggers. In addition to the generic IDE functionality provided by the stack view, the views for locals and expressions, registers, and so on, Qt Creator includes features that make debugging Qt-based applications easy. The debugger plugin understands the internal layout of several Qt classes, as well as most containers of the C++ Standard Library and some gcc and Symbian extensions, and it uses this deeper understanding to present objects of such classes in a useful way. If Qt Creator is installed as part of a Qt SDK, the GNU Symbolic Debugger is installed automatically and developers are ready to start debugging as soon as they create a new project; the setup can, however, be changed, for example to use the debugging tools for Windows. Developers can also connect mobile devices to the development PC and debug processes running on the devices.

Analyzing Code

The memory available on devices is limited and should be used carefully. Qt Creator integrates the Valgrind code analysis tools for detecting memory leaks and profiling function execution.
You must download and install the Valgrind tools separately to use them from Qt Creator. The QML Profiler, which lets you profile your Qt Quick applications, is installed as part of Qt Creator.

Using Version Control Systems

The recommended way to set up a project is to use a version control system. Only project source files should be stored; files generated by the build system or by Qt Creator should not. Other approaches are possible, but we do not recommend using network resources, for example. Qt Creator supports a number of version control systems, integrating their use into the working environment. Supported systems include Bazaar, CVS, Git, Mercurial, Perforce, and Subversion. Configuration is straightforward, with common settings for version control located together and features for specific version control systems located in Tools sub-menus. Output for each system is shown in the Version Control output pane. User interface elements for displaying commits and managing repositories are also provided for some systems.

Getting Help

From time to time, developers may need further information about a certain class, function, or other part of the Qt API. All the Qt documentation and examples are accessible via the Qt Help plugin in Qt Creator. To view the documentation, the Help mode is used, where most of the window is devoted to the help text. While working with source code in Edit mode, developers can access context-sensitive help by moving the text cursor to a Qt class or function and then pressing the F1 key. The documentation is displayed in a panel on the right side of the code editor, as shown in the following figure. It is also possible to add external documentation to Qt Creator, complementing or replacing the existing documentation as required.

Summary

Qt Creator offers a complete development environment for Qt application creation.
It is a lightweight tool with a strict focus on the needs of Qt developers, productivity, and usability. Key features are the advanced C++ code editor and the graphical user interface for debugging C++ applications. Integrated Qt Designer, Qt Help, and the Locator tool for quick navigation make Qt Creator the ideal environment for developing Qt applications. Qt Creator's mode-centric way of working helps developers focus on the task at hand by presenting only the relevant user interface features. Support for cross-platform build systems and version control software ensures that Qt Creator can be integrated fully into the working environment of a development team. In addition, Qt Quick tools for creating fluid user interfaces allow UI designers to join the team and work in close cooperation with the developers.
