Information Systems and Databases: Chinese-English Foreign Literature Translations


Web Information Systems Graduation Thesis: Chinese-English Foreign Literature Translation


With the popularity of Internet applications, how to build the many kinds of Web information systems has become a pressing issue. In essence, building a Web information system means developing a Web application around a Web repository, that is, a database that serves as the core store for the various kinds of information on the Web site. Current Web repository development technologies are numerous and vary widely in character: technologies of different periods, levels, and purposes coexist, and the dizzying array makes them hard to choose among. Among the more popular options, the Java Servlet development approach is a practical choice.

A Servlet is a small program that runs on the Web server. It can complete much work that a client-side Applet cannot: because it runs on the server rather than the client, it does not have to be downloaded and is not subject to client-side security restrictions, so it runs much faster. Just as an Applet runs in a browser and extends the browser's capabilities, a Servlet runs in a Web server, through a Java Servlet engine, and extends the server's capabilities.
Therefore, we can say that a Servlet is an applet that runs on a Web server, written as a Java program against the Java Servlet API and its associated classes and packages.

1 Servlet access models

There are three access models for Servlets.

(1) First access model:
1. The browser issues a retrieval request to the Web server.
2. The Web server receives the request and forwards it to the Servlet engine.
3. The Servlet engine executes the requested Servlet, which queries the database directly through JDBC.
4. The Servlet generates an HTML page from the search results returned through JDBC and passes the page back to the Web server.
5. The Web server sends the page back to the browser.

(2) Second access model:
1. The browser issues a retrieval request to the Web server.
2. The Web server receives the request and forwards it to the Servlet engine.
3. The Servlet engine executes the requested Servlet, which hands the retrieval over to a data-access JavaBean.
4. The data-access JavaBean queries the database through JDBC and stores the search results in itself.
5. The Servlet takes the search results out of the data-access JavaBean, generates an HTML page, and passes the page back to the Web server.
6. The Web server sends the page back to the browser.

(3) Third access model:
1. The browser issues a retrieval request to the Web server.
2. The Web server receives the request and forwards it to the Servlet engine.
3. The Servlet engine executes the requested Servlet, which queries the database directly through JDBC and stores the search results in a result JavaBean.
4. The Servlet takes the search results out of the result JavaBean and uses a JSP file to format the output page.
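The data-access JavaBean of the second model can be sketched in plain Java. This is only an illustrative sketch: ProductSearchBean and its stubbed search() method stand in for a real bean that would run a JDBC query, so the example stays self-contained and runnable without a database or a Servlet container.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical data-access JavaBean from the second model: it performs the
// retrieval (stubbed here instead of a real JDBC call) and stores the
// results in itself for the Servlet to read back.
class ProductSearchBean {
    private final List<String> results = new ArrayList<>();

    // A real bean would open a JDBC connection and run a query here;
    // the stub keeps the sketch self-contained.
    public void search(String keyword) {
        results.clear();
        if ("chair".equals(keyword)) {
            results.add("office chair");
            results.add("rocking chair");
        }
    }

    public List<String> getResults() {
        return results;
    }
}

public class AccessModelTwo {
    // Stands in for the Servlet step: take the results out of the bean and
    // generate the HTML page that goes back to the Web server.
    static String renderHtml(ProductSearchBean bean) {
        StringBuilder html = new StringBuilder("<html><body><ul>");
        for (String row : bean.getResults()) {
            html.append("<li>").append(row).append("</li>");
        }
        return html.append("</ul></body></html>").toString();
    }

    public static void main(String[] args) {
        ProductSearchBean bean = new ProductSearchBean();
        bean.search("chair");
        System.out.println(renderHtml(bean));
    }
}
```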
2 Servlet functionality and life cycle

2.1 Servlet functions
(1) Create and return dynamic Web pages based on client requests.
(2) Create HTML fragments that can be embedded as parts of existing HTML pages.
(3) Communicate with other server resources, including databases and Java-based applications.
(4) Handle multiple client connections, receiving input from more than one client and broadcasting results to multiple clients; a Servlet can, for example, act as the server for a multi-participant game.
(5) Apply special handling to information filtered by MIME type, such as image conversion and server-side includes (SSI).
(6) Provide custom processing available to all of the server's standard routines.

2.2 Servlet life cycle
The life cycle of a Servlet begins when it is loaded into the Web server's memory and ends when the Servlet is terminated or reloaded.
(1) Loading. A Servlet is loaded at the following times:
1. When the Web server starts, if the automatic-load option is configured.
2. When a client issues a request for the Servlet for the first time after the Web server has started.
3. When the Servlet is reloaded.
After loading the Servlet, the Web server creates a Servlet instance and calls the Servlet's init() method. During this initialization phase, the Servlet's initialization parameters are passed to the Servlet configuration object.
(2) Termination. When the Web server no longer needs the Servlet, or when it reloads a new instance of the Servlet, the server calls the Servlet's destroy() method and removes it from memory.
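The loading, initialization, and termination sequence just described can be sketched with plain Java classes. MiniServlet and its init/service/destroy methods are hypothetical stand-ins for the real javax.servlet API; the sketch shows only the order of the calls, not a real engine.

```java
import java.util.Map;

// Hypothetical stand-in for javax.servlet: one instance per loaded servlet,
// init() once with configuration parameters, service() per request,
// destroy() on unload or reload.
class MiniServlet {
    private String greeting;
    private boolean destroyed = false;

    // Called once, right after the engine loads the servlet; the
    // initialization parameters arrive through the config object.
    public void init(Map<String, String> config) {
        greeting = config.getOrDefault("greeting", "hello");
    }

    // Called for every client request between init() and destroy().
    public String service(String user) {
        return greeting + ", " + user;
    }

    // Called when the server unloads or reloads the servlet.
    public void destroy() {
        destroyed = true;
    }

    public boolean isDestroyed() { return destroyed; }
}

public class MiniEngine {
    public static void main(String[] args) {
        MiniServlet servlet = new MiniServlet();        // load: one instance
        servlet.init(Map.of("greeting", "welcome"));    // initialization phase
        System.out.println(servlet.service("alice"));   // many requests...
        System.out.println(servlet.service("bob"));
        servlet.destroy();                              // termination
    }
}
```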
3 How to call a Servlet

There are five ways to call a Servlet: in a URL, in a form tag, in an HTML page, in a JSP file, and in an ASP file. They are introduced in turn below.

(1) Calling the Servlet in a URL. Simply type the Servlet's URL into the browser in the form http://your_webserver_name/servlet_path/servlet_name, where your_webserver_name is the name of the Web server hosting the Servlet, servlet_path is the Servlet's path, and servlet_name is the Servlet's real name or an alias.

(2) Calling the Servlet with a form tag. A form tag lets users enter data on a Web page and submit the input to the Servlet. The Servlet can receive the submitted data in different ways. The form body would contain, for example, text input areas, buttons, and other controls.

(3) Calling the Servlet in an HTML page. Using the SERVLET tag, there is no need to create a complete HTML page. Instead, the Servlet's output is only part of the HTML page (an HTML fragment) that is dynamically embedded into the static text of the original HTML page. All of this happens on the server, and only the resulting HTML page is sent to the user. The SERVLET tags are contained in the original HTML page; the Servlet is invoked between these two markers, and the Servlet's response replaces everything between the markers as well as the markers themselves. For example:
<SERVLET NAME="myservlet" CODE="myservlet.class" CODEBASE="url" initparam="value">
<PARAM NAME="parm1" VALUE="value1">
<PARAM NAME="parm2" VALUE="value2">
</SERVLET>

(4) Calling the Servlet in a JSP file. The format used to call a Servlet in a JSP file is exactly the same as in an HTML page, and the principle is identical.
The only difference is that the JSP file is dynamic rather than a static HTML page.

(5) Calling the Servlet in an ASP file. If you have legacy ASP files on Microsoft Internet Information Server (IIS) and cannot port them to JSP files, you can call a Servlet from the ASP file. This must go through a special ActiveX control; an ASP file can call a Servlet only through it.

4 How a Servlet uses a Connection Manager to manage database connections efficiently

(1) Functionality of the Connection Manager. Compared with non-Web applications, Web-based database access incurs higher and less predictable overhead, because Web users connect and disconnect far more frequently. Normally, the resources used to connect to and disconnect from the database far exceed the resources used in retrieval itself. The Connection Manager's function is to minimize this additional drain on database resources and thereby achieve the best database access performance.

The Connection Manager shares the connection overhead by establishing a connection pool, making connections available to Servlets across many user requests. In other words, each user request bears only a small portion of the connect/disconnect cost. The initial resources go into establishing the pool's connections; the remaining connect/disconnect overhead is small, because it only reuses existing connections.

A Servlet uses the connection pool as follows. When a user request reaches the Servlet, the Servlet takes an existing connection from the pool, which means the request causes no connection overhead on the database system. After the Servlet finishes, it returns the connection to the Connection Manager's pool for use by other Servlets; thus the user request causes no disconnection overhead either.

The Connection Manager also allows users to control the number of concurrent connections to a database product.
When the database license agreement limits the number of users, this feature is very useful: create a buffer pool for the database and set the pool's maximum-connections parameter to the maximum number of users permitted by the database product's license. If other programs connect to the database without going through the Connection Manager, this method cannot be guaranteed to work.

(2) Structure of the Connection Manager. The Connection Manager maintains a pool of open connections to a specific database.

Step 1: When the first Servlet tries to communicate with the Connection Manager, the Java application server loads the Connection Manager. The Connection Manager stays loaded for as long as the Java application server is running.
Step 2: The Java application server passes the user request to a Servlet.
Step 3: The Servlet requests a connection from the Connection Manager's pool.
Step 4: The pool allocates an existing idle connection to the Servlet.
Step 5: The Servlet uses the connection to talk to the database directly; this exchange uses the standard API for the particular database.
Step 6: The database returns data through the Servlet's connection.
Step 7: When the Servlet has finished communicating with the database, it returns the connection to the Connection Manager's pool for use by other Servlets.
Step 8: The Servlet sends the response back to the user through the Java application server.

If a Servlet requests a connection and the pool has no idle connection, the Connection Manager communicates with the database directly. The Connection Manager will then:
Step 9: Request a new connection from the database.
Step 10: Add the connection to the pool. If the pool has reached its configured ceiling, the Connection Manager will not add a new connection to the pool.

(3) Performance characteristics of the Connection Manager. Creating a new connection in the pool is a high-overhead task, and each new connection consumes resources on the database.
Therefore, the Connection Manager should, as far as possible, satisfy Servlet requests with existing pooled connections. At the same time, it must keep idle connections in the pool to a minimum, because they are a great waste of system resources. The Connection Manager carries out these minimizing and maximizing tasks as Servlets execute.

The Connection Manager maintains, for each connection, a verification timestamp, a most-recently-used timestamp, and an in-use flag. When a Servlet first takes a connection, the verification timestamp and the most-recently-used timestamp are set to the current time, and the in-use flag is set to true.

The Connection Manager can take back a long-unused connection from a Servlet; the length of time is specified by the Connection Manager's maximum-usage-period parameter. The Connection Manager inspects the most-recently-used mark of connections that are in use; if the difference between the most-recently-used time and the current time is greater than the maximum-usage-period configuration parameter, the connection is considered a residual connection, which indicates that the Servlet holding it has stopped or is not responding. A residual connection is returned to the pool for other Servlets: its in-use flag is set to false, and its verification timestamp is set to the current time. If a Servlet intends to use a connection for several database communications over a longer period, the Servlet must be coded so that, each time it uses the connection, it confirms that it still holds it.

The Connection Manager can also remove idle connections from the pool, since they would waste resources. To determine which connections are idle, the Connection Manager checks the in-use flags and timestamps, by periodically scanning the pool's information.
The Connection Manager checks the connections not in use by any Servlet (those whose in-use flag is false). If the difference between the most-recently-used time and the current time exceeds the maximum-idle-time configuration parameter, the connection is considered idle. Idle connections are removed from the pool, down to the lower limit specified by the minimum-connections configuration parameter.

Translation: With the popularization of the Internet, the establishment of all kinds of Web information systems has become a pressing issue.
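The pooling behavior described in section 4 can be sketched as follows. This is a minimal illustration, not any vendor's Connection Manager: the "connections" are plain objects handed out by a pool, the maximum-connections ceiling mirrors the license-limit parameter described above, and the timestamp bookkeeping for residual and idle connections is omitted for brevity.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Sketch of the Connection Manager's buffer pool. Plain Objects stand in
// for real database connections so the pool logic runs on its own.
public class ConnectionPool {
    private final Deque<Object> idle = new ArrayDeque<>();
    private final Set<Object> inUse = new HashSet<>();
    private final int maxConnections;   // e.g. the database license limit

    public ConnectionPool(int maxConnections) {
        this.maxConnections = maxConnections;
    }

    // A servlet requests a connection: reuse an idle one when possible, so
    // the request causes no connect overhead; otherwise open a new one,
    // unless the pool has reached its configured ceiling (steps 9-10).
    public synchronized Object acquire() {
        Object conn = idle.pollFirst();
        if (conn == null) {
            if (inUse.size() >= maxConnections) {
                throw new IllegalStateException("connection limit reached");
            }
            conn = new Object();  // stands in for opening a real connection
        }
        inUse.add(conn);
        return conn;
    }

    // The servlet returns the connection to the pool instead of closing it,
    // so the disconnect overhead is avoided too (step 7).
    public synchronized void release(Object conn) {
        if (inUse.remove(conn)) {
            idle.addFirst(conn);
        }
    }

    public synchronized int idleCount()  { return idle.size(); }
    public synchronized int inUseCount() { return inUse.size(); }

    public static void main(String[] args) {
        ConnectionPool pool = new ConnectionPool(2);
        Object a = pool.acquire();     // opens a new connection
        pool.release(a);               // back to the pool, still open
        Object b = pool.acquire();     // the same connection, reused
        System.out.println(a == b);    // no second connect was needed
    }
}
```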

Databases: Chinese-English Foreign Literature Translation


Database Management Systems

A database (sometimes spelled data base), also called an electronic database, is any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.

A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Using keywords and various sorting commands, users can rapidly search, rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregates of data.

Complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information.
In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers.

A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users.

Several major trends are emerging that enhance the value and usefulness of database management systems:
Managers, who require more up-to-date information to make effective decisions.
Customers, who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.
Users, who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.
Organizations, which discover that information has strategic value and utilize their database systems to gain an edge over their competitors.

The Database Model
A data model describes a way to structure and manipulate the data in a database. The structural part of the model specifies how data should be represented (such as trees, tables, and so on). The manipulative part of the model specifies the operations with which to add, delete, display, maintain, print, search, select, sort, and update the data.

Hierarchical Model
The first database management systems used a hierarchical model; that is, they arranged records into a tree structure. Some records are root records, and all others have unique parent records.
The structure of the tree is designed to reflect the order in which the data will be used: the record at the root of a tree will be accessed first, then records one level below the root, and so on.

The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you know, an organization chart often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels. Note that within a strict hierarchy, each level of management may have many employees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.

In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field, and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.

Relational Model
A major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management called the relational model, which uses a table as its data structure.

The relational database is the most widely used database structure. Data is organized into related tables. Each table is made up of rows, called records, and columns, called fields. Each record contains fields of data about some specific item. For example, in a table containing information on employees, a record would contain fields of data such as a person's last name, first name, and street address.

Structured Query Language (SQL) is a query language for manipulating data in a relational database. It is nonprocedural, or declarative: the user need only specify an English-like description of the operation and the desired record or combination of records.
A query optimizer translates the description into a procedure to perform the database manipulation.

Network Model
The network model creates relationships among data through a linked-list structure in which subordinate records can be linked to more than one parent record. This approach combines records with links, which are called pointers. The pointers are addresses that indicate the location of a record. With the network approach, a subordinate record can be linked to a key record and at the same time itself be a key record linked to other sets of subordinate records. The network model historically has had a performance advantage over other database models. Today, such performance characteristics are only important in high-volume, high-speed transaction processing such as automatic teller machine networks or airline reservation systems.

Both hierarchical and network databases are application specific. If a new application is developed, maintaining the consistency of databases in different applications can be very difficult. For example, suppose a new pension application is developed. The data are the same, but a new database must be created.

Object Model
The newest approach to database management uses an object model, in which records are represented by entities called objects that can both store data and provide methods or procedures to perform specific tasks. The query language used for the object model is the same object-oriented programming language used to develop the database application. This can create problems because there is no simple, uniform query language such as SQL. The object model is relatively new, and only a few examples of object-oriented databases exist. It has attracted attention because developers who choose an object-oriented programming language want a database based on an object-oriented model.

Distributed Database
Similarly, a distributed database is one in which different parts of the database reside on physically separated computers.
One goal of distributed databases is access to information without regard to where the data might be stored. Keep in mind that once users and their data are separated, communication and networking concepts come into play.

Distributed databases require software that resides partially in the larger computer. This software bridges the gap between personal and large computers and resolves the problems of incompatible data formats. Ideally, it would make the mainframe databases appear to be large libraries of information, with most of the processing accomplished on the personal computer.

A drawback of some distributed systems is that they are often based on what is called a mainframe-centric model, in which the larger host computer is seen as the master and the terminal or personal computer is seen as a slave. There are some advantages to this approach. With databases under centralized control, many of the problems of data integrity that we mentioned earlier are solved. But today's personal computers, departmental computers, and distributed processing require computers and their applications to communicate with each other on a more equal, or peer-to-peer, basis. In a database, the client/server model provides the framework for distributing databases.

One way to take advantage of many connected computers running database applications is to distribute the application into cooperating parts that are independent of one another. A client is an end user or computer program that requests resources across a network. A server is a computer running software that fulfills those requests across a network. When the resources are data in a database, the client/server model provides the framework for distributing databases.

A file server is software that provides access to files across a network. A dedicated file server is a single computer dedicated to being a file server.
This is useful, for example, if the files are large and require fast access. In such cases, a minicomputer or mainframe would be used as a file server. A distributed file server spreads the files around on individual computers instead of placing them on one dedicated computer.

Advantages of the latter include the ability to store and retrieve files on other computers and the elimination of duplicate files on each computer. A major disadvantage, however, is that individual read/write requests are moved across the network, and problems can arise when updating files. Suppose a user requests a record from a file and changes it while another user requests the same record and changes it too. The solution to this problem is called record locking, which means that the first request makes other requests wait until the first request is satisfied. Other users may be able to read the record, but they will not be able to change it.

A database server is software that services requests to a database across a network. For example, suppose a user types in a query for data on his or her personal computer. If the application is designed with the client/server model in mind, the query-language part on the personal computer simply sends the query across the network to the database server and requests to be notified when the data are found.

Examples of distributed database systems can be found in the engineering world. Sun's Network File System (NFS), for example, is used in computer-aided engineering applications to distribute data among the hard disks in a network of Sun workstations.

Distributing databases is an evolutionary step because it is logical that data should exist at the location where they are being used. Departmental computers within a large corporation, for example, should have data reside locally, yet those data should be accessible by authorized corporate management when they want to consolidate departmental data.
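The record-locking idea can be sketched with a read-write lock. A hypothetical in-memory RecordStore stands in for a shared file; note that this sketch is slightly stricter than the scheme described above, since readers also wait briefly while a write is in progress.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of record locking: while one request is changing a record, a
// second change must wait; reads proceed whenever no write is underway.
public class RecordStore {
    private final Map<Integer, String> records = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public String read(int id) {
        lock.readLock().lock();        // many readers may hold this at once
        try {
            return records.get(id);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void write(int id, String value) {
        lock.writeLock().lock();       // one writer at a time; others wait
        try {
            records.put(id, value);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RecordStore store = new RecordStore();
        store.write(1, "original");
        System.out.println(store.read(1));
        store.write(1, "updated");     // would block if another writer held the lock
        System.out.println(store.read(1));
    }
}
```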
DBMS software will protect the security and integrity of the database, and the distributed database will appear to its users as no different from the non-distributed database.

In this information age, the data server has become the heart of a company. This one piece of software controls the rhythm of most organizations and is used to pump information lifeblood through the arteries of the network. Because of the critical nature of this application, the data server is also one of the most popular targets for hackers. If a hacker owns this application, he can cause the company's "heart" to suffer a fatal arrest.

Ironically, although most users are now aware of hackers, they still do not realize how susceptible their database servers are to hack attacks. Thus, this article presents a description of the primary methods of attacking database servers (also known as SQL servers) and shows you how to protect yourself from these attacks.

You should note that this information is not new. Many technical white papers go into great detail about how to perform SQL attacks, and numerous vulnerabilities have been posted to security lists that describe exactly how certain database applications can be exploited. This article was written for the curious non-SQL experts who do not care to know the details, and as a review for those who do use SQL regularly.

What Is a SQL Server?
A database application is a program that provides clients with access to data. There are many variations of this type of application, ranging from the expensive enterprise-level Microsoft SQL Server to the free and open source MySQL. Regardless of the flavor, most database server applications have several things in common.

First, database applications use the same general programming language known as SQL, or Structured Query Language. This language, also known as a fourth-generation language due to its simplistic syntax, is at the core of how a client communicates its requests to the server.
Using SQL in its simplest form, a programmer can select, add, update, and delete information in a database. However, SQL can also be used to create and design entire databases, perform various functions on the returned information, and even execute other programs.

To illustrate how SQL can be used, the following is an example of a simple standard SQL query and a more powerful SQL query:

Simple: "Select * from dbFurniture.tblChair"
This returns all information in the table tblChair from the database dbFurniture.

Complex: "EXEC master..xp_cmdshell 'dir c:\'"
This short SQL command returns to the client the list of files and folders under the c:\ directory of the SQL server. Note that this example uses an extended stored procedure that is exclusive to MS SQL Server.

The second function that database server applications share is that they all require some form of authenticated connection between client and host. Although the SQL language is fairly easy to use, at least in its basic form, any client that wants to perform queries must first provide some form of credentials that will authorize the client; the client also must define the format of the request and response.

This connection is defined by several attributes, depending on the relative location of the client and what operating systems are in use. We could spend a whole article discussing various technologies such as DSN connections, DSN-less connections, RDO, ADO, and more, but these subjects are outside the scope of this article. If you want to learn more about them, a little Google'ing will provide you with more than enough information. However, the following is a list of the more common items included in a connection request:
Database source
Request type
Database
User ID
Password

Before any connection can be made, the client must define what type of database server it is connecting to.
This is handled by a software component that provides the client with the instructions needed to create the request in the correct format. In addition to the type of database, the request type can be used to further define how the client's request will be handled by the server. Next comes the database name, and finally the authentication information.

All the connection information is important, but by far the weakest link is the authentication information, or lack thereof. In a properly managed server, each database has its own users with specifically designated permissions that control what type of activity they can perform. For example, one user account would be set up as read-only for applications that only need to access information. Another account should be used for inserts or updates, and maybe even a third account would be used for deletes. This type of account control ensures that any compromised account is limited in functionality. Unfortunately, many database programs are set up with null or easy passwords, which leads to successful hack attacks.

Translation: Introduction to Database Management Systems. A database (sometimes spelled "data base"), also called an electronic database, is a specially organized collection of data or information intended for rapid search and retrieval by a computer.
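As an illustration of why weak authentication and careless query construction are dangerous (this example is not from the article itself), the following sketch shows how concatenating user input into a SQL string lets a crafted input change the meaning of the query:

```java
// Illustration of why the authentication step is the weakest link: building
// SQL by string concatenation lets a crafted "password" rewrite the query.
public class InjectionDemo {
    // Naive query builder, as a vulnerable application might write it.
    static String loginQuery(String user, String password) {
        return "SELECT * FROM users WHERE name = '" + user
             + "' AND password = '" + password + "'";
    }

    public static void main(String[] args) {
        // Normal input produces the intended query.
        System.out.println(loginQuery("alice", "secret"));
        // Malicious input turns the WHERE clause into a tautology, so the
        // query matches every row and the login check passes.
        System.out.println(loginQuery("alice", "' OR '1'='1"));
    }
}
```

This is why the per-account permissions described above matter: even when such an injection succeeds, a read-only account limits what the attacker can do.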

Information and Computing Science: Chinese-English Foreign Literature Translation


(This document contains the English original and the Chinese translation.)

Abstract: Under the network environment, the joint construction and sharing of library information resources means that libraries of all levels and types, responding to society's demand for information, use computers, communications, electronics, multimedia, and other advanced information technologies over the network to carry out, in a highly coordinated way, the comprehensive cooperative development and utilization of their collected information resources and of network resources. The rapid development of the market economy, the constant renewal of network technology, and the arrival of the information age have determined that the future trend of library development is the joint construction and sharing of information resources, a point on which social consensus has already been reached. This is because the joint construction and sharing of information resources is an important way for libraries to resolve the contradiction between the explosion of knowledge and information and the insufficiency of any single collection.

Key words: network; library; information resources; joint construction and sharing

Under the network environment, the joint construction and sharing of library information resources means that libraries of all levels and types, responding to society's demand for information, use computers, communications, electronics, multimedia, and other advanced information technologies over the network to carry out, in a highly coordinated way, the comprehensive cooperative development and utilization of their collected information resources and of network resources.

1. The joint construction and sharing of information resources is the path that the future development and utilization of library information resources must take.

The rapid development of the market economy, the constant renewal of network technology, and the arrival of the information age have determined that the future trend of library development is the joint construction and sharing of information resources, a point on which social consensus has already been reached. This is because:

Information Management and Information Systems Thesis: Chinese-English Foreign Literature Translation


Construction of a Network Management Information System for the Agricultural Products Supply Chain Based on 3PLs

Abstract
The necessity of constructing a network management information system for the 3PLs-based agricultural supply chain is analyzed, showing that 3PLs can improve the overall competitive advantage of the agricultural supply chain. 3PLs turns homogeneous management into specialized management of logistics services and achieves an alliance of the subjects at the different nodes of the agricultural products supply chain. A network management information system structure for the agricultural products supply chain based on 3PLs is constructed, including four layers (the network communication layer, the hardware and software environment layer, the database layer, and the application layer) and seven function modules (centralized control, transportation process management, material and vehicle scheduling, customer relationship, storage management, customer inquiry, and financial management). A framework for the network management information system of the agricultural products supply chain based on 3PLs is put forward. The management of 3PLs mainly includes purchasing management, supplier relationship management, planning management, customer relationship management, storage management, and distribution management. Thus, a management system integrating the internal and external operations of agricultural enterprises is obtained.
The network management information system of the agricultural products supply chain based on 3PLs realizes the effective sharing of enterprise information among the nodes of the supply chain, establishing a long-term partnership revolving around the 3PLs core enterprise, as well as a supply chain with stable relationships based on the supply chain network system, so as to improve the circulation efficiency of agricultural products and to open up their sales market.

Key words: 3PLs (third-party logistics), agricultural products supply chain, network management information system, China

3PLs means that production enterprises entrust their logistics activities to professional logistics service firms in order to concentrate on the core business, keep close contact with the logistics enterprise through an information system, and achieve a logistics operation and management mode with full control over logistics. According to the 3PLs requirements for information technology, a supply chain management information system based on 3PLs is a management mode with the 3PLs enterprise at the core, using EDI technology, GIS/GPS systems, the B/S mode and other technologies. Integration, processing and application of information by 3PLs enterprises in the supply chain management information system are fully applied in order to reduce the cost of logistics and to improve the service level of logistics. At present, management information technology in China is just at the initial stage. The existing management information systems offer insufficient information to the 3PLs enterprises engaged in the circulation of agricultural products. Besides, the construction of logistics data processing systems is imperfect, and truly professional, information-based 3PLs enterprises for the circulation of agricultural products have not yet been realized. At the same time, the 3PLs enterprise for agricultural products has just started in China.
Logistics in the agricultural supply chain with the 3PLs enterprise as the core is still time-consuming, inefficient and low-level, and can hardly meet the needs of the rapid development of the rural market and of social productive forces. Therefore, it is particularly important and urgent to construct a management information system for the agricultural products supply chain under the current Internet environment. Problems in the management of the supply chain of agricultural products are analyzed, and a network management information system of the agricultural products supply chain based on 3PLs is constructed, in order to offer references for information management in the supply chain of agricultural products in China.

1 Necessity of constructing the network management information system of agricultural products supply chain based on 3PLs
Agricultural products are seasonal, perishable and vulnerable. With the improvement of income levels, consumers have increasingly high requirements for the diversification, personalization, just-in-time delivery and environmental friendliness of agricultural products, which requires faster, more professional and better organized logistics. At the same time, the supply chain of agricultural products is characterized by the special purpose of funds, the uncertainty of the market, and unbalanced market development. Thus, the support of a supply chain management information system is needed during the circulation of agricultural products. The construction of market integration, as well as the integration of production, supply and marketing, urgently needs a new management information system for agricultural products, together with an accompanying legal support system, in order to reduce costs and increase profits for agricultural enterprises.
The application of 3PLs in the supply chain of agricultural products can solve this problem. Therefore, we should give full play to the central hub function of 3PLs enterprises in the agricultural products supply chain, increase investment in the informationization of the supply chain, and promote the construction of the logistics operation system and the management information system.

1.1 Improving the overall competitive advantage of the agricultural products supply chain by 3PLs
3PLs is a new logistics organizational form built on modern information technology, as well as a complementary, win-win strategic alliance formed by signing a contract with the party being served. Taking 3PLs as the professional core enterprise in the production and circulation of agricultural products helps to consolidate the resources of the whole agricultural supply chain. The specialized service for raw materials and product distribution has greatly improved the logistics efficiency of traditional enterprises. At the same time, the construction of the management information system of the agricultural products supply chain based on 3PLs has made up for the shortage of information in the agricultural market, improved the efficiency of the flow of agricultural products, connected all the links in the supply chain into an organic whole in a reasonable and effective way, and enhanced the overall competitive advantage and economic benefits.
The 3PLs platform has greatly streamlined the production and circulation processes of traditional agricultural enterprises and has reduced the costs of raw material procurement and product distribution, so as to better adapt to changes in market demand, realize the rational distribution of resources, and improve the overall competitiveness of the supply chain of agricultural products.

1.2 Changing homogeneous management into the specialized operation of logistics service by 3PLs
Owing to the characteristics of agricultural products, market requirements for logistics vary widely. Since traditional enterprises all try to obtain a competitive advantage, market competition in commodity circulation is fierce. Logistics market behavior therefore shows the characteristics of homogeneity, and profits keep falling; to seize customers, some enterprises even operate at a loss. 3PLs enterprises share business risk with their partners and carry out operations according to the customer's item-number, time and cost requirements through the integration and utilization of resources. As a means of supply chain integration for agricultural products, the specialized operation of 3PLs can help the stakeholders of the supply chain obtain more demand information on agricultural products, and can reduce their circulation cost.

1.3 Alliance of the subjects at the supply chain nodes of agricultural products by 3PLs
3PLs stresses the relationship of "mutual complementarity, benefit sharing, information sharing" among the stakeholders at the different nodes of the supply chain. The development of agricultural producers, suppliers and retailers is limited if they rely only on their own resources.
3PLs enters the outside service market, integrates resources through strategic alliances, ensures that each subject focuses on its core business, reduces cost through economies of scale, strengthens resistance to risk, and helps achieve quick response to market demand through information sharing. At the same time, contract-oriented 3PLs enterprises unify the interests of all subjects in the supply chain of agricultural products, emphasize the strategic partnership of both parties, and ease the market competition of related industries in agricultural markets. Subjects both downstream and upstream of the supply chain share information and establish a long-term partnership with the 3PLs enterprise as the core.

2 Construction of the network management information system of agricultural supply chain based on 3PLs
2.1 Construction of the structural system
The 3PLs platform offers network communications and system services to the subjects in the agricultural supply chain. Fig. 1 illustrates the structural system of the network management information system of the agricultural supply chain based on 3PLs.

Fig. 1 Structural system of the network management information system of agricultural supply chain based on 3PLs

Fig. 1 shows that the basic hardware of the system is a combination of network transmission media and network equipment, that is, the network communication layer. Hardware facilities, the corresponding system software, the operating system and network management software together constitute the software and hardware environment layer. This layer provides the necessary software and hardware facilities for 3PLs enterprises during the storage and management of agricultural products data. The database layer is responsible for managing the data sources of agricultural information resources and network systems, and offers data integration to the application layer.
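The four-layer structure just described, together with the seven application-layer function modules enumerated below, can be summarized in a small sketch (a purely illustrative data-structure view; the component names under each layer are paraphrased from the text, not from any real implementation):

```python
# Illustrative sketch of the four layers of the 3PLs-based network
# management information system and their components, with the seven
# function modules registered under the application layer.
SYSTEM_LAYERS = {
    "network communication layer": [
        "network transmission media", "network equipment"],
    "software and hardware environment layer": [
        "hardware facilities", "system software",
        "operating system", "network management software"],
    "database layer": [
        "agricultural information resources", "network system data sources"],
    "application layer": [
        "centralized control",
        "transportation process management",
        "material and vehicle scheduling",
        "customer relationship",
        "storage management",
        "customer inquiry",
        "financial management"],
}

def components_of(layer):
    """Return the components registered under a given layer."""
    return SYSTEM_LAYERS[layer]
```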
The 3PLs standard system includes the overall standard, the network infrastructure standard, the application support standard, the application standard, the information security standard, and the management standard. The safety system of 3PLs includes security management, security infrastructure, and security services. The system is composed of seven function modules: the centralized control module, transportation process management module, material and vehicle scheduling module, customer relationship module, storage management module, customer query module, and financial management module (Fig. 2). Their function is to ensure information fluency and system security while the 3PLs enterprise operates and integrates resources. These modules improve the service modules at the different nodes of the agricultural supply chain and reduce the operational risk of the system, so that the system becomes more structured, complete and rational.

2.2 Framework of the management system
Based on existing research results, the business and modules of modern logistics management, and management information systems, Fig. 3 reports the management system of internally and externally integrated agricultural enterprises, following the circulation of agricultural products from the manufacturer, supplier and retail terminal to the consumer.

Fig. 2 Function modules of the 3PLs network management information system
Fig. 3 The management system of internal and external integrated agricultural enterprises

Fig. 3 shows the framework of the network management information system of the agricultural supply chain based on 3PLs. The whole system, running on an open 3PLs platform, is formed by the four layers of network communication, software and hardware environment, database and application. In the application layer, 3PLs, as the core of the management information system of the agricultural supply chain, plays the role of information processing center.
It mainly manages the planning, inventory and other subsystems; monitors suppliers through the supplier relationship management subsystem; conducts information interaction with the supplier through the procurement management subsystem; and interacts with the supplier, producer and consumer through the customer relationship management and sales management subsystems. Besides, 3PLs is also responsible for logistics management and control through the distribution management subsystem. The management of 3PLs mainly comprises the seven modules of purchasing management, supplier relationship management, planning management, customer relationship management, sales management, inventory management and distribution management. Through effective integration and coordination between 3PLs and the business of partners downstream and upstream of the agricultural supply chain, a management system of internally and externally integrated agricultural enterprises is formed, using the logistics information system to integrate logistics and information flow. In general, the 3PLs enterprise is still at the initial stage in China. The management information system of the agricultural supply chain is not yet perfect and cannot meet the current needs of rapid development and agricultural products circulation in rural China. Thus, there is an urgent need to build a new mode of agricultural logistics, so as to shorten the sales turnover process, lower the production cost of 3PLs enterprises, improve the circulation efficiency of agricultural products, and expand their sales market.

3 Conclusion
Developing modern 3PLs is an inevitable trend of market development.
The design and development of a management information system based on 3PLs can bring spillover benefits to the producer, supplier and retailer of agricultural products. Under the current Internet environment, the management information system of the agricultural supply chain based on 3PLs must be established according to the specific characteristics of the operation mode and the actual business situation of 3PLs enterprises, so as to obtain a management information system suitable for a given enterprise. From the perspective of the overall integration of resources, the network management information system established here connects the interests of the different nodes of the agricultural supply chain into an organic whole, effectively eliminates barriers to information flow, and increases the profits of agriculture-related enterprises and farmers. At the same time, according to the characteristics of agricultural enterprises in China, a rational agricultural products logistics mode of internally and externally integrated agricultural enterprises is established, which offers a reference for the management of the agricultural supply chain in China.

基于第三方物流的农产品供应链网络管理信息系统的建设
摘要 本文对构建基于第三方物流的农产品供应链网络管理信息系统的必要性进行了分析,表明第三方物流可以提高农产品供应链的整体竞争优势。

数据采集系统中英文对照外文翻译文献


中英文对照外文翻译(文档含英文原文和中文翻译)

Data Acquisition Systems

Data acquisition systems are used to acquire process operating data and store it on secondary storage devices for later analysis. Many of the data acquisition systems acquire this data at very high speeds, and very little computer time is left to carry out any necessary, or desirable, data manipulation or reduction. All the data are stored on secondary storage devices and manipulated subsequently to derive the variables of interest. It is very often necessary to design special-purpose data acquisition systems and interfaces to acquire the high-speed process data, and this special-purpose design can be an expensive proposition. Powerful mini- and mainframe computers are used to combine data acquisition with other functions, such as comparisons between the actual and the desired output values, and then to decide on the control action which must be taken to ensure that the output variables lie within preset limits. The computing power required will depend upon the type of process control system implemented. Software requirements for carrying out proportional, ratio or three-term control of process variables are relatively trivial, and microcomputers can be used to implement such process control systems. It would not be possible to use many of the currently available microcomputers for the implementation of high-speed adaptive control systems, which require the use of suitable process models and considerable on-line manipulation of data. Microcomputer-based data loggers are used to carry out intermediate functions such as data acquisition at comparatively low speeds, simple mathematical manipulation of raw data and some forms of data reduction. The first generation of data loggers, without any programmable computing facilities, was used simply for slow-speed data acquisition from up to one hundred channels. All the acquired data could be punched out on paper tape or printed for subsequent analysis.
Such hardwired data loggers are being replaced by a new generation of data loggers which incorporate microcomputers and can be programmed by the user. They offer an extremely good method of collecting process data, using standardized interfaces, and subsequently performing the necessary manipulations to provide the information of interest to the process operator. The data acquired can be analyzed to establish correlations, if any, between process variables and to develop the mathematical models necessary for adaptive and optimal process control. The data acquisition function carried out by data loggers varies from one system to another. Simple data logging systems acquire data from a few channels, while complex systems can receive data from hundreds, or even thousands, of input channels distributed around one or more processes. Rudimentary data loggers scan the selected channels, connected to sensors or transducers, in a sequential manner, and the data are recorded in a digital format. A data logger can be dedicated in the sense that it can only collect data from particular types of sensors and transducers. It is best to use a non-dedicated data logger, since any transducer or sensor can then be connected to the channels via suitable interface circuitry; this facility requires the use of appropriate signal-conditioning modules. Microcomputer-controlled data acquisition facilitates the scanning of a large number of sensors. The scanning rate depends upon the signal dynamics, which means that some channels must be scanned at very high speeds in order to avoid aliasing errors, while there is very little loss of information in scanning other channels at slower speeds. In some data logging applications the faster channels require sampling at speeds of up to 100 times per second, while slow channels can be sampled once every five minutes.
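The mixed-rate scanning described above can be sketched as a per-channel sampling schedule (a minimal model; the channel names are illustrative, while the rates follow the figures in the text: 100 samples/s for a fast channel, one sample every five minutes for a slow one):

```python
# Sketch of per-channel sampling schedules in a programmable data logger:
# each channel carries its own sampling interval (in milliseconds), and a
# scan at time t reads only the channels that are due.
CHANNELS_MS = {
    "fast_pressure": 10,         # 100 Hz
    "slow_temperature": 300_000  # once every five minutes
}

def due_channels(t_ms, last_sample):
    """Return the channels whose sampling interval has elapsed at t_ms."""
    due = []
    for name, interval in CHANNELS_MS.items():
        if t_ms - last_sample.get(name, -interval) >= interval:
            due.append(name)
            last_sample[name] = t_ms
    return due

# Simulate one second in 10 ms steps: the fast channel is read 100 times,
# while the slow channel is read only once (at t = 0).
last, counts = {}, {"fast_pressure": 0, "slow_temperature": 0}
for t_ms in range(0, 1000, 10):
    for name in due_channels(t_ms, last):
        counts[name] += 1
```

A hardwired logger, by contrast, would be forced to read both channels on every scan, accumulating 100 slow-temperature readings where one suffices.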
Conventional hardwired, non-programmable data loggers sample all the channels in a sequential manner, and the sampling frequency of all the channels must be the same. This procedure results in the accumulation of very large amounts of data, some of which is unnecessary, and also slows down the overall effective sampling frequency. Microcomputer-based data loggers can scan some fast channels at a higher frequency than other, slow-speed channels. The vast majority of user-programmable data loggers can scan up to 1,000 analog and 1,000 digital input channels. A small number of more sophisticated data loggers are suitable for acquiring data from up to 15,000 analog and digital channels. The data from digital channels can be in the form of Transistor-Transistor Logic (TTL) or contact-closure signals. Analog data must be converted into digital format before it is recorded, which requires suitable analog-to-digital converters (ADCs). The characteristics of the ADC define the resolution that can be achieved and the rate at which the various channels can be sampled. An increase in the number of bits used in the ADC improves the resolution capability. Successive-approximation ADCs are faster than integrating ADCs. Many microcomputer-controlled data loggers include a facility to program the channel scanning rates; typical scanning rates vary from 2 channels per second to 10,000 channels per second. Most data loggers have a resolution capability of ±0.01% or better, and it is also possible to achieve a resolution of 1 microvolt. The resolution capability, in absolute terms, also depends upon the range of input signals. Standard input signal ranges are 0-10 volt, 0-50 volt and 0-100 volt, and the lowest measurable signal varies from 1 microvolt to 50 microvolt. A higher degree of recording accuracy can be achieved by using modules which accept data in small, selectable ranges.
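The link between ADC bit count, input range and absolute resolution noted above is a one-line calculation (the helper name is ours; the 0-10 V range is one of the standard ranges cited in the text):

```python
def adc_lsb(full_scale_volts, bits):
    """Smallest distinguishable step (1 LSB) of an ideal ADC:
    the full-scale range divided into 2**bits codes."""
    return full_scale_volts / (2 ** bits)

# On a 0-10 V range, an 8-bit converter resolves about 39 mV per step,
# while a 16-bit converter resolves about 0.15 mV: more bits, finer
# resolution, as the text states.
step_8 = adc_lsb(10.0, 8)    # 0.0390625 V
step_16 = adc_lsb(10.0, 16)
```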
An alternative is the auto-ranging facility available on some data loggers. The accuracy with which the data are acquired and logged on the appropriate storage device is extremely important. It is therefore necessary that the data acquisition module be able to reject common-mode noise and common-mode voltage. Typical common-mode noise rejection capabilities lie in the range 110 dB to 150 dB. A decibel (dB) is a term which defines the ratio of the power levels of two signals. Thus, if the reference and actual signals have power levels of Nr and Na respectively, they will have a ratio of n decibels, where

n = 10 log10(Na / Nr)

Protection against maximum common-mode voltages of 200 to 500 volt is available on typical microcomputer-based data loggers. The voltage input to an individual data logger channel is measured, scaled and linearised before any further data manipulations or comparisons are carried out. In many situations it becomes necessary to alter the frequency at which particular channels are sampled, depending upon the values of the data signals received from a particular input sensor. Thus a channel might normally be sampled once every 10 minutes; if, however, the sensor signal approaches the alarm limit, it is obviously desirable to sample that channel once every minute or even faster, so that the operators can be informed, thereby avoiding any catastrophe. Microcomputer-controlled intelligent data loggers may be programmed to alter the sampling frequencies depending upon the values of process signals. Other data loggers include self-scanning modules which can initiate sampling. Conventional hardwired data loggers, without any programming facilities, simply record the instantaneous values of transducer outputs at a regular sampling interval. This raw data often means very little to the typical user.
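Returning to the decibel definition above, the formula is easy to check numerically (using the standard-library `math.log10`; the 120 dB example is our own, chosen from within the 110-150 dB range quoted in the text):

```python
import math

def power_ratio_db(actual, reference):
    """Ratio of two power levels in decibels: n = 10 * log10(Na / Nr)."""
    return 10 * math.log10(actual / reference)

# A power ratio of 100:1 is 20 dB; a 120 dB common-mode rejection
# figure corresponds to a power ratio of 10**12.
assert power_ratio_db(100.0, 1.0) == 20.0
assert power_ratio_db(10**12, 1) == 120.0
```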
To be meaningful, this data must be linearised and scaled, using a calibration curve, in order to determine the real value of the variable in appropriate engineering units. Prior to the availability of programmable data loggers, this function was usually carried out off-line on a mini- or mainframe computer. The raw data values had to be punched out on paper tape, in binary or octal code, to be input subsequently to the computer used for analysis and converted to engineering units. Paper tape punches are slow-speed mechanical devices which reduce the speed at which channels can be scanned. An alternative was to print out the raw data values, which further reduced the data scanning rate. It was not possible to carry out any limit comparisons or provide any alarm information. Every single value acquired by the data logger had to be recorded even though it might not serve any useful purpose during subsequent analysis; many data values only need recording when they lie outside preset low and high limits. If the analog data must be transmitted over any distance, differences in ground potential between the signal source and the final location can add noise to the interface design. In order to separate common-mode interference from the signal to be recorded or processed, devices designed for this purpose, such as instrumentation amplifiers, may be used. An instrumentation amplifier is characterized by good common-mode rejection capability, a high input impedance, low drift, adjustable gain, and greater cost than operational amplifiers. They range from monolithic ICs to potted modules, and larger rack-mounted modules with manual scaling and null adjustments. When a very high common-mode voltage is present, or the need for extremely low common-mode leakage current exists (as in many medical-electronics applications), an isolation amplifier is required.
Isolation amplifiers may use optical or transformer isolation. Analog function circuits are special-purpose circuits that are used for a variety of signal-conditioning operations on signals which are in analog form. When their accuracy is adequate, they can relieve the microprocessor of time-consuming software and computations. Among the typical operations performed are multiplication, division, powers, roots, nonlinear functions such as for linearizing transducers, rms measurements, computing vector sums, integration and differentiation, and current-to-voltage or voltage-to-current conversion. Many of these operations are available in purchasable devices such as multiplier/dividers, log/antilog amplifiers, and others. When data from a number of independent signal sources must be processed by the same microcomputer or communications channel, a multiplexer is used to channel the input signals into the A/D converter. Multiplexers are also used in reverse, as when a converter must distribute analog information to many different channels; the multiplexer is fed by a D/A converter which continually refreshes the output channels with new information. In many systems the analog signal varies during the time that the converter takes to digitize an input signal. The changes in signal level during the conversion process can result in errors, since the conversion period can be completed some time after the conversion command; the final value never represents the data at the instant when the conversion command is transmitted. Sample-hold circuits are used to acquire the varying analog signal and to hold it for the duration of the conversion process.
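The error a sample-hold removes can be illustrated with a toy model: digitizing a moving ramp either at the end of a finite conversion time (no hold) or frozen at the instant of the conversion command (with hold). The slope and conversion time here are illustrative numbers, not from the text:

```python
# Toy model: a 1 V/ms ramp input and a converter that takes 0.1 ms.
RAMP_SLOPE = 1.0       # volts per millisecond
CONVERSION_TIME = 0.1  # milliseconds

def ramp(t_ms):
    return RAMP_SLOPE * t_ms

def convert_without_hold(t_cmd):
    # The value finally captured is the input at the END of the
    # conversion: the signal drifted while conversion was in progress.
    return ramp(t_cmd + CONVERSION_TIME)

def convert_with_hold(t_cmd):
    # A sample-hold freezes the input at the conversion command,
    # so the captured value matches the signal at that instant.
    return ramp(t_cmd)

drift_error = convert_without_hold(5.0) - ramp(5.0)  # about 0.1 V
```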
Sample-hold circuits are common in multichannel distribution systems, where they allow each channel to receive and hold the signal level. In order to get the data into digital form as rapidly and as accurately as possible, we must use an analog-to-digital (A/D) converter, which might be a shaft encoder, a small module with digital outputs, or a high-resolution, high-speed panel instrument. These devices, which range from IC chips to rack-mounted instruments, convert analog input data, usually voltage, into an equivalent digital form. The characteristics of A/D converters include absolute and relative accuracy, linearity, monotonicity, resolution, conversion speed, and stability. A choice of input ranges, output codes, and other features is available. The successive-approximation technique is popular for a large number of applications, with the most popular alternatives being the counter-comparator types and dual-ramp approaches. The dual-ramp has been widely used in digital voltmeters. D/A converters convert a digital format into an equivalent analog representation. The basic converter consists of a circuit of weighted resistance values or ratios, each controlled by a particular level or weight of digital input data, which develops the output voltage or current in accordance with the digital input code. A special class of D/A converters exists which can handle variable reference sources: the multiplying DACs. Their output value is the product of the number represented by the digital input code and the analog reference voltage, which may vary from full scale to zero, and in some cases, to negative values.

Component Selection Criteria
In the past decade, data-acquisition hardware has changed radically due to advances in semiconductors, and prices have come down too; what have not changed, however, are the fundamental system problems confronting the designer.
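Before turning to component selection, the successive-approximation technique mentioned above can be sketched as a binary search against an internal DAC. This is an idealized behavioral model under our own assumptions (perfect comparator and DAC), not any particular device:

```python
def sar_adc(vin, vref, bits):
    """Idealized successive-approximation conversion.

    Starting from the most significant bit, each step trial-sets the
    next bit, compares the internal DAC output (trial * vref / 2**bits)
    against the input, and keeps the bit only if the DAC does not
    overshoot - a binary search over the 2**bits output codes.
    """
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)
        if trial * vref / (2 ** bits) <= vin:
            code = trial
    return code

# 6.0 V on a 0-10 V range with an 8-bit converter: the ideal code is
# floor(6.0 / 10 * 256) = 153, reached in 8 comparisons.
assert sar_adc(6.0, 10.0, 8) == 153
```

The search needs only one comparison per bit, which is why, as the text notes, successive-approximation ADCs are faster than integrating types.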
Signals may be obscured by noise, RFI, ground loops, power-line pickup, and transients coupled into signal lines from machinery. Separating the signals from these effects becomes a matter for concern. Data-acquisition systems may be separated into two basic categories: (1) those suited to favorable environments like laboratories, and (2) those required for hostile environments such as factories, vehicles, and military installations. The latter group includes industrial process control systems where temperature information may be gathered by sensors on tanks, boilers, vats, or pipelines that may be spread over miles of facilities. That data may then be sent to a central processor to provide real-time process control. The digital control of steel mills, automated chemical production, and machine tools is carried out in this kind of hostile environment. The vulnerability of the data signals leads to the requirement for isolation and other techniques. At the other end of the spectrum are laboratory applications, such as test systems for gathering information on gas chromatographs, mass spectrometers, and other sophisticated instruments, where the designer's problems are concerned with performing sensitive measurements under favorable conditions rather than with protecting the integrity of collected data under hostile conditions. Systems in hostile environments might require components for wide temperatures, shielding, common-mode noise reduction, conversion at an early stage, redundant circuits for critical measurements, and preprocessing of the digital data to test its reliability. Laboratory systems, on the other hand, will have narrower temperature ranges and less ambient noise. But the higher accuracies require sensitive devices, and a major effort may be necessary to achieve the required signal/noise ratios.
The choice of configuration and components in data-acquisition design depends on consideration of a number of factors:
1. Resolution and accuracy required in the final format.
2. Number of analog sensors to be monitored.
3. Sampling rate desired.
4. Signal-conditioning requirements due to environment and accuracy.
5. Cost trade-offs.
Some of the choices for a basic data-acquisition configuration include:
1. Single-channel techniques.
A. Direct conversion.
B. Preamplification and direct conversion.
C. Sample-hold and conversion.
D. Preamplification, sample-hold, and conversion.
E. Preamplification, signal-conditioning, and direct conversion.
F. Preamplification, signal-conditioning, sample-hold, and conversion.
2. Multichannel techniques.
A. Multiplexing the outputs of single-channel converters.
B. Multiplexing the outputs of sample-holds.
C. Multiplexing the inputs of sample-holds.
D. Multiplexing low-level data.
E. More than one tier of multiplexers.
Signal-conditioning may include:
A. Ratiometric conversion techniques.
B. Range biasing.
C. Logarithmic compression.
D. Analog filtering.
E. Integrating converters.
F. Digital data processing.
We shall consider these techniques later, but first we will examine some of the components used in these data-acquisition system configurations.

Multiplexers
When more than one channel requires analog-to-digital conversion, it is necessary to use time-division multiplexing in order to connect the analog inputs to a single converter, or to provide a converter for each input and then combine the converter outputs by digital multiplexing.

Analog Multiplexers
Analog multiplexer circuits allow the timesharing of analog-to-digital converters between a number of analog information channels. An analog multiplexer consists of a group of switches arranged with inputs connected to the individual analog channels and outputs connected in common (as shown in Fig. 1). The switches may be addressed by a digital input code. Many alternative analog switches are available in electromechanical and solid-state forms.
Electromechanical switch types include relays, stepper switches, crossbar switches, mercury-wetted switches, and dry-reed relay switches. The best switching speed is provided by reed relays (about 1 ms). The mechanical switches provide high dc isolation resistance, low contact resistance, and the capacity to handle voltages up to 1 kV, and they are usually inexpensive. Multiplexers using mechanical switches are suited to low-speed applications as well as those having high resolution requirements. They interface well with the slower A/D converters, like the integrating dual-slope types. Mechanical switches have a finite life, however, usually expressed in number of operations. A reed relay might have a life of 10^9 operations, which would allow a 3-year life at 10 operations/second. Solid-state switch devices are capable of operation at 30 ns, and they have a life which exceeds most equipment requirements. Field-effect transistors (FETs) are used in most multiplexers. They have superseded bipolar transistors, which can introduce large voltage offsets when used as switches. FET devices have a leakage from drain to source in the off state, and a leakage from gate or substrate to drain and source in both the on and off states. Gate leakage in MOS devices is small compared to other sources of leakage. When the device has a Zener-diode-protected gate, an additional leakage path exists between the gate and source. Enhancement-mode MOSFETs have the advantage that the switch turns off when power is removed from the MUX; junction-FET multiplexers always turn on with the power off. A more recent development, the CMOS (complementary MOS) switch, has the advantage of being able to multiplex voltages up to and including the supply voltages: a ±10-V signal can be handled with a ±10-V supply.

Trade-off Considerations for the Designer
Analog multiplexing has been the favored technique for achieving the lowest system cost.
The decreasing cost of A/D converters and the availability of low-cost digital integrated circuits specifically designed for multiplexing provide an alternative with advantages for some applications. A decision on which technique to use for a given system will hinge on trade-offs among the following factors:

1. Resolution. The cost of A/D converters rises steeply as resolution increases, owing to the cost of precision elements. At the 8-bit level, the per-channel cost of an analog multiplexer may be a considerable proportion of the cost of a converter. At resolutions above 12 bits, the reverse is true, and analog multiplexing tends to be more economical.

2. Number of channels. This controls the size of the multiplexer required and the amount of wiring and interconnection. Digital multiplexing onto a common data bus reduces wiring to a minimum in many cases. Analog multiplexing is suited to 8 to 256 channels; beyond this number the technique is unwieldy and analog errors become difficult to minimize. Analog and digital multiplexing are often combined in very large systems.

3. Speed of measurement, or throughput. High-speed A/D converters can add considerable cost to the system. If analog multiplexing demands a high-speed converter to achieve the desired sample rate, a slower converter for each channel with digital multiplexing can be less costly.

4. Signal level and conditioning. Wide dynamic ranges between channels can be difficult to handle with analog multiplexing. Signals below 1 V generally require differential low-level analog multiplexing, which is expensive, with programmable-gain amplifiers after the MUX operation. The alternative, a fixed-gain converter on each channel with signal conditioning designed for that channel's requirement, combined with digital multiplexing, may be more efficient.

5. Physical location of measurement points.
Analog multiplexing is suited to making measurements at distances up to a few hundred feet from the converter, since analog lines may suffer from losses, transmission-line reflections, and interference. Lines may range from twisted wire pairs to multiconductor shielded cable, depending on signal levels, distance, and noise environment. Digital multiplexing is operable to thousands of miles with the proper transmission equipment, for digital transmission systems can offer the powerful noise-rejection characteristics required for long-distance transmission.

Digital Multiplexing
For systems with small numbers of channels, medium-scale integrated digital multiplexers are available in TTL and MOS logic families; the 74151 is a typical example. Eight of these integrated circuits can be used to multiplex eight A/D converters of 8-bit resolution onto a common data bus.

This digital multiplexing example offers little advantage in wiring economy, but it is lowest in cost, and the high switching speed allows operation at sampling rates much faster than analog multiplexers. The A/D converters are required only to keep up with the channel sample rate, not with the commutating rate. When large numbers of A/D converters are multiplexed, the data-bus technique reduces system interconnections; this alone may in many cases justify multiple A/D converters. Data can be bussed onto the lines in bit-parallel or bit-serial format, as many converters have both serial and parallel outputs. A variety of devices can be used to drive the bus, from open-collector and tristate TTL gates to line drivers and optoelectronic isolators. Channel-selection decoders can be built up from 1-of-16 decoders to the required size. This technique also offers additional reliability, in that a failure of one A/D converter does not affect the other channels. An important requirement is that the multiplexer operate without introducing unacceptable errors at the sample-rate speed.
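The select-one-of-eight behavior of a 74151-class part can be sketched as a software model (this is a behavioral illustration only, not a description of the actual IC's pins or timing; the function name and sample values are my own):

```python
# Behavioral sketch of an 8-to-1 digital multiplexer in the spirit of the
# 74151: a 3-bit address selects one of eight inputs, and an enable
# (strobe) line can force the output inactive regardless of the address.
def mux8(inputs, address: int, enable: bool = True):
    """Return the selected input, or None when the enable line is inactive."""
    if len(inputs) != 8:
        raise ValueError("an 8-to-1 multiplexer needs exactly 8 inputs")
    if not 0 <= address <= 7:
        raise ValueError("address must fit in 3 bits")
    return inputs[address] if enable else None

channels = [10, 11, 12, 13, 14, 15, 16, 17]  # e.g. one bit from each converter
print(mux8(channels, 5))          # 15
print(mux8(channels, 5, False))   # None: inhibit overrides the address
```

The enable input models the inhibit line discussed below, which simplifies cascading several multiplexers into a wider selector.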
For a digital MUX system, one can determine the speed from propagation delays and the time required to charge the bus capacitance.

Analog multiplexers can be more difficult to characterize. Their speed is a function not only of internal parameters but also of external parameters such as channel source impedance, stray capacitance, the number of channels, and the circuit layout. The user must be aware of the limiting parameters in the system in order to judge their effect on performance.

The nonideal transmission and open-circuit characteristics of analog multiplexers can introduce static and dynamic errors into the signal path. These errors include leakage through switches, coupling of control signals into the analog path, and interactions with sources and following amplifiers. Moreover, the circuit layout can compound these effects.

Since analog multiplexers may be connected directly to sources which have little overload capacity or poor settling after overloads, the switches should have a break-before-make action to prevent the possibility of shorting channels together. It may also be necessary to avoid shorted channels when power is removed, so an all-channels-off-at-power-down characteristic is desirable. In addition to the channel-addressing lines, which are normally binary-coded, it is useful to have inhibit or enable lines that turn all switches off regardless of the channel being addressed; this simplifies the external logic needed to cascade multiplexers and can also be useful in certain modes of channel addressing. A further requirement for both analog and digital multiplexers is tolerance of line transients and overload conditions, and the ability to absorb the transient energy and recover without damage.

Data Acquisition Systems. A data acquisition system is used to acquire data, process it, and store it on secondary storage devices for later analysis.
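Returning to the bus-capacitance speed limit mentioned above: it can be estimated with a first-order RC model (a sketch under the usual single-time-constant assumption; the component values are illustrative, not from the text). Settling to within 1 LSB of an n-bit level requires about n·ln 2 time constants:

```python
# Sketch: first-order RC settling time needed for n-bit accuracy.
import math

def settling_time(r_ohms: float, c_farads: float, bits: int) -> float:
    """Time for an RC node to settle within 1 LSB of an n-bit full scale:
    residual error e^(-t/RC) <= 2^-n  =>  t >= n * ln(2) * R * C."""
    return bits * math.log(2) * r_ohms * c_farads

# 1-kilohm switch resistance driving 100 pF of bus capacitance, 12 bits:
t = settling_time(1e3, 100e-12, 12)
print(f"{t * 1e9:.0f} ns")  # about 832 ns
```

The formula makes the text's point concrete: adding channels adds stray capacitance, which stretches RC and lowers the maximum error-free commutating rate.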

Foreign Literature: Information System Development and Database Development


Information System Development and Database Development

In many organizations, database development begins with enterprise data modeling, which determines the scope and general content of the organization's databases. This step usually occurs during an organization's information system planning process; its aim is to create an overall description of the organization's data, not the design of a specific database. A specific database provides data for one or more information systems, whereas the corporate data model (which may cover many databases) describes the scope of the data maintained by the organization. In enterprise data modeling you review current systems, analyze the nature of the business areas to be supported, describe the data needed at a high level of abstraction, and plan one or more database development projects. Figure 1 shows part of the enterprise data model of the Pine Valley Furniture Company.

1.1 Information System Architecture
As Figure 1 shows, a high-level data model is only one part of an information system architecture (ISA), the blueprint for an organization's information systems. During information system planning, you can build an enterprise data model as one component of the overall ISA. In the view of Zachman (1987) and Sowa and Zachman (1992), an information system architecture consists of six key components:
● Data (shown in Figure 1, though other notations are possible).
● Processes that manipulate the data (described with data flow diagrams, object-model methods, or other notations).
● Networks that transmit data within the organization and between it and its main business partners (shown, for example, with network topology diagrams).
● People who perform or are affected by data processing and who are the sources and receivers of information (shown in process models as the senders and receivers of data flows).
● Events and points in time at which processes are performed (described with state-transition diagrams or other means).
● The reasons for events and the rules that govern data processing (often expressed as text, though charting tools such as decision tables also exist).

1.2 Information Engineering
Information system planners follow a specific planning methodology to develop an information system architecture. Information engineering is a popular, formal methodology: a data-oriented approach to creating and maintaining information systems. Because it is data-oriented, a concise understanding of information engineering is helpful when you begin to learn how databases are identified and defined. Information engineering follows a top-down planning approach, in which specific information systems are derived from a broad understanding of information needs (for example, we need data about customers, products, suppliers, sales, and work centers) rather than by merging many detailed information requests (such as an order-entry screen or a sales summary report by geography). Top-down planning lets developers plan more comprehensive information systems, consider how system components fit together in an integrated fashion, deepen the understanding of how information systems relate to business objectives, and understand the impact of information systems across the whole organization.

Information engineering includes four steps: planning, analysis, design, and implementation.
The planning phase of information engineering produces an information system architecture, including the enterprise data model.

1.3 Information System Planning
The objective of information system planning is to align information technology closely with the organization's business strategy; this alignment is essential for obtaining the greatest benefit from investments in information systems and technology. As described in the table, the planning phase of information engineering includes three steps, which we discuss in the following three sections.

1. Identify key planning factors
Key planning factors are organizational goals, critical success factors, and problem areas. Identifying these factors ties the purpose and environment of planning, and the resulting information systems, to the strategic business plan. Table 2 lists some possible key planning factors for Pine Valley Furniture; such factors help information systems managers set priorities among the demands placed on new information systems and databases. For example, given the problem area of imprecise sales forecasting, an information systems manager might give priority to storing additional historical sales data, new market research data, and new product test data in the organization's databases.

2. Identify planning objects
Planning objects define the scope of the business, which bounds subsequent analysis and the places where information systems may bring change. Five key planning objects are:
● Organizational units: the various departments of the organization.
● Locations: the places where business operations are carried out.
● Business functions: related groups of business processes that support the organization's mission.
Note that business functions differ from organizational units; in fact, a function may be assigned to several organizational units (for example, product development may be the joint responsibility of the manufacturing and sales departments).
● Entity types: the major categories of data about the people, places, and things managed by the organization.
● Information systems: the application software and supporting procedures that process data.

3. Develop enterprise models
A comprehensive business model includes a functional decomposition model of each business function, the enterprise data model, and various planning matrices. Functional decomposition breaks the organization's functions into progressively finer detail; it is the classical divide-and-conquer approach, used to simplify analysis and identify components. Figure 2 shows an example of functional decomposition of one business function at Pine Valley Furniture. To handle the full set of business and support functions, multiple databases are essential; a given database is therefore likely to support only a subset of the functions (as suggested in Figure 2). To reduce data redundancy and make the data more meaningful, however, a complete, high-level enterprise view is very helpful.

An enterprise data model is described with a particular notation. Besides graphical depictions of the entity types, a complete enterprise data model should include a description of each entity type, descriptions of the business operations, and a summary of the business rules. Business rules determine the validity of data.

An enterprise data model includes not only entity types but also the relationships among data entities, as well as the relationships among the various planning objects. A matrix is a common way to show the associations between planning objects.
Planning matrices are important because they document business needs explicitly without requiring detailed database modeling. Planning matrices are often derived from business rules; they help prioritize development activities and, through an enterprise-wide top-down view, put those activities in order. Many types of planning matrices are available; common ones include:
● Location-to-function: shows the operational locations where each business function is performed.
● Unit-to-function: shows which business unit is responsible for each function.
● Information-system-to-data-entity: explains how each information system interacts with each data entity (for example, whether the system creates, retrieves, updates, or deletes data in the entity).
● Function-to-data-entity: shows which data entities each business function captures, uses, updates, and deletes.
● Information-system-to-objective: indicates which business objectives each information system supports.

Figure 3 illustrates a possible function-to-data-entity matrix. Such a matrix can serve several purposes, including the following three:
1) Identify gaps: indicate entity types that are not used by any function, or functions that use no entities.
2) Discover missing entities: staff can inspect the matrix to spot entities that may have been overlooked.
3) Prioritize development: if a function slated for development has high priority (perhaps because it relates to important organizational objectives), then the entities it uses have high priority in database development.
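Purpose 1) above, gap analysis, is mechanical enough to sketch in code. The functions and entities below are invented for illustration; a real matrix would come from the enterprise model:

```python
# Sketch: find gap entities in a function-to-data-entity planning matrix.
# Keys are business functions; values are the entities each function touches.
matrix = {
    "Take Order":      {"Customer", "Order", "Product"},
    "Plan Production": {"Product", "Work Center"},
    "Bill Customer":   {"Customer", "Order"},
}
all_entities = {"Customer", "Order", "Product", "Work Center", "Supplier"}

used = set().union(*matrix.values())
gap_entities = all_entities - used          # entities no function touches
empty_functions = [f for f, ents in matrix.items() if not ents]

print(sorted(gap_entities))   # ['Supplier'] -- a candidate modeling gap
print(empty_functions)        # []
```

A gap entity like the hypothetical "Supplier" here signals either a missing function in the model or an entity that does not need to be in the database at all.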
Hoffer, George, and Valacich (2002) give a more complete description of how to use planning matrices in information engineering.

2 The database development process
Information system planning based on information engineering is one source of database development projects. Such projects usually arise to meet the strategic needs of the organization, such as improving customer support, improving product and inventory management, or producing more accurate sales forecasts. Many other database development projects, however, emerge bottom-up: an information system user requests a project because specific information is needed to do a job, or an information systems expert finds that the organization needs improved data management and initiates a project. Even in bottom-up cases, establishing an enterprise data model is necessary in order to understand whether existing databases can provide the required data; if not, new databases, data entities, and attributes can be added to the organization's data resources.

Whether driven by strategic needs or by operational information needs, each database development project normally concentrates on one database. Some projects concentrate only on defining, designing, and implementing a database as a foundation for subsequent information system development. In most cases, however, the database and the associated information processing functions are developed together as part of a complete information systems development project.

2.1 System Development Life Cycle
The traditional process that guides a management information system development project is the system development life cycle (SDLC).
The system development life cycle is a detailed description of the steps by which an organization's specialists, including database designers and programmers, specify, develop, maintain, and replace an entire information system. The process is likened to a waterfall because each step flows into the next: the information system is specified and developed piece by piece, and the output of each piece is the input to the next. As the figure shows, however, the steps are not purely linear: they overlap in time (so steps can be managed in parallel), and when earlier decisions must be reconsidered, the process can cycle back to earlier steps. (So water can be put back at the top of the waterfall!)

Figure 4 gives concise notes on the purpose and deliverable of each phase of the system development life cycle. Every phase includes database development activities, so database management concerns run through the entire development process. In Figure 5 we repeat the seven SDLC phases and outline the common database development activities in each. Note that there is no strict one-to-one correspondence between SDLC phases and database development steps; conceptual data modeling, for instance, spans two SDLC phases.

Enterprise Modeling
The database development process begins with enterprise modeling (part of the SDLC's project identification and selection phase), which sets the scope and general content of the organization's databases. Enterprise modeling occurs during information system planning and related activities, which determine which parts of the organization's information systems need to change and be strengthened and which outline the scope of the organization's data.
In this step, the current databases and information systems are reviewed, the nature of the business area that is the focus of the development project is analyzed, and the data needed by each prospective information system is described in very general terms. A project proceeds to the next step only if it matches the organization's expected goals.

Conceptual Data Modeling
Once an information system project has begun, the conceptual data modeling phase identifies all the data the information system needs. It is divided into two stages. First, during the project planning stage, a model similar to Figure 1 is built, along with other documents outlining the data required by the specific development project without regard to existing databases. This model includes only high-level categories of data (entities) and the major relationships. Then, in the SDLC analysis phase, a detailed data model must be produced that defines all the organization's data attributes, lists all categories of data, shows all business relationships between data entities, and gives a full description of the data integrity rules. During analysis, the conceptual data model (also called the conceptual schema) is also checked for consistency with the other notations used to describe the information system's goals, such as processing steps, business rules, and the timing of processing. Even such a detailed conceptual data model is only preliminary, however, because subsequent life cycle activities, designing transactions, reports, displays, and queries, may uncover missing elements or mistakes. Conceptual data modeling is therefore often said to be top-down: it is driven by a general understanding of the business area, not by specific information processing activities.
3. Logical Database Design
Logical database design approaches database development from two perspectives. First, the conceptual data model is transformed into a standard notation based on relational database theory, called relations. Then, as each information system is designed, every computer program (including its input and output formats) and every database-supported transaction, report, and query is examined in detail. In this so-called bottom-up analysis, the exact nature of the data the database must maintain, and of the data each transaction, report, and so on requires, is verified.

The analysis of each separate report, transaction, and so on considers a specific, limited, but complete view of the database. This analysis may make it necessary to change the conceptual data model. Especially on large projects, different analysts and teams may work independently on different programs or sets of programs, and the details of their work may not surface until the logical design stage. In that case, the logical database design stage must merge the original conceptual data model and these independent user views into a comprehensive design. During logical design, additional information processing requirements may also be identified, and these new requirements must be integrated into the previously identified logical database design.

The final step of logical database design is to transform the combined data specifications into basic, atomic elements, following well-established rules for well-structured data specifications. For most databases today, these rules come from relational database theory, and the process is known as normalization. The result of this step is a complete picture of the database without reference to any particular database management system.
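The normalization step named above can be sketched minimally. The sample rows and attribute names below are invented for illustration: an order record that repeats customer attributes (order determines customer, and customer determines customer name) is split into two relations so each fact is stored once:

```python
# Sketch: normalizing out a transitive dependency. Each order row below
# repeats the customer's name, so a name change would have to be made in
# several places; splitting into two relations removes the redundancy.
orders = [
    {"order_id": 1001, "customer_id": "C1", "customer_name": "Pine Valley"},
    {"order_id": 1002, "customer_id": "C1", "customer_name": "Pine Valley"},
    {"order_id": 1003, "customer_id": "C2", "customer_name": "Ocean Pines"},
]

# Relation 1: CUSTOMER(customer_id, customer_name), one row per customer.
customers = {r["customer_id"]: r["customer_name"] for r in orders}
# Relation 2: ORDER(order_id, customer_id), the name is no longer repeated.
normalized_orders = [{"order_id": r["order_id"],
                      "customer_id": r["customer_id"]} for r in orders]

print(customers)               # {'C1': 'Pine Valley', 'C2': 'Ocean Pines'}
print(len(normalized_orders))  # 3
```

This is the sense in which normalization reduces data to "basic, atomic elements": each non-key fact depends on its own relation's key and nothing else.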
Once logical database design is complete, we begin to specify in detail the logic of the computer programs and queries that will create, maintain, and report the contents of the database.

4. Physical database design and definition
The physical database design and definition phase decides how the database is organized in computer storage (usually disk), defines the physical structures managed by the database management system, outlines the processing programs, and produces the desired management information and decision support reports. The objective of this phase is to design an effective and safe database that manages all processing against it; physical database design is therefore closely integrated with the other physical aspects of information system design, including programs, computer hardware, operating systems, and data communications networks.

5. Database Implementation
In the database implementation phase, the programs that process the database are written, tested, and installed. Designers can use standard programming languages (such as COBOL, C, or Visual Basic), dedicated database processing languages (such as SQL), or non-procedural languages that produce fixed-format statements and displayed results, possibly including charts. During implementation, all database files are completed, users of the information system (and database) are trained, and setup procedures are put in place. The final step is to load data from existing sources (documents, legacy application files and databases, and newly required data). Loading is often a two-step process: data are first extracted from existing files and databases into an intermediate format (such as binary or text files), and the intermediate data are then loaded into the new database. Finally, the database and its applications are run for actual users to maintain and retrieve data. During operation, the database is backed up regularly and restored when it is damaged or compromised.
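The two-step load described above can be sketched with Python's standard library, using SQLite as a stand-in target database; the table, column names, and sample rows are invented for illustration:

```python
# Sketch: load data from an intermediate text (CSV) format into a new
# database, the second step of the extract-then-load process in the text.
import csv
import io
import sqlite3

# Stand-in for the intermediate file produced by the extraction step.
intermediate = "customer_id,name\nC1,Pine Valley\nC2,Ocean Pines\n"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customer (customer_id TEXT PRIMARY KEY, name TEXT)")

reader = csv.DictReader(io.StringIO(intermediate))
db.executemany("INSERT INTO customer VALUES (?, ?)",
               [(row["customer_id"], row["name"]) for row in reader])
db.commit()

print(db.execute("SELECT COUNT(*) FROM customer").fetchone()[0])  # 2
```

In practice the intermediate file decouples the legacy system's format from the new schema, so each side can be validated independently before the load.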
6. Database maintenance
The database evolves during database maintenance. In this step, structure is added to, deleted from, or changed in the database to meet changing business conditions, to correct errors in the database design, or to improve the processing speed of database applications. The database may also have to be rebuilt when a program or computer failure damages it. This is usually the longest step of the database development process, since it lasts throughout the life of the database and its applications; each change can be viewed as a brief database development process of its own, in which conceptual data modeling, logical and physical database design, and database implementation occur to deal with the change.

2.2 Other approaches to information system development
The system development life cycle, or slight variants of it, is often used to guide information system and database development. The SDLC is a highly structured methodology that includes many checks and balances to ensure that every step produces accurate results and that the new or replacement information system is consistent with the existing systems with which it must share communications and data definitions. The SDLC is regularly criticized because a working system becomes available only late in the process. More and more organizations therefore use rapid application development methods, which iterate rapidly through analysis, design, and implementation steps until they converge on the system the users want. Rapid application development methods work best when most of the database already exists and the system mainly adds data-retrieval applications, rather than applications that create and modify the database.

The most widely used rapid application development method is prototyping.
Prototyping is an iterative development process in which analysts and users, working closely together, keep revising a prototype until it becomes a working system that meets all the requirements. Figure 6 shows the prototyping process; the notes in the diagram briefly describe the database development activities at each stage. Usually, when the information system problem is identified, only rough conceptual data modeling is attempted. During development of the initial prototype, the displays and reports the user wants are designed, any new database requirements they reveal are defined, and a prototype database is built. This is usually a new database that copies parts of existing systems, possibly with new content added. When new content is needed, it usually comes from external data sources such as market research data, general economic indicators, or industry standards.

Database implementation and maintenance activities are repeated as each new version of the prototype is produced. Usually only minimal security and integrity controls are in place, because the emphasis at this point is on producing a usable prototype version as quickly as possible. Project documentation is also deferred until the end and is used only for user training at delivery. Finally, once an acceptable prototype has been built, developers and users decide whether the prototype and its database will be delivered and used as-is. If the system (including the database) is too inefficient, the system and database are reprogrammed and reorganized to achieve the desired performance.

With the growing popularity of visual programming tools (such as Visual Basic, Java, Visual C++, and fourth-generation languages), which make it easy to change a system's user interface, prototyping is becoming the system development methodology of choice.
With prototyping, it is quite easy for customers to request changes to the content and layout of reports and displays. In the process, new database requirements are identified, so the existing databases used for development must be amended. A prototyping project may even require an entirely new database; in that case, as requirements change during the iterative development process, sample data are obtained and the prototype database is built or rebuilt.

3 The three-tier architecture model of database development
The explanation of the database development process earlier in this article mentioned several different but related database views or models established by a system development project:
● The conceptual model (built during the analysis phase).
● External models, or user views (built during the analysis and logical design phases).
● The physical model, or internal model (built during the physical design phase).
Figure 7 depicts the relationships among these three database views. It is important to remember that they are views or models of the same organizational database: each organization's database has one physical model, one conceptual model, and one or more user views. The three-tier architecture thus defines a single database using different ways of observing the same data set.

The conceptual model describes the full database structure independently of any technology. The conceptual model definition does not consider how the data are stored in the computer's secondary memory. Usually the conceptual model is described in a graphical format, such as entity-relationship (E-R) diagrams or object-modeling notation; we call this kind of conceptual model a data model.
In addition, the conceptual model's specifications are stored as metadata in the database or a data dictionary.

The physical model adds to the conceptual model the specification of how the data are stored in computer memory. Defining the physical database (the physical schema) is as important to analysts and database designers as the other models, for it provides the complete technical specification of how storage space is allocated and managed and how the data are physically accessed.

The division of the database among these three models is the basis of database development and database technology. A role on a database development project may involve only one of the three views. For example, a beginner may design the external models for one or more programs, while an experienced developer designs the physical model or the conceptual model. The design issues are quite different at the different levels.

4 Locating databases in the three-tier systems architecture
Obviously, all good things in databases come in threes!

When designing a database, you have to choose where to store the data; this choice is made during the physical database design stage. Databases can be divided into personal databases, workgroup databases, departmental databases, enterprise databases, and Internet databases. Personal databases are often designed and developed by end users themselves, with only training and advice from database experts, and they contain only the data the individual end user is interested in. Sometimes a personal database is extracted from a workgroup or enterprise database; in such cases, experts prepare routine procedures to create the local database. Workgroup and departmental databases are often developed by end users, business experts, and central database systems experts together.
Collaboration among these people is necessary because designing a database to be shared by many users requires weighing many issues: processing speed, ease of use, differences in data definitions, and similar problems. Because enterprise databases and Internet databases have broad impact and large scale, they are normally developed by a centralized database development team of professionally trained database experts.

1. Client tier
Also known as the desktop or notebook tier, it manages the user interface and localized data; Web scripting tasks can be executed in this tier.

2. Server/Web server tier
This tier handles the HTTP protocol and scripting tasks, performs computations, and provides data access; it is also known as the processing services tier.

3. Enterprise server (minicomputer or mainframe) tier
This tier performs complex computations and integrates data from multiple data sources across organizations; it is also known as the data services tier.

Within an organization, the layering of databases and the information system architecture is related to the concepts of distributed computing and client/server architecture. A client/server architecture, based on a LAN environment, includes a server (called a database server or database engine) whose database software executes database commands sent from client workstations, while each client application concentrates on its user interface functions. In fact, the entire database, and the application routines that process it, can be distributed, as a distributed database or as separate but related physical databases, across local PC workstations, intermediate (workgroup or departmental) servers, and a central (departmental or enterprise) server.
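The three tiers listed above can be sketched as separated modules; every class and method name here is invented for illustration, not an actual framework API, and the in-memory dictionary stands in for the enterprise server's database:

```python
# Minimal sketch of the three-tier separation: each tier talks only to the
# tier directly below it, so any tier's technology can change with little
# effect on the others (the point made in the text).
class DataServices:                      # enterprise-server tier
    def __init__(self):
        self._rows = {"C1": "Pine Valley", "C2": "Ocean Pines"}
    def fetch_customer(self, cid):
        return self._rows.get(cid)

class ProcessingServices:                # server / web-server tier
    def __init__(self, data: DataServices):
        self.data = data
    def customer_greeting(self, cid):
        name = self.data.fetch_customer(cid)
        return f"Hello, {name}" if name else "Unknown customer"

class Client:                            # desktop tier: user interface only
    def __init__(self, services: ProcessingServices):
        self.services = services
    def show(self, cid):
        print(self.services.customer_greeting(cid))

Client(ProcessingServices(DataServices())).show("C1")  # Hello, Pine Valley
```

Swapping `DataServices` for a real database connection would not touch the client code, which is the maintainability argument the next paragraphs make for multi-tier client/server development.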
Simply said that the use of client / server architecture for:●it can handle multiple processors on the same application at the same time, improve application response time and data processing speed.●It can use each computer platform of the best data processing (such as PC Minicom Advanced user interface with the mainframe and computing speed>.●can mix various client technology (Intel or Motorola processor assembly of personal computers, computer networks, information kiosks, etc.> and public data sharing. In addition, you can change the technology at any layer and other layers only a small influence on the system module.● able to handle close to the data source to be addressed to improve response time and reduce network traffic.● accept it to allow and encourage open systems standards.For database development, the use of a multi-layered client / server database architecture development is the most meaningful of the database will be easy to develop and maintain database module to the end-user and that the contents of the database information system module separated. That routine can be used as PowerBuilder, Java, and Visual Basic language to provide this easy-to-use graphical user interface. Through middleware that routine interaction between layers can be passed to access routine, the routine visit to the necessary data and analysis of these data in order to form the required information. As a database developers and programmers, you can in this three-tier level of any of the work, developing the necessary software.申明:所有资料为本人收集整理,仅限个人学习使用,勿做商业用途。

Information System and Database Development — Translated Foreign Literature with Chinese-English Parallel Text


Translated Foreign Literature with Chinese-English Parallel Text (the document contains the English original and a Chinese translation)

Information System Development and Database Development

In many organizations, database development begins with enterprise data modeling, which determines the scope and general content of the organization's databases. This step usually occurs during information system planning; its aim is to help the organization create an overall description of its data, not to design any specific database. A particular database provides data for one or more information systems, whereas the enterprise data model (which may cover many databases) describes the scope of all data maintained by the organization. In enterprise data modeling, you review current systems, analyze the nature of the business areas to be supported, describe the needed data at a high level of abstraction, and plan one or more database development projects. Figure 1 shows part of the enterprise data model for the Pine Valley Furniture Company.

1.1 Information System Architecture

A high-level data model is only one part of an information system architecture (ISA), an organization's blueprint for its information systems. During information system planning, you can build an enterprise data model as part of the overall information system architecture.
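Figure 1 itself is not reproduced here, but the essence of an enterprise data model — entity types and the named relationships between them, with no attribute or storage detail yet — can be sketched in a few lines of Python. The entity and relationship names below are invented for illustration, loosely echoing the Pine Valley Furniture example:

```python
# A high-level enterprise data model: entity types and named
# relationships only -- no attributes, keys, or storage detail yet.
ENTITY_TYPES = {"CUSTOMER", "ORDER", "PRODUCT", "SUPPLIER"}

# (entity, relationship, entity) triples describing business links.
RELATIONSHIPS = [
    ("CUSTOMER", "places", "ORDER"),
    ("ORDER", "contains", "PRODUCT"),
    ("SUPPLIER", "supplies", "PRODUCT"),
]

def related_entities(entity):
    """Return the set of entity types linked to `entity`."""
    linked = set()
    for left, _, right in RELATIONSHIPS:
        if left == entity:
            linked.add(right)
        elif right == entity:
            linked.add(left)
    return linked
```

A query such as `related_entities("PRODUCT")` answers the kind of scoping question enterprise modeling is meant to support: which other data categories a given category touches.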
According to Zachman (1987) and Sowa and Zachman (1992), an information system architecture consists of six key components:

Data.
Processes that manipulate the data (which can be shown with data flow diagrams, object-model methods, or other notations).
Networks, which transmit data within the organization and between it and its main business partners (which can be shown with network topology diagrams).
People, who perform data processing and are the senders and receivers of data and information (shown in process models as the senders and receivers of data).
Events and points in time at which processing is performed (which can be shown with state-transition diagrams and other notations).
Reasons for events and the rules governing data processing (often shown as text, but also in charts produced by planning tools such as decision tables).

1.2 Information Engineering

Information system planners develop an information system architecture by following a particular planning methodology. Information engineering is one popular, formal methodology. It is a data-oriented approach to creating and maintaining information systems. Because information engineering is data-oriented, a concise explanation of it is very helpful as you begin to learn how databases are identified and defined. Information engineering follows a top-down planning approach, in which specific information systems are derived from a broad understanding of information needs (for example, we need data about customers, products, suppliers, sales, and processing centers) rather than by merging many detailed information requests (such as an order entry screen or a sales summary report by geographical region).
Top-down planning enables developers to plan information systems more comprehensively, provides an integrated way to consider how system components fit together, improves understanding of the relationship between information systems and business objectives, and deepens understanding of the impact of information systems throughout the organization.

Information engineering includes four steps: planning, analysis, design, and implementation. The planning phase of information engineering produces the information system architecture, including the enterprise data model.

1.3 Information System Planning

The objective of information system planning is to align information technology closely with the organization's business strategy; this alignment is essential if the investment in information systems and technology is to yield the greatest benefit. As the accompanying table describes, the planning phase of information engineering consists of three steps, which we discuss in the following three sections.

1. Identify key planning factors

Key planning factors are organizational goals, critical success factors, and problem areas. Identifying these factors ties the purpose and environment of planning to the strategic business plan. Table 2 shows some possible key planning factors for Pine Valley Furniture; such factors help information systems managers set priorities among the demands for new information systems and databases. For example, given the problem area of imprecise sales forecasts, an information systems manager might give priority to storing additional historical sales data, new market research data, and new product test data in the organization's databases.

2. Identify planning objects

Planning objects define the scope of the business, and the business scope limits the subsequent analysis and the places where information systems may be changed.
Five key planning objects are as follows:

● Organizational units — the various departments of the organization.
● Organizational locations — the places where business operations are carried out.
● Business functions — related groups of business processes that support the organization's mission. Business functions differ from organizational units; in fact, one function may be assigned to several organizational units (for example, product development may be the joint responsibility of the sales and production departments).
● Entity types — the major categories of data about the people, places, and things managed by the organization.
● Information systems — the software applications and supporting procedures that process data sets.

3. Develop the enterprise model

A comprehensive enterprise model includes a functional decomposition model of each business function, the enterprise data model, and various planning matrices. Functional decomposition breaks the organization's functions down into finer detail; it is the classical approach of dividing a complex problem into its components in order to simplify analysis. Figure 2 shows a functional decomposition example for a Pine Valley Furniture business function. In an organization that handles a full set of business and support functions, multiple databases are essential, so a given database is likely to support only a subset of the functions (as suggested in Figure 2). Nevertheless, a complete, high-level view of the business is very helpful for reducing data redundancy and making the data more meaningful.

The enterprise data model is described using a particular notation. Besides graphical depictions of the entity types, a complete enterprise data model should also include a description of each entity type and a summary of the business rules governing its business operations.
Business rules determine the validity of data. An enterprise data model includes not only entity types but also the relationships between data entities, as well as various links to the other planning objects. A common way to show the links between planning objects is the matrix. Planning matrices are important because they document business needs clearly without requiring explicit database modeling. Planning matrices are regularly derived from the business rules, and they help set priorities: they order development activities through a top-down, enterprise-wide view. Many types of planning matrices are available; what they have in common is the pairing of planning objects:

● Location-to-function — shows which business functions are performed at which business locations.
● Unit-to-function — shows which business units are responsible for which business functions.
● Information-system-to-data-entity — explains how each information system interacts with each data entity (for example, whether the system creates, retrieves, updates, or deletes data in that entity).
● Function-to-data-entity — shows which data each business function captures, uses, updates, and deletes.
● Information-system-to-objective — indicates which business objectives each information system supports.

A function-to-data-entity matrix is a typical example.
Such a matrix can be used for a variety of purposes, including the following three:

1) Identify gaps — indicate entity types that are not used by any function, or functions that use no entities.
2) Discover omissions — staff reviewing the matrix may identify entities that are missing from it.
3) Prioritize development — if a function supported by a planned system has high priority (perhaps because it relates to important organizational objectives), then the entities used by that function have high priority in database development. Hoffer, George, and Valacich (2002) describe more completely how such matrices are used in information engineering planning.

2 The database development process

Information system planning based on information engineering is one source of database development projects. These projects are usually created to meet the strategic needs of the organization, such as improving customer support, improving product and inventory management, or producing more accurate sales forecasts. Many other database development projects, however, emerge bottom-up: information system users request specific information to do their work and thereby initiate a project, or information systems specialists find that the organization needs better data management and start a new project. Even in the bottom-up case, an enterprise data model is necessary to understand whether existing databases can provide the required data; if they cannot, new data entities and attributes can be added to the organization's data resources. Whether driven by strategic needs or by operational information needs, each database development project normally concentrates on one database.
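The function-to-data-entity matrix described in the planning section above is simple to represent directly. The sketch below — function names, entity names, and cell values are all invented examples — also implements the gap check of objective 1), finding entities that no function touches:

```python
# Function-to-data-entity matrix: each cell holds the CRUD letters
# (Create, Read, Update, Delete) the business function applies to
# the entity.  Entities absent from a row are untouched by it.
ENTITIES = ["CUSTOMER", "ORDER", "PRODUCT", "INVOICE"]

CRUD_MATRIX = {
    "Order entry":    {"CUSTOMER": "R", "ORDER": "CRU", "PRODUCT": "R"},
    "Billing":        {"CUSTOMER": "R", "ORDER": "R"},
    "Catalog upkeep": {"PRODUCT": "CRUD"},
}

def unused_entities():
    """Entities no function creates, reads, updates, or deletes --
    the 'gaps' that objective 1) in the text asks planners to find."""
    used = set()
    for row in CRUD_MATRIX.values():
        used.update(name for name, crud in row.items() if crud)
    return [e for e in ENTITIES if e not in used]
```

Here `unused_entities()` flags INVOICE: either a function that should maintain it is missing from the plan, or the entity does not belong in scope.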
Some projects concentrate only on defining, designing, and implementing a database as a foundation for later information system development. In most cases, however, the database and the associated information processing functions are developed together as part of a complete information systems development project.

2.1 The Systems Development Life Cycle

The traditional process that guides management information system development projects is the systems development life cycle (SDLC). The SDLC is the complete set of steps that a team of information systems specialists, including database designers and programmers, follows in an organization to specify, develop, maintain, and replace information systems. The process is likened to a waterfall because each step flows into the adjacent next one: the information system is specified piece by piece, and the output of each step is the input to the next. As the figure shows, however, the steps are not purely linear: they overlap in time (so parallel steps can be managed), and when earlier decisions must be reconsidered, the process can roll back several steps. (So water can be put back in the waterfall!)

Figure 4 gives concise notes on the purpose and deliverable products of each phase of the systems development life cycle. Every phase of the SDLC includes database development activities, so database management is an issue throughout the whole development process. In Figure 5 we repeat the seven phases of the systems development life cycle and outline the common database development activities in each phase.
Note that the correspondence between SDLC phases and database development steps is not strictly one-to-one; conceptual data modeling, for example, takes place in two SDLC phases.

Enterprise modeling

The database development process begins with enterprise modeling (part of the SDLC's project identification, feasibility study, and selection phase), which sets the scope and general content of the organization's databases. Enterprise modeling occurs during information system planning and other activities that determine which parts of the organization's information systems need to change and be strengthened, and that outline the overall scope of the organization's data. In this step, the current databases and information systems are reviewed, the nature of the business area that is the subject of the development project is analyzed, and the data needed by each prospective information system is described in very general terms. A project proceeds to the next step only if it fits the organization's expected goals.

Conceptual data modeling

Once an information system project has begun, the conceptual data modeling phase identifies all the data the information system needs. It is divided into two stages. First, during the project's planning stage, a model similar to Figure 1 is built, and other documents are drafted that outline the data required within the scope of the development project, without regard to existing databases. This category includes only high-level classes of data (entities) and the main relationships. Then, in the SDLC analysis phase, a full data model is produced that defines all the data attributes to be managed by the information system, lists all categories of data, documents all business relationships between data entities, and fully defines the data integrity rules.
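Data integrity rules of the kind just described are eventually stated declaratively, so the DBMS itself rejects invalid data. A minimal sketch using Python's built-in sqlite3 module — the table, columns, and rules are invented for illustration:

```python
import sqlite3

# Declarative integrity rules: NOT NULL, a CHECK range, and a
# uniqueness constraint stand in for business rules recorded
# alongside the conceptual data model.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product (
        product_id  INTEGER PRIMARY KEY,
        name        TEXT NOT NULL UNIQUE,
        unit_price  REAL NOT NULL CHECK (unit_price > 0)
    )
""")
conn.execute("INSERT INTO product (name, unit_price) VALUES (?, ?)",
             ("Oak table", 350.0))

# A row violating the price rule is rejected by the DBMS itself,
# not by application code.
try:
    conn.execute("INSERT INTO product (name, unit_price) VALUES (?, ?)",
                 ("Broken chair", -5.0))
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

Recording such rules during conceptual modeling, rather than scattering them through programs, is exactly what makes the later design steps mechanical.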
In the analysis phase, the conceptual data model (also called the conceptual schema) is also checked for consistency with the other categories of models developed to explain other aspects of the target information system, such as processing steps, rules for processing data, and the timing of events. Even such a detailed conceptual data model is only preliminary, however, because later activities in the information system life cycle — designing transactions, reports, displays, and inquiries — may uncover missing elements or mistakes. Conceptual data modeling is therefore often said to be top-down: it is driven by a general understanding of the business area rather than by specific information processing activities.

3. Logical database design

Logical database design approaches database development from two perspectives. First, the conceptual data model is transformed into a notation based on relational database theory. Then, as the information system is designed, every computer program (including its input and output formats) and every database-supported transaction, report, and inquiry is examined in detail. In this so-called bottom-up analysis, the designer verifies exactly what data the database must maintain, and the nature of that data, as required by each transaction, report, and so on.

For each separate report, transaction, and so on, the analysis must consider a specific, limited, but complete view of the database. The analysis of reports, transactions, and the like may suggest changes to the conceptual data model. Especially on large projects, different analysts and systems development teams can work independently on different programs or program sets, and the details of their work may not be revealed until the logical design stage.
In these circumstances, the logical database design stage must combine the original conceptual data model and these independent user views into a comprehensive design. Logical information system design may also identify additional information processing needs, and these new demands must then be integrated into the logical database design identified earlier.

The final step of logical database design is to transform the combined data specifications into basic, atomic elements, following well-established rules for what constitutes a sound data specification. For most of today's databases, these rules come from relational database theory, and the process is known as normalization. The result of this step is a complete description of the database that does not refer to any particular database management system. With logical database design completed, the detailed logic of the computer programs that maintain, report on, and query the contents of the database can begin to be specified.

4. Physical database design and definition

The physical database design and definition phase decides how the database is organized in computer storage (usually disk), defines the physical structures in terms of the database management system in use, and outlines the processing programs that will produce the desired management information and decision-support reports. The objective of this phase is an effective and secure design for the database that manages all data processing; physical database design is therefore closely integrated with the design of the other physical aspects of the information system, including programs, computer hardware, operating systems, and data communications networks.

5. Database implementation

In the database implementation phase, the database-processing programs are written, tested, and installed.
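Before moving on, the normalization step described under logical design above deserves a concrete sketch. The idea is to decompose a flat record so that each fact is stored exactly once; the sample data below is invented:

```python
# Unnormalized rows repeat the customer's city on every order --
# a redundancy that normalization removes.
flat_orders = [
    {"order_id": 1, "customer": "Ann", "city": "Boise",  "total": 120.0},
    {"order_id": 2, "customer": "Ann", "city": "Boise",  "total": 80.0},
    {"order_id": 3, "customer": "Bob", "city": "Denver", "total": 45.0},
]

# Decomposition: the customer->city fact moves to its own relation,
# and each order keeps only the customer key.
customers = {}
orders = []
for row in flat_orders:
    customers[row["customer"]] = {"city": row["city"]}
    orders.append({"order_id": row["order_id"],
                   "customer": row["customer"],
                   "total": row["total"]})
```

After the split, Ann's city is recorded once in `customers` instead of once per order, so updating it cannot leave the data inconsistent.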
Designers can write these programs in standard programming languages (such as COBOL, C, or Visual Basic), in special-purpose database processing languages (such as SQL), or in non-procedural languages that produce fixed-format reports, displayed results, and possibly charts. During implementation, all the database files are also completed, users of the information system (and its database) are trained, and setup programs are put in place. The final step is to load data from existing information sources (files and databases from legacy applications, plus the new data now needed). Loading is often done by first converting data from existing files and databases into an intermediate format (such as binary or text files) and then loading the intermediate data into the new database. Finally, the database and its related applications are put into operation so that actual users can maintain and retrieve data. In operation, the database is backed up regularly and is restored when it is damaged or compromised.

6. Database maintenance

The database evolves during database maintenance. In this step, the structure of the database is added to, deleted from, or changed in order to meet changing business conditions, to correct errors in the database design, or to improve the processing speed of database applications. The database may also be rebuilt when a program or computer failure damages or affects it. This is usually the longest step in the database development process, because it continues for the life of the database and its associated applications; each change to the database can be seen as a brief database development process of its own, in which conceptual data modeling, logical and physical database design, and database implementation occur to deal with the proposed change.

2.2 Alternative information system development approaches

The systems development life cycle, with minor changes, or one of its variants, is often used to guide information system and database development.
The information systems life cycle is a highly structured methodology: it includes many checks and balances to ensure that each step produces accurate results and that the new or replacement information system is consistent with the existing systems with which it must share communications or data definitions. The SDLC has been criticized because of the long time needed to get a working system: a working system is produced only at the end of the whole process. More and more organizations now use rapid application development (RAD) methods instead — iterative processes that repeat the analysis, design, and implementation steps rapidly until the system converges on what users want. RAD methods work best when most of the necessary database structure already exists and the system to be enhanced is mainly a data-retrieval application, rather than one that generates and modifies databases.

The most widely used rapid application development method is prototyping. Prototyping is an iterative development process in which, through close cooperation between analysts and users, requirements are continually revised until they are converted into a working system. Figure 6 shows the prototyping process; the notes in the diagram briefly describe the database development activities at each stage. Normally, when the information system problem is identified, only rough conceptual data modeling is attempted. During development of the initial prototype, the displays and reports the user wants are designed, any new database needs are identified, and a prototype database is defined. This is usually a new database that copies parts of existing systems, but it may also add some new content.
When new content is needed, it usually comes from external data sources such as market research data, general economic indicators, or industry standards.

The database implementation and maintenance activities are repeated as each new version of the prototype is produced. Usually only a minimal level of security and integrity control is included, because the emphasis at this point is on producing a usable prototype version as quickly as possible. Project documentation is also typically deferred until the end and is used mainly for user training at delivery. Finally, once an acceptable prototype has been built, the developers and users make a final decision on whether to deliver the prototype and its database for use. If the system (including the database) is very inefficient, the system and database may be reprogrammed and reorganized to achieve the desired performance.

With the growing popularity of visual programming tools (such as Visual Basic, Java, Visual C++, and fourth-generation languages), which make it easy to change a system's user interface, prototyping is increasingly becoming the system development methodology of choice. With prototyping, it is quite easy for customers to request changes to the contents and layout of reports and displays. In the process, new database requirements are identified, and the existing databases used by the evolving application may therefore have to be amended.
It is even possible that the prototyping method will require a new database. In such circumstances, a prototype database is built, or rebuilt, as sample data is obtained and the system requirements change during the iterative development process.

3 The three-schema architecture of database development

The explanation of the database development process earlier in this article mentioned several different but related views or models of the database established during a systems development project:

● the conceptual model (built during the analysis phase);
● external models, or user views (built during the analysis and logical design phases);
● the physical model, or internal model (built during the physical design phase).

Figure 7 depicts the relationships among these three database views. It is important to remember that they are all views or models of the same organizational database: each organization has one physical model of its database, one conceptual model, and one or more user views. The three-schema architecture therefore defines one database using different ways of observing the same data set.

The conceptual model is a specification of the full database structure that is independent of any technology. The conceptual model definition does not say anything about how the data of the whole database is stored in the computer's secondary memory. Usually the conceptual model is described in a graphical format such as entity-relationship (E-R) diagrams or object-modeling notation; we call this type of conceptual model a data model. In addition, conceptual model specifications are stored as metadata in the database or in a data dictionary.

The physical model includes the specifications of how the data described in the conceptual model is stored in the computer's secondary memory.
Analysts and database designers attach as much importance to the definition of the physical database (the physical schema): it provides the complete technical specification of how data storage space is distributed and managed and how the physical storage is accessed. The division of the database into these three models is the basis on which database development work and database technology are divided. A person on a database development project may have a role that deals with only one of these three views. For example, a beginner may design the external models for one or more programs, while an experienced developer designs the physical model or the conceptual model. Database design issues are quite different at the different levels.

4 Locating databases in a three-tier system architecture

Obviously, all the good things in databases come in threes!

When designing a database, you have to choose where to store the data; this choice is made during physical database design. Databases can be classified as personal databases, workgroup databases, departmental databases, enterprise databases, and Internet databases. Personal databases are often designed and developed by end users themselves, helped only by training and advice from database experts, and they contain only the data that interests the individual end user. Sometimes a personal database is extracted from a workgroup or enterprise database; in such cases, the database is created by local-database extraction routines that experts prepare and run regularly. Workgroup and departmental databases are often developed jointly by end users, business experts, and central database systems experts. The collaboration of all these people is necessary because designing a database to be shared involves weighing a large number of issues: processing speed, ease of use, differences in data definitions, and other similar problems.
Because enterprise databases and Internet databases have broad impact and large scale, they are normally developed by centralized database development teams of professionally trained database experts.

1. Client layer

Also called the desktop or notebook layer, this layer manages the user interface and locally held data; Web scripting tasks can also be implemented in this layer.

2. Server / Web server layer

This layer handles the HTTP protocol and scripting tasks, performs calculations, and provides data access; it is known as the processing services layer.

3. Enterprise server (minicomputer or mainframe) layer

This layer performs complex calculations and manages the integration of data from the organization's multiple data sources; it is also known as the data services layer.

Within an organization, this layering of databases and the information system architecture correlates with the concepts of distributed computing and client/server architecture. A client/server architecture, based on a LAN environment, includes servers (called database servers or database engines when they run database software) that execute database commands sent from client workstations, while the application programs on each client concentrate on their user interface functions. In fact, we can think of the whole database, together with the application routines that process it, as distributed: either a distributed database, or separate but related physical databases, spread across local PC workstations, intermediate servers (workgroup or departmental), and one central server (departmental or enterprise).
Simply put, the reasons for using a client/server architecture are these:

● It allows multiple processors to handle the same application simultaneously, improving application response time and data-processing speed.
● It can exploit the best data-processing features of each computer platform (for example, the advanced user interface of a PC combined with the computing speed of a minicomputer or mainframe).
● It can mix various client technologies (personal computers with Intel or Motorola processors, computer networks, information kiosks, and so on) while sharing common data. In addition, the technology at any layer can be changed with only a small influence on the modules of the other layers.
● It can process data close to the source of the data, improving response times and reducing network traffic.
● It allows for and encourages the acceptance of open-systems standards.

For database development, the most significant consequence of a multi-tiered client/server database architecture is that it is easy to separate the development and maintenance of the database modules from the information system modules that present the database's contents to end users. Easy-to-use graphical user interfaces for these presentation routines can be provided by languages such as PowerBuilder, Java, and Visual Basic. Through middleware, the presentation routines can interact with access routines on other tiers, and those access routines retrieve the necessary data and analyze it to form the required information. As a database developer and programmer, you can work at any of these three tiers, developing the necessary software.
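The three layers described above can be sketched as functions with a strict calling direction — the client layer never touches storage directly, and each tier exposes only what the tier above needs. The layer names and the account data below are invented for illustration:

```python
# Data services layer (enterprise server): owns the data source.
_DB = {"ann": {"balance": 120.0}, "bob": {"balance": 45.0}}

def data_service_get(account):
    """The only code that reads the store."""
    return _DB[account]

# Processing services layer (server / Web server): business
# computation on top of data access.
def processing_service_report(account):
    rec = data_service_get(account)
    return {"account": account,
            "balance": rec["balance"],
            "status": "ok" if rec["balance"] >= 0 else "overdrawn"}

# Client layer: user interface only -- formats what the middle
# tier returns, with no knowledge of how the data is stored.
def client_render(account):
    rep = processing_service_report(account)
    return f"{rep['account']}: {rep['balance']:.2f} ({rep['status']})"
```

Because each tier depends only on the interface of the tier below, the storage dictionary could be replaced by a real database engine without changing the client code — the "small influence on other layers" advantage listed above.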

Database Management Systems — Translated Foreign Literature Review with Chinese-English Parallel Text (Graduation Thesis)


English original (translated): An Introduction to Database Management Systems
Raghu Ramakrishnan

A database (sometimes spelled "data base"), also called an electronic database, is a collection of data or information organized especially so that a computer can search and retrieve it rapidly.

Databases are structured so that, supported by various data-processing commands, the storage, retrieval, modification, and deletion of data are simplified.

A database can be stored on magnetic disk, magnetic tape, optical disc, or other secondary storage devices.

A database consists of one file or a set of files whose information can be broken down into records, each of which contains one or more fields.

Fields are the basic units of data access.

A database is used to describe entities, and a field typically holds information related to one attribute of an entity.

Using keys and various sorting commands, users can query and rearrange the fields of many records, group or select them so as to retrieve a particular class of data about an entity, and generate reports.

All databases except the simplest contain complex data relationships and links.

The system software package that handles the complex tasks of creating, accessing, and maintaining database records is called a database management system (DBMS).

The programs in a DBMS package establish an interface between the database and its users (application programmers, administrators and others who need information, and various operating system programs), and the DBMS can organize, process, and present the data elements selected from the database.

This capability lets decision makers search, probe, and query the contents of the database in order to answer nonrecurring and unplanned questions that regular reports do not cover.

These questions may at first be vague and/or poorly defined, but the user can browse the database until the needed information is obtained.

In short, a DBMS "manages" the stored data items and assembles the needed items from the common database to answer the queries of nonprogrammers.

A DBMS consists of three main components: (1) a storage subsystem, which stores and retrieves the data in files; (2) a modeling and manipulation subsystem, which provides the means to organize the data and to add, delete, maintain, and update it; and (3) an interface between the user and the DBMS.

Several important trends are increasing the value and effectiveness of database management systems: 1. Managers need up-to-date information to make effective decisions.
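The store/retrieve/modify/delete operations a DBMS mediates, and the "query and reorder by key" ability described above, can be shown concretely with Python's built-in sqlite3 module. The table and its rows are an invented example:

```python
import sqlite3

# The DBMS (here SQLite) sits between the user and the stored
# records: we state WHAT we want; the storage subsystem handles
# the underlying files.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, score REAL)")
conn.executemany("INSERT INTO student (name, score) VALUES (?, ?)",
                 [("Li", 88.0), ("Wang", 92.5), ("Zhao", 76.0)])

# Retrieval with sorting on a field -- records reordered by key.
top = conn.execute(
    "SELECT name FROM student ORDER BY score DESC LIMIT 1").fetchone()[0]

# Modification and deletion go through the same interface.
conn.execute("UPDATE student SET score = 80.0 WHERE name = 'Zhao'")
conn.execute("DELETE FROM student WHERE name = 'Li'")
remaining = conn.execute("SELECT COUNT(*) FROM student").fetchone()[0]
```

No program here opens a file or scans records itself; every operation is expressed declaratively and carried out by the DBMS's storage subsystem, which is exactly the division of labor the three-component description sets out.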

Management Information Systems — Translated Foreign Literature with Chinese-English Parallel Text


Chinese-English Translation: Management Information Systems. There is no agreed definition of the term "management information system".

Some writers prefer alternative terms such as "information processing system", "information and decision system", or "organizational information system", or simply use "information system" to refer to the computer-based information processing system that supports an organization's operations, management, and decision-making functions.

This article uses "management information system" because it is widely understood; "information system" is also often used in place of "management information system" when referring to organizational information systems.

A management information system is usually defined as an integrated user-machine system that provides information to support an organization's operations, management, and decision-making functions.

The system utilizes computer hardware and software; manual procedures; models for analysis, planning, control, and decision making; and a database.

The fact that it is an integrated system does not mean that it is a single, monolithic structure; rather, it means that the parts fit into an overall design.

The elements of the definition are as follows: a computer-based user-machine system. In principle, a management information system could exist without a computer, but it is the computer that makes an MIS feasible.

The question is not whether computers are used in an MIS, but the extent to which information use is computerized.

The concept of a user-machine system implies that some tasks are best performed by humans, while others are best done by machines.

The users of an MIS are those who are responsible for entering input data, instructing the system, or using its information products.

For many problems, the user and the computer form a combined system whose results are obtained through a set of interactions between the computer and the user.

User-machine interaction is facilitated by input-output devices (usually a visual display terminal) that connect the user to the computer.

The computer may be a personal machine serving one user or a large machine serving a number of users through terminals connected by communication lines.

The user's input-output devices allow direct input of data and immediate output of results.

For example, a person using the computer interactively for financial planning poses "what if" questions by typing on a terminal keyboard, and the results are displayed on the screen within seconds.
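A "what if" financial query of the kind described can be sketched as a tiny program. The loan-payment scenario and function name below are assumptions made for illustration; the computation uses the standard annuity formula p·r / (1 − (1+r)^−n), with r the monthly rate:

```java
public class WhatIf {
    // "What if I borrow `principal` at annual rate `annualRate`
    // over `months` payments?" Standard annuity formula.
    static double monthlyPayment(double principal, double annualRate, int months) {
        double r = annualRate / 12.0;          // monthly interest rate
        if (r == 0) return principal / months; // interest-free edge case
        return principal * r / (1 - Math.pow(1 + r, -months));
    }

    public static void main(String[] args) {
        // e.g. "What if I borrow 10,000 at 6% for 24 months?"
        System.out.printf("%.2f%n", monthlyPayment(10_000, 0.06, 24));
    }
}
```

An interactive MIS would wrap such a model behind a terminal prompt so the user can vary the inputs and see new answers in seconds.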

The computer-based user-machine character of an MIS affects the knowledge requirements of both the system developer and the system user.

"Computer-based" means that the designer of a management information system must have knowledge of computers and of their use in processing.

The "user-machine" concept means that the system designer should also understand the capabilities of humans as components of the system (as information processors) and the behavior of humans as users of information.

Foreign Literature Translation (Chinese-English) for Undergraduate Theses in Computer Science and in Information Management and Information Systems

School of Management, Wuhan Textile University, English translation for the class of 2012 (second semester of the 2011-2012 academic year). Materials: English original and Chinese printed copy. ────────────────────── Major: Information Management and Information Systems ──────────── Class: ──────────── Name: ──────────── Supervisor: ─────────── Serial number: May 17, 2012

Foreign-language material

As information technology advances, various management systems have emerged that make daily life more coherent and, to the extent possible, use network resources to significantly reduce the inconvenience and wasted time of manual management. With the accelerating modernization of the 21st century and the continuous improvement of scientific and cultural levels, the rapid growth in student numbers will inevitably increase the pressure of managing student information; inefficient manual retrieval is completely incompatible with society's needs. The student information management system is one kind of information management system. With the continuous development of information technology, network technology has been applied extensively in every trade around us; thanks to it, colleges now use computers to run their academic administration, and the tedious affairs formerly handled by hand are resolved quickly and efficiently. A student score management system in particular plays a large role in a school, making it more convenient and faster for students and teachers to access, understand, and manage everyone's information accurately.

Abstract

Managing a bulky database by manpower is a heavy and dull job: data entry, retrieval, and modification suffer from a large workload, low efficiency, and long turnaround. A computer-based management system therefore brings a substantial change. Because there are so many students in a school, the volume of student information is huge, which makes managing that information complicated and tedious.
This system is aimed at the school: after a practical requirements analysis, the student information management system was developed with the powerful VB 6.0. The overall design follows the principles of simple operation, an attractive and lively interface, and practicality. The system includes functions for system management, basic information management, study management, reward and punishment management, report printing, and so on. Use in practice has shown that the system designed here satisfies the school's needs for managing student information. The thesis introduces the background of the development, the functions required, and the design process, and explains the main points of the system design, the design approach, the difficult techniques, and their solutions. The creation of the student management system greatly reduces the manual effort involved and makes the management of student data more scientific and reasonable. The most distinctive feature of this system is the unified management of student information in a backend database. The system is divided into system management, student major management, student file management, tuition management, course management, score management, and report printing. The interface was created with VB; the modules above all connect to the backend database through VB data-bound controls. The backend database consists of roughly the following tables: major information, charge categories, student jobs, student information, student political status, and user login. The system uses a client/server design: the data reside on one server, and a number of workstations form a LAN.
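The record-keeping operations the text describes for the student information table (insert, query, delete) can be sketched in memory. The original system was written in VB 6.0 against a real backend database, so the Student fields and class names below are assumptions made purely for illustration:

```java
import java.util.*;

public class StudentRegistry {
    // Illustrative stand-in for the backend "student information" table;
    // these field names are assumptions, not the original system's schema.
    record Student(String id, String name, String major, double gpa) {}

    private final Map<String, Student> table = new HashMap<>();

    void add(Student s) { table.put(s.id(), s); }                  // insert a record
    Optional<Student> find(String id) {                            // query by key field
        return Optional.ofNullable(table.get(id));
    }
    boolean remove(String id) { return table.remove(id) != null; } // delete a record

    public static void main(String[] args) {
        StudentRegistry reg = new StudentRegistry();
        reg.add(new Student("2012001", "Li", "MIS", 3.6));
        System.out.println(reg.find("2012001").orElseThrow().name());
    }
}
```

In the real system these operations would be SQL statements issued through the data-bound controls; the map here only mimics the keyed lookup a database table provides.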
Different users have different permissions: when users submit personal data, the backend database quickly shows each user only the content that user is authorized to see. Score management is an important part of school work, and the original manual management had many shortcomings: the student population is large and each student's information is complex, so the workload is extremely heavy and statistics and queries are inconvenient. How to resolve these shortcomings and make score management more convenient, faster, and more efficient has therefore become a key question. With the rapid development of science and technology, automating score management has become increasingly urgent, so it is essential to develop score-registration software to assist school teaching management, improve score management, and raise management efficiency.

"We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement that holds throughout our speech community and is codified in the patterns of our language … we cannot talk at all except by subscribing to the organization and classification of data which the agreement decrees." Benjamin Lee Whorf (1897-1941)

The genesis of the computer revolution was in a machine. The genesis of our programming languages thus tends to look like that machine. But computers are not so much machines as they are mind amplification tools ("bicycles for the mind," as Steve Jobs is fond of saying) and a different kind of expressive medium. As a result, the tools are beginning to look less like machines and more like parts of our minds, and also like other forms of expression such as writing, painting, sculpture, animation, and filmmaking.
Object-oriented programming (OOP) is part of this movement toward using the computer as an expressive medium. This chapter will introduce you to the basic concepts of OOP, including an overview of development methods. This chapter, and this book, assumes that you have some programming experience, although not necessarily in C. If you think you need more preparation in programming before tackling this book, you should work through the Thinking in C multimedia seminar, downloadable from . This chapter is background and supplementary material. Many people do not feel comfortable wading into object-oriented programming without understanding the big picture first. Thus, there are many concepts that are introduced here to give you a solid overview of OOP. However, other people may not get the big-picture concepts until they've seen some of the mechanics first; these people may become bogged down and lost without some code to get their hands on. If you're part of this latter group and are eager to get to the specifics of the language, feel free to jump past this chapter—skipping it at this point will not prevent you from writing programs or learning the language. However, you will want to come back here eventually to fill in your knowledge so you can understand why objects are important and how to design with them. All programming languages provide abstractions. It can be argued that the complexity of the problems you're able to solve is directly related to the kind and quality of abstraction. By "kind" I mean, "What is it that you are abstracting?" Assembly language is a small abstraction of the underlying machine. Many so-called "imperative" languages that followed (such as FORTRAN, BASIC, and C) were abstractions of assembly language. These languages are big improvements over assembly language, but their primary abstraction still requires you to think in terms of the structure of the computer rather than the structure of the problem you are trying to solve.
The programmer must establish the association between the machine model (in the "solution space," which is the place where you're implementing that solution, such as a computer) and the model of the problem that is actually being solved (in the "problem space," which is the place where the problem exists). The object-oriented approach goes a step further by providing tools for the programmer to represent elements in the problem space. This representation is general enough that the programmer is not constrained to any particular type of problem. We refer to the elements in the problem space and their representations in the solution space as "objects." (You will also need other objects that don't have problem-space analogs.) The idea is that the program is allowed to adapt itself to the lingo of the problem by adding new types of objects, so when you read the code describing the solution, you're reading words that also express the problem. This is a more flexible and powerful language abstraction than what we've had before. Thus, OOP allows you to describe the problem in terms of the problem, rather than in terms of the computer where the solution will run. There's still a connection back to the computer: each object looks quite a bit like a little computer—it has a state, and it has operations that you can ask it to perform. However, this doesn't seem like such a bad analogy to objects in the real world—they all have characteristics and behaviors. Java is making possible the rapid development of versatile programs for communicating and collaborating on the Internet. We're not just talking word processors and spreadsheets here, but also applications to handle sales, customer service, accounting, databases, and human resources--the meat and potatoes of corporate computing.
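The analogy above, an object as a little computer with state and operations you can ask it to perform, can be shown with a minimal class (the Counter example is mine, not the book's):

```java
public class Counter {
    // An object "looks like a little computer": it has state...
    private int count;

    // ...and operations you can ask it to perform.
    public void increment() { count++; }
    public int value() { return count; }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        System.out.println(c.value());
    }
}
```

Each Counter instance keeps its own state, and the only way to change that state is through the operations the object offers.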
Java is also making possible a controversial new class of cheap machines called network computers, or NCs, which Sun, IBM, Oracle, Apple, and others hope will proliferate in corporations and our homes. The way Java works is simple. Unlike ordinary software applications, which take up megabytes on the hard disk of your PC, Java applications, or "applets," are little programs that reside on centralized servers on the network, which delivers them to your machine only when you need them. Because the applets are so much smaller than conventional programs, they don't take forever to download. Say you want to check out the sales results from the southwest region. You'll use your Internet browser to find the corporate Internet website that dishes up financial data and, with a mouse click or two, ask for the numbers. The server will zap you not only the data, but also the sales-analysis applet you need to display it. The numbers will pop up on your screen in a Java spreadsheet, so you can noodle around with them immediately rather than hassle with importing them to your own spreadsheet program.

Database Security: Foreign Literature Translation (English Original)

Database Security

"Why do I need to secure my database server? No one can access it—it's in a DMZ protected by the firewall!" This is often the response when it is recommended that such devices be included within a security health check. In fact, database security is paramount in defending an organization's information, as it may be indirectly exposed to a wider audience than realized. This is the first of two articles that will examine database security. In this article we will discuss general database security concepts and common problems. In the next article we will focus on specific Microsoft SQL and Oracle security concerns. Database security has become a hot topic in recent times. With more and more people becoming increasingly concerned with computer security, we are finding that firewalls and Web servers are being secured more than ever (though this does not mean that there are not still a large number of insecure networks out there). As such, the focus is expanding to consider technologies such as databases with a more critical eye.

◆ Common sense security

Before we discuss the issues relating to database security it is prudent to highlight the necessity of securing the underlying operating system and supporting technologies. It is not worth spending a lot of effort securing a database if a vanilla operating system is failing to provide a secure basis for the hardening of the database. There are a large number of excellent documents in the public domain detailing measures that should be employed when installing various operating systems. One common problem that is often encountered is the existence of a database on the same server as a web server hosting an Internet (or intranet) facing application. Whilst this may save the cost of purchasing a separate server, it does seriously affect the security of the solution. Where this is identified, it is often the case that the database is openly connected to the Internet.
One recent example I can recall is an Apache Web server serving an organization's Internet offering, with an Oracle database available on the Internet on port 1521. When investigating this issue further it was discovered that access to the Oracle server was not protected (including a lack of passwords), which allowed the server to be stopped. The database was not required from an Internet-facing perspective, but the use of default settings and careless security measures rendered the server vulnerable. The points mentioned above are not strictly database issues, and could also be classified as architectural and firewall protection issues, but ultimately it is the database that is compromised. Security considerations have to be made for all parts of a public-facing network. You cannot rely on someone or something else within your organization protecting your database from exposure.

◆ Attack tools are now available for exploiting weaknesses in SQL and Oracle

I came across one interesting aspect of database security recently while carrying out a security review for a client. We were performing a test against an intranet application, which used a database back end (SQL) to store client details. The security review was proceeding well, with access controls being based on Windows authentication. Only authenticated Windows users were able to see data belonging to them. The application itself seemed to be handling input requests correctly, rejecting all attempts to access the database directly. We then happened to come across a backup of the application in the office in which we were working. This media contained a backup of the SQL database, which we restored onto our laptop. All security controls that were in place originally were not restored with the database, and we were able to browse the complete database, with no restrictions in place to protect the sensitive data.
This may seem like a contrived way of compromising the security of the system, but it does highlight an important point. It is often not the direct approach that is taken to attack a target, and ultimately the endpoint is the same: system compromise. A backup copy of the database may be stored on the server, and thus facilitates access to the data indirectly. There is a simple solution to the problem identified above. SQL 2000 can be configured to use password protection for backups. If the backup is created with password protection, this password must be used when restoring the backup. This is an effective and uncomplicated method of stopping simple capture of backup data. It does however mean that the password must be remembered!

◆ Current trends

There are a number of current trends in IT security, with a number of these being linked to database security. The focus on database security is now attracting the attention of the attackers. Attack tools are now available for exploiting weaknesses in SQL and Oracle. The emergence of these tools has raised the stakes, and we have seen focused attacks against specific database ports on servers exposed to the Internet. One common theme running through the security industry is the focus on application security, and in particular bespoke Web applications. With the functionality of Web applications becoming more and more complex, it brings the potential for more security weaknesses in bespoke application code. In order to fulfill the functionality of applications, the backend data stores are commonly being used to format the content of Web pages. This requires more complex coding at the application end. With developers using different styles in code development, some of which are not as security conscious as others, this can be the source of exploitable errors. SQL injection is one such hot topic within the IT security industry at the moment.
Discussions are now commonplace among technical security forums, with more and more ways and means of exploiting databases coming to light all the time. SQL injection is a misleading term, as the concept applies to other databases as well, including Oracle, DB2, and Sybase.

◆ What is SQL Injection?

SQL injection is simply the method of communicating with a database using code or commands sent via a method or application not intended by the developer. The most common form of this is found in Web applications. Any user input that is handled by the application is a common source of attack. One simple example of mishandling of user input is highlighted in Figure 1. Many of you will have seen this common error message when accessing web sites; it often indicates that the user input has not been correctly handled. On getting this type of error, an attacker will focus in with more specific input strings. The damage done by this type of vulnerability can be far reaching, though this depends on the level of privileges the application has in relation to the database. If the application is accessing data with full administrator-type privileges, then maliciously run commands will also pick up this level of access, and system compromise is inevitable. Again this issue is analogous to operating system security principles, where programs should only be run with the minimum of permissions that is required. If normal user access is acceptable, then apply this restriction. Again, the problem of SQL security is not totally a database issue. Specific database commands or requests should not be allowed to pass through the application layer.
This can be prevented by employing a "secure coding" approach. Again this is veering off-topic, but it is worth detailing a few basic steps that should be employed. The first step in securing any application should be the validation and control of user input. Strict typing should be used where possible to control specific data (e.g. if numeric data is expected), and where string-based data is required, specific non-alphanumeric characters should be prohibited where possible. Where this cannot be done, consideration should be given to substituting characters (for example single quotes, which are commonly used in SQL commands). Specific security-related coding techniques should be added to the coding standards in use within your organization. If all developers are using the same baseline standards, with specific security measures, this will reduce the risk of SQL injection compromises. Another simple method that can be employed is to remove all procedures within the database that are not required. This restricts the extent to which unwanted or superfluous aspects of the database could be maliciously used. This is analogous to removing unwanted services on an operating system, which is common security practice.

◆ Overall

In conclusion, most of the points I have made above are common sense security concepts, and are not specific to databases. However all of these points DO apply to databases, and if these basic security measures are employed, the security of your database will be greatly improved. The next article on database security will focus on specific SQL and Oracle security problems, with detailed examples and advice for DBAs and developers. There are a lot of similarities between database security and general IT security, with generic simple security steps and measures that can be (and should be) easily implemented to dramatically improve security.
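The input-validation and substitution advice above can be sketched in Java. The table and column names below are invented for the example, and the parameterized-query lines are shown as comments because they need a live JDBC connection; only the validation and the deliberately unsafe string concatenation are executed:

```java
public class InputValidation {
    // Strict typing: if numeric data is expected, accept digits only.
    static boolean isValidNumericId(String input) {
        return input != null && !input.isEmpty()
            && input.chars().allMatch(Character::isDigit);
    }

    // Unsafe: attacker-controlled input concatenated straight into SQL.
    static String naiveQuery(String userId) {
        return "SELECT * FROM clients WHERE id = '" + userId + "'";
    }

    public static void main(String[] args) {
        String attack = "1' OR '1'='1";
        System.out.println(isValidNumericId(attack)); // rejected by validation
        System.out.println(naiveQuery(attack));       // shows the injected clause
        // With JDBC, the safe form is a parameterized query:
        //   PreparedStatement ps =
        //       conn.prepareStatement("SELECT * FROM clients WHERE id = ?");
        //   ps.setString(1, userId); // the driver escapes the value
    }
}
```

Validation rejects the classic `' OR '1'='1` payload outright, while the naive query shows how, without it, the payload rewrites the WHERE clause to match every row.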
While these may seem like common sense, it is surprising how many times we have seen that common security measures are not implemented and so cause a security exposure.

◆ User account and password security

One of the basic first principles in IT security is "make sure you have a good password". Within this statement I have assumed that a password is set in the first place, though this is often not the case. I touched on common sense security in my last article, but I think it is important to highlight this again. As with operating systems, the focus of attention within database account security is aimed at administration accounts. Within SQL this will be the SA account, and within Oracle it may be the SYSDBA or ORACLE account. It is very common for SQL SA accounts to have a password of 'SA' or, even worse, a blank password, which is just as common. This password laziness breaks the most basic security principles and should be stamped down on. Users would not be allowed to have a blank password on their own domain account, so why should valuable system resources such as databases be allowed to be left unprotected? For instance, a blank 'SA' password will enable any user with client software (i.e. Microsoft Query Analyzer or Enterprise Manager) to 'manage' the SQL server and databases. With databases being used as the back end to Web applications, the lack of password control can result in a total compromise of sensitive information. With system-level access to the database it is possible not only to execute queries against the database and create/modify/delete tables etc., but also to execute what are known as stored procedures.

Computer and Database Foreign Literature Translation (Chinese-English)

Foreign scientific literature: The Future "Soul" of Microsoft — Exploring the Secrets of SQL Server 2005
Author: CHEN Bao-lin

A brief history of SQL Server development

Before we begin, let us look at a brief history of Microsoft SQL Server's development.

1988: SQL Server was jointly developed by Microsoft and Sybase, running on the OS/2 platform.

1993-09-14: SQL Server 4.2, a desktop database system with limited functionality, integrated with Windows and providing an easy-to-use user interface.

1994: Microsoft and Sybase suspended their cooperation on database development.

1995: SQL Server 6.0, code-named "SQL95". Microsoft rewrote most of the core system, providing a low-cost database solution for small business applications.

1996-04-16: SQL Server 6.5. This version brought significant performance improvements and a wide variety of useful functions.

1998-11-16: SQL Server 7.0, code-named "Sphinx". The core database engine was completely rewritten, providing a database solution for small and medium business applications, with initial Web support. SQL Server has been widely used from this version onward.

2000-08-07: the birth of SQL Server 2000, code-named "Shiloh". Microsoft positioned the product as an enterprise-class database system comprising three components (DB, OLAP, English Query). Rich front-end tools, improved development tools, and XML support drove the adoption of this version. It comprises the following editions.

Enterprise Edition: supports giant terabyte-class databases and thousands of concurrent online users through cluster deployment.
Standard Edition: supports small and medium enterprises.
Personal Edition: supports desktop applications.
Developer Edition: for enterprise developers building applications, including for Windows CE.
Windows CE Edition: can be used on any Windows CE mobile device.

2003-04-24: SQL Server 2000, 64-bit version.
Code-named "Liberty", it competed with Oracle on Unix/Linux. 2005-11-07: SQL Server 2005, code-named "Yukon", the latest version of Microsoft's SQL Server products. Microsoft described this product, five years in the making, as a landmark release. From SQL Server 4.2 to 2005: since entering the database market in the early 1990s, and up to the launch of SQL Server 2005, Microsoft behaved in the enterprise database market like a follower restructuring itself into a leader. The sword was sharpened for ten years, and through many a storm Microsoft's perspective on enterprise database management has extended to a broader and deeper realm. This paper attempts to explore and summarize the formative history of Microsoft SQL Server. In 1987, Sybase developed a version of SQL Server running on Unix systems. In 1988, Microsoft invited Sybase, then busy building momentum in the database field, to develop SQL Server jointly. Microsoft's intention to enter the database market was plain for all to see, and the database market was bound to be stirred up. Sure enough, the database market then went through a decade of intense, Warring States-style competition. On 1993-04-12, Microsoft released SQL Server 4.2. Echoing the earlier introduction of Windows NT, this marked Microsoft's formal entry into the enterprise applications market, and the database is the most important enterprise application. Although SQL Server 4.2 was still just a desktop edition, it showed considerable potential. In 1994, Microsoft and Sybase formally suspended their database development cooperation, a meaningful step. From 1995 to 2000, Microsoft shipped four versions: 6.0, 6.5, 7.0, and 2000.
From a service perspective, SQL Server 2000 was able to provide the following services.

Online services: "on-line" refers to data services used by users online in real time.

Online transaction processing (OLTP): OLTP serves transactional workloads such as order processing; a transaction either completes in full or is undone entirely. This is the most universal and most widely used form of service.

Online analytical processing (OLAP): OLAP presents data multidimensionally (as in data warehouses, data marts, and data cubes) and is usually used for data mining. Where OLTP uses SQL to operate on and define data, OLAP uses MDX (MultiDimensional Expressions) to access and define data.

The technical structure of SQL Server 2000 is as follows.

Data structure
• physical structure: how the data are physically organized.
• logical structure: how Tables, rows, columns, and other data objects are defined.

Data processing
• storage engine: responsible for how the data are kept.
• relational engine: responsible for how the data are accessed and related.
• SQL Server Agent: responsible for task scheduling and event management.

Data manipulation
• DB APIs: ADO (ActiveX Data Objects), OLE DB (Object Linking and Embedding for Databases), DB-Library for C++, ODBC (Open Database Connectivity), ESQL (Embedded SQL).
• URLs (uniform resource locator addresses).
• English Query.
• SQL Server Enterprise Manager.
• Tools: Query Analyzer, DTS (Data Transformation Services), backup/restore and replication, metadata services, extended stored procedures, and SQL tracing, which can be used for performance tuning.

From the user's experience, SQL Server 2000 offered a number of new features, such as XML support, multiple-instance support, data warehousing and business intelligence enhancements, performance and scalability improvements,
operation guides, and enhancements to queries, DTS, and Transact-SQL. In license price, SQL Server 2000's price and total cost of ownership (TCO) were only one half to one third those of Oracle or DB2. In summary, Microsoft's high-performance, low-cost product concept succeeded on the market. SQL Server 2000 could meet both OLTP and OLAP deployment needs with good performance, at a price lower than Oracle, DB2, and other databases. Meanwhile, SQL Server 2000 shipped in Enterprise, Standard, and other editions to meet the demands of different levels of users. These factors won SQL Server 2000 a significant share of the SME market and gave Microsoft the opportunity to enter the ranks of mainstream database vendors. At the same time, we should recognize that SQL Server 2000 still lacked the high-end enterprise-level functions of Oracle's later releases, so the historic mission of closing that gap fell to the new version code-named "Yukon".

The killer code-named "Yukon"

From the 1989 release of Microsoft SQL Server 1.0 to today is a full 15 years. In those 15 years SQL Server grew from scratch, from small to large, and its story has been legendary. It has not only eroded the database market share of IBM and Oracle; the next generation of SQL Server has also begun gradually to become the core of the next Windows operating system. The "seamless computing" that Bill Gates constantly repeats is the core of Yukon. What kind of world will the next-generation database code-named "Yukon" bring us?

The Internet's "soft" pillar

In today's networked era, data searching, data storage, data classification, and the like have become the "soft" pillars of the Internet, and the database system is the most critical of these pillars. Without database support, we would never be able to search with Google or Baidu for the information we need,
nor use convenient e-mail, because the network world is made up of large databases. According to IDC's latest figures, the global database software market seems to be stirring: total revenue in 2003 reached 13.6 billion U.S. dollars, up from 12.6 billion U.S. dollars in 2002. Oracle, IBM, and Microsoft now control 75% of the market: last year Oracle held a 39.8% share, IBM 31.3%, and Microsoft 12.1%. What is a database? University computer textbooks explain it this way: a database is a specialized data-resource management system within a computer application system. Data come in many forms, such as text, numbers, symbols, graphics, images, and sound, and all of them are objects for computer systems to process. The familiar approach is file-based: a program is written to process documents, the data required by the program are organized into data files, and the files are called by the program, so data files and program files maintain a fixed relationship. As computer applications developed rapidly, the shortcomings of this file-based approach stood out. For example, data are poorly defined and hard to port, information stored in different files is heavily duplicated and wastes storage space, and updates are inconvenient. Database systems solve these problems.
A database system frees data management from specific application programs: all data are stored in a database and organized scientifically, and the database management system serves as an intermediary through which a variety of applications or application interfaces can easily access the data in the database. That description is indeed very detailed, but it may leave you dizzy. In fact, a database is simply a group of collated data stored by a computer in one or more files, and the software that manages the database is called a database management system. A typical database system can be divided into two parts, the database (Database) and the database management system (DBMS), and all of these constitute the "soft" pillars of the Internet. Microsoft's SQL Server database software, through many upgrades from version 6.5 to 7.0, gradually became mainstream, and SQL Server 2000 proved that the Windows operating system can carry the same high-end data applications and serve as mainstream database management software for business applications. It broke the myth that serious database software must run on large Unix systems. What kind of change will the next generation, SQL Server 2005, bring?

Yukon's core secrets

While planning the next version of SQL Server (code-named "Yukon"), Microsoft gave much thought to the future development of the database and to SQL Server's programming capabilities. Microsoft's internal developers had long been aware that the future required a more unified programming model while providing more flexibility for different data models. A unified programming model means that ordinary data access and manipulation tasks can be carried out through various channels.
For example, you can choose XML, the .NET Framework, or Transact-SQL (T-SQL) code, and so on.

The result of this planning is a new database programming platform that is a natural extension in many directions. First, hosting the .NET Framework common language runtime (CLR) extends database programming into the realm of procedures written in managed code. Second, .NET Framework hosting integration gives SQL Server powerful object-database capabilities from within. In-depth XML support is achieved through the xml data type, which carries all the capabilities of a relational data type; server-side support was also added for the XML Query (XQuery) and XML Schema Definition (XSD) standards. Finally, SQL Server Yukon includes important enhancements to the T-SQL language.

XML's history in SQL Server really begins with SQL Server 2000, which introduced the ability to expose relational data in XML format, to bulk-load and shred XML documents into the database, and to open database objects to XML-based Web services, among other functions. Yukon, however, provides far more advanced XML query functionality, and once perfected it will bring XML's advantages into full play. Why is XML so critical? XML began as a line format, an alternative to HTML, and is now regarded as a storage format as well. Persistent storage of XML has drawn widespread attention, and XML data types are applied heavily across the Internet. XML is a cross-platform data format: it started life as a file format, and as it gained broad acceptance in the enterprise, users began applying it to thorny business problems such as data integration. Because XML produces the same results on any platform, it has developed into a mainstream database storage format.
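The idea of storing XML as a first-class, queryable value can be illustrated outside SQL Server with Python's standard xml.etree module. This is only an analogy for the xml data type and XQuery support described above, not Yukon's actual API, and the order document below is invented for the example.

```python
import xml.etree.ElementTree as ET

# An XML value of the kind the passage says Yukon can store natively.
doc = ET.fromstring("""
<order id="1001">
  <item sku="A1" qty="2"/>
  <item sku="B7" qty="5"/>
</order>
""")

# A path query over the typed value, in the spirit of XQuery:
# select the qty attribute of every item element, then aggregate.
qtys = [int(item.get("qty")) for item in doc.findall("item")]
total = sum(qtys)
print(total)  # 7
```

The design point the passage makes is the same: once XML is a typed value rather than opaque text, the server can query inside it instead of handing the whole document back to the application.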
Yukon's built-in, comprehensive XML support will trigger a new revolution in database technology.

These new programming models and language enhancements create a set of programmability features that complement and extend the current relational database model. The ultimate aim of this architecture is to build more scalable, more reliable, more robust applications, and to raise development efficiency. Another result of these models is a new application framework, the SQL Service Broker, a distributed application framework for delivering messages from asynchronous sources.

The century gamble behind Yukon

Having recited this string of technical advantages, you may well be curious: why introduce what appears to be such high-end technology into database application software? Perhaps we should look elsewhere for the answer.

The richest man on Earth has made predictions about the future of computing. He believes that in the world to come, every ordinary computer will have a large enough super hard disk; by then a hard disk will no longer be a mere 80 GB but more likely 80 TB. Although that is only a change from GB to TB, it means a full thousand-fold upgrade in disk capacity, and NTFS, the existing Windows disk data storage format, simply cannot cope with data searches over hard disks of such capacity. To give a vivid example: with 100 TB of disk space on your computer, if you were still using Windows XP, defragmenting the disk would likely take two days and two nights, and finding a particular document would keep you waiting for hours. The feeling would be like returning to the days of the 286.

To solve this thorny problem, the next-generation Windows operating system, Longhorn, adopts a programming model completely different from previous versions of Windows. Its core is Avalon (a development code name), the new Windows GUI library. Longhorn also brings in the new Indigo (Web services) and WinFS (file system) functions.
Together, Avalon, Indigo and WinFS make up the new API set known as WinFX, Longhorn's new "native" API. Although the Win32 API continues to be supported for compatibility, using the new Longhorn functionality normally means using WinFX. WinFX builds on the present .NET Framework: it uses the existing .NET Framework class libraries, and its DLL-based programming and execution mechanisms are essentially the same as .NET's.

The .NET Framework will be hosted inside SQL Server when Yukon ships as a major version upgrade, with a target date of the end of 2004. Inside Yukon, the .NET Framework runs within the server, so stored procedures can use the .NET Framework class library. Yukon runs version 2.0 of the .NET Framework, which supplements .NET Framework 1.1 in areas such as multimedia classes. WinFS, in turn, uses the Yukon engine; in other words, the Longhorn file system will be built on a database engine.

Now you can see that in the next-generation Windows operating system, the management of all file data will be brought under SQL Server, and the data-query and data-integration capabilities of our computers will be greatly enhanced. This is a critical step in what the world's richest man keeps calling "seamless computing". Integrating the database software with the operating system is undoubtedly Microsoft's gamble of the century: if it succeeds, Microsoft will gradually dominate the database market; if it fails, it could even disrupt the normal launch schedule of the next generation of Windows.

Microsoft already provides tools that encrypt data transmitted over the network between SQL Server and client applications. Microsoft product manager Kirsten Ward has said, however, that the new SQL Server database planned for release next year will encrypt data as it is stored, raising its defenses against hacker attacks.

Earlier this year Microsoft postponed the release of "SQL Server 2005" to the first half of next year.
The launch of this database software will enhance Microsoft's database computing power and let it compete better with Oracle and IBM. Microsoft will also introduce a unified storage concept that makes locating and retrieving data more convenient. Oracle has long held the leading position in the Windows and Unix database markets, but Microsoft has made remarkable progress this year by adding more advanced functions to SQL Server.

In addition, Microsoft will provide a software tool called the "Best Practices Analyzer Tool". Database administrators can use it to debug their database software against guides edited by Microsoft. The tool applies to the current version of Microsoft's database software, "SQL Server 2000", and offers administrators operational guidance in various areas, for example how to improve performance and how to carry out data backups more effectively.

Ward said the tool also includes an "Upgrade Advisor" program, which can scan database applications and warn "SQL Server 2000" users of the amendments needed to make their programs compatible with the forthcoming "SQL Server 2005".

(Source: China Computer Education)

Chinese translation: The "Soul" of Microsoft's Future — Inside SQL Server 2005. Author: Chen Baolin. A brief history of SQL Server: before starting this article, let us first review the brief history of Microsoft SQL Server's development.

Information Systems and Information Technology: Chinese-English Foreign Literature Translation


Chinese-English Foreign Literature Translation: Information Systems Outsourcing Life Cycle and Risk Analysis

1. Introduction

Information systems outsourcing has obtained tremendous attention in the information technology industry. Although there are a number of reasons for companies to pursue information systems (IS) outsourcing, the most prominent motivation for IS outsourcing revealed in the literature is "cost saving". Cost has been a major decision factor for IS outsourcing, but there are other reasons for the outsourcing decision. The Outsourcing Institute surveyed outsourcing end-users from its membership in 1998 and found that the top 10 reasons companies outsource were: reduce and control operating costs, improve company focus, gain access to world-class capabilities, free internal resources for other purposes, resources not available internally, accelerate reengineering benefits, function difficult to manage or out of control, make capital funds available, share risks, and cash infusion. Within these top ten outsourcing reasons, three items relate to financial concerns: operating costs, capital funds available, and cash infusion. Since wage differences exist in the outsourced countries, outsourcing companies can obviously save a remarkable amount of labor cost. According to Gartner, Inc.'s report, worldwide business outsourcing services would grow from $110 billion in 2002 to $173 billion in 2007, approximately a 9.5% annual growth rate. In addition to the cost saving concern, other factors influence the outsourcing decision, including the awareness of success and risk factors, outsourcing risk identification and management, and project quality management. Outsourcing activities are substantially complicated, and an outsourcing project usually carries a huge array of risks.
Unmanaged outsourcing risks will increase total project cost, devalue software quality, delay project completion, and ultimately lower the success rate of the outsourcing project. Outsourcing risks have been discovered in areas such as unexpected transition and management costs, switching costs, costly contractual amendments, disputes and litigation, service debasement, cost escalation, loss of organizational competence, and hidden service costs. Most published outsourcing studies focus on organizational and managerial issues. We believe that IS outsourcing projects embrace various risks and uncertainties that may inhibit the chance of outsourcing success. Beyond service and management related risk issues, technical issues that restrain the degree of outsourcing success may have been overlooked: project management, software quality, and the quality assessment methods that can be used to implement IS outsourcing projects. Unmanaged risks generate loss. We intend to identify the technical risks during the outsourcing period, so that these risks can be properly managed and the cost of the outsourcing project further reduced. The main purpose of this paper is to identify the different phases of the IS outsourcing life cycle and to discuss the implications of success and risk factors, software quality and project management, and their impact on the success of IT outsourcing. Most outsourcing initiatives involve strategic planning and management participation; the decision process is therefore broad and lengthy. In order to conduct a comprehensive study of outsourcing project risk analysis, we propose an IS outsourcing life cycle framework to serve as a yardstick.
Each IS outsourcing phase is named and all inherited risks are identified in this life cycle framework. Furthermore, we propose using software quality management tools and methods to enhance the success rate of IS outsourcing projects. ISO 9000 is a series of quality system standards developed by the International Organization for Standardization (ISO); ISO's quality standards have been adopted by many countries as a major target for quality certification. Other ISO standards, such as ISO 9001, ISO 9000-3, ISO 9004-2, and ISO 9004-4, are quality standards applicable to the software industry. Currently, ISO is working on ISO 31000, a risk management guidance standard. These ISO quality system and risk management standards are generic in nature and may not be sufficient for IS outsourcing practice. This paper therefore proposes an outsourcing life cycle framework to distinguish related quality and risk management issues during outsourcing practice. The following sections start with the theoretical foundations of IS outsourcing, including economic theories, outsourcing contracting theories, and risk theories. The IS outsourcing life cycle framework is then introduced, followed by a discussion of the risk implications in the pre-contract, contract, and post-contract phases. ISO standards on quality systems and risk management are discussed and compared in the next section. A conclusion and directions for future study are provided in the last section.

2. Theoretical foundations

2.1. Economic theories related to outsourcing

Although there are a number of reasons for pursuing IS outsourcing, cost saving is a main attraction that leads companies to search for outsourcing opportunities.
In principle, five related economic theories lay the groundwork of outsourcing practice: (1) production cost economics, (2) transaction cost theory, (3) resource-based theory, (4) competitive advantage, and (5) economies of scale. Production cost economics was proposed by Williamson, who mentioned that "a firm seeks to maximize its profit also subject to its production function and market opportunities for selling outputs and buying inputs". It is clear that production cost economics identifies the phenomenon that a firm may pursue the goal of a low-cost production process. Transaction cost theory was proposed by Coase. It implies that in an economy there are many economic activities that occur outside the price system. Transaction costs in business activities are the time and expense of negotiating, writing, and enforcing contracts between buyers and suppliers. When transaction cost is low because of lower uncertainty, companies are expected to adopt outsourcing. The focus of resource-based theory is that "the heart of the firm centers on deployment and combination of specific inputs rather than on avoidance of opportunities". Conner described firms as "seekers of costly-to-copy inputs for production and distribution". Through resource-based theory, we can infer that the outsourcing decision is a search for external resources or capabilities to meet the firm's objectives, such as cost saving and capability improvement. Porter, in his competitive forces model, proposed the concept of competitive advantage.
Besanko et al. explicated the term competitive advantage, in economic terms, as follows: "When a firm (or business unit within a multi-business firm) earns a higher rate of economic profit than the average rate of economic profit of other firms competing within the same market, the firm has a competitive advantage." The outsourcing decision, therefore, seeks cost savings that meet the firm's goal of competitive advantage. Economies of scale are a theoretical foundation for creating and sustaining the consulting business. Information systems (IS) and information technology (IT) consulting firms, in essence, enjoy economies of scale: their average costs decrease because they offer a mass amount of specialized IS/IT services in the marketplace.

2.2. Economic implications of contracting

An outsourcing contract defines the provision of services and charges to be completed in a contracting period between two contracting parties. Since most IS/IT projects are large in scale, a valuable contract should list the complete set of tasks and responsibilities that each contracting party needs to perform. The study of contracting is essential because a complete contract could eliminate possible opportunistic behavior, confusion, and ambiguity between the two contracting parties. Although contracting parties intend to reach a complete contract, in the real world most contracts are incomplete. Incomplete contracts cause not only implementation difficulties but also litigation, and a business relationship can easily be ruined by an incomplete contract. In order to reach a complete contract, the contracting parties must pay sufficient attention to removing any ambiguity, confusion, and unidentified or immeasurable conditions and terms from the contract.
According to Besanko et al., incomplete contracting stems from three factors: bounded rationality, difficulties in specifying or measuring performance, and asymmetric information. Bounded rationality describes human limitations in information processing, complexity handling, and rational decision-making. An incomplete contract stems from unexpected circumstances that may be ignored during contract negotiation. Most contracts consist of complex product requirements and performance measurements; in reality, it is difficult to specify a comprehensive set of metrics covering each party's rights and responsibilities, so any vague or open-ended statements in a contract will result in an incomplete contract. Lastly, the parties may not have equal access to all contract-relevant information sources. This asymmetric information results in an unfair negotiation, and thus an incomplete contract.

2.3. Risk in outsource contracting

Risk can be identified as an undesirable event, a probability function, the variance of the distribution of outcomes, or an expected loss. Risk can be classified into endogenous and exogenous risks. Exogenous risks are "risks over which we have no control and which are not affected by our actions"; natural disasters such as earthquakes and floods are examples. Endogenous risks are "risks that are dependent on our actions", and risks occurring during outsource contracting belong to this category. Risk exposure (RE) can be calculated as "a function of the probability of a negative outcome and the importance of the loss due to the occurrence of this outcome":

RE = Σi P(UOi) × L(UOi)  (1)

where P(UOi) is the probability of undesirable outcome i, and L(UOi) is the loss due to undesirable outcome i. Software risks can also be analyzed through two characteristics: uncertainty and loss.
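Equation (1), risk exposure as the probability-weighted sum of losses, can be computed directly. The three undesirable outcomes below, with their probabilities and losses, are invented for illustration; only the formula itself comes from the text.

```python
# RE = sum over i of P(UO_i) * L(UO_i), as in Eq. (1).
# Each entry: (probability of the undesirable outcome, loss if it occurs).
outcomes = [
    (0.10, 50_000),   # e.g. a costly contractual amendment
    (0.05, 200_000),  # e.g. a dispute ending in litigation
    (0.20, 10_000),   # e.g. service debasement
]

risk_exposure = sum(p * loss for p, loss in outcomes)
print(round(risk_exposure, 2))  # 17000.0
```

Read as the paper suggests: each term quantifies one risk's uncertainty (P) and its degree of loss (L), and the sum is the expected additional project cost that unmanaged risks represent.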
Pressman suggested that the best way to analyze software risks is to quantify the level of uncertainty and the degree of loss associated with each kind of risk; his risk content matches Eq. (1) above. Pressman classified software risks into the following categories: project risks, technical risks, and business risks. Outsourcing risks stem from various sources. Aubert et al. adopted transaction cost theory and agency theory as the foundation for deriving undesirable events and their associated risk factors. Transaction cost theory was discussed in Section 2.2. Agency theory focuses on the client's problem of choosing an agent (that is, a service provider), and on building and maintaining the working relationship under the restriction of information asymmetry; various risk factors are produced if the agent-client relationship crumbles. It is evident that a complete contract could eliminate the risk caused by an incomplete contract and/or possible opportunistic behavior by either contracting party. Opportunistic behavior is one of the main sources of transactional risk. It occurs when a transactional partner observes a way of saving cost or shedding responsibility during the contracting period and takes action to pursue that opportunity. Such behavior is encouraged if the contract was not completely specified in the first place. Outsourcing risks can generate additional unexpected cost for an outsourcing project. In order to conduct a better IS outsourcing project, identifying possible risk factors and implementing a mature risk management process can make information systems outsourcing more successful than ever.

3. Information systems outsourcing life cycle

The life cycle concept was originally used to describe the period of one generation of an organism in a biological system.
In essence, the term life cycle describes all the activities that a subject is involved in during the period from its birth to its end. The life cycle concept has been applied to project management. A project life cycle, according to Schwalbe, is a collection of project phases such as concept, development, implementation, and close-out. Within these four phases, the first two center on "planning" and the last two focus on "delivering the actual work" of project management. Similarly, the life cycle concept can be applied to information systems outsourcing analysis. The information systems outsourcing life cycle describes the sequence of activities performed during a company's IS outsourcing practice. Hirschheim and Dibbern once described a client-based IS outsourcing life cycle as follows: "It starts with the IS outsourcing decision, continues with the outsourcing relationship (life of the contract) and ends with the cancellation or end of the relationship, i.e., the end of the contract. The end of the relationship forces a new outsourcing decision." It is clear that Hirschheim and Dibbern viewed the "outsourcing relationship" as a determinant in the IS outsourcing life cycle. The IS outsourcing life cycle starts with an outsourcing need and ends with contract completion; it restarts with the search for a new outsourcing contract if needed. An outsourcing company may stay with the same outsourcing vendor if transaction costs remain low, and a new cycle goes on; otherwise, a new search for an outsourcing vendor may be started. One of the main goals of seeking an outsourcing contract is cost minimization. Transaction cost theory (discussed in Section 2.1) indicates that pursuing a contract costs money, so low transaction cost is the driver for extending the IS outsourcing life cycle. The span of the IS outsourcing life cycle embraces the major portion of contracting activities.
The whole IS outsourcing life cycle can be divided into three phases (see Fig. 1): the pre-contract phase, the contract phase, and the post-contract phase. The pre-contract phase includes activities before a major contract is signed, such as identifying the need for outsourcing, planning and strategic setting, and outsourcing vendor selection. The contract phase starts when an outsourcing contract is signed and lasts until the end of the contracting period; it includes activities such as the contracting process, the transitioning process, and outsourcing project execution. The post-contract phase contains activities carried out after contract expiration, such as outsourcing project assessment and the decision on the next outsourcing contract.

Fig. 1. The IS outsourcing life cycle.

When a company intends to outsource its information systems projects to external entities, several activities are involved in the information systems outsourcing life cycle. Specifically, they are:

1. Identifying the need for outsourcing: A firm may face a strict external environment, such as stern market competition, competitors' cost savings through outsourcing, or an economic downturn, that leads it to consider outsourcing IS projects. In addition to the external environment, internal factors may also lead to outsourcing consideration; these organizational predicaments include the need for technical skills, financial constraints, investors' requests, or simply cost saving concerns. A firm needs to carefully study its internal and external positioning before making an outsourcing decision.

2. Planning and strategic setting: If a firm identifies a need for IS outsourcing, it needs to make sure that the decision to outsource meets the company's strategic plan and objectives, and then integrate the outsourcing plan into its corporate strategy.
Many tasks need to be fulfilled during the planning and strategic setting stages, including determining the outsourcing goals, objectives, scope, schedule, cost, business model, and processes. Careful outsourcing planning prepares a firm to pursue a successful outsourcing project.

3. Outsourcing vendor selection: A firm begins the vendor selection process with the creation of request for information (RFI) and request for proposal (RFP) documents. The outsourcing firm should provide sufficient information about the requirements and expectations for the outsourcing project. After receiving the vendors' proposals, the company selects a prospective outsourcing vendor based on its strategic needs and project requirements.

4. Contracting process: Contract negotiation begins after the company selects a probable outsourcing vendor. The contracting process is critical to the success of an outsourcing project, since every aspect of the contract should be specified and covered, including fundamental, managerial, technological, pricing, financial, and legal features. To avoid ending up with an incomplete contract, the final contract should be reviewed by both parties' legal consultants. Most importantly, the service level agreements (SLA) must be clearly identified in the contract.

5. Transitioning process: The transitioning process starts after a company signs an outsourcing contract with a vendor. Transition management is defined as "the detailed, desk-level knowledge transfer and documentation of all relevant tasks, technologies, workflows, people, and functions". The transitioning process is a complicated phase in the IS outsourcing life cycle, since it involves many essential workloads before an outsourcing project can actually be implemented. Robinson et al. characterized transition management in terms of the following components: "employee management, communication management, knowledge management, and quality management".
It is apparent that conducting the transitioning process requires capabilities in human resources, communication, knowledge transfer, and quality control.

6. Outsourcing project execution: After the transitioning process, it is time for vendor and client to execute their outsourcing project. There are four components within this "contract governance" stage: project management, relationship management, change management, and risk management. Any items listed in the contract and its service level agreements (SLAs) need to be delivered and implemented as requested. In particular, client and vendor relationships, change requests and records, and risk variables must be carefully managed and administered.

7. Outsourcing project assessment: At the end of an outsourcing project period, the vendor delivers its final product or service for the client's approval. The outsourcing client must assess the quality of the product or service and measure its satisfaction with what the vendor has provided. A satisfactory assessment and a good relationship will guarantee the continuation of the next outsourcing contract.

The results of the previous activity (project assessment) form the basis for determining the next outsourcing contract. A firm evaluates its satisfaction level against predetermined outsourcing goals and contracting criteria, and also observes the outsourcing cost and the risks involved in the project. If a firm is satisfied with the current outsourcing vendor, a renewed contract is likely to start with the same vendor; otherwise, a new "pre-contract phase" restarts the search for a new outsourcing vendor. This activity leads to a new outsourcing life cycle.
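The end-of-cycle decision described here, renew with the same vendor or restart the pre-contract search, can be sketched as a single decision step. The numeric satisfaction score and its threshold are invented placeholders, not anything the paper specifies.

```python
# The three phases of the IS outsourcing life cycle, in order.
PHASES = ["pre-contract", "contract", "post-contract"]

def next_step(satisfaction: float, threshold: float = 0.7) -> str:
    """After project assessment, choose the renewal path or a new search."""
    if satisfaction >= threshold:
        return "renew contract with the same vendor"        # path 3.a
    return "restart pre-contract phase: new vendor search"  # path 3.b

print(next_step(0.9))  # renew contract with the same vendor
print(next_step(0.4))  # restart pre-contract phase: new vendor search
```

Either branch begins a new cycle; only the entry point differs, which is exactly the two dotted paths in the paper's figure.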
Fig. 1 shows two dotted arrow lines for these two alternatives: dotted arrow line 3.a indicates the "renewable contract" path and dotted arrow line 3.b indicates the "new contract search" path. Each phase in the IS outsourcing life cycle is full of necessary activities and processes (see Fig. 1). In order to clearly examine the dynamics of risks and outsourcing activities, the following sections provide detailed analyses. The pre-contract phase focuses on the awareness of outsourcing success factors and related risk factors; the contract phase centers on the mechanisms of project management and risk management; the post-contract phase concentrates on the need to select suitable project quality assessment methods.

4. Actions in the pre-contract phase: awareness of success and risk factors

The pre-contract period is the first phase in the information systems outsourcing life cycle (see Fig. 1). In this phase, an outsourcing firm should first identify its need for IS outsourcing and then carefully create an outsourcing plan, aligning corporate strategy with that plan. In order to prepare well for corporate IS outsourcing, a firm must understand the current market situation, its own competitiveness, and the economic environment. The next important task is to identify outsourcing success factors, which serve as guidance for strategic outsourcing planning. In addition to knowing the success factors, an outsourcing firm must also recognize the possible risks involved in IS outsourcing, which allows it to formulate a better outsourcing strategy.

Conclusion and research directions

This paper presents a three-phased IS outsourcing life cycle and the associated risk factors that affect the success of outsourcing projects. The outsourcing life cycle is complicated and complex in nature.
Outsourcing companies usually invest great effort in selecting suitable service vendors; however, many risks exist in the vendor selection process. Although cost is the major reason for outsourcing, firms seek outsourcing success through quality assurance and risk control. This decision path is understandable, since the outcome of project risks represents additional project cost; carefully managing the project and its risk factors can therefore save outsourcing companies a tremendous amount of money. This paper discusses various issues related to outsourcing success, risk factors, quality assessment methods, and project management techniques. Future research may explore alternative risk estimation methodologies. For example, risk uncertainty can be used to identify the accuracy of an outsourcing risk estimate. Another possible method of estimating outsourcing risk is the Total Cost of Ownership (TCO) method, which has been used in IT management for financial portfolio analysis and investment decision-making. Since risk is in essence a cost (of loss) to outsourcing clients, TCO becomes a possible research method for the outsourcing decision.

The Information Systems Outsourcing Life Cycle and Risk Analysis

1. Introduction

Information systems outsourcing has received tremendous attention in the information technology industry.

Electronic Information Engineering, Database Management: Chinese-English Foreign Literature Translation


Chinese-English foreign literature translation (the document contains the English original and the Chinese translation). Translation: Database Management. A database (sometimes spelled "Database"), also called an electronic database, is any collection of data or information specially organized by a computer for rapid search and retrieval.

A database works together with other data-processing operations, and its structure must facilitate the storage, retrieval, modification, and deletion of data.

A database can be stored on magnetic disk or tape, on optical disc, or on some other secondary storage device.

A database consists of a file or a collection of files.

The information in these files can be broken down into records, each of which has one or more fields.

The field is the basic unit of database storage; each field generally contains information on one aspect or attribute of the entity described by the database.

Using a keyboard and various sorting commands, users can rapidly search, rearrange, and group records, select fields among the many records found, and build reports on particular sets of data.

Database records and files must be organized so that the information can be retrieved.

Early systems were organized sequentially (for example, alphabetically, numerically, or chronologically); the development of direct-access storage devices made random access to data via indexes possible.

The principal method by which users retrieve database information is the query.

Typically, the user supplies a character string and the computer searches the database for the corresponding sequence of characters, reporting where the string occurs.
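The query mechanism described here, where the user supplies a string and the system reports where it occurs, can be sketched over the record/field model from the same passage. The records below are invented for the example.

```python
# Each record is a set of named fields, as in the record/field model above.
records = [
    {"id": 1, "name": "Zhang Wei", "dept": "Accounting"},
    {"id": 2, "name": "Li Na", "dept": "Engineering"},
    {"id": 3, "name": "Wang Fang", "dept": "Accounting"},
]

def query(needle: str):
    """Return (record id, field name) for every field containing the string."""
    hits = []
    for rec in records:
        for field, value in rec.items():
            if needle in str(value):
                hits.append((rec["id"], field))
    return hits

print(query("Accounting"))  # [(1, 'dept'), (3, 'dept')]
```

A real DBMS would answer this through an index rather than a scan, which is the point of the indexed direct-access organization mentioned just above.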

For example, users must be able to process internal data quickly at any given time.

Moreover, large enterprises and other organizations tend to build many independent files containing interrelated and even overlapping data, and their data-processing activities often require linking data from several files.

To meet these requirements, various types of database management systems have been developed: flat (unstructured), hierarchical, network, relational, and object-oriented databases.

In a flat (unstructured) database, records are organized as a simple list of entities; many simple databases for personal computers are flat.

A hierarchical database organizes its records in a tree, with the records at each level broken down into smaller sets of attributes.

A hierarchical database provides a single link between sets of records at different levels.

A network database, by contrast, provides multiple links between sets of records by setting chains or pointers to other record sets.

The speed and versatility of network databases have led to their wide use in enterprises.

When the relationships between files or records cannot be expressed by links, a relational database is used.
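The contrast drawn here, fixed links in hierarchical and network databases versus relationships recovered by matching values, can be sketched as a join over two plain tables. The employee and department tables are invented for the example.

```python
# Two flat tables with no pointers between them.  The relationship is
# recovered at query time by matching values: the relational idea.
employees = [(1, "Alice", 10), (2, "Bob", 20)]   # (id, name, dept_id)
departments = [(10, "Sales"), (20, "IT")]        # (dept_id, dept_name)

def join(emps, depts):
    """Pair each employee with a department by matching dept_id values."""
    names = dict(depts)
    return [(name, names[dept_id]) for _, name, dept_id in emps]

print(join(employees, departments))  # [('Alice', 'Sales'), ('Bob', 'IT')]
```

Because nothing in either table points at the other, any relationship expressible as a value match can be asked for, which is exactly what the fixed links of the earlier models cannot offer.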

Geographic Information Systems: Chinese-English Foreign Literature Translation


Chinese-English foreign literature translation (the document contains the English original and the Chinese translation)

A Survey on Spatio-Temporal Data Warehousing

Abstract

Geographic Information Systems (GIS) have been extensively used in various application domains, ranging from economical, ecological and demographic analysis, to city and route planning. Nowadays, organizations need sophisticated GIS-based Decision Support Systems (DSS) to analyze their data with respect to geographic information, represented not only as attribute data, but also in maps. Thus, vendors are increasingly integrating their products, leading to the concept of SOLAP (Spatial OLAP). Also, in recent years, motivated by the explosive growth in the use of PDA devices, the field of moving object data has been receiving attention from the GIS community. However, not much has been done to provide moving object databases with OLAP functionality. In the first part of this paper we survey the SOLAP literature. We then move to Spatio-Temporal OLAP, in particular addressing the problem of trajectory analysis. We finally provide an in-depth comparative analysis of two proposals introduced in the context of the GeoPKDD EU project: the Hermes-MDC system, and Piet, a proposal for SOLAP and moving objects developed at the University of Buenos Aires, Argentina.

Keywords: GIS, OLAP, Data Warehousing, Moving Objects, Trajectories, Aggregation

INTRODUCTION

Geographic Information Systems (GIS) have been extensively used in various application domains, ranging from economical, ecological and demographic analysis, to city and route planning (Rigaux, Scholl, & Voisard, 2001; Worboys, 1995). Spatial information in a GIS is typically stored in different so-called thematic layers (also called themes). Information in themes can be stored in data structures according to different data models, the most usual ones being the raster model and the vector model. In a thematic layer, spatial data is annotated with classical relational attribute information, of (in general) numeric or string type.
While spatial data is stored in data structures suitable for these kinds of data, associated attributes are usually stored in conventional relational databases. Spatial data in the different thematic layers of a GIS system can be mapped univocally to each other using a common frame of reference, like a coordinate system. These layers can be overlapped or overlayed to obtain an integrated spatial view.

On the other hand, OLAP (On Line Analytical Processing) (Kimball, 1996; Kimball & Ross, 2002) comprises a set of tools and algorithms that allow efficiently querying multidimensional databases, containing large amounts of data, usually called Data Warehouses. In OLAP, data is organized as a set of dimensions and fact tables. In the multidimensional model, data can be perceived as a data cube, where each cell contains a measure or set of (probably aggregated) measures of interest. As we discuss later, OLAP dimensions are further organized in hierarchies that favor the data aggregation process (Cabibbo & Torlone, 1997). Several techniques and algorithms have been developed for query processing, most of them involving some kind of aggregate precomputation (Harinarayan, Rajaraman, & Ullman, 1996).

The need for OLAP in GIS

Different data models have been proposed for representing objects in a GIS. ESRI () first introduced the Coverage data model to bind geometric objects to non-spatial attributes that describe them. Later, they extended this model with object-oriented support, in a way that behavior can be defined for geographic features (Zeiler, 1999). The idea of the Coverage data model is also supported by the Reference Model proposed by the Open Geospatial Consortium (). Thus, regardless of the model of choice, there is always the underlying idea of binding geometric objects to objects or attributes stored in (mostly) object-relational databases (Stonebraker & Moore, 1996).
In addition, query tools in commercial GIS allow users to overlap several thematic layers in order to locate objects of interest within an area, like schools or fire stations. For this, they use indexing structures based on R-trees (Gutman, 1984). GIS query support sometimes includes aggregation of geographic measures, for example, distances or areas (e.g., representing different geological zones). However, these aggregations are not the only ones that are required, as we discuss below.

Nowadays, organizations need sophisticated GIS-based Decision Support Systems (DSS) to analyze their data with respect to geographic information, represented not only as attribute data, but also in maps, probably in different thematic layers. In this sense, OLAP and GIS vendors are increasingly integrating their products (see, for instance, Microstrategy and MapInfo integration in /, and /). In this context, aggregate queries are central to DSSs. Classical aggregate OLAP queries (like "total sales of cars in California"), and aggregation combined with complex queries involving geometric components ("total sales in all villages crossed by the Mississippi river and within a radius of 100 km around New Orleans") must be efficiently supported. Moreover, navigation of the results using typical OLAP operations like roll-up or drill-down is also required. These operations are not supported by commercial GIS in a straightforward way. One of the reasons is that the GIS data models discussed above were developed with "transactional" queries in mind. Thus, the databases storing non-spatial attributes or objects are designed to support those (non-aggregate) kinds of queries. Decision support systems need a different data model, where non-spatial data, probably consolidated from different sectors in an organization, is stored in a data warehouse.
Here, numerical data are stored in fact tables built along several dimensions. For instance, if we are interested in the sales of certain products in stores in a given region, we may consider the sales amounts in a fact table over the three dimensions Store, Time and Product. In order to guarantee summarizability (Lenz & Shoshani, 1997), dimensions are organized into aggregation hierarchies. For example, stores can aggregate over cities, which in turn can aggregate into regions and countries. Each of these aggregation levels can also hold descriptive attributes like city population, the area of a region, etc. To fulfill the requirements of integrated GIS-DSS, warehouse data must be linked to geographic data. For instance, a polygon representing a region must be associated to the region identifier in the warehouse. Besides, system integration in commercial GIS is not an easy task. In current commercial applications, the GIS and OLAP worlds are integrated in an ad-hoc fashion, probably in a different way (and using different data models) each time an implementation is required, even when a data warehouse is available for non-spatial data.

An Introductory Example. We present now a real-world example for illustrating some issues in the spatial warehousing problematic. We selected four layers with geographic and geological features obtained from the National Atlas Website (). These layers contain the following information: states, cities, and rivers in North America, and volcanoes in the northern hemisphere (published by the Global Volcanism Program - GVP). Figure 1 shows a detail of the layers containing cities and rivers in North America, displayed using the graphic interface of the Piet implementation we discuss later in the paper. Note the density of the points representing cities (particularly in the eastern region). Rivers are represented as polylines.
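The fact-table layout just described (sales over the Store, Time and Product dimensions, with stores rolling up into cities and regions) can be sketched in a few lines. The table contents, store identifiers and city/region names below are illustrative, not taken from the survey:

```python
from collections import defaultdict

# Hypothetical Store dimension: store -> (city, region) aggregation hierarchy.
store_dim = {
    "s1": ("New Orleans", "South"),
    "s2": ("Baton Rouge", "South"),
    "s3": ("Chicago", "Midwest"),
}

# Hypothetical fact table: (store, month, product, sales amount).
fact_sales = [
    ("s1", "2008-01", "boat", 1200.0),
    ("s2", "2008-01", "boat", 800.0),
    ("s3", "2008-01", "boat", 500.0),
    ("s1", "2008-02", "rope", 150.0),
]

def roll_up(facts, level):
    """Aggregate sales from the Store level up to 'city' or 'region'."""
    idx = 0 if level == "city" else 1
    totals = defaultdict(float)
    for store, _, _, amount in facts:
        totals[store_dim[store][idx]] += amount
    return dict(totals)

print(roll_up(fact_sales, "region"))  # {'South': 2150.0, 'Midwest': 500.0}
```

Linking this to GIS data would then amount to associating, say, the polygon of each region with the region name used as the dictionary key here.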
Figure 2 shows a portion of two overlayed layers containing states (represented as polygons) and volcanoes in the northern hemisphere. There is also non-spatial information stored in a conventional data warehouse. In this data warehouse, dimension tables contain customer, store and product information, and a fact table contains store sales across time. Also, numerical and textual information on the geographic components exists (e.g., population, area), stored as usual as attributes of the GIS layers.

In the scenario above, conventional GIS and organizational data can be integrated for decision support analysis. Sales information could be analyzed in the light of geographical features, conveniently displayed in maps. This analysis could benefit from the integration of both worlds in a single framework. Even though this integration could be possible with existing technologies, ad-hoc solutions are expensive because, besides requiring lots of complex coding, they are hardly portable. To make things more difficult, ad-hoc solutions require data exchange between GIS and OLAP applications to be performed. This implies that the output of a GIS query must probably be exported as members in dimensions of a data cube, and merged for further analysis. For example, suppose that a business analyst is interested in studying the sales of nautical goods in stores located in cities crossed by rivers. She could first query the GIS to obtain the cities of interest. She probably has stored sales in a data cube containing a dimension Store or Geography with city as a dimension level. She would need to "manually" select the cities of interest (i.e., the ones returned by the GIS query) in the cube, to be able to go on with the analysis (in the best case, an ad-hoc customized middleware could help her). Of course, she must repeat this for each query involving a (geographic) dimension in the data cube.

Figure 1. Two overlayed layers containing cities and rivers in North America.

On the contrary, GIS/data warehousing integration can provide a more natural solution. The second part of this survey is devoted to spatio-temporal data warehousing and OLAP. Moving objects databases (MOD) have been receiving increasing attention from the database community in recent years, mainly due to the wide variety of applications that technology allows nowadays. Trajectories of moving objects like cars or pedestrians can be reconstructed by means of samples describing the locations of these objects at certain points in time. Although there exist many proposals for modeling and querying moving objects, only a small part of them address the problem of aggregation of moving objects data in a GIS (Geographic Information Systems) scenario. Many interesting applications arise involving moving objects aggregation, mainly regarding traffic analysis, truck fleet behavior analysis, commuter traffic in a city, passenger traffic in an airport, or shopping behavior in a mall. Building trajectory data warehouses that can integrate with a GIS is an open problem that is starting to attract database researchers. Finally, the MOD setting is appropriate for data mining tasks, and we also comment on this in the paper.

Figure 2. Two overlayed layers containing states in North America and volcanoes in the northern hemisphere.

In this paper, we first provide a brief background on GIS, data warehousing and OLAP, and a review of the state-of-the-art in spatial OLAP. After this, we move on to study spatio-temporal data warehousing, OLAP and mining.
We then provide a detailed analysis of the Piet framework, aimed at integrating GIS, OLAP and moving object data, and conclude with a comparison between this proposal and the Hermes data cartridge and trajectory data warehouse developed in the context of the GeoPKDD project (information about the GeoPKDD project can be found at http://www.geopkdd.eu).

A SHORT BACKGROUND

GIS

In general, information in a GIS application is divided over several thematic layers. The information in each layer consists of purely spatial data on the one hand, combined with classical alpha-numeric attribute data on the other hand (usually stored in a relational database). Two main data models are used for the representation of the spatial part of the information within one layer: the vector model and the raster model. The choice of model typically depends on the data source from which the information is imported into the GIS.

The Vector Model. The vector model is used the most in current GIS (Kuper & Scholl, 2000). In the vector model, infinite sets of points in space are represented as finite geometric structures, or geometries, like, for example, points, polylines and polygons. More concretely, vector data within a layer consists of a finite number of tuples of the form (geometry, attributes), where a geometry can be a point, a polyline or a polygon. There are several possible data structures to actually store these geometries (Worboys, 1995).

The Raster Model. In the raster model, the space is sampled into pixels or cells, each one having an associated attribute or set of attributes. Usually, these cells form a uniform grid in the plane. For each cell or pixel, the sample value of some function is computed and associated to the cell as an attribute value, e.g., a numeric value or a color. In general, information represented in the raster model is organized into zones, where the cells of a zone have the same value for some attribute(s).
The raster model has very efficient indexing structures and is very well-suited to model continuous change, but its disadvantages include its size and the cost of computing the zones. Spatial information in the different thematic layers in a GIS is often joined or overlayed. Queries requiring map overlay are more difficult to compute in the vector model than in the raster model. On the other hand, the vector model offers a concise representation of the data, independent of the resolution. For a uniform treatment of different layers given in the vector or the raster model, in this paper we treat the raster model as a special case of the vector model. Indeed, conceptually, each cell is, and each pixel can be regarded as, a small polygon; also, the attribute value associated to the cell or pixel can be regarded as an attribute in the vector model.

Data Warehousing and OLAP

The importance of data analysis has increased significantly in recent years as organizations in all sectors are required to improve their decision-making processes in order to maintain their competitive advantage. We said before that OLAP (On Line Analytical Processing) (Kimball, 1996; Kimball & Ross, 2002) comprises a set of tools and algorithms that allow efficiently querying databases that contain large amounts of data. These databases, usually designed for read-only access (in general, updating is performed off-line), are denoted data warehouses. Data warehouses are exploited in different ways; OLAP is one of them. OLAP systems are based on a multidimensional model, which allows a better understanding of data for analysis purposes and provides better performance for complex analytical queries. The multidimensional model allows viewing data in an n-dimensional space, usually called a data cube (Kimball & Ross, 2002). In this cube, each cell contains a measure or set of (probably aggregated) measures of interest.
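The claim that each raster cell can be regarded as a small polygon carrying its sampled value as an attribute can be made concrete with a minimal sketch. The function name, cell size, and the sample elevation values are illustrative assumptions, not part of the survey:

```python
def cell_to_vector(row, col, value, cell_size=1.0, origin=(0.0, 0.0)):
    """Represent one raster cell as a vector-model tuple (geometry, attributes).

    The cell becomes a small square polygon (its four corner vertices,
    counter-clockwise), and its sampled value becomes an ordinary attribute.
    """
    x0 = origin[0] + col * cell_size
    y0 = origin[1] + row * cell_size
    polygon = [(x0, y0), (x0 + cell_size, y0),
               (x0 + cell_size, y0 + cell_size), (x0, y0 + cell_size)]
    return polygon, {"value": value}

# A 2x2 raster of (hypothetical) elevation samples, as four tiny polygons:
raster = [[10, 12], [11, 13]]
layer = [cell_to_vector(r, c, raster[r][c])
         for r in range(2) for c in range(2)]
```

This is exactly the "raster as a special case of the vector model" view: after the conversion, both kinds of layers are lists of (geometry, attributes) tuples.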
This factual data can be analyzed along dimensions of interest, usually organized in hierarchies (Cabibbo & Torlone, 1997). Three typical ways of implementing OLAP tools exist: MOLAP (standing for multidimensional OLAP), where data is stored in proprietary multidimensional structures; ROLAP (relational OLAP), where data is stored in (object-)relational databases; and HOLAP (standing for hybrid OLAP), which provides both solutions. In a ROLAP environment, data is organized as a set of dimension tables and fact tables, and we assume this organization in the remainder of the paper.

There are a number of OLAP operations that allow exploiting the dimensions and their hierarchies, thus providing an interactive data analysis environment. Warehouse databases are optimized for OLAP operations which, typically, imply data aggregation or de-aggregation along a dimension, called roll-up and drill-down, respectively. Other operations involve selecting parts of a cube (slice and dice) and reorienting the multidimensional view of data (pivoting). In addition to the basic operations described above, OLAP tools provide a great variety of mathematical, statistical, and financial operators for computing ratios, variances, ranks, etc.

It is an accepted fact that data warehouse (conceptual) design is still an open issue in the field (Rizzi & Golfarelli, 2000). Most of the data models either provide a graphical representation based on the Entity-Relationship (E/R) model or UML notations, or they just provide some formal definitions without user-oriented graphical support. Recently, Malinowski and Zimányi (2006) proposed the MultiDim model. This model is based on the E/R model and provides an intuitive graphical notation.
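The slice and dice operations described above can be sketched over a tiny cube held as a dictionary. The cube layout and its values are illustrative assumptions:

```python
# Hypothetical mini-cube: {(store, month, product): sales amount}.
cube = {
    ("s1", "2008-01", "boat"): 1200.0,
    ("s1", "2008-02", "boat"): 900.0,
    ("s2", "2008-01", "rope"): 150.0,
}

def slice_cube(cube, month):
    """Slice: fix one value of the Time dimension, keeping a sub-cube."""
    return {k: v for k, v in cube.items() if k[1] == month}

def dice(cube, stores, products):
    """Dice: select a sub-cube over sets of values of several dimensions."""
    return {k: v for k, v in cube.items()
            if k[0] in stores and k[2] in products}
```

Roll-up and drill-down would additionally consult the dimension hierarchies (e.g., month to year) rather than just filtering cells, which is why warehouse engines precompute aggregates along those hierarchies.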
Also recently, Vaisman (2006a, 2006b) introduced a methodology for requirement elicitation in Decision Support Systems, arguing that methodologies used for OLTP systems are not appropriate for OLAP systems.

Temporal Data Warehouses

The relational data model as proposed by Codd (1970) is not well-suited for handling spatial and/or temporal data. Data evolution over time must be treated in this model in the same way as ordinary data. This is not enough for applications that require past, present, and/or future data values to be dealt with by the database. In real life such applications abound. Therefore, in the last decades, much research has been done in the field of temporal databases. Snodgrass (1995) describes the design of the TSQL2 Temporal Query Language, an upward compatible extension of SQL-92. The book written as a result of a Dagstuhl seminar organized in June 1997 by Etzion, Jajodia, and Sripada (1998) contains a comprehensive bibliography, glossaries for both temporal database and time granularity concepts, and summaries of work around 1998. Snodgrass (1999), in other work, discusses practical research issues on temporal database design and implementation.

Regarding temporal data warehousing and OLAP, Mendelzon and Vaisman (2000, 2003) proposed a model, denoted TOLAP, and developed a prototype and a datalog-like query language based on a (temporal) star schema. Vaisman, Izquierdo, and Ktenas (2006) also present a Web-based implementation of this model, along with a query language called TOLAP-QL. Eder, Koncilia, and Morzy (2002) also propose a data model for temporal OLAP supporting structural changes. Despite these efforts, little attention has been devoted to the problem of conceptual and logical modeling for temporal data warehouses.

SPATIAL DATA WAREHOUSING AND OLAP

Spatial database systems have been studied for a long time (Buchmann, Günther, Smith, & Wang, 1990; Paredaens, Van Den Bussche, & Gucht, 1994). Rigaux et al.
(2001) survey various techniques, such as spatial data models, algorithms, and indexing methods, developed to address specific features of spatial data that are not adequately handled by mainstream DBMS technology.

Although some authors have pointed out the benefits of combining GIS and OLAP, not much work has been done in this field. Vega López, Snodgrass, and Moon (2005) present a comprehensive survey on spatio-temporal aggregation that includes a section on spatial aggregation. Also, Bédard, Rivest, and Proulx (2007) present a review of the efforts for integrating OLAP and GIS. As we explain later, efficient data aggregation is crucial for a system with GIS-OLAP capabilities.

Conceptual Modeling and SOLAP

Rivest, Bédard, and Marchand (2001) introduced the concept of SOLAP (standing for Spatial OLAP), a paradigm aimed at being able to explore spatial data by drilling on maps, in a way analogous to what is performed in OLAP with tables and charts. They describe the desirable features and operators a SOLAP system should have. Although they do not present a formal model for this, SOLAP concepts and operators have been implemented in a commercial tool called JMAP, developed by the Centre for Research in Geomatics and KHEOPS; see /en/jmap/solap.jsp. Stefanovic, Han, and Koperski (2000) and Bédard, Merret, and Han (2001) classify spatial dimension hierarchies according to their spatial references into: (a) non-geometric; (b) geometric to non-geometric; and (c) fully geometric. Dimensions of type (a) can be treated as any descriptive dimension (Rivest et al., 2001). In dimensions of types (b) and (c), a geometry is associated to members of the hierarchies. Malinowski and Zimányi (2004) extend this classification to consider that even in the absence of several related spatial levels, a dimension can be considered spatial.
Here, a dimension level is spatial if it is represented as a spatial data type (e.g., point, region), allowing spatial levels to be linked through topological relationships (e.g., contains, overlaps). Thus, a spatial dimension is a dimension that contains at least one spatial hierarchy. A critical point in spatial dimension modeling is the problem of multiple dependencies, meaning that an element in one level can be related to more than one element in a level above it in the hierarchy. Jensen, Kligys, Pedersen, and Timko (2004) address this issue and propose a multidimensional data model for mobile services, i.e., services that deliver content to users depending on their location. This model supports different kinds of dimension hierarchies, most remarkably multiple hierarchies in the same dimension, i.e., multiple aggregation paths. Full and partial containment hierarchies are also supported. However, the model does not consider the geometry, limiting the set of queries that can be addressed. This means that spatial dimensions are standard dimensions referring to some geographical element (like cities or roads). Malinowski and Zimányi (2006) also propose a model supporting multiple aggregation paths. Pourabbas (2003) introduces a conceptual model that uses binding attributes to bridge the gap between spatial databases and a data cube. The approach relies on the assumption that all the cells in the cube contain a value, which is not the usual case in practice, as the author notes. Also, the approach requires modifying the structure of the spatial data to support the model. No implementation is presented.

Shekhar, Lu, Tan, Chawla, & Vatsavai (2001) introduced MapCube, a visualization tool for spatial data cubes. MapCube is an operator that, given a so-called base map, cartographic preferences and an aggregation hierarchy, produces an album of maps that can be navigated via roll-up and drill-down operations.

Spatial Measures.
Measures are characterized in two ways in the literature, namely: (a) measures representing a geometry, which can be aggregated along the dimensions; (b) a numerical value, using a topological or metric operator. Most proposals support option (a), either as a set of coordinates (Bédard et al., 2001; Rivest et al., 2001; Malinowski & Zimányi, 2004; Bimonte, Tchounikine, & Miquel, 2005), or a set of pointers to geometric objects (Stefanovic et al., 2000). Bimonte et al. (2005) define measures as complex objects (a measure is thus an object containing several attributes). Malinowski and Zimányi (2004) follow a similar approach, but define measures as attributes of an n-ary fact relationship between dimensions. Damiani and Spaccapietra (2006) propose MuSD, a model allowing the definition of spatial measures at different granularities. Here, a spatial measure can represent the location of a fact at multiple levels of (spatial) granularity. Also, an algebra of SOLAP operators is proposed.

Spatial Aggregation

In light of the discussion above, it should be clear that aggregation is a crucial issue in spatial OLAP. Moreover, there is not yet a consensus about a complete set of aggregate operators for spatial OLAP. We now discuss the classic approaches to spatial aggregation. Han et al. (1998) use OLAP techniques for materializing selected spatial objects, and proposed a so-called Spatial Data Cube and the set of operations that can be performed on this data cube. The model only supports aggregation of spatial objects. Pedersen and Tryfona (2001) propose the pre-aggregation of spatial facts. First, they pre-process these facts, computing their disjoint parts in order to be able to aggregate them later. This pre-aggregation works if the spatial properties of the objects are distributive over some aggregate function.
Again, the spatial measures are geometric objects. Given that this proposal ignores the geometries, queries like "total population of cities crossed by a river" are not supported. The paper does not address forms other than polygons, although the authors claim that other more complex forms are supported by the method; no experimental results are reported.

With a different approach, Rao, Zhang, Yu, Li, and Chen (2003), and Zhang, Li, Rao, Yu, Chen, and Liu (2003) combine OLAP and GIS for querying so-called spatial data warehouses, using R-trees for accessing data in fact tables. The data warehouse is then exploited in the usual OLAP way. Thus, they take advantage of OLAP hierarchies for locating information in the R-tree which indexes the fact table. Although the measures here are not only spatial objects, the proposal also ignores the geometric part of the model, limiting the scope of the queries that can be addressed. It is assumed that some fact table containing the identifiers of spatial objects exists. Finally, these objects happen to be points, which is quite unrealistic in a GIS environment, where different types of objects appear in the different layers.

Some interesting techniques have been recently introduced to address the data aggregation problem. These techniques are based on the combined use of (R-tree-based) indexes, materialization (or pre-aggregation) of aggregate measures, and computational geometry algorithms. Papadias, Tao, Kalnis, and Zhang (2002) introduce the Aggregation R-tree (aR-tree), combining indexing with pre-aggregation. The aR-tree is an R-tree that annotates each MBR (Minimum Bounding Rectangle) with the value of the aggregate function for all the objects that are enclosed by it. They extend this proposal in order to handle historic information (see the section on moving object data below), denoting this extension the aRB-tree (Papadias, Tao, Zhang, Mamoulis, Shen, & Sun, 2002).
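The core idea of the aR-tree, annotating each bounding rectangle with a precomputed aggregate so that fully enclosed subtrees never need to be visited, can be sketched as follows. This is a toy two-level structure under assumed names; a real aR-tree also handles node splitting and balancing:

```python
class ARTreeNode:
    """Toy aR-tree node: an MBR annotated with a precomputed aggregate.

    mbr = (xmin, ymin, xmax, ymax); agg = SUM of the measures enclosed.
    """
    def __init__(self, mbr, agg, children=None, entries=None):
        self.mbr, self.agg = mbr, agg
        self.children = children or []   # internal node: child nodes
        self.entries = entries or []     # leaf node: [((x, y), measure), ...]

def contains(window, mbr):
    return (window[0] <= mbr[0] and window[1] <= mbr[1]
            and mbr[2] <= window[2] and mbr[3] <= window[3])

def intersects(window, mbr):
    return not (mbr[2] < window[0] or window[2] < mbr[0]
                or mbr[3] < window[1] or window[3] < mbr[1])

def window_sum(node, window):
    """SUM of measures inside 'window', using annotations when possible."""
    if contains(window, node.mbr):
        return node.agg                  # whole subtree enclosed: no descent
    if not intersects(window, node.mbr):
        return 0
    if node.entries:                     # partially overlapping leaf
        return sum(m for (x, y), m in node.entries
                   if window[0] <= x <= window[2]
                   and window[1] <= y <= window[3])
    return sum(window_sum(c, window) for c in node.children)
```

The partial-overlap branch at the leaves is precisely where the aRB-tree's difficulties mentioned below arise: when leaf entries only partly overlap the window, the annotation alone no longer suffices.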
The approach basically consists of two kinds of indexes: a host index, which is an R-tree with the summarized information, and a B-tree containing time-varying aggregate data. In the most general case, each region has an associated B-tree with the historical information of the measures of interest in the region. This is a very efficient solution for some kinds of queries, for example, window aggregate queries (i.e., the computation of the aggregate measure of the regions which intersect a spatio-temporal window). In addition, the method is very effective when a query is posed over a query region whose intersection with the objects in a map must be computed on-the-fly, and these objects are totally enclosed in the query region. However, problems may appear when leaf entries partially overlap the query window. In this case, the result must be estimated, or the actual results computed using the base tables. In fact, Tao, Kollios, Considine, Li, and Papadias (2004) show that the aRB-tree can suffer from the distinct counting problem if the object remains in the same region for several timestamps.

A Survey on Spatio-Temporal Data Warehousing (Chinese translation)

Abstract: Geographic Information Systems have been widely used in a variety of application domains, including economic, ecological and demographic analysis, and city and route planning.

Information Systems Development and Database Development (Chinese-English)

In many organizations, database development begins with enterprise data modeling, which establishes the scope and general content of the organization's databases.

This step usually occurs during an organization's information systems planning process. Its purpose is to create an overall description or explanation of organizational data, not to design a particular database. A particular database provides data for one or more information systems, whereas an enterprise data model (which may encompass many databases) describes the scope of the data maintained by the organization. In enterprise data modeling, you review current systems, analyze the nature of the business areas to be supported, describe the data that needs further abstraction, and plan one or more database development projects.

Figure 1 shows a segment of the enterprise data model for Pine Valley Furniture Company.

1.1 Information Systems Architecture

As shown in Figure 1, a high-level data model is only one part of an overall information systems architecture (ISA), the blueprint for an organization's information systems. During information systems planning, you may build an enterprise data model as part of the overall information systems architecture.

According to Zachman (1987) and Sowa and Zachman (1992), an information systems architecture consists of the following six key components: Data (as shown in Figure 1, although other representations exist as well).

Processes that manipulate the data (these can be represented by data flow diagrams, object models with methods, or other notations).

Networks, which transport data within the organization and between the organization and its key business partners (these can be shown by network connectivity and topology diagrams).

People, who perform the processes and are the sources and receivers of data and information (people are shown in process models as the senders and receivers of data). Events and points in time at which processes are performed (these can be shown by state-transition diagrams and in other ways).

Reasons for events and rules that govern the processing of data (often shown in text form, although there are also diagramming tools for planning, such as decision tables).

1.2 Information Engineering

Information systems planners develop an information systems architecture by following a particular methodology for information systems planning.

Information engineering is one formal and popular such methodology.

Information engineering is a data-oriented methodology for creating and maintaining information systems.

Because information engineering is data-oriented, a concise explanation of it is helpful as you begin to understand how databases are identified and defined. Information engineering follows a top-down planning approach, in which specific information systems are deduced from a broad understanding of information needs (for example, we need data about customers, products, suppliers, salespersons, and work centers), rather than merged from many detailed information requests (such as an order entry screen or a sales summary report by region). Top-down planning allows developers to plan information systems more comprehensively, provides a way to consider the integration of system components, promotes understanding of the relationship between information systems and business objectives, and deepens understanding of the impact of information systems on the organization as a whole. Information engineering includes four steps: planning, analysis, design, and implementation. The planning phase of information engineering produces the information systems architecture, including the enterprise data model.

Foreign Literature Translation for Information Systems — Systems Analysis and Design

Appendix 1: Translation (Original Text)

Systems Analysis and Design

Working under control of a stored program, a computer processes data into information. Think about that definition for a minute. Any given computer application involves at least three components: hardware, software, and data. Merely writing a program isn't enough, because the program is but one component in a system.

A system is a group of components that work together to accomplish an objective. For example, consider a payroll system. Its objective is paying employees. What components are involved? Each day, employees record their hours worked on time cards. At the end of each week, the time cards are collected and delivered to the computer center, where they are read into a payroll program. As it runs, the program accesses data files. Finally, the paychecks are printed and distributed. For the system to work, people, procedures, input and output media, files, hardware, and software must be carefully coordinated. Note that the program is but one component in a system.

Computer-based systems are developed because people need information. Those people, called users, generally know what is required, but may lack the expertise to obtain it. Technical professionals, such as programmers, have the expertise, but may lack training in the user's field. To complicate matters, users and programmers often seem to speak different languages, leading to communication problems. A systems analyst is a professional who translates user needs into technical terms, thus serving as a bridge between users and technical professionals.

Like an engineer or an architect, a systems analyst solves problems by combining solid technical skills with insight, imagination, and a touch of art. Generally, the analyst follows a well-defined, methodical process that includes at least the following steps:

1. Problem definition
2. Analysis
3. Design
4. Implementation
5. Maintenance

At the end of each step, results are documented and shared with both the user and the programmers.
The idea is to catch and correct errors and misunderstandings as early as possible. Perhaps the best way to illustrate the process is through an example.

Picture a small clothing store that purchases merchandise at wholesale, displays this stock, and sells it to customers at retail. On the one hand, too much stock represents an unnecessary expense. On the other hand, a poor selection discourages shoppers. Ideally, a balance can be achieved: enough, but not too much. Complicating matters is the fact that inventory is constantly changing, with customer purchases depleting stock, and returns and reorders adding to it. [1] The owner would like to track inventory levels and reorder any given item just before the store runs out. For a single item, the task is easy: just count the stock-on-hand. Unfortunately, the store has hundreds of different items, and keeping track of each one is impractical. Perhaps a computer might help.

2-1 Problem Definition

The first step in the systems analysis and design process is problem definition. The analyst's objective is determining what the user (in this case, the store's owner) needs. Note that, as the process begins, the user possesses the critical information, and the analyst must listen and learn. Few users are technical experts. Most see the computer as a "magic box," and are not concerned with how it works. At this stage, the analyst has no business even thinking about programs, files, and computer hardware, but must communicate with the user on his or her own terms. The idea is to ensure that both the user and the analyst are thinking about the same thing. Thus, a clear, written statement expressing the analyst's understanding of the problem is essential. The user should review and correct this written statement. The time to catch misunderstandings and oversights is now, before time, money and effort are wasted. Often, following a preliminary problem definition, the analyst performs a feasibility study.
The study, a brief capsule version of the entire systems analysis and design process, attempts to answer three questions:

1. Can the problem be solved?
2. Can it be solved in the user's environment?
3. Can it be solved at a reasonable cost?

If the answer to any one of these questions is no, the system should not be developed. Given a good problem definition and a positive feasibility study, the analyst can turn to planning and developing a problem solution.

2-2 Analysis

As analysis begins, the analyst understands the problem. The next step is determining what must be done to solve it. The user knows what must be done; during analysis, this knowledge is extracted and formally documented. Most users think in terms of the functions to be performed and the data elements to be manipulated. The objective is to identify and link these key functions and data elements, yielding a logical system design.

Start with the system's basic functions. The key is keeping track of the stock-on-hand for each product in inventory. Inventory changes because customers purchase, exchange, and return products, so the system will have to process customer transactions. The store's owner wants to selectively look at the inventory level for any product in short supply and, if appropriate, order replacement stock, so the system must be able to communicate with management. Finally, following management authorization, the system should generate a reorder ready to send to a supplier.

Fig. 1

Given the system's basic functions, the analyst's next task is gaining a sense of their logical relationship. A good way to start is by describing how data flow between the functions. As the name implies, data flow diagrams are particularly useful for graphically describing these data flows. Four symbols are used (Fig. 1). Data sources and destinations are represented by squares; input data enter the system from a source, and output data flow to a destination.
Once in the system, the data are manipulated or changed by processes, represented by round-corner rectangles. A process might be a program, a procedure, or anything else that changes or moves data. Data can be held for later processing in data stores, symbolized by open-ended rectangles. A data store might be a disk file, a tape file, a database, written notes, or even a person's memory. Finally, data flow between sources, destinations, processes, and data stores over data flows, which are represented by arrows.

Fig 2

Figure 2 shows a preliminary data flow diagram for the inventory system. Start with CUSTOMER. Transactions flow from a customer into the system, where they are handled by Process transaction. A data store, STOCK, holds data on each item in inventory. Process transaction changes the data to reflect the new transaction. Meanwhile, MANAGEMENT accesses the system through Communicate, evaluating the data in STOCK and, if necessary, requesting a reorder. Once a reorder is authorized, Generate reorder sends the necessary data to the SUPPLIER, who ships the items to the store. Note that, because the reorder represents a change in the inventory level of a particular product or products, it is handled as a transaction.

The data flow diagram describes the logical system. The next step is tracing the data flows. Start with the destination SUPPLIER. Reorders flow to suppliers; for example, the store might want 25 pairs of jeans. To fill the order, the supplier needs the product description and the reorder quantity. Where do these data elements come from? Since they are output by Generate reorder, they must either be input to or generated by this process. Data flow into Generate reorder from STOCK; thus, product descriptions and reorder quantities must be stored in STOCK.

Other data elements, such as the item purchased and the purchase quantity, are generated by CUSTOMER. Still others, for example selling price and reorder point, are generated by or needed by MANAGEMENT.
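The two central ideas traced above, Process transaction updating the STOCK data store and Generate reorder firing when stock falls to the reorder point, can be sketched in a few lines of code. This is only an illustrative sketch; the item names, field names, and quantities are assumptions, not data from the text.

```python
# Sketch of the DFD's core: Process transaction updates the STOCK data
# store, and the reorder point decides when Generate reorder should fire.
# Item names, field names, and quantities are illustrative assumptions.

stock = {
    "jeans":    {"on_hand": 16, "reorder_point": 15, "reorder_qty": 25},
    "t-shirts": {"on_hand": 40, "reorder_point": 20, "reorder_qty": 50},
}

def process_transaction(stock, item, kind, qty):
    """Apply one customer transaction to the STOCK data store."""
    if kind == "purchase":               # purchases deplete stock
        stock[item]["on_hand"] -= qty
    elif kind in ("return", "reorder"):  # returns and reorders add to it
        stock[item]["on_hand"] += qty
    else:
        raise ValueError(f"unknown transaction type: {kind}")

def generate_reorder(stock):
    """(item, quantity) pairs for items at or below their reorder point."""
    return [(item, rec["reorder_qty"]) for item, rec in stock.items()
            if rec["on_hand"] <= rec["reorder_point"]]

process_transaction(stock, "jeans", "purchase", 2)   # 16 -> 14: now low
print(generate_reorder(stock))                       # [('jeans', 25)]
```

Note how the reorder itself would flow back through Process transaction as just another stock-increasing transaction, exactly as the diagram requires.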
The current stock-on-hand for a given item is an example of a data element generated by an algorithm in one of the procedures. Step by step, methodically, the analyst identifies the data elements to be input to, stored by, manipulated by, generated by, or output by the system.

To keep track of the data elements, the analyst might list each one in a data dictionary. A simple data dictionary can be set up on index cards, but computerized data dictionaries have become increasingly popular. The data dictionary, a collection of data describing and defining the data, is useful throughout the systems analysis and design process, and is often used to build a database during the implementation stage.

The idea of analysis is to define the system's major functions and data elements methodically. Remember that the objective is translating user needs into technical terms. Since the system starts with the user, the first step is defining the user's needs. Users think in terms of functions and data. They do not visualize programs, or files, or hardware, and during this initial, crucial analysis stage it is essential that the analyst think like a user, not like a programmer.

Data flow diagrams and data dictionaries are useful tools. They provide a format for recording key information about the proposed system. Also, they jog the analyst's memory; for example, if the analyst doesn't have sufficient information to complete a data dictionary entry, he or she has probably missed something. Perhaps most importantly, the data flow diagram and the data dictionary document the analyst's understanding of the system requirements. By reviewing these documents, the user can correct misunderstandings or oversights. Finally, they represent an excellent starting point for the next step, design.

2-3 Design

As we enter the design stage, we know what the system must do, and thus can begin thinking about how to do it. The objective is to develop a strategy for solving the problem.
At this stage, we are not interested in writing code or in defining precise data structures; instead, we want to identify, at a black-box level, necessary programs, files, procedures, and other components.

The data flow diagram defines the system's necessary functions; how might they be implemented? One possibility is writing one program for each process. Another is combining two or more processes in a single program; there are dozens of alternative solutions. Let's focus on one option and document it.

A system flowchart uses symbols to represent programs, procedures, hardware devices, and the other components of a physical system (Fig. 3). Our flowchart (Fig. 4) shows that transaction data enter the system through a terminal, are processed by a data collection program, and then are stored on an inventory file. Eventually, the inventory file is processed by a Report and reorder program. Through it, management manipulates the data and authorizes reorders.

Fig. 4 On a system flowchart, symbols represent programs, procedures, hardware devices, and the other components of a physical system.

Fig 3

Look at the system flowchart. It identifies several hardware components, including a computer, a disk drive, a data entry terminal, a printer, and a display terminal. Two programs are needed: Process transaction and Report and reorder. In addition to the hardware and the programs, we'll need data structures for the inventory file and for data flows between the I/O devices and the software. Note that this system flowchart illustrates one possible solution; a good analyst will develop several feasible alternatives before choosing one.

Fig 4

The flowchart maps the system, highlighting its major physical components. Since the data link the components, the next task is defining the data structures. Consider, for example, the inventory file. It contains all the data elements from the data store STOCK. The data elements are listed in the data dictionary.
Using them, the file's data structure can be planned.

How should the file be organized? That depends on how it will be accessed. For example, in some applications, data are processed at regular, predictable intervals. Typically, the data are collected over time and processed together, as a batch. If batch processing is acceptable, a sequential file organization is probably best.

It is not always possible to wait until a batch of transactions is collected, however. For example, consider an air defense early warning system. If an unidentified aircraft is spotted, it must be identified immediately; the idea of waiting until 5:00 p.m. because "that's when the air defense program is run" is absurd. Instead, because of the need for quick response, each transaction must be processed as it occurs. Generally, such transaction processing systems call for direct access files.

Our inventory system has two programs. One processes transactions; a direct access inventory file seems a reasonable choice. The other allows management to study inventory data occasionally; batch processing would certainly do. Should the inventory file be organized sequentially or directly? Faced with such a choice, a good analyst considers both options. One possible system might accept transactions and process them as they occur. As an alternative, sales slips might be collected throughout the day and processed as a batch after the store closes. In the first system, the two programs would deal with direct access files; in the second system, they would be linked to sequential files. A program to process direct access data is different from a program to process sequential data. The data drive the system. The choice of a data structure determines the program's structure. Note that the program is defined and planned in the context of the system.

2-4 Implementation

Once the system's major components have been identified, we can begin to develop them.
Our system includes two programs, several pieces of equipment, and a number of data structures. During implementation, each program is planned and written using the techniques described in Chapter 7. Files are created, and their contents checked. New hardware is purchased, installed, and tested. Additionally, operating procedures are written and evaluated. Once all the component parts are ready, the system is tested. Assuming the user is satisfied, the finished system is released.

2-5 Maintenance

Maintenance begins after the system is released. As people use it, they will suggest minor improvements and enhancements. Occasionally, bugs slip through debugging and testing, and removing them is another maintenance task. Finally, conditions change, and a program must be updated; for example, if the government passes a law changing the procedure for collecting income taxes, the payroll program must be modified. Maintenance continues for the life of a system, and its cost can easily match or exceed the original development cost. Good planning, solid documentation, and well-structured programs can help to minimize maintenance cost.

Appendix 2: Translation

Systems Analysis and Design

Under stored-program control, a computer processes data into information.


Chinese-English Parallel Translation: Information Systems Development and Database Development

In many organizations, database development begins with enterprise data modeling, which establishes the range and general contents of organizational databases.

This step usually occurs during an organization's information systems planning process. Its purpose is to create an overall description or explanation of organizational data, not to design a particular database.

A particular database provides the data for one or more information systems, whereas an enterprise data model, which may encompass many databases, describes the scope of the data maintained by the organization.

In enterprise data modeling, you review current systems, analyze the nature of the business areas to be supported, describe the data needed at a high level of abstraction, and plan one or more database development projects.

Figure 1 shows a segment of the enterprise data model for Pine Valley Furniture Company.

1.1 Information Systems Architecture

As shown in Figure 1, a high-level data model is only one part of an overall information systems architecture (ISA), the blueprint for an organization's information systems.

During information systems planning, you can build an enterprise data model as part of the overall information systems architecture.

According to Zachman (1987) and Sowa and Zachman (1992), an information systems architecture consists of the following six key components:

Data (as shown in Figure 1, although other representations are also possible).

Processes that manipulate the data (these can be shown by data flow diagrams, object models with methods, or other notations).

Networks, which transport data within the organization and between the organization and its key business partners (these can be shown by network links and topology diagrams).

People, who perform the processes and are the sources and receivers of data and information (people are shown in process models as the senders and receivers of data).

Events and points in time at which processes are performed (these can be shown by state-transition diagrams and in other ways).

Reasons for events and rules governing data processing (often shown in text form, although there are also diagramming tools for planning, such as decision tables).

1.2 Information Engineering

Information systems planners develop an information systems architecture by following a particular information systems planning methodology.

Information engineering is one formal and popular methodology.

Information engineering is a data-oriented methodology for creating and maintaining information systems.

Because information engineering is data-oriented, a concise explanation of it is quite helpful as you begin to understand how databases are identified and defined.

Information engineering follows a top-down planning approach in which specific information systems are derived from a broad understanding of information needs (for example, we need data about customers, products, suppliers, salespersons, and work centers), rather than from the consolidation of many detailed information requests (such as an order-entry screen or a sales summary report by region).

Top-down planning allows developers to plan information systems more comprehensively, provides a way to consider the integration of system components, promotes understanding of the relationship between information systems and business objectives, and deepens understanding of the impact of information systems on the organization as a whole.

Information engineering consists of four steps: planning, analysis, design, and implementation.

The planning phase of information engineering yields the information systems architecture, including the enterprise data model.

1.3 Information Systems Planning

The goal of information systems planning is to align information technology closely with the organization's business strategies. This alignment is essential for getting the greatest benefit from investments in information systems and technology.

As described in Table 1, the planning phase of the information engineering methodology consists of three steps, which we discuss in the following three subsections.

1. Identify key planning factors. Key planning factors are organizational goals, critical success factors, and problem areas.

The purpose of identifying these factors is to establish the planning context and to link information systems planning to strategic business planning.

Table 2 shows some possible key planning factors for Pine Valley Furniture Company; these factors help information systems managers set priorities among new information systems and databases for addressing requirements.

For example, given the problem area of inaccurate sales forecasting, an information systems manager might place additional historical sales data, new market research data, and data on tests of new products into organizational databases.

2. Identify organizational planning objects. Organizational planning objects define the business scope, which limits subsequent systems analysis and the places where information systems may change.

The five key planning objects are as follows:

● Organizational units: the various departments of the organization.

● Organizational locations: the places at which business operations occur.

● Business functions: related groups of business processes that support the organization's mission.

Business functions differ from organizational units; in fact, one function may be assigned to several organizational units (for example, the product development function may be the joint responsibility of the sales and production departments).

● Entity types: the major categories of data about the people, places, and things managed by the organization.

● Information systems: the application software and supporting procedures that handle sets of data.

3. Develop the enterprise model. A comprehensive enterprise model consists of a functional decomposition model of each business function, the enterprise data model, and various planning matrices.

Functional decomposition is the process of breaking down an organization's functions into progressively greater detail; it is a classical means used in systems analysis to simplify a problem, isolate attention, and identify components.

Figure 2 shows an example of the functional decomposition of the order fulfillment function at Pine Valley Furniture Company.
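A functional decomposition like the one in Figure 2 can be sketched as a simple tree of functions and subfunctions. The function names below are illustrative assumptions, not the actual contents of Figure 2.

```python
# Sketch of a functional decomposition: each business function is broken
# down into progressively more detailed subfunctions.
# The function names are illustrative assumptions, not Figure 2 itself.

order_fulfillment = {
    "Order fulfillment": {
        "Take order": {},
        "Check credit": {},
        "Fill order": {
            "Pick goods": {},
            "Ship goods": {},
        },
    },
}

def leaf_functions(tree):
    """Return the most detailed (leaf) functions in a decomposition."""
    leaves = []
    for name, subtree in tree.items():
        if subtree:
            leaves.extend(leaf_functions(subtree))
        else:
            leaves.append(name)
    return leaves

print(leaf_functions(order_fulfillment))
```

The leaf functions are the level of detail at which data needs are usually identified.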

Multiple databases are necessary to handle the full set of business and supporting functions, so a particular database may support only a subset of the supporting functions (as shown in Figure 2).

To reduce data redundancy and to make data more meaningful, it is helpful to have a complete, high-level, enterprise-wide view.

An enterprise data model is described using a particular notation.

Besides a graphical depiction of the entity types, a complete enterprise data model should include a description of each entity type and summaries describing business operations, that is, business rules.

Business rules determine the validity of data.

An enterprise data model includes not only entity types but also the relationships between data entities, as well as various other relationships between planning objects.

A common form for showing the relationships between planning objects is a matrix.

Planning matrices are important because they can state business requirements explicitly without requiring that the databases be modeled.

Planning matrices are often derived from business rules; they help set priorities among development activities, sequence those activities, and schedule them through an enterprise-wide approach based on the top-down view.

Many kinds of planning matrices are available; common ones include the following:

● Location-to-function: shows at which business locations each business function is performed.

● Unit-to-function: shows which business units perform or are responsible for each business function.

● Information system-to-data entity: explains how each information system interacts with each data entity (for example, whether each system creates, retrieves, updates, and deletes the data in each entity).

● Supporting function-to-data entity: identifies which data are captured, used, updated, and deleted within each function.

● Information system-to-objective: shows each business objective supported by each information system.

Figure 3 illustrates one possible function-to-data entity matrix.

Such matrices can be used for many purposes, including the following three:

1) Identify blank entities: show which data entities are not used by any function, or which functions use no entities.

2) Discover missing entities: the employees involved in each function can examine the matrix to identify any entities that may have been missed.

3) Prioritize development activities: if a given function has a high priority for systems development (perhaps because it relates to important organizational objectives), then the entities used in that area should receive a high priority in database development.
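The first of these uses, identifying blank entities, can be sketched with a tiny function-to-data-entity matrix. The function and entity names below are made up for illustration.

```python
# Sketch of a function-to-data-entity planning matrix and the first use
# listed above: finding entities no function uses (and idle functions).
# Function and entity names are illustrative assumptions.

entities = ["Customer", "Product", "Supplier", "Invoice", "Warehouse"]

# Each business function maps to the set of entities it uses.
matrix = {
    "Order fulfillment": {"Customer", "Product", "Invoice"},
    "Purchasing":        {"Supplier", "Product"},
}

def blank_entities(entities, matrix):
    """Entities that appear in no function's row of the matrix."""
    used = set().union(*matrix.values())
    return [e for e in entities if e not in used]

def idle_functions(matrix):
    """Functions whose row of the matrix is empty."""
    return [f for f, used in matrix.items() if not used]

print(blank_entities(entities, matrix))  # no function touches Warehouse
print(idle_functions(matrix))            # every function uses something
```

A blank entity such as Warehouse here would prompt the analyst to ask whether the entity is unnecessary or a function has been overlooked.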

Hoffer, George, and Valacich (2002) give a more complete description of how planning matrices are used in information engineering and systems planning.

2. The Database Development Process

Information systems planning based on information engineering is one source of database development projects.

Such projects to develop new databases usually serve the strategic needs of the organization, such as improving customer support, improving product and inventory management, or making more accurate sales forecasts.

Many database development projects, however, arise in a more bottom-up fashion: for example, information system users may need particular information to do their jobs and request that a project be started, or other information systems specialists may see that the organization needs improved data management and start a new project.

Even in the bottom-up case, an enterprise data model must be built in order to understand whether existing databases can provide the needed data; if not, new databases, data entities, and attributes should be added to the organization's current data resources.

Whether they serve strategic needs or operational information needs, database development projects usually focus on one database each.

Some database projects concentrate only on defining, designing, and implementing a database as a foundation for subsequent information systems development.

In most cases, however, a database and its associated information processing functions are developed as part of a complete information systems development project.

2.1 The Systems Development Life Cycle

The traditional process that guides the management of information systems development projects is the systems development life cycle (SDLC).

The SDLC is the complete set of steps by which a team of information systems specialists in an organization, including database designers and programmers, specifies, develops, maintains, and replaces information systems.

The process is likened to a waterfall because each step flows into the next adjacent step; that is, the specification of the information system is developed piece by piece, and the output of each piece is the input to the next.

As the figure shows, however, the steps are not purely linear: the steps overlap in time (so the steps can be managed in parallel), and it is possible to roll back to earlier steps when previous decisions need to be reconsidered.

(So water can run uphill in this waterfall!) Figure 4 gives concise annotations of the purpose and deliverables of each phase of the SDLC.

Every phase of the SDLC includes activities related to database development, so database management issues pervade the whole systems development process.

In Figure 5 we repeat the seven phases of the SDLC and outline the database development activities common to each phase.

Note that there is no one-to-one correspondence between the SDLC phases and the database development steps; conceptual data modeling occurs between two SDLC phases.

Enterprise modeling. The database development process begins with enterprise modeling (a part of the project identification and selection phase of the SDLC), which sets the range and general contents of organizational databases.

Enterprise modeling occurs during information systems planning and other activities that determine which parts of an information system need to change and be strengthened and that outline the scope of all organizational data.

In this step, the current databases and information systems are examined, the nature of the business area that is the subject of the development project is analyzed, and the data needed by each information system to be developed is described in very general terms.

Each project proceeds to the next step only if it fits the organization's expected goals.

Conceptual data modeling. For an information systems project that has been initiated, the conceptual data modeling phase analyzes the overall data requirements of the information system.

It is carried out in two stages.

First, during the project initiation and planning phase, a diagram similar to Figure 1 is built.

Other documents are also produced at the same time to outline the scope of the data needed in the particular development project, without regard to the existing databases.

Only high-level categories of data (entities) and the major relationships are included at this point.

Then, during the analysis phase of the SDLC, a detailed data model is produced that identifies all the organizational data the information system must manage, defines all the data attributes, lists all the categories of data, represents every business relationship between data entities, and identifies every rule describing the integrity of the data.
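The products of this detailed stage, entity types, attributes, relationships, and integrity rules, can be sketched in miniature. All the names and the sample rule below are illustrative assumptions, not part of the original model.

```python
# Miniature sketch of what a detailed conceptual data model records:
# entity types, their attributes, relationships, and integrity rules.
# All names and the sample rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EntityType:
    name: str
    attributes: dict          # attribute name -> data type

@dataclass
class Relationship:
    from_entity: str
    to_entity: str
    cardinality: str          # e.g. "1:N"

customer = EntityType("Customer", {"customer_id": "int", "name": "str"})
order = EntityType("Order", {"order_id": "int", "customer_id": "int",
                             "quantity": "int"})

places = Relationship("Customer", "Order", "1:N")

# A business rule expressed as an integrity check on attribute values.
def order_is_valid(order_row):
    """Rule (assumed): an order's quantity must be positive."""
    return order_row["quantity"] > 0

print(order_is_valid({"order_id": 1, "customer_id": 7, "quantity": 3}))
```

Capturing rules alongside entities in this way is what later lets the consistency checks described next be carried out.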

During the analysis phase, the conceptual data model (later also called the conceptual schema) is also checked for consistency with the other categories of models used to explain other aspects of the target information system, such as the processing steps, the rules for handling data, and the timing of events.

Even such a detailed conceptual data model is only preliminary, however, because subsequent activities in the information systems life cycle may uncover missing elements or mistakes when transactions, reports, displays, and inquiries are designed.

Therefore, conceptual data modeling is often said to be done in a top-down fashion, driven by a general understanding of the business area rather than by specific information processing activities.

3. Logical database design. Logical database design approaches database development from two perspectives.

First, the conceptual data model is transformed into a standard notation based on relational database theory: relations.

Then, just as each computer program of the information system (including the program's input and output formats) is designed, the transactions, reports, displays, and inquiries supported by the database are examined in detail.
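The first perspective, transforming an entity type from the conceptual model into a relation, can be sketched as a simple mapping. The entity, attribute names, and SQL type names here are assumptions for illustration.

```python
# Sketch of the first step of logical design: mapping an entity type
# from the conceptual model onto a relation (a table definition).
# The entity, attribute names, and SQL types are illustrative assumptions.

def entity_to_relation(name, attributes, primary_key):
    """Render an entity type as a CREATE TABLE statement for a relation."""
    cols = ",\n  ".join(f"{a} {t}" for a, t in attributes.items())
    return (f"CREATE TABLE {name} (\n"
            f"  {cols},\n"
            f"  PRIMARY KEY ({primary_key})\n"
            f");")

ddl = entity_to_relation(
    "Customer",
    {"customer_id": "INTEGER", "name": "VARCHAR(50)"},
    "customer_id",
)
print(ddl)
```

In practice this mapping also covers relationships (as foreign keys) and normalization, which the bottom-up analysis described next helps verify.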

In this so-called bottom-up analysis, the data that must be maintained in the database and the nature of the data needed by each transaction, report, and so on are verified precisely.

The analysis of each individual report, transaction, and so on considers a specific, limited, yet complete view of the database.

As the reports, transactions, and so on are analyzed, the conceptual data model may be changed as needed.

Especially on large projects, different teams of analysts and systems developers may work independently on different programs or sets of programs, and the details of all their work may not be revealed until the logical design phase.

In that situation, the logical database design phase must combine, or integrate, the original conceptual data model and these independent user views into a comprehensive design.

It is also possible to identify additional information processing requirements during logical information systems design; these new requirements must then be integrated into the previously identified logical database design.
