Database Management: Chinese-English Foreign Literature Translations


Database: Chinese-English Foreign Literature Translation


Chinese-English Foreign Literature Translation

Database Management Systems

A database (sometimes spelled data base) is also called an electronic database, referring to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.

A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Using keywords and various sorting commands, users can rapidly search, rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregates of data.

Complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers.

A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users. Several major trends are emerging that enhance the value and usefulness of database management systems:

Managers: who require more up-to-date information to make effective decisions.
Customers: who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.
Users: who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.
Organizations: which discover that information has a strategic value; they utilize their database systems to gain an edge over their competitors.

The Database Model

A data model describes a way to structure and manipulate the data in a database. The structural part of the model specifies how data should be represented (such as trees, tables, and so on). The manipulative part of the model specifies the operations with which to add, delete, display, maintain, print, search, select, sort, and update the data.
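As a small illustration of the structural and manipulative parts just described, here is a hedged SQL sketch: one data definition statement sets up the structure, and the manipulation statements add, update, and delete data. The parts table and its columns are assumptions for illustration, not taken from the article.

-- Structural part: define how the data are represented.
CREATE TABLE parts (
    part_no     INTEGER PRIMARY KEY,
    description VARCHAR(60),
    unit_price  DECIMAL(8,2)
);

-- Manipulative part: add, update, and delete data.
INSERT INTO parts (part_no, description, unit_price) VALUES (100, 'Hex bolt', 0.15);
UPDATE parts SET unit_price = 0.17 WHERE part_no = 100;
DELETE FROM parts WHERE part_no = 100;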
Hierarchical Model

The first database management systems used a hierarchical model; that is, they arranged records into a tree structure. Some records are root records and all others have unique parent records. The structure of the tree is designed to reflect the order in which the data will be used; that is, the record at the root of a tree will be accessed first, then records one level below the root, and so on.

The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you may know, an organization chart often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels. Note that within a strict hierarchy, each level of management may have many employees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.

In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field, and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.

Relational Model

A major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management, called the relational model, which uses a table as its data structure.

The relational database is the most widely used database structure. Data is organized into related tables. Each table is made up of rows called records and columns called fields. Each record contains fields of data about some specific item. For example, in a table containing information on employees, a record would contain fields of data such as a person's last name, first name, and street address.

Structured Query Language (SQL) is a query language for manipulating data in a relational database. It is nonprocedural or declarative, in which the user need only give an English-like description that specifies the operation and the desired record or combination of records. A query optimizer translates the description into a procedure to perform the database manipulation.

Network Model

The network model creates relationships among data through a linked-list structure in which subordinate records can be linked to more than one parent record. This approach combines records with links, which are called pointers. The pointers are addresses that indicate the location of a record. With the network approach, a subordinate record can be linked to a key record and at the same time itself be a key record linked to other sets of subordinate records. The network model historically has had a performance advantage over other database models. Today, such performance characteristics are only important in high-volume, high-speed transaction processing such as automatic teller machine networks or airline reservation systems.

Both hierarchical and network databases are application specific. If a new application is developed, maintaining the consistency of databases in different applications can be very difficult. For example, suppose a new pension application is developed. The data are the same, but a new database must be created.
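Before turning to the object model, here is a minimal sketch of the declarative style described under the relational model above: the query names the desired records and fields, and the query optimizer works out the procedure for fetching them. The employees table and its columns (including department) are assumed for illustration.

-- "Give me the name and address fields of every employee in Sales,
--  sorted by last name" -- the statement says what is wanted, not how to find it.
SELECT last_name, first_name, street_address
FROM employees
WHERE department = 'Sales'
ORDER BY last_name;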
Object Model

The newest approach to database management uses an object model, in which records are represented by entities called objects that can both store data and provide methods or procedures to perform specific tasks. The query language used for the object model is the same object-oriented programming language used to develop the database application. This can create problems because there is no simple, uniform query language such as SQL. The object model is relatively new, and only a few examples of object-oriented databases exist. It has attracted attention because developers who choose an object-oriented programming language want a database based on an object-oriented model.

Distributed Database

Similarly, a distributed database is one in which different parts of the database reside on physically separated computers. One goal of distributed databases is access to information without regard to where the data might be stored. Keep in mind that once the users and their data are separated, communication and networking concepts come into play.

Distributed databases require software that resides partially in the larger computer. This software bridges the gap between personal and large computers and resolves the problems of incompatible data formats. Ideally, it would make the mainframe databases appear to be large libraries of information, with most of the processing accomplished on the personal computer.

A drawback to some distributed systems is that they are often based on what is called a mainframe-centric model, in which the larger host computer is seen as the master and the terminal or personal computer is seen as a slave. There are some advantages to this approach. With databases under centralized control, many of the problems of data integrity that we mentioned earlier are solved. But today's personal computers, departmental computers, and distributed processing require computers and their applications to communicate with each other on a more equal, or peer-to-peer, basis. In a database, the client/server model provides the framework for distributing databases.

One way to take advantage of many connected computers running database applications is to distribute the application into cooperating parts that are independent of one another. A client is an end user or computer program that requests resources across a network. A server is a computer running software that fulfills those requests across a network. When the resources are data in a database, the client/server model provides the framework for distributing databases.

A file server is software that provides access to files across a network. A dedicated file server is a single computer dedicated to being a file server. This is useful, for example, if the files are large and require fast access; in such cases, a minicomputer or mainframe would be used as a file server. A distributed file server spreads the files around on individual computers instead of placing them on one dedicated computer. Advantages of the latter approach include the ability to store and retrieve files on other computers and the elimination of duplicate files on each computer. A major disadvantage, however, is that individual read/write requests are being moved across the network, and problems can arise when updating files. Suppose a user requests a record from a file and changes it while another user requests the same record and changes it too. The solution to this problem is called record locking, which means that the first request makes other requests wait until the first request is satisfied. Other users may be able to read the record, but they will not be able to change it.
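A minimal sketch of the record locking just described, using the SELECT ... FOR UPDATE idiom that several relational products support; the bookings table, column names, and values are assumptions, and exact transaction syntax differs slightly between dialects.

-- Session 1 locks the record it intends to change.
START TRANSACTION;                 -- BEGIN, or implicit, in some dialects
SELECT seat_no
FROM bookings
WHERE booking_id = 42
FOR UPDATE;                        -- lock this record

UPDATE bookings
SET passenger_name = 'Lee'
WHERE booking_id = 42;

COMMIT;                            -- the lock is released here

-- While the record is locked, a second session can still read booking 42,
-- but its own UPDATE or SELECT ... FOR UPDATE on that row must wait.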
A database server is software that services requests to a database across a network. For example, suppose a user types in a query for data on his or her personal computer. If the application is designed with the client/server model in mind, the query language part on the personal computer simply sends the query across the network to the database server and requests to be notified when the data are found.

Examples of distributed database systems can be found in the engineering world. Sun's Network File System (NFS), for example, is used in computer-aided engineering applications to distribute data among the hard disks in a network of Sun workstations.

Distributing databases is an evolutionary step because it is logical that data should exist at the location where they are being used. Departmental computers within a large corporation, for example, should have data reside locally, yet those data should be accessible by authorized corporate management when they want to consolidate departmental data. DBMS software will protect the security and integrity of the database, and the distributed database will appear to its users as no different from the non-distributed database.

In this information age, the data server has become the heart of a company. This one piece of software controls the rhythm of most organizations and is used to pump information lifeblood through the arteries of the network. Because of the critical nature of this application, the data server is also one of the most popular targets for hackers. If a hacker owns this application, he can cause the company's "heart" to suffer a fatal arrest.

Ironically, although most users are now aware of hackers, they still do not realize how susceptible their database servers are to hack attacks. Thus, this article presents a description of the primary methods of attacking database servers (also known as SQL servers) and shows you how to protect yourself from these attacks.

You should note this information is not new. Many technical white papers go into great detail about how to perform SQL attacks, and numerous vulnerabilities have been posted to security lists that describe exactly how certain database applications can be exploited. This article was written for the curious non-SQL experts who do not care to know the details, and as a review for those who do use SQL regularly.

What Is a SQL Server?

A database application is a program that provides clients with access to data. There are many variations of this type of application, ranging from the expensive enterprise-level Microsoft SQL Server to the free and open source MySQL. Regardless of the flavor, most database server applications have several things in common.

First, database applications use the same general programming language known as SQL, or Structured Query Language. This language, also known as a fourth-generation language because of its simple syntax, is at the core of how a client communicates its requests to the server. Using SQL in its simplest form, a programmer can select, add, update, and delete information in a database.
However, SQL can also be used to create and design entire databases, perform various functions on the returned information, and even execute other programs. To illustrate how SQL can be used, the following is an example of a simple standard SQL query and a more powerful SQL query:

Simple: "Select * from dbFurniture.tblChair"

This returns all information in the table tblChair from the database dbFurniture.

Complex: "EXEC master..xp_cmdshell 'dir c:\'"

This short SQL command returns to the client the list of files and folders under the c:\ directory of the SQL server. Note that this example uses an extended stored procedure that is exclusive to MS SQL Server.

The second function that database server applications share is that they all require some form of authenticated connection between client and host. Although the SQL language is fairly easy to use, at least in its basic form, any client that wants to perform queries must first provide some form of credentials that will authorize the client; the client also must define the format of the request and response.

This connection is defined by several attributes, depending on the relative location of the client and what operating systems are in use. We could spend a whole article discussing various technologies such as DSN connections, DSN-less connections, RDO, ADO, and more, but these subjects are outside the scope of this article. If you want to learn more about them, a little Googling will provide you with more than enough information. However, the following is a list of the more common items included in a connection request:

Database source
Request type
Database
User ID
Password

Before any connection can be made, the client must define what type of database server it is connecting to. This is handled by a software component that provides the client with the instructions needed to create the request in the correct format. In addition to the type of database, the request type can be used to further define how the client's request will be handled by the server. Next comes the database name, and finally the authentication information.

All the connection information is important, but by far the weakest link is the authentication information, or lack thereof. In a properly managed server, each database has its own users with specifically designated permissions that control what type of activity they can perform. For example, one user account would be set up as read-only for applications that need only to access information. Another account should be used for inserts or updates, and maybe even a third account would be used for deletes. This type of account control ensures that any compromised account is limited in functionality. Unfortunately, many database programs are set up with null or easily guessed passwords, which leads to successful hack attacks.

Translation (译文): Introduction to Database Management Systems. A database (sometimes spelled data base), also called an electronic database, is a specially organized collection of data or information whose purpose is to allow rapid search and retrieval by a computer.
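To make the account-control idea above concrete, here is a minimal, hypothetical sketch of least-privilege accounts. The user names, the orders table, and the exact CREATE USER / GRANT syntax (which differs between MySQL, Oracle, and SQL Server) are assumptions for illustration only.

-- One account may only read; a second may read and modify, but never delete.
CREATE USER app_reader IDENTIFIED BY 'Str0ngPass1';
CREATE USER app_writer IDENTIFIED BY 'Str0ngPass2';

GRANT SELECT ON orders TO app_reader;                  -- read-only access
GRANT SELECT, INSERT, UPDATE ON orders TO app_writer;  -- no DELETE, no schema changes

-- If either account is compromised, the attacker inherits only these
-- limited permissions rather than full control of the server.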

Computer Science Foreign Literature Translation --- Database Management in the Grid


Graduation Project (Thesis) Foreign Literature Translation (Original Text)

Database Management Moves into the Grid

Database management software (DBMS) has been the backbone of enterprise computing for many years. The market is growing in size and will continue to gain prominence in 2004. With the consolidation, standardisation and centralisation of IT systems underway in most organisations, the demand for highly scalable and reliable database systems is on the rise.

According to reliable industry estimates, the Indian database market is currently at about $100 million, and the top three players put together have a market share of more than 70 percent. IDC expects the information and data management software segment to grow at a compounded annual growth rate (CAGR) of 17 percent till 2006. "There will be independent solutions like business intelligence that are largely going to drive the use and adoption of databases," says Tarun Malik, product marketing manager, Microsoft India.

The importance of having a database and data warehouses for various specific applications will also be a growth factor driving the market. Early adopters of sophisticated database management and business intelligence tools would be large computing verticals like the government, the banking, financial services and insurance (BFSI) sector, telecom, IT services, manufacturing and the retail sector.

Current status

Four or five years ago DBMS was just a data store, with medium and large companies only looking at it as a tool for storing data. Then around three years ago it really moved into what is called the relational database space. This is where the concept of applications on databases came into the picture.

In terms of users there has been a shift from mere database administrators to developers, to data warehouse managers, and also towards business intelligence usage that involves a whole lot of people and not just CIOs. This means users have also evolved with the evolution of the product, its usage and market. Till the time it was a data store, database administrators could have managed it. But when it became a data warehouse, CIOs and skilled technical experts got involved.

That is why DBMS is now an integral and crucial part of the overall IT policy of large enterprises. The importance of DBMS has come to the fore especially after the adoption of ERP and CRM solutions. If you look at the top of the pyramid, for the top few IT spenders, DBMS has become as important as network infrastructure. "As a matter of fact, that is why it is also driving the platform strategy of vendors," says Malik. However, the trend is still evolving in the SME space.

One can now see a very strong momentum in the marketplace. As data continues to grow exponentially, one witnesses the type of information changing from record-oriented to content-oriented data. Databases have become content or information repositories. Handling that content and supporting applications is not only transaction-oriented but also analysis-oriented. Mixed content is going to be a way in which databases differentiate themselves. There is a trend to push more analytics into the database, with abilities like data mining in real time to support new applications.

XML will be important as users now store and build content repositories to represent that kind of content.
In terms of database topology, the ability to get performance, scalability and high availability in different environments is also gaining importance. Another clear trend in the database space is towards building infrastructure that is robust, secure and low-cost. That is why almost all vendors are looking at offering unlimited scalability and reliability on low-cost computers.

Drivers

Apart from the increasing adoption of databases in different verticals, the return on investment (RoI) and functionality of databases are also fuelling the growth of DBMS in the country. Consumers, especially after the dot-com debacle, have started looking at spending less and deriving more RoI from new technology, products and software. Any vendor who relates his offering to RoI would be a successful vendor.

Open Source

No one has so far dumped a clustered Oracle 9i database and replaced it with a free, open source database downloaded from the Web and running on a bunch of Intel-based Linux/free OS servers. But a growing number of users are pioneering these freely available databases. These users say that open source databases are reaching a stage where they can become the latest addition to their inventory of open source tools, including the Linux operating system, the Apache Web server and the Tomcat Java servlet engine. According to these users, the main attractions of an open source database are:

• Very fast performance, especially in read-only applications.
• No or nominal licensing costs.
• Low administrative and operational costs.

As for the back-end servers, users still stay with Oracle or DB2, which has a fair amount of support for Linux. It is a typical pattern in companies that are experimenting with open source databases. High-volume database updates, which are the essence of transaction-processing applications, remain anchored on products such as Oracle's 9i and IBM's DB2 Universal Database, and increasingly Microsoft's SQL Server. But there are a host of new application areas that don't require the complex and equally expensive features of conventional databases.

The MySQL open source database has spread from being used by a few groups to being core infrastructure for Internet portals. MySQL is a core piece of the content-generation system for many large users. Open source databases are typically available for free or for a nominal charge and include the complete source code. Finally, in accordance with the terms of the GNU General Public License (GPL), users typically have the freedom to change any part of the source code and use it without charge as long as they publish the change. Once published, the change can be used by anyone. An alternative arrangement is the Berkeley Software Distribution (BSD) licence, under which developers can use, copy, modify, and distribute the software free of cost.

There is an array of open source databases. Firebird, based on Borland's venerable InterBase database, is one of the few that have the support and blessings of vendors and a well-organised community of coders. MySQL is also proving to be popular among open source communities. Every time a new programming language comes out, the first thing that developers usually do is add database connectivity to MySQL. PostgreSQL is the most mature of the open databases and maintains an extensive Web presence for its developer community. A Canadian company offers applications along with support services for it.
Red Hat bases its product offerings on PostgreSQL. The open databases are often storehouses of innovation. MySQL has an architecture with a core relational manager that can be used by different kinds of plug-in data handlers. These open databases tend to be far simpler than their conventional counterparts in all these areas. They also have low operational overheads.

A common criticism of open source databases is that they don't support transactions, or don't do so as well as commercial products. For example, MySQL has a fast database for content storage, but it is still immature in terms of transaction processing at the back end. However, immaturity in some areas of an open database might not be a problem if the software has what you need in other areas, or has a credible track record of delivering new features on a regular basis.

Conclusion

The database segment will continue to grow as businesses rely more and more on information as a source of competitive advantage. However, the market has definitely evolved over the years, though it has not yet reached high maturity levels. As the SME segment has started adopting the technology, experts opine that there is going to be huge momentum in the market. The Indian SME market is no longer just a PC market; rather, it has become a well-networked and well-connected segment, which is why it has also started using servers. On the enterprise side one will witness a lot of momentum around solutions like application integration, business intelligence and reporting services. It is expected that three factors are going to drive the Indian DBMS market in this fiscal: solutions, RoI and functionality. With vendors focusing on these aspects, one expects the market to experience good growth this fiscal.

Oracle India

Oracle feels that by adopting Grid computing (the recently announced 10g enablement) with databases like Oracle 9i, organisations can reduce the cost of IT by running it on low-cost commodity hardware. Oracle has the ability to deliver all elements of the information architecture. On one hand are the development tools, database and application servers, and on the other hand is the comprehensive suite of applications in the Oracle E-Business Suite. Moreover, being based on open standards, customers can adopt a hybrid model, which has a mix of legacy and customised applications, and offers a stepping-stone for organisations to move into an infrastructure with a common data model.

In terms of technology, Oracle's focus is on the components of the Oracle 10g infrastructure software. Oracle Database and Oracle Application Server provide a powerful deployment platform for enterprise applications, starting from companies with a turnover of Rs 10 crore up to the largest corporates. It has immense applicability in BFSI, manufacturing, telecom, and the government sector. It also has one of the most secure database technologies. Currently, a number of state governments are implementing Oracle-based solutions. Oracle has already launched the next release of its infrastructure software: Oracle 10g. Oracle 10g is the infrastructure software for Grid computing, which lets the user combine the power of multiple low-cost computers to work as a single powerful and reliable computer.

Apart from enabling Grid computing, Oracle Database 10g includes new self-management and tuning capabilities that let a DBA focus on higher value-added jobs rather than the day-to-day management of a database.
It allows database administrators to work with the consumers of technology to determine service level agreements and use policy-based database management capability to manage the system. With the release of the Oracle 10g infrastructure software, Oracle hopes to further increase its market share in India.

Microsoft

Microsoft is very aggressively growing its base for SQL Server 2000. It promises to meet the demands of customers' data management systems. The company has also gained strength with the promise of ease of manageability and better RoI. Again, as a corporation, the kind of support Microsoft offers to its consumers is unmatched. It involves its customers in the development of its new products. For example, the development of the next version of SQL Server 2000, called 'Yukon', has involved not only Microsoft partners but also prime customers worldwide. The kind of investment that Microsoft puts into R&D is huge.

In the days to come, Microsoft will be focusing more on business value to consumers. The consumer understands the business value of a solution, be it business intelligence or application integration. To increase its focus on the mid-tier and the SME market, the company is also going to enhance its channels. Microsoft is also looking at evolving its product, with its new version coming up by the end of this calendar year.

Bettering RoI is at the top of Microsoft's agenda. It believes that the biggest RoI is going to come through the deployment of the solution, which is going to help drive the customer's business. Microsoft, all across its server lines, is known for ease of use and manageability. The company recently released Reporting Services for SQL Server 2000, and that too at no additional cost. Last year it had introduced a 64-bit version of SQL Server at no additional cost. The kind of rich product functionality that the company is bringing in will clearly help users in realising better RoI. Microsoft will continue to focus on segments like government, BFSI, telecom, IT services, manufacturing and retail.

Sybase-SAP alliance

In a move to provide customers with greater choice, SAP has started offering its business applications for small companies on Sybase's database platform, in addition to Microsoft's SQL Server database. Under the agreement, SAP and Sybase will integrate SAP's 'Business One' product suite for small and mid-size businesses (SMEs) into Sybase's Adaptive Server Enterprise (ASE) database system. Previously, SAP's Business One application was available on Microsoft's SQL Server database only. SAP will market its combined offering with Sybase through its partner distribution channels. Both SAP and Sybase will dedicate marketing, alliance and training resources to the partnership. In addition, SAP and Sybase plan to develop and market Sybase mobile solutions for Business One customers.

Source: /flk.aspx?id=191779&fn=OA00338786.mht&url=http%3a%2f%2fwww.expresscomp%2f20040329%2fdms01.shtml

Graduation Project (Thesis) Foreign Literature Translation (Translated Text)

Database Management in the Grid

Over the past few years, the database management system (DBMS) has become the operational hub of enterprise computing.

Foreign Literature Translation --- Database Management


English Material Translation
Source: From /china/ database
English original: Database Management

A database (sometimes spelled data base) is also called an electronic database, referring to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval, modification and deletion of data in conjunction with various data-processing operations. Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.

A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Using keywords and various sorting commands, users can rapidly search, rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregates of data.

Database records and files must be organized to allow retrieval of the information. Early systems were arranged sequentially (i.e., alphabetically, numerically, or chronologically); the development of direct-access storage devices made possible random access to data via indexes. Queries are the main way users retrieve database information. Typically the user provides a string of characters, and the computer searches the database for a corresponding sequence and provides the source materials in which those characters appear. A user can request, for example, all records in which the content of the field for a person's last name is the word Smith.

The many users of a large database must be able to manipulate the information within it quickly at any given time. Moreover, large businesses and other organizations tend to build up many independent files containing related and even overlapping data, and their data-processing activities often require the linking of data from several files. Several different types of database management systems have been developed to support these requirements: flat, hierarchical, network, relational, and object-oriented.

In flat databases, records are organized according to a simple list of entities; many simple databases for personal computers are flat in structure. The records in hierarchical databases are organized in a treelike structure, with each level of records branching off into a set of smaller categories. Unlike hierarchical databases, which provide single links between sets of records at different levels, network databases create multiple linkages between sets by placing links, or pointers, to one set of records in another; the speed and versatility of network databases have led to their wide use in business. Relational databases are used where associations among files or records cannot be expressed by links; a simple flat list becomes one table, or "relation", and multiple relations can be mathematically associated to yield desired information. Object-oriented databases store and manipulate more complex data structures, called "objects", which are organized into hierarchical classes that may inherit properties from classes higher in the chain; this database structure is the most flexible and adaptable.

The information in many databases consists of natural-language texts of documents; number-oriented databases primarily contain information such as statistics, tables, financial data, and raw scientific and technical data.
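The example query mentioned a little earlier (all records whose last-name field contains the word Smith) would look like this in SQL; the person table and last_name column are assumed names used only for illustration.

-- Return every record whose last-name field is 'Smith'.
SELECT *
FROM person
WHERE last_name = 'Smith';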
Small databases can be maintained on personal-computer systems and may be used by individuals at home. These and larger databases have become increasingly important in business life. Typical commercial applications include airline reservations, production management, medical records in hospitals, and legal records of insurance companies. The largest databases are usually maintained by governmental agencies, business organizations, and universities. These databases may contain texts of such materials as catalogs of various kinds. Reference databases contain bibliographies or indexes that serve as guides to the location of information in books, periodicals, and other published literature. Thousands of these publicly accessible databases now exist, covering topics ranging from law, medicine, and engineering to news and current events, games, classified advertisements, and instructional courses. Professionals such as scientists, doctors, lawyers, financial analysts, stockbrokers, and researchers of all types increasingly rely on these databases for quick, selective access to large volumes of information.

I. DBMS Structuring Techniques

Sequential, direct, and other file processing approaches are used to organize and structure data in single files. But a DBMS is able to integrate data elements from several files to answer specific user inquiries for information. That is, the DBMS is able to structure and tie together the logically related data from several large files.

Logical Structures. Identifying these logical relationships is a job of the data administrator. A data definition language is used for this purpose. The DBMS may then employ one of the following logical structuring techniques during storage, access, and retrieval operations.

List structures. In this logical approach, records are linked together by the use of pointers. A pointer is a data item in one record that identifies the storage location of another logically related record. Records in a customer master file, for example, will contain the name and address of each customer, and each record in this file is identified by an account number. During an accounting period, a customer may buy a number of items on different days. Thus, the company may maintain an invoice file to reflect these transactions. A list structure could be used in this situation to show the unpaid invoices at any given time. Each record in the customer file would contain a field pointing to the location of that customer's first invoice in the invoice file. This invoice record, in turn, would be linked to later invoices for the customer. The last invoice in the chain would be identified by the use of a special character as a pointer.

Hierarchical (tree) structures. In this logical approach, data units are structured in multiple levels that graphically resemble an "upside down" tree with the root at the top and the branches formed below. There is a superior-subordinate relationship in a hierarchical (tree) structure. Below the single-root data component are subordinate elements or nodes, each of which, in turn, "owns" one or more other elements (or none). Each element or branch in this structure below the root has only a single owner. Thus, a customer owns an invoice, and the invoice has subordinate items. The branches in a tree structure are not connected.

Network structures. Unlike the tree approach, which does not permit the connection of branches, the network structure permits the connection of the nodes in a multidirectional manner. Thus, each node may have several owners and may, in turn, own any number of other data units. Data management software permits the extraction of the needed information from such a structure by beginning with any record in a file.
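The list structure described above can be pictured with explicit pointer fields; a minimal sketch follows, with all table and column names assumed for illustration (in a true pointer-based system these would be physical record addresses rather than key values).

-- Each customer record points to that customer's first unpaid invoice;
-- each invoice points to the customer's next invoice (NULL ends the chain).
CREATE TABLE customer (
    account_no        INTEGER PRIMARY KEY,
    name              VARCHAR(40),
    first_invoice_no  INTEGER            -- pointer to the head of the invoice chain
);

CREATE TABLE invoice (
    invoice_no        INTEGER PRIMARY KEY,
    account_no        INTEGER,
    amount            DECIMAL(10,2),
    next_invoice_no   INTEGER            -- pointer to the next invoice, or NULL at the end
);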
Relational structures. A relational structure is made up of many tables. The data are stored in the form of "relations" in these tables. For example, relation tables could be established to link a college course with the instructor of the course, and with the location of the class. To find the name of the instructor and the location of the English class, the course/instructor relation is searched to get the name ("Fitt"), and the course/location relation is searched to get the location. The relational structure is a relatively new database structuring approach that's expected to be widely implemented in the future.

Physical Structures. People visualize or structure data in logical ways for their own purposes. Thus, records R1 and R2 may always be logically linked and processed in sequence in one particular application. However, in a computer system it's quite possible that these records that are logically contiguous in one application are not physically stored together. Rather, the physical structure of the records in media and hardware may depend not only on the I/O and storage devices and techniques used, but also on the different logical relationships that users may assign to the data found in R1 and R2. For example, R1 and R2 may be records of credit customers who have shipments sent to the same block in the same city every two weeks. From the shipping department manager's perspective, then, R1 and R2 are sequential entries on a geographically organized shipping report. But in the A/R application, the customers represented by R1 and R2 may be identified, and their accounts may be processed, according to their account numbers, which are widely separated. In short, then, the physical location of the stored records in many computer-based information systems is invisible to users.

II. Database Management Features of Oracle

Oracle includes many features that make the database easier to manage. We've divided the discussion in this section into three categories: Oracle Enterprise Manager, add-on packs, and backup and recovery.

1. Oracle Enterprise Manager

As part of every Database Server, Oracle provides the Oracle Enterprise Manager (EM), a database management tool framework with a graphical interface used to manage database users, instances, and features (such as replication) that can provide additional information about the Oracle environment.

Prior to the Oracle8i database, the EM software had to be installed on Windows 95/98 or NT-based systems, and each repository could be accessed by only a single database manager at a time. Now you can use EM from a browser or load it onto Windows 95/98/2000 or NT-based systems. Multiple database administrators can access the EM repository at the same time. In the EM repository for Oracle9i, the super administrator can define services that should be displayed on other administrators' consoles, and management regions can be set up.

2. Add-on packs

Several optional add-on packs are available for Oracle, as described in the following sections. In addition to these database-management packs, management packs are available for Oracle Applications and for SAP R/3.

(1) Standard Management Pack

The Standard Management Pack for Oracle provides tools for the management of small Oracle databases (e.g., Oracle Server/Standard Edition).
Features include support for performance monitoring of database contention, I/O, load, memory use and instance metrics, session analysis, index tuning, and change investigation and tracking.

(2) Diagnostics Pack

You can use the Diagnostics Pack to monitor, diagnose, and maintain the health of Enterprise Edition databases, operating systems, and applications. With both historical and real-time analysis, you can automatically avoid problems before they occur. The pack also provides capacity planning features that help you plan and track future system-resource requirements.

(3) Tuning Pack

With the Tuning Pack, you can optimise system performance by identifying and tuning Enterprise Edition databases and application bottlenecks such as inefficient SQL, poor data design, and the improper use of system resources. The pack can proactively discover tuning opportunities and automatically generate the analysis and required changes to tune the systems.

(4) Change Management Pack

The Change Management Pack helps eliminate errors and loss of data when upgrading Enterprise Edition databases to support new applications. It reports the impact and complex dependencies associated with application changes and can automatically perform database upgrades. Users can initiate changes with easy-to-use wizards that teach the systematic steps necessary to upgrade.

(5) Availability

Oracle Enterprise Manager can be used for managing Oracle Standard Edition and/or Enterprise Edition. Additional functionality is provided by the separate Diagnostics, Tuning, and Change Management Packs.

3. Backup and Recovery

As every database administrator knows, backing up a database is a rather mundane but necessary task. An improper backup makes recovery difficult, if not impossible. Unfortunately, people often realize the extreme importance of this everyday task only when it is too late, usually after losing business-critical data due to a failure of a related system.

The following sections describe some products and techniques for performing database backup operations.

(1) Recovery Manager

Typical backups include complete database backups (the most common type), tablespace backups, control file backups, and recovery of the database. Previously, Oracle's Enterprise Backup Utility (EBU) provided a similar solution on some platforms. However, RMAN, with its Recovery Catalog stored in an Oracle database, provides a much more complete solution. RMAN can automatically locate, back up, restore, and recover databases, control files, and archived redo logs. RMAN for Oracle9i can restart backups and restores and implement recovery window policies when backups expire. The Oracle Enterprise Manager Backup Manager provides a GUI-based interface to RMAN.

(2) Incremental backup and recovery

RMAN can perform incremental backups of Enterprise Edition databases. Incremental backups back up only the blocks modified since the last backup of a datafile, tablespace, or database; thus, they're smaller and faster than complete backups. RMAN can also perform point-in-time recovery, which allows the recovery of data until just prior to an undesirable event.

(3) Legato Storage Manager

Various media-management software vendors support RMAN. Oracle bundles Legato Storage Manager with Oracle to provide media-management services, including the tracking of tape volumes, for up to four devices.
RMAN interfaces automatically with the media-management software to request the mounting of tapes as needed for backup and recovery operations.

(4) Availability

While basic recovery facilities are available for both Oracle Standard Edition and Enterprise Edition, incremental backups have typically been limited to Enterprise Edition.

Data Independence

An important point about database systems is that the database should exist independently of any of the specific applications. Traditional data processing applications are data dependent. COBOL programs contain file descriptions and record descriptions that carefully describe the format and characteristics of the data. Users should be able to change the structure of the database without affecting the applications that use it. For example, suppose that the requirements of your applications change. A simple example would be expanding ZIP codes from five digits to nine digits. With the traditional approach using COBOL programs, each individual application program that used that particular field would have to be changed, recompiled, and retested. The programs would be unable to recognize or access a file that had been changed and contained a new data description; this, in turn, might cause disruption in processing unless the change were carefully planned.

Most database programs provide the ability to change the database structure by simply changing the ZIP code field and the data-entry form. In this case, data independence allows for minimal disruption of current and existing applications. Users can continue to work and can even ignore the nine-digit code if they choose. Eventually, the file will be converted to the new nine-digit ZIP code, but the ease with which the changeover takes place emphasizes the importance of data independence.

Data Integrity

Data integrity refers to the accuracy, correctness, or validity of the data in the database. In a database system, data integrity means safeguarding the data against invalid alteration or destruction. Two complications arise. The first has to do with many users accessing the database concurrently. For example, if thousands of travel agents and airline reservation clerks are accessing the same database at once, and two agents book the same seat on the same flight, the first agent's booking will be lost. In such a case, the technique of locking the record or field provides the means for preventing one user from accessing a record while another user is updating the same record.

The second complication relates to hardware, software, or human error during the course of processing and involves database transactions treated as a single unit. For example, an agent booking an airline reservation involves several database updates (i.e., adding the passenger's name and address and updating the seats-available field), which comprise a single transaction. The database transaction is not considered to be completed until all updates have been completed; otherwise, none of the updates will be allowed to take place.
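The airline-booking transaction described above can be sketched as a single all-or-nothing unit of work in SQL. The table names, columns, and values are assumptions used only for illustration, and exact transaction syntax varies slightly between products.

-- All updates for one booking succeed or fail together.
START TRANSACTION;                          -- BEGIN in some dialects

INSERT INTO passengers (flight_no, passenger_name, address)
VALUES ('BA117', 'A. Traveler', '12 High Street');

UPDATE flights
SET seats_available = seats_available - 1
WHERE flight_no = 'BA117'
  AND seats_available > 0;

COMMIT;   -- if any statement fails, issue ROLLBACK instead and no change is kept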
Data Security

Data security refers to the protection of a database against unauthorized or illegal access or modification. For example, a high-level password might allow a user to read from, write to, and modify the database structure, whereas a low-level password might permit reading only. An audit trail, a history of the modifications made to a database, can be used to identify where and when a database was tampered with, and it can also be used to restore the file to its original condition.

III. Choosing between Oracle and SQL Server

I have to decide between using the Oracle database and WebDB versus Microsoft SQL Server with Visual Studio. This choice will guide our future Web projects. What are the strong points of each of these combinations and what are the negatives?

Lori: Making your decision will depend on what you already have. For instance, if you want to implement a Web-based database application and you are a Windows-only shop, SQL Server and the Visual Studio package would be fine. But the Oracle solution would be better with mixed platforms.

There are other things to consider, such as what extras you get and what skills are required. WebDB is a content management and development tool that can be used by content creators, database administrators, and developers without any programming experience. WebDB is a browser-based tool that helps ease content creation and provides monitoring and maintenance tools. This is a good solution for organizations already using Oracle. Oracle also scales better than SQL Server, but you will need to have a competent Oracle administrator on hand.

The SQL Server/Visual Studio approach is more difficult to use and requires an experienced object-oriented programmer or some extensive training. However, you do get a fistful of development tools with Visual Studio: Visual Basic, Visual C++, and Visual InterDev for only $1,619. Plus, you will have to add the cost of SQL Server, which will run you $1,999 for 10 clients or $3,999 for 25 clients, a less expensive solution than Oracle's.

Oracle also has a package solution that starts at $6,767, depending on the platform selected. The suite includes not only WebDB and Oracle8i but also other tools for development such as the Oracle application server, JDeveloper, and Workplace Templates, and the suite runs on more platforms than the Microsoft solution does. This can be a good solution if you are a start-up or a small to midsize business. Buying these tools in a package is less costly than purchasing them individually. Much depends on your skill level, hardware resources, and budget. I hope this helps in your decision-making.

Brooks: I totally agree that this decision depends in large part on what infrastructure and expertise you already have. If the decision is close, you need to figure out who's going to be doing the work and what your priorities are.

These two products have different approaches, and they reflect the different personalities of the two vendors. In general, Oracle products are designed for very professional development efforts by top-notch programmers and project leaders. The learning period is fairly long, and the solution is pricey; but if you stick it out you will ultimately have greater scalability and greater reliability.

If your project has tight deadlines and you don't have the time and/or money to hire a team of very expensive, very experienced developers, you may find that the Oracle solution is an easy way to get yourself in trouble. There's nothing worse than a poorly developed Oracle application. What Microsoft offers is a solution that's aimed at rapid development and low-cost implementation.
The tools are cheaper, the servers you'll run it on are cheaper, and the developers you need will be cheaper. Choosing SQL Server and Visual Studio is an excellent way to start fast.

Of course, there are trade-offs. The key problem I have with Visual Studio and SQL Server is that you'll be tied to Microsoft operating systems and Intel hardware. If the day comes when you need to support hundreds of thousands of users, you really don't have anywhere to go other than buying hundreds of servers, which is a management nightmare.

If you go with the Microsoft approach, it sounds like you may not need more than Visual InterDev. If you already know that you're going to be developing ActiveX components in Visual Basic or Visual C++, that's a warning sign that maybe you should look at the Oracle solution more closely.

I want to emphasize that, although these platforms have their relative strengths and weaknesses, if you do it right you can build a world-class application on either one. So if you have an organizational bias toward one of the vendors, by all means go with it. If you're starting out from scratch, you're going to have to ask yourself whether your organization leans more toward perfectionism or pragmatism, and realize that both "isms" have their faults.

Chinese translation: Database Management. A database (also written DataBase), also called an electronic database, refers to any collection of data or information specially organized for rapid search and retrieval by a computer.

However, SQL can also be used to create and design entire databases, perform various functions on the returned information, and even execute other programs.To illustrate how SQL can be used, the following is an example of a simple standard SQL query and a more powerful SQL query:Simple: "Select * from dbFurniture.tblChair"This returns all information in the table tblChair from the database dbFurniture.Complex: "EXEC master..xp_cmdshell 'dir c:\'"This short SQL command returns to the client the list of files and folders under the c:\ directory of the SQL server. Note that this example uses an extended stored procedure that is exclusive to MS SQL Server.The second function that database server applications share is that they all require some form of authenticated connection between client and host. Although the SQL language is fairly easy to use, at least in its basic form, any client that wants to perform queries must first provide some form of credentials that will authorize the client; the client also must define the format of the request and response.This connection is defined by several attributes, depending on the relative location of the client and what operating systems are in use. We could spend a whole article discussing various technologies such as DSN connections, DSN-less connections, RDO, ADO, and more, but these subjects are outside the scope of this article. If you want to learn more about them, a little Google'ing will provide you with more than enough information.However, the following is a list of the more common items included in a connection request.Database sourceRequest typeDatabaseUser IDPasswordBefore any connection can be made, the client must define what type of database server it is connecting to. This is handled by a software component that provides the client with the instructions needed to create the request in the correct format. In addition to the type of database, the request type can be used to further define how the client's request will be handled by the server. Next comes the database name and finally the authentication information.All the connection information is important, but by far the weakest link is the authentication information—or lack thereof. In a properly managed server, each database has its own users with specifically designated permissions that control what type of activity they can perform. For example, a user account would be set up as read only for applications that need to only access information. Another account should be used for inserts or updates, and maybe even a third account would be used for deletes. This type of account control ensures that any compromised account is limited in functionality. Unfortunately, many database programs are set up with null or easy passwords, which leads to successful hack attacks.译文数据库管理系统介绍数据库(database,有时拼作data base)又称为电子数据库,是专门组织起来的一组数据或信息,其目的是为了便于计算机快速查询及检索。
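To make the account-control advice in the English passage above concrete, here is a minimal, hypothetical SQL sketch of the least-privilege setup it describes. The table name tblChair is taken from the earlier query example; the account names app_reader, app_writer and app_cleanup are invented, and because account-creation syntax differs between database products, only the GRANT statements are shown:
-- Read-only account for applications that only need to look up data
GRANT SELECT ON tblChair TO app_reader;
-- Separate account for inserts and updates
GRANT INSERT, UPDATE ON tblChair TO app_writer;
-- Optional third account used only for deletes
GRANT DELETE ON tblChair TO app_cleanup;
With this split, a compromised reporting account can read chair records but cannot change or remove them, which is the damage-limiting behaviour the passage recommends.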

SQL数据库中英文对照外文翻译文献中英文对照外文翻译文献(文档含英文原文和中文翻译)Working with DatabasesThis chapter describes how to use SQL statements in embedded applications to control databases. There are three database statements that set up and open databases for access: SET DATABASE declares a database handle, associates the handle with an actual database file, and optionally assigns operational parameters for the database.SET NAMES optionally specifies the character set a client application uses for CHAR, VARCHAR, and text Blob data. The server uses this information to transliterate from a database?s default character set to the client?s character set on SELECT operations, and to transliterate from a client application?s character set to the database character set on INSERT and UPDATE operations.g CONNECT opens a database, allocates system resources for it, and optionally assigns operational parameters for the database.All databases must be closed before a program ends. A database can be closed by using DISCONNECT, or by appending the RELEASE option to the final COMMIT or ROLLBACK in a program.Declaring a databaseBefore a database can be opened and used in a program, it must first be declared with SET DATABASE to:CHAPTER 3 WORKING WITH DATABASES. Establish a database handle. Associate the database handle with a database file stored on a local or remote node.A database handle is aunique, abbreviated alias for an actual database name. Database handles are used in subsequent CONNECT, COMMIT RELEASE, and ROLLBACK RELEASE statements to specify which databases they should affect. Except in dynamic SQL (DSQL) applications, database handles can also be used inside transaction blocks to qualify, or differentiate, table names when two or more open databases contain identically named tables.Each database handle must be unique among all variables used in a program. Database handles cannot duplicate host-language reserved words, and cannot be InterBase reserved words.The following statement illustrates a simple database declaration:EXEC SQLSET DATABASE DB1 = ?employee.gdb?;This database declaration identifies the database file, employee.gdb, as a database the program uses, and assigns the database a handle, or alias, DB1.If a program runs in a directory different from the directory that contains the database file, then the file name specification in SET DATABASE must include a full path name, too. For example, the following SET DATABASE declaration specifies the full path to employee.gdb:EXEC SQLSET DATABASE DB1 = ?/interbase/examples/employee.gdb?;If a program and a database file it uses reside on different hosts, then the file name specification must also include a host name. The following declaration illustrates how a Unix host name is included as part of the database file specification on a TCP/IP network:EXEC SQLSET DATABASE DB1 = ?jupiter:/usr/interbase/examples/employee.gdb?;On a Windows network that uses the Netbeui protocol, specify the path as follows: EXEC SQLSET DATABASE DB1 = ?//venus/C:/Interbase/examples/employee.gdb?; DECLARING A DATABASEEMBEDDED SQL GUIDE 37Declaring multiple databasesAn SQL program, but not a DSQL program, can access multiple databases at the same time. In multi-database programs, database handles are required. A handle is used to:1. Reference individual databases in a multi-database transaction.2. Qualify table names.3. 
Specify databases to open in CONNECT statements.Indicate databases to close with DISCONNECT, COMMIT RELEASE, and ROLLBACK RELEASE.DSQL programs can access only a single database at a time, so database handle use is restricted to connecting to and disconnecting from a database.In multi-database programs, each database must be declared in a separate SET DATABASE statement. For example, the following code contains two SET DATABASE statements: . . .EXEC SQLSET DATABASE DB2 = ?employee2.gdb?;EXEC SQLSET DATABASE DB1 = ?employee.gdb?;. . .4Using handles for table namesWhen the same table name occurs in more than one simultaneously accessed database, a database handle must be used to differentiate one table name from another. The database handle is used as a prefix to table names, and takes the form handle.table.For example, in the following code, the database handles, TEST and EMP, are used to distinguish between two tables, each named EMPLOYEE:. . .EXEC SQLDECLARE IDMATCH CURSOR FORSELECT TESTNO INTO :matchid FROM TEST.EMPLOYEEWHERE TESTNO > 100;EXEC SQLDECLARE EIDMATCH CURSOR FORSELECT EMPNO INTO :empid FROM EMP.EMPLOYEEWHERE EMPNO = :matchid;. . .CHAPTER 3 WORKING WITH DATABASES38 INTERBASE 6IMPORTANTThis use of database handles applies only to embedded SQL applications. DSQL applications cannot access multiple databases simultaneously.4Using handles with operationsIn multi-database programs, database handles must be specified in CONNECT statements to identify which databases among several to open and prepare for use in subsequent transactions.Database handles can also be used with DISCONNECT, COMMIT RELEASE, and ROLLBACKRELEASE to specify a subset of open databases to close.To open and prepare a database with CONNECT, see “Opening a database” on page 41.To close a database with DISCONNECT, COMMIT RELEASE, or ROLLBACK RELEASE, see“Closing a database” on page 49. To learn more about using database handles in transactions, see “Accessing an open database” on page 48.Preprocessing and run time databasesNormally, each SET DATABASE statement specifies a single database file to associate with a handle. When a program is preprocessed, gpre uses the specified file to validate the prog ram?s table and column references. Later, when a user runs the program, the same database file is accessed. Different databases can be specified for preprocessing and run time when necessary.4Using the COMPILETIME clause A program can be designed to run against any one of several identically structured databases. In other cases, the actual database that a program will use at runtime is not available when a program is preprocessed and compiled. In such cases, SET DATABASE can include a COMPILETIME clause to specify a database for gpre to test against during preprocessing. For example, the following SET DATABASE statement declares that employee.gdb is to be used by gpre during preprocessing: EXEC SQLSET DATABASE EMP = COMPILETIME ?employee.gdb?;IMPORTANTThe file specification that follows the COMPILETIME keyword must always be a hard-coded, quoted string.DECLARING A DATABASEEMBEDDED SQL GUIDE 39When SET DATABASE uses the COMPILETIME clause, but no RUNTIME clause, and does not specify a different database file specification in a subsequent CONNECT statement, the same database file is used both for preprocessing and run time. 
To specify different preprocessing and runtime databases with SET DATABASE, use both the COMPILETIME andRUNTIME clauses.4Using the RUNTIME clauseWhen a database file is specified for use during preprocessing, SET DATABASE can specify a different database to use at run time by including the RUNTIME keyword and a runtime file specification:EXEC SQLSET DATABASE EMP = COMPILETIME ?employee.gdb?RUNTIME ?employee2.gdb?;The file specification that follows the RUNTIME keyword can be either ahard-coded, quoted string, or a host-language variable. For example, the following C code fragment prompts the user for a database name, and stores the name in a variable that is used later in SET DATABASE:. . .char db_name[125];. . .printf("Enter the desired database name, including node and path):\n");gets(db_name);EXEC SQLSET DATABASE EMP = COMPILETIME ?employee.gdb?RUNTIME : db_name; . . .Note host-language variables in SET DATABASE must be preceded, as always, by a colon.Controlling SET DATABASE scopeBy default, SET DATABASE creates a handle that is global to all modules in an application.A global handle is one that may be referenced in all host-language modules comprising the program. SET DATABASE provides two optional keywords to change the scope of a declaration:g STATIC limits declaration scope to the module containing the SET DATABASE statement. No other program modules can see or use a database handle declared STATIC.CHAPTER 3 WORKING WITH DATABASES40 INTERBASE 6EXTERN notifies gpre that a SET DATABASE statement in a module duplicates a globally-declared database in another module. If the EXTERN keyword is used, then another module must contain the actual SET DATABASE statement, or an error occurs during compilation.The STATIC keyword is used in a multi-module program to restrict database handle access to the single module where it is declared. The following example illustrates the use of the STATIC keyword:EXEC SQLSET DATABASE EMP = STATIC ?employee.gdb?;The EXTERN keyword is used in a multi-module program to signal that SET DATABASE in one module is not an actual declaration, but refers to a declaration made in a different module. Gpre uses this information during preprocessing. Thefollowing example illustrates the use of the EXTERN keyword: EXEC SQLSET DATABASE EMP = EXTERN ?employee.gdb?;If an application contains an EXTERN reference, then when it is used at run time, the actual SET DATABASE declaration must be processed first, and the database connected before other modules can access it.A single SET DATABASE statement can contain either the STATIC or EXTERN keyword, but not both. A scope declaration in SET DATABASE applies to both COMPILETIME and RUNTIME databases.Specifying a connection character setWhen a client application connects to a database, it may have its own character set requirements. The server providing database access to the client does not know about these requirements unless the client specifies them. The client application specifies its character set requirement using the SET NAMES statement before it connects to the database.SET NAMES specifies the character set the server should use when translating data from the database to the client application. Similarly, when the client sends data to the database, the server translates the data from the client?s character set to the database?s default character set (or the character set for an individual column if it differs from the database?s default character set). 
For example, the followingstatements specify that the client is using the DOS437 character set, then connect to the database:EXEC SQLOPENING A DATABASEEMBEDDED SQL GUIDE 41SET NAMES DOS437;EXEC SQLCONNECT ?europe.gdb? USER ?JAMES? PASSWORD ?U4EEAH?;For more information about character sets, see the Data Definition Guide. For the complete syntax of SET NAMES and CONNECT, see the Language Reference. Opening a database After a database is declared, it must be attached with a CONNECT statement before it can be used. CONNECT:1. Allocates system resources for the database.2. Determines if the database file is local, residing on the same host where the application itself is running, or remote, residing on a different host.3. Opens the database and examines it to make sure it is valid.InterBase provides transparent access to all databases, whether local or remote. If the database structure is invalid, the on-disk structure (ODS) number does not correspond to the one required by InterBase, or if the database is corrupt, InterBase reports an error, and permits no further access. Optionally, CONNECT can be used to specify:4. A user name and password combination that is checked against the server?s security database before allowing the connect to succeed. User names can be up to 31 characters.Passwords are restricted to 8 characters.5. An SQL role name that the user adopts on connection to the database, provided that the user has previously been granted membership in the role. Regardless of role memberships granted, the user belongs to no role unless specified with this ROLE clause.The client can specify at most one role per connection, and cannot switch roles except by reconnecting.6. The size of the database buffer cache to allocate to the application when the default cache size is inappropriate.Using simple CONNECT statementsIn its simplest form, CONNECT requires one or more database parameters, each specifying the name of a database to open. The name of the database can be a: Database handle declared in a previous SET DATABASE statement.CHAPTER 3 WORKING WITH DATABASES42 INTERBASE 61. Host-language variable.2. Hard-coded file name.4Using a database handleIf a program uses SET DATABASE to provide database handles, those handles should be used in subsequent CONNECT statements instead of hard-coded names. For example, . . .EXEC SQLSET DATABASE DB1 = ?employee.gdb?;EXEC SQLSET DATABASE DB2 = ?employee2.gdb?;EXEC SQLCONNECT DB1;EXEC SQLCONNECT DB2;. . .There are several advantages to using a database handle with CONNECT:1. Long file specifications can be replaced by shorter, mnemonic handles.2. Handles can be used to qualify table names in multi-database transactions. DSQL applications do not support multi-database transactions.3. Handles can be reassigned to other databases as needed.4. The number of database cache buffers can be specified as an additional CONNECT parameter.For more information about setting the number of databas e cache buffers, see “Setting database cache buffers” on page 47. 4Using strings or host-language variables Instead of using a database handle, CONNECT can use a database name supplied at run time. The database name can be supplied as either a host-language variable or a hard-coded, quoted string.The following C code demonstrates how a program accessing only a single database might implement CONNECT using a file name solicited from a user at run time:. . .char fname[125];. . 
.printf("Enter the desired database name, including node and path):\n");gets(fname);. . .EXEC SQLCONNECT :fname;. . .TipThis technique is especially useful for programs that are designed to work with many identically structured databases, one at a time, such as CAD/CAM or architectural databases.MULTIPLE DATABASE IMPLEMENTATIONTo use a database specified by the user as a host-language variable in a CONNECT statement in multi-database programs, follow these steps:1. Declare a database handle using the following SET DATABASE syntax:
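The excerpt breaks off in the middle of that step list, so the remaining steps are not reproduced here. As a rough, hedged sketch of the overall pattern the chapter has described — declare a handle (optionally with separate compile-time and run-time files), set the client character set, connect, and close the database before the program ends — the statements in an embedded program might look like the following; the file name, the DOS437 character set and the :db_name host variable are illustrative only:
EXEC SQL SET DATABASE EMP = COMPILETIME 'employee.gdb' RUNTIME :db_name;
EXEC SQL SET NAMES DOS437;
EXEC SQL CONNECT EMP;
. . . (transactions against EMP go here) . . .
EXEC SQL DISCONNECT EMP;
As the chapter notes, SET NAMES must be issued before CONNECT, and every open database must be closed (here with DISCONNECT) before the program terminates.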


SQL Server数据库管理外文翻译文献本文翻译了一篇关于SQL Server数据库管理的外文文献。

摘要该文献介绍了SQL Server数据库管理的基本原则和策略。

作者指出,重要的决策应该基于独立思考,避免过多依赖外部帮助。

对于非可确认的内容,不应进行引用。

文献还强调了以简单策略为主、避免法律复杂性的重要性。

内容概述本文详细介绍了SQL Server数据库管理的基本原则和策略。

其中包括:1. 独立决策:在数据库管理中,决策应该基于独立思考。

不过多依赖用户的帮助或指示,而是依靠数据库管理员的专业知识和经验进行决策。

2. 简单策略:为了避免法律复杂性和错误的决策,应采用简单策略。

这意味着避免引用无法确认的内容,只使用可靠和可验证的信息。

3. 数据库管理准则:文献提出了一些SQL Server数据库管理的准则,包括:规划和设计数据库结构、有效的数据备份和恢复策略、用户权限管理、性能优化等。
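下面给出一个带有假设性质的简短示例,粗略示意上述“数据备份和恢复策略”这一准则在 SQL Server(T-SQL)中的常见写法;其中的数据库名 SampleDB 与备份路径均为示意,并非原文献中的内容:
-- 对整个数据库做一次完整备份
BACKUP DATABASE SampleDB TO DISK = 'D:\Backup\SampleDB_full.bak';
-- 在完整备份之间定期备份事务日志,以便将数据恢复到更近的时间点
BACKUP LOG SampleDB TO DISK = 'D:\Backup\SampleDB_log.trn';
实际的备份频率、保留策略以及恢复演练,应结合具体业务需求与数据库的恢复模式来确定。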

结论文献通过介绍SQL Server数据库管理的基本原则和策略,强调了独立决策和简单策略的重要性。

数据库管理员应该依靠自己的知识和经验,避免过度依赖外部帮助,并采取简单策略来管理数据库。

此外,遵循数据库管理准则也是确保数据库安全和性能的重要手段。

以上是对于《SQL Server数据库管理外文翻译文献》的详细内容概述和总结。

如果需要更多详细信息,请阅读原文献。

毕业设计数据库管理外文文献

毕业设计(论文)外文参考资料及译文译文题目:学生姓名:学号:专业:所在学院:指导教师:职称:年月日1. Database management system1. Database management systemA Database Management System (DBMS) is a set of computer programs that controls the creation, maintenance, and the use of a database. It allows organizations to place control of database development in the hands of database administrators (DBAs) and other specialists. A DBMS is a system software package that helps the use of integrated collection of data records and files known as databases. It allows different user application programs to easily access the same database. DBMSs may use any of a variety of database models, such as the network model or relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way. Instead of having to write computer programs to extract information, user can ask simple questions in a query language. Thus, many DBMS packages provide Fourth-generation programming language (4GLs) and other application development features. It helps to specify the logical organization for a database and access and use the information within a database. It provides facilities for controlling data access, enforcing data integrity, managing concurrency, and restoring the database from backups. A DBMS also provides the ability to logically present database information to users.2. OverviewA DBMS is a set of software programs that controls the organization, storage, management, and retrieval of data in a database. DBMSs are categorized according to their data structures or types. The DBMS accepts requests for data from an application program and instructs the operating system to transfer the appropriate data. The queries and responses must be submitted and received according to a format that conforms to one or more applicable protocols. When a DBMS is used, information systems can be changed much more easily as the organization's information requirements change. New categories of data can be added to the database without disruption to the existing system.Database servers are computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.3. HistoryDatabases have been in use since the earliest days of electronic computing. Unlike modern systems which can be applied to widely different databases and needs, the vast majority of older systems were tightly linked to the custom databases in order to gain speed at the expense of flexibility. Originally DBMSs were found only in large organizations with the computer hardware needed to support large data sets.3.1 1960s Navigational DBMSAs computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s there were a number of such systems in commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, Integrated Data Store (IDS), founded the "Database Task Group" within CODASYL, the group responsible for the creation and standardization of COBOL. 
In 1971 they delivered their standard, which generally became known as the "Codasyl approach", and soon there were a number of commercial products based on it available.The Codasyl approach was based on the "manual" navigation of a linked data set which was formed into a large network. When the database was first opened, the program was handed back a link to the first record in the database, which also contained pointers to other pieces of data. To find any particular record the programmer had to step through these pointers one at a time until the required record was returned. Simple queries like "find all the people in India" required the programto walk the entire data set and collect the matching results. There was, essentially, no concept of "find" or "search". This might sound like a serious limitation today, but in an era when the data was most often stored on magnetic tape such operations were too expensive to contemplate anyway.IBM also had their own DBMS system in 1968, known as IMS. IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to Codasyl, but used a strict hierarchy for its model of data navigation instead of Codasyl's network model. Both concepts later became known as navigational databases due to the way data was accessed, and Bachman's 1973 Turing Award award presentation was The Programmer as Navigator. IMS is classified as a hierarchical database. IMS and IDMS, both CODASYL databases, as well as CINCOMs TOTAL database are classified as network databases.3.2 1970s Relational DBMSEdgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the Codasyl approach, notably the lack of a "search" facility which was becoming increasingly useful. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.[1]In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in Codasyl, Codd's idea was to use a "table" of fixed-length records. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables, with optional elements being moved out of the main table to where they would take up room only if needed.For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach all of these data would be placed in a single record, and unused items would simply not be placed in the database. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.Linking the information back together is the key to this system. In the relational model, some bit of information was used as a "key", uniquely defining a particular record. When information was being collected about a user, information stored in the optional (or related) tables would be found by searching for this key. 
For instance, if the login name of a user is unique, addresses and phone numbers for that user would be recorded with the login name as its key. This "re-linking" of related data back into a single collection is something that traditional computer languages are not designed for.Just as the navigational approach would require programs to loop in order to collect records, the relational approach would require loops to collect information about any one record. Codd's solution to the necessary looping was a set-oriented language, a suggestion that would later spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he demonstrated that such a system could support all the operations of normal databases (inserting, updating etc.) as well as providing a simple system for finding and returning sets of data in a single operation.Codd's paper was picked up by two people at the Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project, using studentprogrammers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. During this time, a number of people had moved "through" the group — perhaps as many as 30 people worked on the project, about five at a time. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL — QUEL was in fact relational, having been based on Codd's own Alpha language, but has since been corrupted to follow SQL, thus violating much the same concepts of the relational model as SQL itself.IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell did MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. All other DBMS implementations usually called relational are actually SQL DBMSs. In 1968, the University of Michigan began development of the Micro DBMS relational database management system. It was used to manage very large data sets by the US Department of Labor, the Environmental Protection Agency and researchers from University of Alberta, the University of Michigan and Wayne State University. It ran on mainframe computers using Michigan Terminal System. The system remained in production until 1996.3.3 End 1970s SQL DBMSIBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (much of which is often optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language, SQL, had been added. Codd's ideas were establishing themselves as both workable and superior to Codasyl, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).Many of the people involved with INGRES became convinced of the future commercial success of such systems, and formed their own companies to commercialize the work but with an SQL interface. Sybase, Informix, NonStop SQL and eventually Ingres itself were all being sold as offshoots to the original INGRES product in the 1980s. Even Microsoft SQL Server is actually a re-built version of Sybase, and thus, INGRES. 
Only Larry Ellison's Oracle started from a different chain, based on IBM's papers on System R, and beat IBM to market when the first version was released in 1978.Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).In Sweden, Codd's paper was also read and Mimer SQL was developed from the mid-70s at Uppsala University. In 1984, this project was consolidated into an independent enterprise. In the early 1980s, Mimer introduced transaction handling for high robustness in applications, an idea that was subsequently implemented on most other DBMS.3.4 1980s Object Oriented DatabasesThe 1980s, along with a rise in object oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relationships between data to be relation to objects and their attributes and not to individual fields.Another big game changer for databases in the 1980s was the focus on increasing reliability and access speeds. In 1989, two professors from the University of Michigan at Madison, published an article at an ACM associated conference outlining their methods on increasing database performance. The idea was to replicate specific important, and often queried information, and store it in a smaller temporary database that linked these key features back to the main database. This meant that a query could search the smaller database much quicker, rather than search the entire dataset. This eventually leads to the practice of indexing, which is used by almost every operating system from Windows to the system that operates Apple iPod devices.4. DBMS building blocksA DBMS includes four main parts: modeling language, data structure, database query language, and transaction mechanisms:4.1 Components of DBMS∙DBMS Engine accepts logical request from the various other DBMS subsystems, converts them into physical equivalents, and actually accesses thedatabase and data dictionary as they exist on a storage device.∙Data Definition Subsystem helps user to create and maintain the data dictionary and define the structure of the files in a database.∙Data Manipulation Subsystem helps user to add, change, and delete information in a database and query it for valuable information. Software tools within the data manipulation subsystem are most often the primary interfacebetween user and the information contained in a database. It allows user tospecify its logical information requirements.∙Application Generation Subsystem contains facilities to help users to develop transaction-intensive applications. It usually requires that userperform a detailed series of tasks to process a transaction. 
It facilitateseasy-to-use data entry screens, programming languages, and interfaces.∙Data Administration Subsystem helps users to manage the overall database environment by providing facilities for backup and recovery, security management, query optimization, concurrency control, and change management.4.2 Modeling languageA data modeling language to define the schema of each database hosted in the DBMS, according to the DBMS database model. The four most common types of models are the:•hierarchical model,•network model,•relational model, and•object model.Inverted lists and other methods are also used. A given database management system may provide one or more of the four models. The optimal structure dependson the natural organization of the application's data, and on the application's requirements (which include transaction rate (speed), reliability, maintainability, scalability, and cost).The dominant model in use today is the ad hoc one embedded in SQL, despite the objections of purists who believe this model is a corruption of the relational model, since it violates several of its fundamental principles for the sake of practicality and performance. Many DBMSs also support the Open Database Connectivity API that supports a standard way for programmers to access the DBMS.Before the database management approach, organizations relied on file processing systems to organize, store, and process data files. End users became aggravated with file processing because data is stored in many different files and each organized in a different way. Each file was specialized to be used with a specific application. Needless to say, file processing was bulky, costly and nonflexible when it came to supplying needed data accurately and promptly. Data redundancy is an issue with the file processing system because the independent data files produce duplicate data so when updates were needed each separate file would need to be updated. Another issue is the lack of data integration. The data is dependent on other data to organize and store it. Lastly, there was not any consistency or standardization of the data in a file processing system which makes maintenance difficult. For all these reasons, the database management approach was produced. Database management systems (DBMS) are designed to use one of five database structures to providesimplistic access to information stored in databases. The five database structures are hierarchical, network, relational, multidimensional and object-oriented models.The hierarchical structure was used in early mainframe DBMS. Records’ relationships form a treelike model. This structure is simple but nonflexible because the relationship is confined to a one-to-many relationship. IBM’s IMS system and the RDM Mobile are examples of a hierarchical database system with multiple hierarchies over the same data. RDM Mobile is a newly designed embedded database for a mobile computer system. The hierarchical structure is used primary today for storing geographic information and file systems.The network structure consists of more complex relationships. Unlike the hierarchical structure, it can relate to many records and accesses them by following one of several paths. In other words, this structure allows for many-to-many relationships.The relational structure is the most commonly used today. It is used by mainframe, midrange and microcomputer systems. It uses two-dimensional rows and columns to store data. The tables of records can be connected by common key values. 
While working for IBM, E. F. Codd designed this structure in 1970. The model is not easy for the end user to run queries with, because it may require a complex combination of many tables.

The multidimensional structure is similar to the relational model. The dimensions of the cube-like model hold data relating to the elements in each cell. This structure gives a spreadsheet-like view of data. It is easy to maintain because records are stored as fundamental attributes, the same way they are viewed, and the structure is easy to understand. Its high performance has made it the most popular database structure when it comes to enabling online analytical processing (OLAP).

The object-oriented structure has the ability to handle graphics, pictures, voice and text types of data without difficulty, unlike the other database structures. This structure is popular for multimedia Web-based applications. It was designed to work with object-oriented programming languages such as Java.

4.3 Data structure
Data structures (fields, records, files and objects) are optimized to deal with very large amounts of data stored on a permanent data storage device (which implies relatively slow access compared to volatile main memory).

4.4 Database query language
A database query language and report writer allow users to interactively interrogate the database, analyze its data and update it according to the users' privileges on the data. They also control the security of the database. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or to subsets of it called subschemas. For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access only to work history and medical data.

If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. However, it may not leave an audit trail of actions or provide the kinds of controls necessary in a multi-user organization. These controls are only available when a set of application programs is customized for each data entry and updating function.

4.5 Transaction mechanism
A database transaction mechanism ideally guarantees ACID properties in order to ensure data integrity despite concurrent user accesses (concurrency control) and faults (fault tolerance). It also maintains the integrity of the data in the database. The DBMS can maintain the integrity of the database by not allowing more than one user to update the same record at the same time. The DBMS can also help prevent duplicate records via unique index constraints; for example, no two customers with the same customer number (key field) can be entered into the database. See ACID properties for more information (redundancy avoidance).

5. DBMS topics
5.1 External, Logical and Internal view
A database management system provides the ability for many different users to share data and processing resources. But as there can be many different users, there are many different database needs. The question now is: how can a single, unified database meet the differing requirements of so many users?

A DBMS minimizes these problems by providing three views of the database data: an external view (or user view), a logical view (or conceptual view) and a physical (or internal) view.
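Before looking at these three views in more detail, the transaction and unique-constraint behaviour described in Section 4.5 can be illustrated with a minimal SQL sketch. The table, column names and amounts are invented, and the exact keyword for starting a transaction varies slightly between products (BEGIN, BEGIN TRANSACTION or START TRANSACTION).

-- A primary key acts as a unique index constraint: duplicate customer numbers are rejected.
CREATE TABLE customers (
    customer_no INTEGER PRIMARY KEY,           -- key field
    name        VARCHAR(100) NOT NULL,
    balance     DECIMAL(10,2) NOT NULL DEFAULT 0
);

INSERT INTO customers (customer_no, name, balance) VALUES (1, 'Alice', 500.00);
INSERT INTO customers (customer_no, name, balance) VALUES (2, 'Bob',   200.00);

-- An ACID transaction: either both updates take effect or neither does.
BEGIN TRANSACTION;
UPDATE customers SET balance = balance - 100 WHERE customer_no = 1;
UPDATE customers SET balance = balance + 100 WHERE customer_no = 2;
COMMIT;

-- Rejected by the unique constraint, because customer_no 1 already exists.
INSERT INTO customers (customer_no, name) VALUES (1, 'Duplicate Customer');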
The user's view of a database program represents data in a format that is meaningful to the user and to the software programs that process those data. That is, the logical view tells the user, in user terms, what is in the database. The physical view deals with the actual, physical arrangement and location of data in the direct access storage devices (DASDs). Database specialists use the physical view to make efficient use of storage and processing resources. With the logical view, users can see data differently from how they are stored, and they do not need to know all the technical details of physical storage. After all, a business user is primarily interested in using the information, not in how it is stored.

One strength of a DBMS is that while there is typically only one conceptual (or logical) and one physical (or internal) view of the data, there can be an endless number of different external views. This feature allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. Thus the logical view refers to the way the user views the data, and the physical view to the way the data are physically stored and processed.

5.2 DBMS features and capabilities
Alternatively, and especially in connection with the relational model of database management, the relation between attributes drawn from a specified set of domains can be seen as being primary. For instance, the database might indicate that a car that was originally "red" might fade to "pink" in time, provided it was of some particular "make" with an inferior paint job. Such higher-arity relationships provide information on all of the underlying domains at the same time, with none of them being privileged above the others.

5.3 DBMS simple definition
A database management system is a system in which related data are stored in an "efficient" and "compact" manner. "Efficient" means that the stored data can be accessed very quickly, and "compact" means that the stored data occupy very little space in the computer's storage. The phrase "related data" means that the data stored in the DBMS concern some particular topic.

Throughout recent history, specialized databases have existed for scientific, geospatial, imaging, document storage and similar uses. Functionality drawn from such applications has lately begun appearing in mainstream DBMSs as well. However, the main focus there, at least when aimed at the commercial data processing market, is still on descriptive attributes on repetitive record structures.

Thus, the DBMSs of today roll together frequently needed services and features of attribute management. By externalizing such functionality to the DBMS, applications effectively share code with each other and are relieved of much internal complexity. Features commonly offered by database management systems include:

5.3.1 Query ability
Querying is the process of requesting attribute information from various perspectives and combinations of factors. Example: "How many 2-door cars in Texas are green?" A database query language and report writer allow users to interactively interrogate the database, analyze its data and update it according to the users' privileges on the data.

5.3.2 Backup and replication
Copies of attributes need to be made regularly in case primary disks or other equipment fail. A periodic copy of attributes may also be created for a distant organization that cannot readily access the original.
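The exact backup commands are vendor-specific. As one hedged illustration, SQL Server's Transact-SQL provides BACKUP and RESTORE statements; the database name and file path below are invented.

-- Take a full copy of the database to a backup file.
BACKUP DATABASE Sales
    TO DISK = 'D:\backups\sales_full.bak'
    WITH INIT;

-- Later, the copy can be restored on the same server or shipped to a distant one.
RESTORE DATABASE Sales
    FROM DISK = 'D:\backups\sales_full.bak'
    WITH REPLACE;

Replication between servers is likewise configured with vendor-specific tooling rather than with standard SQL statements.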
DBMSs usually provide utilities to facilitate the process of extracting and disseminating attribute sets. When data are replicated between database servers, so that the information remains consistent throughout the database system and users cannot tell or even know which server in the DBMS they are using, the system is said to exhibit replication transparency.

5.3.3 Rule enforcement
Often one wants to apply rules to attributes so that the attributes are clean and reliable. For example, we may have a rule that says each car can have only one engine associated with it (identified by engine number). If somebody tries to associate a second engine with a given car, we want the DBMS to deny such a request and display an error message. However, with changes in the model specification, such as, in this example, hybrid gas-electric cars, rules may need to change. Ideally such rules should be able to be added and removed as needed without significant data layout redesign.

5.3.4 Security
Often it is desirable to limit who can see or change which attributes or groups of attributes. This may be managed directly by individual, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements.

5.3.5 Computation
There are common computations requested on attributes, such as counting, summing, averaging, sorting, grouping, cross-referencing and so on. Rather than have each computer application implement these from scratch, applications can rely on the DBMS to supply such calculations.

5.3.6 Change and access logging
Often one wants to know who accessed which attributes, what was changed, and when it was changed. Logging services allow this by keeping a record of access occurrences and changes.

5.3.7 Automated optimization
If there are frequently occurring usage patterns or requests, some DBMSs can adjust themselves to improve the speed of those interactions. In other cases the DBMS will merely provide tools to monitor performance, allowing a human expert to make the necessary adjustments after reviewing the statistics collected.

5.4 Meta-data repository
Metadata is data describing data. For example, a listing that describes which attributes are allowed in a data set is called "meta-information".

5.5 Current trends
In 1998, database management was in need of a new style of database to solve the database management problems of the time. Researchers realized that the old approaches to database management were becoming too complex and that there was a need for automated configuration and management. Surajit Chaudhuri, Gerhard Weikum and Michael Stonebraker were the pioneers who dramatically affected thinking about database management systems. They believed that database management needed a more modular approach and that there are many different specification needs for various users. Since this new development process began, database management has offered almost endless possibilities. Database management is no longer limited to "monolithic entities"; many solutions have been developed to satisfy the individual needs of users, and the development of numerous database options has created flexible solutions in database management.

Today there are several ways in which database management has affected the technology world as we know it. Organizations' demand for directory services has become an extreme necessity as organizations grow.
Businesses are now able to use directory services that provide prompt searches for their company information. Mobile devices are not only able to store users' contact information but have grown to offer far larger capabilities. Mobile technology can cache large amounts of information that is used on computers and display it on smaller devices. Web search has also been affected by database management: search engine queries are able to locate data within enormous collections almost instantly.
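To make the indexing idea from Section 3.4 and the sample question from Section 5.3.1 concrete, here is a minimal SQL sketch. The table and index names are invented for illustration, and most relational DBMSs accept very similar syntax.

-- A table of cars and an index over the columns the query filters on.
CREATE TABLE cars (
    car_id INTEGER PRIMARY KEY,
    state  CHAR(2)     NOT NULL,   -- e.g. 'TX'
    doors  INTEGER     NOT NULL,
    color  VARCHAR(20) NOT NULL,
    make   VARCHAR(40)
);

CREATE INDEX idx_cars_state_doors_color ON cars (state, doors, color);

-- "How many 2-door cars in Texas are green?"
-- The optimizer can answer this from the small index rather than scanning the whole table.
SELECT COUNT(*)
FROM cars
WHERE state = 'TX' AND doors = 2 AND color = 'green';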

外文文献之数据库信息管理系统简介Introduction to database informationmanagement systemThe database is stored together a collection of the relevant data, the data is structured, non-harmful or unnecessary redundancy, and for a variety of application services, data storage independent of the use of its procedures; insert new data on the database , revised, and the original data can be retrieved by a common and can be controlled manner. When a system in the structure of a number of entirely separate from the database, the system includes a "database collection."Database management system (database management system) is a manipulation and large-scale database management software is being used to set up, use and maintenance of the database, or dbms. Its unified database management and control so as to ensure database security and integrity. Dbms users access data in the database, the database administrator through dbms database maintenance work. It provides a variety of functions, allows multiple applications and users use different methods at the same time or different time to build, modify, and asked whether the database. It allows users to easily manipulate data definition and maintenance of data security and integrity, as well as the multi-user concurrency control and the restoration of the database.Using the database can bring many benefits: such as reducing data redundancy, thus saving the data storage space; to achieve full sharing of data resources, and so on. In addition, the database technology also provides users with a very simple means to enable users to easily use the preparation of thedatabase applications. Especially in recent years introduced micro-computer relational database management system dBASELL, intuitive operation, the use of flexible, convenient programming environment to extensive (generally 16 machine, such as IBM / PC / XT, China Great Wall 0520, and other species can run software), data-processing capacity strong. Database in our country are being more and more widely used, will be a powerful tool of economic management.The database is through the database management system (DBMS-DATA BASE MANAGEMENT SYSTEM) software for data storage, management and use of dBASELL is a database management system software.Information management system is the use of data acquisition and transmission technology, computer network technology, database construction, multimediatechnology, business needs, such as the establishment of a management platform, the platform constructed on the basis of pure software business management system (to meet the business needs for the purpose), achieving operational systems of data and information sharing, and based on this structure enquiries, schedulingor decision-making system.Information system can be manual or computer-based, independent or integrated, batch or on-line. The information system is usually the various types of combination. That is, of course, it can not be independent and is integrated.1. Independent system to meet a specific application areas (eg, personnel management) design. Independent system with its own documents, which are inevitable with a certain degree of redundancy.2. Integrated information systems through their use of the data were combined. Resource sharing system using a database to achieve integrated objectives. For example, the normal wage system requirements from the human resources and accounting systems found in the data.3. Artificial system has been developed based on a variety of computer information systems. 
So far, in a computerized artificial, and still lacks design experience (or) lack of information between users and service personnel exchanges. That is to say, computer-based workflow system directly from the manual system workflow. Often, these systems are independent, and the computer just as a data processor. In the design of these systems, with little regard to their integrated to the end of intent.4. Information systems according to a batch, on-line processing or both combination classification. In a batch system, and data will be handled in batches or reports. For example, banks will be a large number of cheque code, and then the end of the day, in batches of cheques, sorting and processing. Also, in order to prevent an airline ticket T alas in Atlanta and another at the same time point of sale outlets from Los Angeles to San Francisco, the last one a flight tickets, airlines must be on-line booking system, in order to reflect the current status of the database. Most on-line information system is successful batch requirements. even if the information resources management (IRM) system, and the potential of the computer information system has been widely recognized, most of the systems is still independent batch system. Now most of these systems have lost value, but was re-designed as an integrated, online system. By definition, we can see the comprehensive requirements of business managers and company leaders closercooperation. Information services professionals as consultants, and the integrated information system and business areas of conflict and differences should be resolved by the user groups. Resolve these differences in order to achieve a comprehensive environmental information services staff to the challenges posed by the user manager.数据库信息管理系统简介数据库是存储在一起的相关数据的集合,这些数据是结构化的,无有害的或不必要的冗余,并为多种应用服务;数据的存储独立于使用它的程序;对数据库插入新数据,修改和检索原有数据均能按一种公用的和可控制的方式进行。

ASP.NET Technology and Database Management: Foreign Literature and Translation

…runs on the server. The program is compiled the first time it runs on the server side, which is much faster than ASP's on-the-fly interpretation, and applications can be written in any .NET-compatible language, including Visual Basic .NET, C# and JScript .NET. In addition, any ASP.NET application can use the entire .NET Framework, so developers can easily take advantage of technologies such as the managed common language runtime environment, type safety, inheritance and so on. ASP.NET works seamlessly with WYSIWYG HTML editors and other programming tools, including Microsoft Visual Studio .NET. This not only makes Web development more convenient, it also provides all the benefits these tools have to offer, including a GUI that developers can use to drag and drop server controls onto a Web page, and fully integrated debugging support. When creating an ASP.NET application, developers can use Web Forms or XML Web services, or combine them in any way they see fit. Each capability is supported by the same infrastructure, so you can use authentication schemes, cache frequently used data, or customize the application's configuration. If you have never developed a Web application before, this is not for you: you should at least know some HTML and basic Web development terminology (although I believe that, with interest, these can be picked up quickly). You do not need prior ASP development experience (though it certainly helps), but you must understand the concepts of interactive Web development, including forms, scripts and data interfaces. If you meet these conditions, you are ready to spread your wings in the world of ASP.NET. ASP.NET is not merely the next version of Active Server Pages (ASP); it is a programming framework built on a common language that can be used on a Web server to create powerful Web applications, and it offers many advantages over today's Web development models. The ASP.NET execution architecture is divided into several stages: message flow between IIS and the Web server, message dispatch within an ASP.NET page, and message processing within an ASP.NET page. The original design idea of ASP.NET was to let developers build pages and applications with an event-driven programming model, much as they do with the VB development tools. To achieve this with ASP technology, a developer has to use a large amount of auxiliary information, such as query strings or form field data, to identify the source of an object, the flow of events and the function to call, which requires a great deal of code. ASP.NET cleverly hides this event delivery model behind form fields and JavaScript. When ASP.NET runs, pages frequently make round trips to the server, which ASP.NET calls PostBack. With traditional ASP, detecting such round trips had to be coded by the developer; with ASP.NET, the developer can use the Page.IsPostBack property to determine whether the page is running for the first time (when ASP.NET finds that the HTTP POST request data is empty). This guarantees that an ASP.NET control event runs only once, but it has a drawback (rooted in HTTP POST itself): if the user refreshes the page with the browser's refresh function (pressing F5 or the refresh button), the last event that ran will be run again. To avoid this, the browser must be forced to clear its cache.

Inventory Management: Foreign Literature and Translation

本科毕业论文外文文献及译文文献、资料题目: Zero Inventory Approach 文献、资料来源: The IUP Journal of SupplyChain Management文献、资料发表(出版)日期: 2012.06院(部):管理工程学院专业:工业工程班级:工业112姓名:张金丰学号:2011021527指导教师:孔海花翻译日期:2015.06.14外文文献:Zero Inventory ApproachManaging optimal inventory in the supply chain is critical for an enterprise. The ability to increase inventory turns and the use of best inventory practices will reduce inventory costs across the supply chain. Moving towards zero inventory will result in effective inventory management in the business process. Inventory Optimization Solutions can be implemented easily using inventory optimization software. With Radio Frequency Identification (RFID) technology, inventory can be updated in real time without product movement, scanning or human involvement. Companies have to adopt best practices to optimize operational processes and lower their cost structure through inventory strategies.IntroductionWith supply chain planning and latest software, companies are managing their inventory in the best possible manner, keeping inventory holdings to the minimum without sacrificing the customer service needs. The zero inventory concept has been around since the 1980s. It tries to reduce inventory to a minimum and enhances profit margins by reducing the need for warehousing and expenses related to it.The concept of a supply chain is to have items flowing from one stage of supply to the next, both within the business and outside, in a seamless fashion. Any stock in the system is caused by either delay between the processes (demand, distribution, transfer, recording and production) or by the variation in the flow. Eliminating/reducing stock can be achieved by: linking processes, making the same throughput rate on processes, locating processes near each other and coordinating flows. Recent advanced software has made zero inventory strategy executable."Inventory optimization is an emerging practical approach to balancing investment and service-level goals over a very large assortment of Stock-Keeping Units (SKUS). In contrast to traditional ‘one-at-a-time’ marginal stock level setting, inventory optimization simultaneously determines all SKU stock levels to fulfill total service and investment constraints or objectives".Inventory optimization techniques provide a new logic to drive the system with information systems. To effectively manage inventory, businesses must also optimize thecosts of buying, holding, producing, moving and selling inventory.The objective of inventory optimization is to sustain minimal levels of inventory while providing the maximum possible levels of service. Supply Chain Design and Optimization (SCDO) is an inventory optimization solution which helps companies satisfy customer demands while balancing limitations on supply and the need for operational efficiency. Inventory optimization focuses on modeling uncertainty and variability and minimizing the risks they impose on the supply chain.Inventory optimization can help resolve total supply chain cost options like:•In-house manufacturing vs. contract manufacturing;•Domestic vs. off shore;•New supplier's cost vs. current suppliers' cost.Companies can benefit from inventory optimization, provided they control their supply chain processes and the complexity of supply chain. In case the supply chain is very complex, besides inventory optimization, network design has to be used to reap the benefits fully. 
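As a toy illustration of the kind of per-SKU calculation that inventory optimization tools automate, the SQL sketch below computes average daily demand and a simple reorder point from a demand history table. The table, its columns, the 7-day lead time and the flat safety-stock buffer are all invented assumptions; real optimization solvers balance service levels and investment across all SKUs simultaneously rather than item by item.

-- Hypothetical demand history: one row per SKU per day.
CREATE TABLE demand_history (
    sku        VARCHAR(20) NOT NULL,
    demand_day DATE        NOT NULL,
    qty        INTEGER     NOT NULL
);

-- Average daily demand and a naive reorder point per SKU,
-- assuming a 7-day replenishment lead time and a flat safety buffer of 20 units.
SELECT sku,
       AVG(qty)          AS avg_daily_demand,
       AVG(qty) * 7 + 20 AS reorder_point
FROM demand_history
GROUP BY sku
ORDER BY sku;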
This paper covers various inventory models that are available and then describes the technologies like Radio Frequency Identification (RFID) and networking used for the optimization of inventory. The paper also describes the software solutions available for achieving the same. It concludes by giving a few examples where inventory optimization has been successfully implemented.Inventory ModelsHexagon ModelThe hexagon model was developed due to the need to structure day-to-day work, reduce headcount and other inventory costs and improve customer satisfaction.In the first phase, operation strategies were established in alignment with inte-rnal customers. Later, continuous improvement plans and business continuity pl-ans were added. The five strategies used were: forecasting future consumption,setting financial targets to minimize inventory costs, preparing daily reports to monitor inventory operational performance,studying critical success indicators to track the accomplishments, to form inventory strategic objectives and inventor-y health and operating strategies. The hexagon model is a combination of two triangular structures (Figure 1).The upper triangle focuses on the soft management of human resources, customer orientation and supplier relations; the lower focuses on the execution of inventory plans with their success criteria, continuous improvement methodology and business continuity plans.The inventory indicators are: total inventory value, availability of spares, days of inventory, cost of inventory, cost saving and cash saving output expen-diture and quality improvement. The hexagon model combines the elements of the people involved in managing inventory with operational excellence (Figur2).Managing inventory with operational excellence was achieved by reducing the number of employees in the material department, changing the mix of people skills such as introducing engineering into the department structure and reducing the cost of ownership of the material department to the operation that it supports.Normally, this is implemented with reduction in headcount of material department, having less people with engineering skills in the department. Operation results include, improvement in raw material supply line quality indicators, competitive days of inventory and improved and stabilized spares availability. And the financial results include, increase in cost savings and reduced cost of inventory. It can be established by outsourcing some of the inventory functions as required. The level of efficiency of the inventory managed can be measured to a specific risk level, changing requirements or changes in the environment. Just-In-Time (JIT)Just-in-time (JIT) inventory system is a concept developed by the Japanese, wherein, the suppliers deliver the materials to the factory JIT for their processing, eliminating the need for storage and retrieval. 
The rate of output and the rate of supply of inputs are synchronized, to manage a zero inventory.The main benefits of JIT are: set up times are significantly reduced in the factory, the flow of goods from warehouse to shelves improves, employees who possess multiple skills are utilized more efficiently, better consistency of scheduling and consistency of employee work hours, increased emphasis on supplier relationships and continuous round the clock supplies keeping workers productive and businesses focused on turnover.And though a JIT system might even be a necessity, given the inventory demands of certain business types, its many advantages are realized only when some significant risks likedelays in movement of goods over long distances are mitigated.Vendor-Managed Inventory (VMI)Vendor-Managed Inventory (VMI) is a planning and management system in which the vendor is responsible for main taining the customer’s inventory levels. VMI is defined as a process or mechanism where the supplier creates the purchase orders based on the demand information. VMI is a combination of e-commerce, software and people. It has resulted in the dramatic reduction of inventory across the supply chain. VMI is categorized in the real world as collaboration, automation and cost transference.The main objectives of VMI are better, cheaper and faster transactions. In order to establish the VMI process,management commitment,data synchronization,setting up agreements,data exchange, ordering, invoice matching and measurement have to be undertaken.The benefits of VMI to an organization are reduction in inventory besides reduction of stock-outs and increase in customer satisfaction. Accurate information which is required for optimizing the supply chain is facilitated by efficient transfer of information. The concept of VMI would be successful only when there is trust between the organization and its suppliers as all the demand information is available to the suppliers which can be revealed to the competitors. VMI optimizes inventory in supply chain and reduces stock-outs by proper planning and centralized forecasting.Consignment ModelConsignment inventory model is an extension of VMI where the vendor places inventory at the customer’s location while retaining ownership of the inventory.The consignment inventory model works best in the case of new and unproven products where there is a high degree of demand uncertainty, highly expensive products and service parts for critical equipment. 
The types of consignment inventory ownership transfer models are: pay as sold during a pre-defined period, ownership changes after a pre-defined period, and order to order consignment.The issues that the VMI and consignment inventory model encounter are cost of developing VMI system, invoicing problems, cash flow problems, Electronic Data Interchange (EDI) problems and obsolete stock.Enabling PracticesThe decision makers have to make prudent decisions on future course of action of a project relating to the following variables: Forecasting and Inventory Management,Inventory Management practices,Inventory Planning,Optimal purchase, Multichannel Inventory, Moving towards zero inventory.To improve inventory management for better forecasting, the 14 best practices that will most likely benefit business the most are:•Synchronize promotions;•Revamp the organizational structure;•Take a longer view of item planning;•Enforce vendor compliance;•Track key inventory metrics;•Select the right systems;•Master the art of master scheduling;•Adhere to exception reporting;•Identify lost demands;•Plan by assortment;•Track inbound receipts;•Create coverage reports;•Balance under stock/overstock; and•Optimize SKUs.This will leverage the retailer’s ability to buy larger quantities across all channels while buying only what is required for a specified period in order to manage risk in a better way. In most multichannel companies, inventory is the largest asset on the balance sheet, which means that their profitability will be determined to a large degree by the way they plan, forecast, and manage inventory (Curt Barry, 2007). They can follow some steps like creating a strategy, integrating planning and forecasting, equipping with the best-laid plans and building strong vendor relationships and effective liquidation.Moving Towards Zero InventoryAt the fore is the development and widespread adoption of nimble, sophisticated software systems such as Manufacturing Resource Planning (MRP II), Enterprise Resource Planning(ERP), and Advanced Planning and Scheduling (APS) systems, as well as dedicated supply chain management software systems. These systems offer manufacturers greater functionality. To implement ‘Zero Stock’ system, companies need to have a good information system to handle customer orders, sub-contractor orders, product inventory and all issues related to production. If the company has no IT infrastructure, it will need to build it from the scratch.A good information system can help managers to get accurate data and make strategic decisions. IT infrastructure is not a cost, but an investment. A company can use RFID method, network inventory and other software tools for inventory optimization.Radio Frequency Identification (RFID)RFID is an automatic identification method, which relies on storing and remotely retrieving data using devices called RFID tags or transponders.RFID use in enterprise supply chain management increases the efficiency of inventory tracking and management. RFID application develops asset utilization by tracking reusable assets and provides visibility, improves quality control by tagging raw material, work-in-progress, and finished goods inventory, improves production execution and supply chain performance by providing accurate, timely and detailed information to enterprise resource planning and manufacturing execution system.The status of inventory can be obtained automatically by using RFID. 
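As a hedged sketch of how RFID reads might feed such automatic status reporting, assume a hypothetical table of tag scan events in which each read records a quantity moved into or out of a location; the current on-hand position is then a simple aggregation.

-- Hypothetical RFID read log: positive qty for receipts, negative for issues.
CREATE TABLE rfid_reads (
    tag_id    VARCHAR(32) NOT NULL,
    sku       VARCHAR(20) NOT NULL,
    location  VARCHAR(20) NOT NULL,
    qty       INTEGER     NOT NULL,
    read_time TIMESTAMP   NOT NULL
);

-- Current inventory position per SKU and location, derived automatically from the reads.
SELECT sku,
       location,
       SUM(qty) AS on_hand
FROM rfid_reads
GROUP BY sku, location
HAVING SUM(qty) <> 0;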
There are many benefits of using RFID such as reduced inventory, reduced time, reduced errors, accessibility increase, high security, etc.Network InventoryA Network Inventory Management System (NIMS) tracks movement of items across the system and thus can locate malfunctioning equipment/process and provide information required to diagnose and correct problem areas. It also determines where capacity is to be added, calculates impact of market conditions, assesses impact of new products and the impact of a new customer. NIMS is very important when the complexity of a supply chain is high. It determines the manufacturing and distribution strategies for the future. It should take into consideration production, location, inventory and transportation.The NIMS software, including asset configuration information and change management, is an essential component of robust network management architecture.NIMS provideinformation that administrators can use to improve network management performance and help develop effective network asset control processes.A network inventory solution manages network resource information for multiple network technologies as well as multiple vendors in one common accurate database. It is an extremely useful tool for improving several operation processes, such as resource trouble management, service assurance, network planning and provisioning, field maintenance and spare parts management.The NIMS software, including asset configuration information and change management, is an essential component of strong network management architecture. In addition, software tools that provide planning, design and life cycle management for network assets should prominently appear on enterprise radar screens.Inventory Optimization Softwarei2 Inventory Optimizationi2 solutions enable customers to realize top and bottom-line benefits through the use of superior inventory management practices. i2 Inventory Optimization can help companies monitor, manage, and optimize strategies to decide—what to make, what to buy and from whom, what inventories to carry, where, in what form and how much—across the supply chain. It enables customers to learn and continuously improve inventory management policies and processes, strategic analysis and optimization.Product-oriented industry can install i2 Inventory Optimization and develop supply chain. Through this, the company can reduce inventory levels and overall logistics costs. It can also get higher service level performance, greater customer satisfaction, improved asset utilization, accelerated inventory turns, better product availability, reduced risk, and more precise and comprehensive supply chain visibility.Oracle Inventory OptimizationOracle Inventory Optimization considers the demand, supply, constraints and variability in extended supply chain to optimize strategic inventory investment decisions. It allows retailers to provide higher service levels to customers at a lower total cost. Oracle Inventory Optimization is part of the Oracle e-Business Suite, an integrated set of applications that are engineered to work together.Oracle Inventory Optimization provides solutions when demandand supply are in ambiguity. It provides graphic representation of the plan. It calculates cost and risk.MRO SoftwareMRO Software (now a part of IBM's Tivoli software business) announced a marketing alliance with inventory optimization specialists Xtivity to enhance the service offering of inventory management solutions for MRO Software customers. 
MRO offers Xtivity's Inventory Optimizer (XIO) service as an extension of its asset and service management solutions.Structured Query Language (SQL)Successful implementation of an inventory optimization solution requires significant effort and can pose certain risks to companies implementing such solutions. Structured Query Language (SQL) can be used on a common ERP platform. An optimal inventory policy can be determined by using it. Along with it, other metrics such as projected inventory levels, projected backlogs and their confidence bands can also be calculated. The only drawback of this method is that it may not be possible to obtain quick real-time results because of architectural and algorithmic complexity. However, potential scenarios can be analyzed in anticipation of results stored prior to user requests.Some ExamplesToyota’s Practice in IndiaToyota, a quality conscious company working towards zero inventory has selected Mitsui and Transport Corporation of India Ltd. (TCI) for their entire logistic solutions encompassing planning, transportation, warehousing, distribution and MIS and related documentation. Infrastructure is a bottleneck that continues to dog economic growth in India. Transystem renders services like procurement, consolidation and transportation of original equipment manufacturer's parts, through milk run operations from various suppliers all over India on a JIT basis, transportation of Complete Built-up Units (CBU) from plant to all dealers in the country and operation of CBU yards, coordination and transportation of Knock Down (KD) parts from port of entry to manufacturing plant, transportation of aftermarket parts to dealers by road and air to Toyota Kirloskar Motors Pvt. Ltd.Wal-MartWal-Mart is the largest retailer in the United States, with an estimated 20% of the retail grocery and consumables business, as well as the largest toy seller in the US, with an estimated 22% share of the toy market. Wal-Mart also operates in Argentina, Brazil, Canada, Japan, Mexico, Puerto Rico and UK.Wal-Mart keeps close track of the inventories by extensively adopting vendor-managed inventory to streamline the flow of goods from manufacturer to the store shelf. This results in more turns and therefore fewer inventories.Wal-Mart is an early adopter of RFID to monitor the movement of stocks in different stages of supply chain. The company keeps tabs on all of its merchandize by outfitting its products with RFID.Wal-Mart has indicated recently that it is moving towards the aggressive theoretical zero inventory model.Chordus Inc.Chordus Inc. has the largest division of office furniture in USA. It has advanced logistics and a model of zero inventory. It has Internet-based system for distribution network with real-time updates and low costs. Chordus determined that only SAP R/3 could accommodate this cutting-edge operational model for its network of 150 dealer-owned franchises in 44 states supported by five nationwide Distribution Centers (DCs) and a fleet of 65 delivery trucks. Small Scale Cycle Industry Around LudhianaIn and around Ludhiana, there are many small bicycle units, which are not organized.They have a sharp focus on financial and raw material management enjoying a low employee turnover. They have been practicing zero inventory models which became popular in Japan only much later. Raw material is brought into the unit in the morning, processed during the day and by evening the finished product is passed on to the next unit. 
Thus, the chain continues till the ultimate finished product is manufactured. In this way, the bicycles used to be produced in Ludhiana at half the production cost of TI Cycles. Even the large manufacturers of cycles, like Hero cycles, Atlas cycles and Avon cycles are reported to maintain only one week's inventory.ConclusionInventory managers are faced with high service-level requirements and many SKUsappreciate the complexity of inventory optimization, as well as the explicit control that is needed over total investment in warehousing, moving and logistics. Inventory optimization can provide both an enormous performance improvement for the supply chain and ongoing continuous improvements over competitors. The company achieves the stability needed to have enough stock to meet unpredictable demands without wasteful allocation of capital. Having the right amount of stock in the right place at the right time improves customer satisfaction, market share and bottom line. Certainly, the organizations that are able to take inventory optimization to the enterprise level will reap greater benefits. Zero inventory may be wishful thinking, but embracing new technologies and processes to manage one's inventory more efficiently could move one much closer to that ideal.中文译文:零库存方法对于一个企业来说,在供应链中优化库存管理是至关重要的。

Database Management Systems: Chinese-English Comparison

数据库管理系统地介绍Raghu Ramakrishnan数据库<database, 有时拼作data base )又称为电子数据库, 是专门组织起来地一组数据或信息, 其目地是为了便于计算机快速查询及检索. 数据库地结构是专门设计地, 在各种数据处理操作命令地支持下, 可以简化数据地存储, 检索, 修改和删除. 数据库可以存储在磁盘, 磁带, 光盘或其他辅助存储设备上. b5E2RGbCAP 数据库由一个或一套文件组成, 其中地信息可以分解为记录, 每一记录又包含一个或多个字段<或称为域). 字段是数据存取地基本单位. 数据库用于描述实体,其中地一个字段通常表示与实体地某一属性相关地信息. 通过关键字以及各种分类<排序)命令,用户可以对多条记录地字段进行查询,重新整理,分组或选择,以实体对某一类数据地检索, 也可以生成报表. p1EanqFDPw所有数据库<最简单地除外)中都有复杂地数据关系及其链接.处理与创建,访问以及维护数据库记录有关地复杂任务地系统软件包叫做数据库管理系统vDBMS .DBMS 软件包中地程序在数据库与其用户间建立接口.<这些用户可以是应用程序员,管理员及其他需要信息地人员和各种操作系统程序). DXDiTa9E3d DBMS可组织,处理和表示从数据库中选出地数据元•该功能使决策者能搜索探查和查询数据库地内容, 从而对在正规报告中没有地,不再出现地且无法预料地问题做出回答.这些问题最初可能是模糊地并且<或者)是定义不恰当地, 但是人们可以浏览数据库直到获得所需地信息•简言之,DBMS将“管理”存储地数据项, 并从公共数据库中汇集所需地数据项以回答非程序员地询问. RTCrpUDGiTDBMS由3个主要部分组成:<1)存储子系统,用来存储和检索文件中地数据;<2)建模和操作子系统, 提供组织数据以及添加, 删除,维护, 更新数据地方法;<3)用户和DBMS之间地接口.在提高数据库管理系统地价值和有效性方面正在展现以下一些重要发展趋势;5PCzVD7HxA1 .管理人员需要最新地信息以做出有效地决策.2.客户需要越来越复杂地信息服务以及更多地有关其订单, 发票和账号地当前信息.3.用户发现他们可以使用传统地程序设计语言, 在很短地一段时间内用数据库系统开发客户应用程序4.商业公司发现了信息地战略价值,他们利用数据库系统领先于竞争对手. 数据库模型数据库模型描述了在数据库中结构化和操纵数据地方法, 模型地结构部分规定了数据如何被描述<例如树, 表等):模型地操纵部分规定了数据添加,删除, 显示, 维护, 打印,查找,选择,排序和更新等操作. jLBHrnAILg 分层模型第一个数据库管理系统使用地是分层模型,也就是说,将数据记录排列成树形结构.一些记录时根目录, 在其他所有记录都有独立地父记录.树形结构地设计反映了数据被使用地顺序, 也就是首先访问处于树根位置地记录, 接下来是跟下面地记录,等等. xHAQX74J0X分层模型地开发是因为分层关系在商业应用中普遍存在,众所周知,一个组织结构图表就描述了一种分层关系:高层管理人员在最高层, 中层管理人员在较低地层次,负责具体事务地雇员在最底层. 值得注意地是, 在一个严格地分层结构体系中, 在每个管理层下可能有多个雇员或多个层次地雇员, 但每个雇员只有一个管理者.分层结构数据地典型特征是数据之间地一对多关系. LDAYtRyKfE 在分层方法中,当数据库建立时, 每一关系即被明确地定义. 在分层数据库中地每一记录只能包含一个关键字段, 任意两个字段之间只能有一种关系. 由于数据并不总是遵循这种严格地分层关系, 所以这样可能会出现一些问题. Zzz6ZB2Ltk 关系模型在1970 年, 数据库研究取得了重大突破.E.F.Codd 提出了一种截然不同地数据库管理方法, 使用表作为数据结构, 称之为关系模型. dvzfvkwMI1关系数据库是使用最广地数据结构,数据被组织成关系表, 每个表由称作记录地行和称作字段地列组成. 每个记录包含了专用工程地字段值. 例如,在一个包含雇员信息地表中, 一个记录包含了像一个人姓名和地址这样地字段地值. rqyn14ZNXI 结构化查询语言<SQL是一种在关系型数据库中用于处理数据地查询语言.它是非过程化语言或者说是描述性地,用户只须指定一种类似于英语地描述, 用来确定操作, 记录或描述记录组合. 查询优化器将这种描述翻译为过程执行数据库操作. EmxvxOtOco 网状模型网状模型在数据之间通过链接表结构创建关系, 子记录可以链接到多个父记录.这种将记录和链接捆绑到一起地方法叫做指针, 他是指向一个记录存储位置地存储地址. 使用网状方法,一个子记录可以链接到一个关键记录,同时, 它本身也可以作为一个关键记录.链接到其他一系列子记录.在早期, 网状模型比其他模型更有性能优势;但是在今天,这种优势地特点只有在自动柜员机网络, 航空预定系统等大容量和高速处理过程中才是最重要地. SixE2yXPq5分层和网状数据库都是专用程序, 如果开发一个新地应用程序, 那么在不同地应用程序中保持数据库地一致性是非常困难地. 例如开发一个退休金程序, 需要访问雇员数据,这一数据同时也被工资单程序访问.虽然数据是相同地, 但是也必须建立新地数据库. 6ewMyirQFL 对象模型最新地数据库管理方法是使用对象模型, 记录由被称作对象地实体来描述, 可以在对象中存储数据, 同时提供方法或程序执行特定地任务. kavU42VRUs对象模型使用地查询语言与开发数据库程序所使用地面向对象地程序设计语言是相同地,因为没有像SQL这样简单统一地查询语言,所以会产生一些问题. 对象模型相对较新, 仅有少数几个面向对象地数据库实例. 它引起了人们地关注, 因为选择面向对象程序设计语言地开发人员希望有一个基于在对象模型基础上地数据库. y6v3ALoS89 分布式数据库类似地, 分布式数据库指地是数据库地各个部分分别存储在物理上相互分开地计算机上. 分布式数据库地一个目地是访问数据信息时不必考虑其他位置. 注意, 一旦用户和数据分开, 通信和网络则开始扮演重要角色. M2ub6vSTnP分布式数据库需要部分常驻于大型主机上地软件, 这些软件在大型机和个人计算机之间建立桥梁, 并解决数据格式不兼容地问题. 在理想情况下, 大型主机上地数据库看起来像是一个大地信息仓库, 而大部分处理则在个人计算机上完成. 0YujCfmUCw 分布式数据库系统地一个缺点是它们常以主机中心模型为基础, 在这种模型中, 大型主机看起来好像是雇主, 而终端和个人计算机看起来好像是奴隶. 但是这种方法也有许多优点:由于数据库地集中控制, 前面提到地数据完整性和安全性地问题就迎刃而解了.当今地个人计算机,部门级计算机和分布式处理都需要计算机之间以及应用程序之间在相等或对等地基础上相互通信, 在数据库中客户机/ 服务器模型为分布式数据库提供了框架结构. eUts8ZQVRd利用相互连接地计算机上运行地数据库应用程序地一种方法是将程序分解为相互独立地部分. 客户端是一个最终用户或通过网络申请资源地计算机程序, 服务器是一个运行着地计算机软件, 存储着那些通过网络传输地申请.当申请地资源是数据库中地数据时, 客户机/服务器模型则为分布式数据库提供了框架结构. sQsAEJkW5T 文件服务器指地是一个通过网络提供文件访问地软件, 专门地文件服务器是一台被指定为文件服务器地计算机. 这是非常有用地,例如,如果文件比较大而且需要快速访问,在这种情况下,一台微型计算机或大型主机将被用作文件服务器. 分布式文件服务器将文件分散到不同地计算机上, 而不是将它们集中存放到专门地文件服务器上.GMsIasNXkA后一种文件服务器地优点包括在其他计算机上存储和检索文件地能力, 并可以在每一台计算机上消除重复文件. 然而,一个重要地缺点是每个读写请求需要在网络上传播, 在刷新文件时可能出现问题. 假设一个用户申请文件中地一个数据并修改它, 同时另外一个用户也申请这个数据并修改它, 解决这种问题地方法叫做数据锁定, 即第一个申请使其他申请处于等待状态, 直到完成第一个申请,其他用户可以读取这个数据, 但不能修改. TIrRGchYzg数据库服务器是一个通过网络为数据库申请提供服务地软件,例如, 假设某个用户在他地个人计算机上输入了一个数据查询命令, 如果应用程序按照客户机/ 服务器模型设计, 那么个人计算机上地查询语言通过网络传送数据库服务器上, 当发现数据时发出通知. 
7EqZcWLZNX在工程界也有许多分布式数据库地例子,如SUN公司地网络文件系统<NFS 被应用到计算机辅助工程应用程序中,将数据分散到由SUNT作站组成地网络上地不同硬盘之间. lzq7IGf02E分布式数据库是革命性地进步, 因为把数据存放在被使用位置上是很合乎常理地.例如一个大公司不同部门之间地计算机, 应该将数据存储在本地, 然而,当被授权地管理人员需要整理部门数据时, 数据应该能够被访问. 数据库信息系统软件将保护数据库地安全性和完整性, 对用户而言, 分布式数据库和非分布式数据库看起来没有什么差别. zvpgeqJ1hk原文Database Management Systems( 3th Edition >,Wiley ,2004, 5-12NrpoJac3v1A introduction to Database Management SystemRaghu RamakrishnanA database (sometimes spelled data base> is also called an electronicdatabase , referring to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structuredto facilitate the storage, retrieval , modification, and deletion of data in conjunction with various data- processing operations .Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storagedevic1e n.owfTG4KIA database consists of a file or a set of files. The information in thesefiles may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage , and each field typically contains information pertaining to one aspect or attribute of the entity described by the database . Using keywords and various sorting commands, users can rapidly search , rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregate of data f.jnFLDa5ZoComplex data relationships and linkages may be found in all but the simplest databases .The system software package that handles the difficult tasks associated with creating ,accessing, and maintaining database records is called a database management system(DBMS>.The programs in a DBMS package establish an interface between the database itself and the users of the database.. (These users may be applications programmers, managers and others with information needs, and various OS programs.>tfnNhnE6e5A DBMS can organize, process, and present selected data elements form the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions mightinitially be vague and/or poorly defined ,but people can “browse”through the database until they have the needed information. In short, the DBMS will “manage”the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers.HbmVN777sLA database management system (DBMS> is composed of three major parts:(1>a storage subsystem that stores and retrieves data in files 。

Computer Databases: Foreign Literature Translation (Chinese-English)

科技外文文献Microsoft Future "Soul" - SQL Server 2005 Exploration SecretAuthor : CHEN Bao-linSQL Server development "Brief History"At the beginning of this before, let us look at Microsoft SQL Server development "Brief History."1988 : SQL Server from Microsoft and Sybase common development, running on OS / 2 platform.1993-09-14 : SQL Server 4.2, a desktop database system contains less functional. Integration with Windows and to provide easy-to-use user interface.1994 : Microsoft and Sybase database in cooperation in the development of suspension.1995 : SQL Server 6.0, code-named "SQL95" Microsoft rewriting most of the core system. Provide a low-cost small business application database program.1996-04-16 : SQL Server 6.5, This version brings significant performance improvement and providing a wide variety of useful functions.1998-11-16 : SQL Server 7.0, code-named "Sphinx." Completely rewritten core database engine, providing small and medium business applications database program, contains the initial Web support. SQL Server starting from this version has been widely used.2000-08-07 : the birth of SQL Server 2000, code-named "Shiloh." Microsoft to produce the product has been defined as enterprise-class database system, which includes three components (DB, OLAP, English Query). Rich front-end tools, improved development tools, and XML support, the promotion of this version of the promotion and application. And contains the following several versions.Enterprise Edition : through the deployment of cluster TB-class support services giant databases and thousands of concurrent users online.Standard Edition : to support SMEs.Personal version : support desktop applications.Developer : staff development for enterprises and Windows CE build enterprise applications.Window CE Version : can be applied to any Windows CE mobile devices.2003-04-24 : SQL Server 2000, 64-bit version. Codenamed "Liberty" has been and Unix / Linux Oracle compete.2005-11-07 : SQL Server 2005, codenamed "Yukon" Microsoft SQL Server products to the latest version. Microsoft commented that the status of this product took five years of major changes, a landmark product. Microsoft SQL Server 4.2 to 2005. Microsoft since the early 1990s to enter the database market, SQL Server 2005 until the launch, behaved like an enterprise database from the market to lead the followers of the restructuring, sword was sharpened for 10 years, through many a storm, Microsoft already enterprises database management perspective extends to a broader and deeper realm, the paper attempts to explore the history, Aggregate Microsoft SQL Server formative history.1987 Sysbase developed Unix systems running SQL Server version. In 1988, Microsoft invited the then momentum in the database fields are busy Sysbase. joint development of SQL server. "Sima heart erased", Microsoft tried to enter the database market moves obviously, and, database market is bound to whip up some wind action. Sure enough, after 10 years of market access database for the intense period of the Warring States. 1993-04-12, Microsoft SQL Server version 4.2. And before the introduction of Windows NT echoed that Microsoft officially entered the enterprise applications market. And the SQL Server database and the enterprise is the most important. Although SQL Server 4.2 while still just a desktop version, but there has been considerablepotential. 
1994, Microsoft and Sybase formal suspension of the database development cooperation This meaningfully.From 1995 to 2000, Microsoft has adopted 6.0, 6.5,7.0, 2000 Version 4. From the perspective view, SQL Server 2000 version has been able to provide the following services.Online Services (On-line services) : "On-Line" refers to real-time online users use data services.Online transaction processing OLTP (On-Line Transaction Processing) : OLTP operation by the order-processing services transactions, or transactions follow completion or undoes all the principles. It also did not include the type of services. This is a sector that is the most universal and most widely forms of service. Analysis of online services OLAP (On-Line Analytical Processing) : OLAP is a kind of multidimensional data display (such as data warehousing, data mart, data cube), usually to do data mining. As OLTP used to operate and SQL data definition, OLAP is used and MDX (MultiDimensional Expressions) visit and definitions of data. From the technical structure of SQL Server 2000, as follows.Data structure•physical structure of data structure.•logical framework : how to define Tables, ro ws, columns, and other data objectsData Processing• data processing storage engine : it is responsible for dealing with how the data retention.• engine : it is responsible for how the data for the visit and relations.• SQL Server Agent : it is respo nsible for task scheduling and events management.Data manipulation• DB APIs : ADO (ActiveX Data Objects).OLE DB (linking and embedding data objects).DB-Library for C + +.ODBC (Open Data Internet).ESQL (Embedded SQL.)• URLs (uniform resource locat or address).• English inquiries (English Query).SQL Server Enterprise Manager.Tools : Inquiry analyzers, DTS (Data Transformation Services), Backup and restore and replication, metadata services, storage expansion process, SQL tracking, can be used for performance tuning.Experiences from users, SQL Server 2000 version of a number of new characteristics, such as XML support, many examples of support, data warehouse and business intelligence to enhance performance and scalability will improve, operating guide, and the inquiries, DTS, Transact SQL enhancements.From the license price, Microsoft SQL Server 2000, the price and total cost of ownership (TCO) only to the Oracle or D B2 2 / 1 to 1 / 3.In summary, Microsoft high-performance low-cost access to the product concept on the market success SQL Server 2000 database can meet the OLTP and OLAP application deployment, and better performance, and prices relative Oracle, DB2 and other databases low. Meanwhile, SQL Server 2000 Enterprise Edition also includes the standard version and other versions to meet different levels of user demand, These factors prompted the SQL Server 2000 was a significant part of the SME market share Microsoft has the opportunity to enter the mainstream database vendors ranks.At the same time, we should realize that SQL Server 2000 and Oracle launched late in the G 10 high-end enterprise-level functions in surviving deficient, so bridging the gap to catch up on the historic mission to the code-named "Yukon," the new version.Killer code-named "Yukon"From the 1989 release of Microsoft SQL Server 1.0 is now a full 15 years. In that 15 years of SQL Server fromscratch, from small to large, experiencing a once legendary. It has not only eroded with IBM, Oracle database market share, and the next generation of SQL Server has begun to gradually become the next Windows operating system core. 
China and the Bill Gates mouth • The constant repetition of "seamless calculation" is the core of Yukon, The code-named "Yukon," the next generation of our database will be brought into what kind of world? Internet "soft" pillarIn today's era of the network, data searching,data storage, classification of data, etc. All this has become the Internet network constitutes the "soft" pillars, and the database system is the pillar of the most critical. If there is no database support, we would never be able to Google or Baidu in the search for the information they need. can not use the convenient electronic mailbox, but that Network World because it is a large database consisting of.According to IDC's latest data show that the global database software market seems to be stirring Tension 2003 total revenue reached 13.6 billion U.S. dollars, compared with 2002's 12.6 billion U.S. dollars have increased. Oracle, IBM and Microsoft now controls 75% market share. Oracle last year for a market share of 39.8%, 31.3% for IBM, Microsoft to 12.1%.What is the database? In the University's computer textbooks, the database is being interpreted in this way : The database is the computer application system in a specialized data resource management system. There are many forms of data, such as text, digital, symbols, graphics, images and voices, and so on. All computer data system to deal with the subject. People familiar approach of a document is produced, will soon compile a program processing documents, will be covered by the procedural requirements of data organized into data files, documentation of procedures to call. Data files and program files maintain a certain relationship. Computer Application in the rapid development of the situation, by means of such a document will highlight deficiencies. For example, it allows poor definitive data, facilitate transplantation, in different documents stored information much duplication and waste of storage space, Update inconvenience. Database system will solve this problem. Database systems from the application of specific procedures, but based on the data management, All data will be stored in a database, scientific organizations, and by means of the database management system, using it as an intermediary, with a variety of applications or application interface to make it easy access to the data in the database.This note describes is indeed very detailed, but you may not always seem dizziness, In fact, a simple database that is after a group of computer collation of data stored in one or more documents, and the management of the database software called on the database management system. A general database system (104217) can be divided into the database (Database ) and Data Management System (Database Management System, DBMS) in two parts, all of these constitute the Internet is a "soft" pillars all.Microsoft's SQL Server database software, as many of the upgrade from 6.5 to the 7.0 version, gradually become mainstream database software, and SQL Server 2000 also proved that the Windows operating system can bear the same high-end data application, as the mainstream business application of database management software. It broke the rule by the large Unix database software myth and the next generation of SQL Server 2005 there will be what kind of change?Live Yukon core secretsMicrosoft in the next version of SQL Server (codenamed "Yukon") at the planning stage , considered more of the future development of the database, and SQL Server programming capabilities. 
Microsoft's internal development staff had long been aware that the future must introduce a more unified programming model but for a different data model to provide more flexibility. The unified programming model means that the ordinary data access and operation tasks can be carried out through various channels. For example, you can choose to use XML or Framework, or Transact-S QL (T-SQL) code, and so on.Such planning will result is a new database programming platform, which in many ways a natural extension. First, host. NET Framework common language runtime (CLR) to the function of the process of expansion of database programming and managed code area. Secondly,. NET framework provides a host integration from within SQL Server powerful object database functions. XML is the in-depth support functions through the XML data typeto achieve, and It has a data type of relationship between all the functions. In addition, also added a pair of XML Query (XQuery) and XML structure definition language (XSD) standard server support. Finally, SQL Server Yukon includes T-SQL language to enhance the important function.XML in SQL Server Yukon's history really began with SQL Server 2000. SQL Server 2000 with the introduction of the XML format to relational data. large load and segmentation XML documents and databases will be open targets for XML-based Web services, and other functions, However Yukon provide a more senior XML Query function, After perfecting the Y ukon will be full play all of the advantages of XML. XML Why so critical? In fact, from the initial XML an alternative HTML said the technical development of a line format, now be seen as a storage format. XML lasting memory has drawn widespread attention, the Internet has also been a lot of XML data type applications. XML itself can be an across any platform data format, It started as a file format for use, as XML in the enterprise has been widely recognized, Users began to use XML to solve thorny business problems, such as data integration. This makes as a data storage format XML development today, Because XML can be displayed on any platform to produce the same results, XML has become a mainstream database storage format. This built-in the Yukon comprehensive XML support will trigger a new database technology revolution.These new programming models and enhanced common language to create a series of programmable, They complement and expand the current relational database model. This architecture has the ultimate aim is to build more scalable, more reliable, more robust applications, and to enhance the development of efficiency. These models Another result is a service called SQL Agent new application framework -- for Asynchronous sources delivering the Distributed Application Framework.Yukon joining century gambleConstantly talking before we say a string of technology advantages, then you may very curious, Why should we introduce this appears to be a high-end database application software technologies? Perhaps we should kick the answer.The richest on Earth doing computer predictions for the future, he believes, in the next world, every one ordinary computer will have a large enough super hard disks, At that time the hard disk is no longer simply an 80 GB is likely to be 80 TB, Although it is only a change GB TB, but that means hard disk capacity of a full upgrade of 1000 times. And the existing Windows disk data storage NTFS format, simply unable to cope with such a large capacity hard disk data search. 
Yukon's century gamble

Having recited this string of technical advantages, you may well be curious: why introduce what looks like such a high-end set of database technologies? Perhaps the richest man on Earth holds the answer.

Predicting the future of computing, Bill Gates believes that in the world to come every ordinary computer will carry an enormous "super" hard disk. By then a hard disk will no longer be a mere 80 GB; it is more likely to be 80 TB. That looks like nothing more than swapping GB for TB, but it means a full thousand-fold increase in capacity, and NTFS, the format Windows uses to store data on disk today, simply cannot cope with searching data on disks of that size.

To give a concrete picture: if your computer had 100 TB of disk space and you were still running Windows XP, defragmenting the disk could take two days and two nights, and finding one particular document could keep you waiting for hours. It would feel like going back to the days of the 286.

To solve this thorny problem, the next-generation Windows operating system, Longhorn, adopts a programming model radically different from previous versions of Windows. At its core is Avalon (a development code name), the new Windows GUI library. Longhorn also brings in Indigo (Web services) and WinFS (the file system) as new functions. Together these three make up the new API set known as WinFX, Longhorn's new "native" API. Compatibility with the Win32 API is retained, but under normal circumstances the new Longhorn functions are reached through WinFX, which lives in the world of the .NET Framework; the classes it uses, its DLL support and its operating mechanisms are essentially the same as those of .NET today.

The .NET Framework arrives in SQL Server with Yukon's major version upgrade, originally scheduled for the end of 2004. Inside Yukon the .NET Framework runs within the server, so stored procedures can use the .NET Framework class libraries; Yukon runs on .NET Framework 2.0, which adds classes not present in the current .NET Framework 1.1. WinFS, in turn, uses the Yukon engine: in other words, the Longhorn file system will be built on the database engine.

Now the picture becomes clear: in the next-generation Windows operating system, the management of files and data will be handed over to SQL Server, and the data query and data integration capabilities of our computers will be greatly enhanced. This is a critical step in the "seamless computing" that Bill Gates keeps talking about. Tying the database software to the operating system is without doubt a gamble of the century for Microsoft: if it succeeds, Microsoft will gradually dominate the database market; if it fails, it could even upset the normal schedule for shipping the next generation of Windows.

Microsoft already provides tools that encrypt data travelling over the network between SQL Server and client applications. According to Microsoft product manager Kirsten Ward, however, the new SQL Server database planned for release next year will also encrypt the data stored in the database, strengthening its defenses against hacker attacks.

Earlier this year Microsoft postponed the release of "SQL Server 2005" to the first half of next year. The new database software will boost Microsoft's database computing power and let it compete better with Oracle and IBM. Microsoft will also introduce a unified storage concept that makes locating and retrieving data more convenient. Oracle has long held the leading position in the Windows and Unix database markets, but Microsoft has recently made remarkable progress by adding more advanced functions to SQL Server.

In addition, Microsoft will provide a piece of software called the "Best Practices Analyzer Tool." Database administrators can use it, together with Microsoft's published guides, to check and tune their database software.
This tool applies to the current version of Microsoft's database software, "SQL Server 2000," and gives database administrators operational guidance in a number of areas, for example how to improve performance and how to carry out more effective data backups.

Ward said the tool also includes an "Upgrade Advisor" program, which can scan database applications and warn "SQL Server 2000" users of the changes they need to make so that their applications remain compatible with the upcoming "SQL Server 2005."

(Source: China Computer Education)

中文译文：微软未来的“灵魂”——SQL Server 2005探密
作者：陈宝林

SQL Server的发展“简史”

在开始本文之前，先让我们来看一下微软SQL Server的发展“简史”。

数据库外文参考文献及翻译.

数据库管理系统——实施数据完整性

一个数据库，只有在用户对它充满信心的时候才是有用的。

这就是为什么服务器必须实施数据完整性规则和商业政策的原因。

让SQL Server在数据库本身中执行数据完整性，可以保证复杂的业务规则得到遵循，以及数据元素之间的强制性关系得到维护。

因为SQL Server的客户机/服务器体系结构允许你使用各种不同的前端应用程序去操纵和从服务器上呈现同样的数据,这把一切必要的完整性约束,安全权限,业务规则编码成每个应用,是非常繁琐的。

如果企业的所有政策都在前端应用程序中被编码,那么各种应用程序都将随着每一次业务的政策的改变而改变。

即使您试图把业务规则编码为每个客户端应用程序,其应用程序失常的危险性也将依然存在。

大多数应用程序都不能被完全信任，只有服务器才能充当最后的仲裁者，而且服务器不应为书写拙劣或恶意的程序留下破坏数据完整性的后门。

SQL Server使用了先进的数据完整性功能,如存储过程,声明引用完整性(DRI),数据类型,限制,规则,默认和触发器来执行数据的完整性。

所有这些功能在数据库里都有各自的用途;通过这些完整性功能的结合,可以实现您的数据库的灵活性和易于管理,而且还安全。
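下面给出一个简单的示意性T-SQL片段（其中的表名、列名均为假设，仅用于说明），演示上文列举的几种声明式完整性手段——主键、外键（DRI）、默认值和检查约束——如何结合使用：

    -- 示例：用声明式约束实施数据完整性（表名、列名为假设）
    CREATE TABLE customers
    (
        cust_id   int         NOT NULL PRIMARY KEY,        -- 实体完整性：主键约束
        cust_name varchar(50) NOT NULL
    );

    CREATE TABLE orders
    (
        order_id   int      NOT NULL PRIMARY KEY,
        cust_id    int      NOT NULL
                   REFERENCES customers(cust_id),          -- 引用完整性：外键（DRI）
        order_date datetime NOT NULL DEFAULT GETDATE(),    -- 默认值
        amount     money    NOT NULL CHECK (amount >= 0)   -- 域完整性：检查约束
    );

更复杂的业务规则还可以在此基础上配合规则、触发器和存储过程来实现。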

声明数据完整性

定义一个表时，指定构成主键的列。

这就是所谓的主键约束。

SQL Server使用主键约束来保证指定列中所有值的唯一性不被破坏。

通过确保这个表有一个主键来实现这个表的实体完整性。

有时,在一个表中一个以上的列(或列的组合)可以唯一标志一行,例如,雇员表可能有员工编号( emp_id )列和社会安全号码( soc_sec_num )列,两者的值都被认为是唯一的。

这种列经常被称为替代键或候选键。

这些项也必须是唯一的。

虽然一个表只能有一个主键,但是它可以有多个候选键。

SQL Server通过唯一性约束来支持多候选键的概念。
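以上文的雇员表为例，下面是一个示意性的T-SQL定义（列的数据类型为假设）：emp_id作为主键，soc_sec_num作为候选键，用唯一性约束保证其值不重复：

    -- 示例：主键约束与唯一性约束（候选键）
    CREATE TABLE employee
    (
        emp_id      int         NOT NULL PRIMARY KEY,  -- 主键
        soc_sec_num char(11)    NOT NULL UNIQUE,       -- 候选键：唯一性约束
        emp_name    varchar(50) NOT NULL
    );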

大学毕业设计仓库管理系统数据库计算机外文参考文献原文及翻译

河北工程大学毕业论文（设计）英文参考文献原文复印件及译文

数据仓库

数据仓库为商务运作提供结构与工具，以便系统地组织、理解和使用数据进行决策。

大量组织机构已经发现,在当今这个充满竞争、快速发展的世界,数据仓库是一个有价值的工具。

在过去的几年中,许多公司已花费数百万美元,建立企业范围的数据仓库。

许多人感到,随着工业竞争的加剧,数据仓库成了必备的最新营销武器——通过更多地了解客户需求而保住客户的途径。

“那么”,你可能会充满神秘地问,“到底什么是数据仓库?”数据仓库已被多种方式定义,使得很难严格地定义它。

宽松地讲,数据仓库是一个数据库,它与组织机构的操作数据库分别维护。

数据仓库系统允许将各种应用系统集成在一起,为统一的历史数据分析提供坚实的平台,对信息处理提供支持。

按照W. H. Inmon,一位数据仓库系统构造方面的领头建筑师的说法,“数据仓库是一个面向主题的、集成的、时变的、非易失的数据集合,支持管理决策制定”。

这个简短、全面的定义指出了数据仓库的主要特征。

四个关键词——面向主题的、集成的、时变的、非易失的——将数据仓库与其他数据存储系统（如关系数据库系统、事务处理系统和文件系统）区分开来。

数据仓库围绕主题组织数据，集成多个异种数据源，从历史的角度提供信息，并且作为与操作数据相分离的持久存储而存在。

由于这种分离,数据仓库不需要事务处理、恢复和并行控制机制。

通常,它只需要两种数据访问:数据的初始化装入和数据访问。

概言之,数据仓库是一种语义上一致的数据存储,它充当决策支持数据模型的物理实现,并存放企业决策所需信息。

数据仓库也常常被看作一种体系结构,通过将异种数据源中的数据集成在一起而构造,支持结构化和启发式查询、分析报告和决策制定。

“好”,你现在问,“那么,什么是建立数据仓库?”根据上面的讨论,我们把建立数据仓库看作构造和使用数据仓库的过程。

数据仓库的构造需要数据集成、数据清理、和数据统一。

利用数据仓库常常需要一些决策支持技术。

这使得“知识工人”(例如,经理、分析人员和主管)能够使用数据仓库,快捷、方便地得到数据的总体视图,根据数据仓库中的信息做出准确的决策。
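作为示意，下面是一个典型的决策支持查询（其中的事实表sales_fact及其各列均为假设）：按地区和季度汇总销售额，帮助决策者快速得到数据的总体视图：

    -- 示例：对假设的销售事实表做汇总分析
    SELECT   region,
             quarter,
             SUM(sales_amount) AS total_sales,
             COUNT(*)          AS order_count
    FROM     sales_fact
    GROUP BY region, quarter
    ORDER BY region, quarter;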

有些作者使用术语“建立数据仓库”表示构造数据仓库的过程,而用术语“仓库DBMS”表示管理和使用数据仓库。

SQL数据库英文文献及翻译

SQL数据库英文文献及翻译The fact that you are reading a book on SQL indicates that you, somehow, need to interact with databases. SQL is a language used to do just this, so before looking at SQL itself, it is important that you understand some basic concepts about databases and database technologies.Whether you are aware of it or not, you use databases all the time. Each time you select a name from your email address book, you are using a database. If you conduct a search on an Internet search site, you are using a database. When you log into your network at work, you are validating your name and password against a database. Even when you use your ATM card at a cash machine, you are using databases for PIN number verification and balance checking.But even though we all use databases all the time, there remains much confusion over what exactly a database is. This is especially true because different people use the same database terms to mean different things. Therefore, a good place to start our study is with a list and explanation of the most important database terms.Reviewing Basic Concepts What follows is a very brief overview of some basic database concepts. It is intended to either jolt your memory if you already have some database experience, or to provide you with the absolute basics, if you are new to databases. Understanding databases is an important part of mastering SQL, and you might want to find a good book on database fundamentals to brush up on the subject if needed.What Is a Database?The term database is used in many organized fashion. The simplest way to think of it is to imagine a database as a filing cabinet. The filing cabinet is simply a physical location to store data, regardless of what that data is or how it is organizedDatabase A container (usually a file or set of files) to store organized data.Misuse Causes ConfusionPeople often use the term database to refer to the database software they are running. This is incorrect, and it is a source of much confusion. Database software is actually called the Database Management System (or DBMS). The database is the container created and manipulated via the DBMS.A database might be a file stored on a hard drive, but it might not. And for the most part this is not even significant as you never access a database directly anyway; you always use the DBMS and it accesses the database for you.TablesWhen you store information in your filing cabinet you don't just toss it in a drawer. Rather, you create files within the filing cabinet, and then you file related data in specific files.In the database world, that file is called a table. A table is a structuredfile that can store data of a specific type. A table might contain a list of customers, a product catalog, or any other list of information. Table A structured list of data of a specific type.The key here is that the data stored in the table is one type of data or one list. You would never store a list of customers and a list of orders in the same database table. Doing so would make subsequent retrieval and access difficult. Rather, you'd create two tables, one for each list. Every table in a database has a name that identifies it. That name is always unique—meaning no other table in that database can have the same name. Table Names What makes a table name unique is actually a combination of several things including the database name and table name. Some databases also use the name of the database owner as part of the unique name. 
This means that while you cannot use the same table name twice in the same database, you definitely can reuse table names in different databases. Tables have characteristics and properties that define how data is stored in them. These include information about what data may be stored, how it is broken up, how individual pieces of information are named, and much more. This set of information that describes a table is known as a schema, and schema are used to describe specific tables within a database, as well as entire databases (and the relationship between tables in them, if any). Schema Information about database and table layout and properties. Columns and DatatypesTables are made up of columns. A column contains a particular piece of information within a table.Column A single field in a table. All tables are made up of one or more columns.The best way to understand this is to envision database tables as grids, somewhat like spreadsheets. Each column in the grid contains a particular piece of information. In a customer table, for example, one column contains the customer number, another contains the customer name, and the address, city, state, and zip are all stored in their own columns Breaking Up DataIt is extremely important to break data into multiple columns correctly. For example, city, state, and zip should always be separate columns. By breaking these out, it becomes possible to sort or filter data by specific columns (for example, to find all customers in a particular state or in a particular city). If city and state are combined into one column, it would be extremely difficult to sort or filter by state.1657Each column in a database has an associated datatype. A datatype defines what type of data the column can contain. For example, if the column is to contain a number (perhaps the number of items in an order), the datatype would be a numeric datatype. If the column were to contain dates, text, notes, currency amounts, and so on, the appropriate datatype would be used to specify this.DatatypeA type of allowed data. Every table column has an associated datatype that restricts (or allows) specific data in that column.Datatypes restrict the type of data that can be stored in a column (for example, preventing the entry of alphabetical characters into a numeric field). Datatypes also help sort data correctly, and play an important role in optimizing disk usage. As such, special attention must be given to picking the right datatype when tables are created.Datatype CompatibilityDatatypes and their names are one of the primary sources of SQL incompatibility. While most basic datatypes are supported consistently, many more advanced datatypes are not. And worse, occasionally you'll find that the same datatype is referred to by Data in a table is stored in rows; each record saved is stored in its own row. Again, envisioning a table as a spreadsheet style grid, the vertical columns in the grid are the table columns, and the horizontal rows are the table rows.For example, a customers table might store one customer per row. The number of rows in the table is the number of records in it.Row A record in a table.Records or Rows?You may hear users refer to database records when referring to rows. For the most part, the two terms are used interchangeably, but row is technically the correct term.Primary KeysEvery row in a table should have some column (or set of columns) that uniquely identifies it. 
A table containing customers might use a customer number column for this purpose, whereas a table containing orders might use the order ID. An employee list table might use an employee ID or the employee social security number column.This column (or set of columns) that uniquely identifies each row in a table is called a primary key. The primary key is used to refer to a specific row. Without a primary key, updating or deleting specific rows in a table becomes extremely difficult as there is no guaranteed safe way to refer to just the rows to be affected.Always Define Primary Keys Although primary keys are not actually required, most database designers ensure that every table they create has a primary key so that future data manipulation is possible and manageable.Any column in a table can be established as the primary key, as long as it meets the following conditions:• No two rows can have the same primary key value.• Every row must have a primary key value (primary key columns may not allow NULL values).• Values in primary key columns can never be modified or updated.• Primary key values can never be reused. (If a row is deleted from the table, its primary key may not be assigned to any new rows in the future.) What Is SQL?SQL (pronounced as the letters S-Q-L or as sequel) is an abbreviation for Structured Query Language. SQL is a language designed specifically for communicating with databases.Unlike other languages (spoken languages like English, or programming languages like Java or Visual Basic), SQL is made up of very few words. This is deliberate. SQL is designed to do one thing and do it well—provide you with a simple and efficient way to read and write data from a database. Primary keys are usually defined on a single column within a table. But this is not required, and multiple columns may be used together as a primary key. When multiple columns are used, the rules listed above must apply to all columns that make up the primary key, and the values of all columns together must be unique (individual columns need not have unique values).。
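As a small, hypothetical illustration of the ideas above (the table and column names are invented for the example, not taken from the text), the following SQL creates a customers table whose columns each hold one piece of information with an appropriate datatype, designates the customer number as the primary key, and then retrieves a single row by that key:

    -- A hypothetical customers table: one column per piece of information,
    -- with city/state/zip kept separate so each can be sorted and filtered on.
    CREATE TABLE Customers
    (
        cust_id    int          NOT NULL PRIMARY KEY,  -- primary key: unique, never NULL
        cust_name  varchar(50)  NOT NULL,
        cust_city  varchar(30)  NULL,
        cust_state char(2)      NULL,
        cust_zip   char(10)     NULL
    );

    -- Because cust_id uniquely identifies each row, this returns at most one customer.
    SELECT cust_id, cust_name, cust_city
    FROM   Customers
    WHERE  cust_id = 10001;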

mysql数据库英文文献

mysql数据库英文文献及翻译MySQL architecture is best understood in the context of its history. Thus, the two are discussed in the same chapter.MySQL HistoryMySQL history goes back to 1979 when Monty Widenius, working for a small companycalled TcX, created a reporting tool written in BASIC that ran on a 4 Mhzcomputer with 16 KB RAM. Over time, the tool was rewritten in C and ported to run on Unix. It was still just a low-level storage engine with a reporting front end. The tool was known by the name of Unireg.Working under the adverse conditions of little computational resources, and perhaps building on his God-given talent,Monty developed a habit and ability to write very efficient code naturally. He also developed, or perhaps was gifted from the start,with an unusually acute vision of what needed to be done to the code to make it useful in future development—without knowing in advance much detail about what that future development would be.In addition to the above, with TcX being a very small company and Monty being one of the owners, he had a lot of say in what happened to his code. While there are perhaps a good number of programmers out there with Monty’s talent and ability, for a number of reasons, few get to carry their code around for more than 20 years. Monty did.Monty’s work, talents, and ownership of the code provided a foundation upon which the Miracle of MySQL could be built.Some time in the 1990s, TcX customers began to push for an SQL interface to their data. Several possibilities were considered. One was to load it into a commercial database.Monty was not satisfied with the speed. He tried borrowing mSQL code for the SQL part and integrating it with his low-level storage engine. That did not work well,either. Then came the classic move of a talented,driven programmer: “I’ve had enough of those tools that somebody else wrote that don’t work! I’m writing my own!”Thus in May of 1996 MySQL version 1.0 was released to a limited group, followed by a public release in October 1996 of version 3.11.1. The initial public release provided only a binary distribution for Solaris. A month later, the source and the Linux binary were released.In the next two years, MySQL was ported to a number of other operating systems as the feature set gradually increased. MySQL was originally released under a special license that allowed commercial use to those who were not redistributing it with their software. Special licenses were available for sale to those who wanted to bundle it with their product. Additionally, commercial support was also being sold. This provided TcX with some revenue to justify the further development of MySQL,although the purpose of its original creation had already been fulfilled.During this period MySQL progressed to version 3.22. It supported a decent subset of the SQL language, had an optimizer a lot more sophisticated than one would expect could possibly be written by one person, was extremely fast, and was very stable.Numerous APIs were contributed, so one could write a client in pretty much any existing programming language. However, it still lacked support for transactions,subqueries, foreign keys, stored procedures, and views. The locking happened only at a table level, which in some cases could slow it down to a grinding halt. 
Some programmers unable to get around its limitations still considered it a toy, while others were more than happy to dump their Oracle or SQL Server in favor of MySQL, and deal with the limitations in their code in exchange for improvement in performance and licensing cost savings.

Around 1999–2000 a separate company named MySQL AB was established. It hired several developers and established a partnership with Sleepycat to provide an SQL interface for the Berkeley DB data files. Since Berkeley DB had transaction capabilities, this would give MySQL support for transactions, which it previously lacked. After some changes in the code in preparation for integrating Berkeley DB, version 3.23 was released. Although the MySQL developers could never work out all the quirks of the Berkeley DB interface and the Berkeley DB tables were never stable, the effort was not wasted. As a result, MySQL source became equipped with hooks to add any type of storage engine, including a transactional one.

By April of 2000, the old ISAM storage engine had been reworked and released as MyISAM. Among a number of improvements, full-text search capabilities were now supported. A short-lived partnership with NuSphere to add Gemini, a transactional engine with row-level locking, ended in a lawsuit toward the end of 2001. However, around the same time, Heikki Tuuri approached MySQL AB with a proposal to integrate his own storage engine, InnoDB, which was also capable of transactions and row-level locking. Heikki's contribution integrated much more smoothly with the new table handler interface already polished off by the Berkeley DB integration efforts. The MySQL/InnoDB combination became version 4.0, and was released as alpha in October of 2001. By early 2002 the MySQL/InnoDB combo was stable and instantly took MySQL to another level. Version 4.0 was finally declared production stable in March 2003.

It might be worthy of mention that the version number change was not caused by the addition of InnoDB. MySQL developers have always viewed InnoDB as an important addition, but by no means something that they completely depend on for success. Back then, and even now, the addition of a new storage engine is not likely to be celebrated with a version number change. In fact, compared to previous versions, not much was added in version 4.0. Perhaps the most significant addition was the query cache, which greatly improved performance of a large number of applications. Replication code on the slave was rewritten to use two threads: one for network I/O from the master, and the other to process the updates. Some improvements were added to the optimizer. The client/server protocol became SSL-capable.

Version 4.1 was released as alpha in April of 2003, and was declared beta in June of 2004. Unlike version 4.0, it added a number of significant improvements. Perhaps the most significant was subqueries, a feature long-awaited by many users. Spatial indexing support was added to the MyISAM storage engine. Unicode support was implemented. The client/server protocol saw a number of changes. It was made more secure against attacks, and supported prepared statements. In parallel with the alpha version of 4.1, work progressed on yet another development branch: version 5.0, which would add stored procedures, server-side cursors, triggers, views, XA transactions, significant improvements in the query optimizer, and a number of other features.
The decision to create a separate development branch was made because MySQL developers felt that it would take a long time to stabilize 4.1 if, on top of all the new features that they were adding to it, they had to deal with the stored procedures. Version 5.0 was finally released as alpha in December 2003. For a while this created quite a bit of confusion—there were two branches in the alpha stage. Eventually 4.1 stabilized (October 2004), and the confusion was resolved.Version 5.0 stabilized a year later, in October of 2005.The first alpha release of 5.1 followed in November 2005, which added a number of improvements, some of which are table data partitioning, row-based replication,event scheduler, and a standardized plug-in API that facilitates the integration of new storage engines and other plug-ins.At this point, MySQL is being actively developed. 5.0 is currently the stable version,while 5.1 is in beta and should soon become stable. New features at this point go into version 5.2.MySQL ArchitectureFor the large part, MySQL architecture defies a formal definition or specification.When most of the code was originally written, it was not done to be a part of some great system in the future, but rather to solve some very specific problems. However,it was written so well and with enough insight that it reached the point where there were enough quality pieces to assemble a database server.Core ModulesI make an attempt in this section to identify the core modules in the system. However,let me add a disclaimer that this is only an attempt to formalize what exists.MySQL developers rarely think in those terms. Rather, they tend to think of files,directories, classes, structures, and functions. It is much more common to hear “This happens in mi_open( )” than to hear “This happens on the MyISAM storage engine level.” MySQL developers know the code so well that they are able to think conceptually on the level of functions, structures, and classes. They will probably find the abstractions in this section rather useless. However, it would be helpful to a person used to thinking in terms of modules and managers.With regard to MySQL, I use the term “module” rather loosely. Unlike what one would typically call a module, in many cases it is not something you can easily pull out and replace with another implementation. The code from one module might be spread across several files, and you often find the code from several different modules in the same file. This is particularly true of the older code. The newer code tends to fit into the pattern of modules better. So in our definition, a module is a piece of code that logically belongs together in some way, and performs a certain critical function in User Authentication Module• Access Control Module• Parser• Command Dispatcher• Query Cache Module• Optimizer• Table Manager• Table Modification Modul es• Table Maintenance Module• Status Reporting Module• Abstracted Storage Engine Interface (Table Handler)• Storage Engine Implementations (MyISAM, InnoDB, MEMORY, Berkeley DB)• Logging Module• Replication Master Module• Replication Slave Module• C lient/Server Protocol API• Low-Level Network I/O API• Core APIInteraction of the Core ModulesWhen the server is started on the command line, the Initialization Module takes control.It parses the configuration file and the command-line arguments, allocates global memory buffers, initializes global variables and structures, loads the access control tables, and performs a number of other initialization tasks. 
Once the initialization job is complete, the Initialization Module passes control to the Connection Manager, which starts listening for connections from clients in a loop.mysql数据库英文文献及翻译When a client connects to the database server, the Connection Manager performs a number of low-level network protocol tasks and then passes control to the Thread Manager, which in turn supplies a thread tohandle the connection (which from now on will be referred to as the Connection Thread). The Connection Thread might be created anew, or retrieved from the thread cache and called to active duty. Once the Connection Thread receives control, it first invokes the User Authentication Module.The credentials of the connecting user are verified, and the client may now issue requests.The Connection Thread passes the request data to the Command Dispatcher. Some requests, known in the MySQL code terminology as commands, can be accommodated by the Command Dispatcher directly, while more complex ones need to be redirected to another module. A typical command may request the server to run a query, change the active database, report the status, send a continuous dump of the replication updates, close the connection, or perform some other operation.In MySQL server terminology, there are two types of client requests: a query and a command. A query is anything that has to go through the parser. A command is a request that can be executed without the need to invoke the parser. We will use the term query in the context of MySQL internals. Thus, not only a SELECT but also a DELETE or INSERT in our terminology would be called a query. What we would call a query is sometimes called an SQL statement.If full query logging is enabled, the Command Dispatcher will ask the Logging Module to log the query or the command to the plain-text log prior to the dispatch. Thus in the full logging configuration all queries will be logged, even the ones that are not syntactically correct and will never be executed, immediately returning an error.The Command Dispatcher forwards queries to the Parser through the Query Cache Module. The Query Cache Module checks whether the query is of the type that can be cached, and if there exists a previously computed cached result that is still valid.In the case of a hit, the execution is short-circuited at this point, the cached result is returned to the user, and the Connection Thread receives control and is now ready to process another command. If the Query Cache Module reports a miss, the query goes to the Parser, which will make a decision on how to transfer control based on the query type.One can identify the following modules that could continue from that point: the Optimizer, the Table Modification Module, the Table Maintenance Module, the Replication Module, and the Status Reporting Module. Select queries are forwarded to the Optimizer; updates, inserts, deletes, and table-creation and schema-altering queries go to the respective Table Modification Modules; queries that check, repair, update key statistics, or defragment the table go to the Table Maintenance module;queries related to replication go to the Replication Module; and status requests go to the Status Reporting Module. 
There also exist a number of Table Modification Modules: Delete Module, Create Module, Update Module, Insert Module, and Alter Module.At this point, each of the modules that will receive control from the Parser passes the list of tables involved in the query to the Access Control Module and then, upon success,to the Table Manager, which opens the tables and acquires the necessary locks.Now the table operation module is ready to proceed with its specific task and will issue a number of requests to the Abstracted Storage Engine Module for low-level operations such as inserting or updating a record, retrieving the records based on a key value, or performing an operation on the table level, such as repairing it or updating the index statistics.The Abstracted Storage Engine Module will automatically translate the calls to the corresponding methods of the specific Storage Engine Module via object polymorphism.In other words, when dealing with a Storage Engine object, the caller thinks it is the caller does not need to be aware of the exact object type of the Storage Engine object.As the query or command is being processed, the corresponding module may send parts of the result set to the client as they become available. It may also send warnings or an error message. If an error message is issued, both the client and the server will understand that the query or command has failed and take the appropriate measures.The client will not accept any more result set, warning, or error message data for the given query, while the server will always transfer control to the Connection Thread after issuing an error. Note that since MySQL does not use exceptions for reasons of implementation stability and portability, all calls on all levels must be checked for errors with the appropriate transfer of control in the case of failure.If the low-level module has made a modification to the data in some way and if the binary update logging is enabled, the module will be responsible for asking the Logging Module to log the update event to the binary update log, sometimes known as the replication log, or, among MySQL developers and power users, the binlog. Once the task is completed, the execution flow returns to the Connection Thread,which performs the necessary clean-up and waits for another query or command from the client. The session continues until the client issues the Quit command.In addition to interacting with regular clients, a server may receive a command from a replication slave to continuously read its binary update log. This command will be handled by the Replication Master Module.If the server is configured as a replication slave, the Initialization Module will call the Replication Slave Module, which in turn will start two threads, called the SQL Thread and the I/O thread. They take care of propagating updates that happened on the master to the slave. It is possible for the same server to be configured as both a master and a slave.mysql数据库英文文献及翻译Network communication with a client goes through the Client/Server Protocol Module,which is responsible for packaging the data in the proper format, and depending on the connection settings, compressing it. The Client/Server Protocol Module in turn uses the Low-Level Network I/O module, which is responsible for sending and receiving the data on the socket level in a cross-platform portable way. 
It is also responsible for encrypting the data using the OpenSSL library calls if the connection options are set appropriately.As they perform their respective tasks, the core components of the server heavily rely on the Core API. The Core API provides a rich functionality set, which includes file I/O, memory management, string manipulation, implementations of various data structures and algorithms, and many other useful capabilities. MySQL developers are encouraged to avoid direct libc calls, and use the Core API to facilitate ports to new platforms and code optimization in the future.Writer:Sasba pacbev译文:深入理解MySQL核心技术姓名:苗月明学号:0651135MySQL的历史与架构MySQL的架构的最好的理解是从他的历史背景中去发现。


中英文对照翻译　附件1：外文翻译译文

数据库管理

数据库（有时拼成Database）也称为电子数据库，是指由计算机特别组织的、用于快速查找和检索的任意的数据或信息集合。

数据库与其它数据处理操作协同工作,其结构要有助于数据的存储、检索、修改和删除。

数据库可存储在磁盘或磁带、光盘或某些辅助存储设备上。

一个数据库由一个文件或文件集合组成。

这些文件中的信息可分解成一个个记录,每个记录有一个或多个域。

域是数据库存储的基本单位,每个域一般含有由数据库描述的属于实体的一个方面或一个特性的信息。

用户使用关键字和各种排序命令，能够快速查找、重排、分组并在查找的许多记录中选择相应的域，建立特定集上的报表。

数据库记录和文件的组织必须确保能对信息进行检索。

早期的系统是顺序组织的(如:字母顺序、数字顺序或时间顺序);直接访问存储设备的研制成功使得通过索引随机访问数据成为可能。

用户检索数据库信息的主要方法是query(查询)。

通常情况下,用户提供一个字符串,计算机在数据库中寻找相应的字符序列,并且给出字符串在何处出现。
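下面用一个简单的SQL查询示意这种按字符串查找的过程（表名、列名均为假设）：

    -- 示例：在假设的 documents 表中查找正文包含给定字符串的记录
    SELECT doc_id, title
    FROM   documents
    WHERE  body LIKE '%数据库%';

当然，许多数据库对检索和更新还有更高的要求。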

比如,用户必须能在任意给定时间快速处理内部数据。

而且,大型企业和其它组织倾向于建立许多独立的文件,其中包含相互关联的甚至重叠的数据,这些数据、处理活动经常需要和其它文件的数据相连。

为满足这些要求，开发出各种不同类型的数据库管理系统，如：非结构化的数据库、层次型数据库、网络型数据库、关系型数据库、面向对象型数据库。

在非结构化的数据库中,按照实体的一个简单列表组织记录;很多个人计算机的简易数据库是非结构的。

层次型数据库按树型组织记录,每一层的记录分解成更小的属性集。

层次型数据库在不同层的记录集之间提供一个单一链接。

与此不同,网络型数据库在不同记录集之间提供多个链接,这是通过设置指向其它记录集的链或指针来实现的。

网络型数据库的速度及多样性使其在企业中得到广泛应用。

当文件或记录间的关系不能用链表达时,使用关系型数据库。

一个表或一个“关系”,就是一个简单的非结构列表。

多个关系可通过数学关系提供所需信息。

面向对象的数据库存储并处理更复杂的称为对象的数据结构,可组织成有层次的类,其中的每个类可以继承层次链中更高一级类的特性,这种数据库结构最灵活,最具适应性。

很多数据库包含自然语言文本信息,可由个人在家中使用。

小型及稍大的数据库在商业领域中占有越来越重要的地位。

典型的商业应用包括航班预订、产品管理、医院的医疗记录以及保险公司的合法记录。

最大型的数据库通常用于政府部门、企业、大专院校等。

这些数据库存有诸如摘要、报表、成文的法规、通讯录、报纸、杂志、百科全书、各式目录等资料。

索引数据库包含参考书目或用于找到相关书籍、期刊及其它参考文献的索引。

目前有上万种可公开访问的数据库,内容包罗万象,从法律、医学、工程到新闻、时事、游戏、分类广告、指南等。

科学家、医生、律师、财经分析师、股票经纪人等专家和各类研究者越来越多地依赖这些数据库从大量的信息中做快速的查找访问。

数据库管理系统的组织技术顺序的、直接的以及其他的文件处理方式常用于单个文件中数据的组织和构造,而DBMS可综合几个文件的数据项以回答用户对信息的查询,这就意味着DBMS能够访问和检索非关键记录字段的数据,即DBMS能够将几个大文件夹中逻辑相关的数据组织并连接在一起。

逻辑结构。

确定这些逻辑关系是数据管理者的任务,由数据定义语言完成。

DBMS 在存储、访问和检索操作过程中可选用以下逻辑构造技术:链表结构。

在该逻辑方式中,记录通过指针链接在一起。

指针是记录本中的一相数据项,它指出另一个逻辑相关的记录的存储位置,例如,顾客主文件中的记录将包含每个顾客的姓名和地址,而且该文件中的每个记录都由一个账号标识。

在记账期间,顾客可在不同时间购买许多东西。

公司保存一个发票文件以反映这下地交易,这种情况下可使用链表结构,以显示给定时间内未支付的发票。

顾客文件中的每个记录都包含这样一个字段,该字段指向发票文件中该顾客的第一个发票的记录位置,该发票记录又依次与该顾客的下一个发票记录相连,此链接的最后一个发票记录由一个作为指针的特殊字符标识。
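下面用一个示意性的SQL定义（表名、列名为假设）来说明这种链表结构：顾客记录中保存指向其第一张发票的指针字段，每张发票再指向同一顾客的下一张发票：

    -- 示例：用指针字段模拟链表结构（指针即另一条记录的标识）
    CREATE TABLE customer
    (
        account_no    int          NOT NULL PRIMARY KEY,  -- 账号
        cust_name     varchar(50)  NOT NULL,
        first_invoice int          NULL                   -- 指向该顾客的第一张发票
    );

    CREATE TABLE invoice
    (
        invoice_no   int           NOT NULL PRIMARY KEY,
        account_no   int           NOT NULL,              -- 所属顾客
        amount       decimal(10,2) NOT NULL,
        next_invoice int           NULL                   -- 指向同一顾客的下一张发票，空值表示链尾
    );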

层次(树型)结构。

该逻辑方式中,数据单元的多级结构类似一棵“倒立”的树,该树的树根在顶部,而树枝向下延伸。

在层次(树型)结构中存在主-从关系,惟一的根数据下是从属的元或节点,而每个元或树枝都只有一个所有者,这样,一个customer (顾客)拥有一个invoice(发票),而invoice(发票)又有从属项。

在树型结构中,树枝不能相连。

网状结构。

网状结构不像树型结构那样不允许树枝相连，它允许节点间多个方向连接，这样，每个节点都可能有几个所有者，而它自己又可能拥有任意多个其他数据单元。

数据管理软件允许从文件的任一记录开始提取该结构中的所需信息。

关系型结构。

关系型结构由许多表格组成,数据则以“关系”的形式存储在这些表中。

例如,可建立一些关系表,将大学课程同任课教师及上课地点连接起来。

为了找到英语课的上课地点和教师名,首先查询课程/教师关系表得到名字(为“Fitt”),再查询课程/地点关系表得到地点(“Main 142”),当然,也可能有其他关系。

这是一个相当新颖的数据库组织技术,将来有望得到广泛应用。
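下面的示意性SQL片段（表名、列名为假设）对应上文的例子：先建立课程/教师和课程/地点两个关系表，再用一个连接查询同时得到英语课的任课教师和上课地点：

    -- 示例：关系型结构中的两个关系表及其连接查询
    CREATE TABLE course_teacher
    (
        course  varchar(30) NOT NULL PRIMARY KEY,
        teacher varchar(30) NOT NULL
    );

    CREATE TABLE course_location
    (
        course   varchar(30) NOT NULL PRIMARY KEY,
        location varchar(30) NOT NULL
    );

    INSERT INTO course_teacher  VALUES ('English', 'Fitt');
    INSERT INTO course_location VALUES ('English', 'Main 142');

    -- 查询英语课的任课教师和上课地点
    SELECT t.teacher, l.location
    FROM   course_teacher t
    JOIN   course_location l ON l.course = t.course
    WHERE  t.course = 'English';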

物理结构。

人们总是为了各自的目的,按逻辑方式设想或组织数据。

因此,在一个具体应用中,记录R1和R2是逻辑相连且顺序处理的,但是,在计算机系统中,这些在一个应用中逻辑相邻的记录,物理位置完全可能不在一起。

记录在介质和硬件中的物理结构不仅取决于所采用的I/O设备、存储设备及输入输出和存取技术,而且还取决于用户定义的R1和R2中数据的逻辑关系。

例如,R1和R2可能是持有信用卡的顾客记录,而顾客要求每两周将货物运送到同一个城市的同一个街区,而从运输部门的管理者看,R1和R2是按地理位置组织的运输记录的顺序项,但是在A/R应用中,可找到R1长表示的顾客,并且可根据其完全不同的账号处理他们的账目。

简言之,在许多计算机化的信息记录中,存储记录的物理位置用户是看不见的。

Oracle的数据库管理功能Oracle 包括许多使数据库易于管理的功能,分三部分讨论:Oracle 企业管理器、附加包、备份和恢复。

1.Oracle企业管理器　和任何数据库服务器一样，Oracle数据库服务器也带有自己的管理工具：Oracle企业管理器（EM），这是一个带有图形界面的数据库管理工具框架，用于管理数据库用户、实例，并为Oracle环境提供附加功能（如复制）。

在Oracle8i数据库之前,EM 软件必须安装在Windows95/98或者基于NT 的系统中,而且每个库每次只能由一个数据库管理者访问。

如今你可以通过浏览器或者把EM 装入Window95/98/2000 或基于NT 的系统中来使用EM。

多个数据库管理员可以同时访问EM 库。

在Oracle9i的EM版中,超级管理员可以定义在普通管理员的控制台上显示的服务,并能建立管理区域。

2.附加包正如下面所描述的那样,Oracle可使用一些可选的附加包,还有用于Oracle应用程序和SAP R/3的管理包。

(1)标准管理包Oracle的标准管理包提供了用于小型Oracle数据库的管理工具(如:Oracle 服务器/标准版)。

功能包括:对数据库争用、输入/输出、装载、内存使用和实例、对话分析、索引调整进行监控,并改变调查和跟踪。

(2)诊断包利用诊断包,可以监控、诊断及维护企业版数据库、操作系统和应用程序的安全。

用有关历史和实时的分析,可自动的在问题发生前将其消除。

诊断包还提供空间管理功能,有助于对未来系统资源需要的计划和跟踪。

(3)调整包利用调整包,可确定并调整企业版数据库和应用系统的瓶颈,如效率低的SQL、很差的数据设计、系统资源的不当使用,从而优化系统性能。

调整包能提前发现调整时机,并自动生成分析和需求变化来调整系统。
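举一个简单的例子（表名、列名为假设）说明这类调整中最常见的一种情形：查询按没有索引的列过滤，导致全表扫描，补建索引后即可明显改善：

    -- 低效的SQL：orders 表的 cust_id 列上没有索引，查询只能做全表扫描
    SELECT order_id, amount
    FROM   orders
    WHERE  cust_id = 12345;

    -- 典型的调整手段：为过滤列建立索引
    CREATE INDEX idx_orders_cust_id ON orders (cust_id);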

(4)变化管理包变化管理包在升级企业版数据库时帮助排错和避免丢失数据,以达到支持新的应用程序的目的。

该包能分析与应用程序变动有关的影响和复杂依赖关系并自动升级数据库。

用户可使用一种简单的向导按必要的步骤来升级。

(5)可用性Oracle 企业管理器可用管理Oracle标准版或企业版。

在标准版中,用于诊断、调整和改变实例的附加功能由标准管理包提供。

对于企业版,这些附加的功能由单独的诊断包、调整包和变化管理包提供。

3.备份和恢复正如每个数据库管理者所熟知的,对数据库做备份是一件很普通但又必要的工作。

一次不当的备份会使数据库难于恢复甚至不可恢复。

不幸的是,人们往往在相关系统发生故障而丢失了重要的业务数据后才认识到这项日常工作的重要。

下面介绍一些实现数据库备份操作的产品技术。

(1)恢复管理者典型的备份包括完整的数据库备份(最普通的类型)、桌面空间备份、数据文件备份、控件备份和存档注册备份。

Oracle8i为数据服务器管理备份和恢复管理器(RMAN)。

以前,Oracle的企业备份工具(EBU)在一些平台上提供了相似的解决方案。

然而,RMAN 及其存储在Oracle数据库中的恢复目录提供了更完整的解决方案。

RMAN可以自动定位、备份、存储并恢复数据文件、控制文件和存档记录注册。

当备份到期时,Oracle9i的RMAN可以重新启动备份和恢复来实现恢复窗口的任务。

Oracle企业管理器的备份管理器曾RMAN提供基于图形用户界面的接口。

(2)附加备份和恢复RMAN能够执行企业版数据库的附加备份。

附加备份仅备份上一次备份后改变了的数据文件、桌面空间或数据库块,因此,它比完整的备份占用时间短而且速度快。

RMAN也能执行时间点（point-in-time）恢复，这种恢复能将数据恢复到一个不期望的事件（如错误地删除表格）发生之前的状态。

(3)连续存储管理器许多媒体软件商支持RMAN。

Oracle捆绑了连续存储管理器来提供媒体管理服务,包括为至多四台设备提供磁带容量跟踪的服务。

RMAN界面自动地与媒体管理软件一起来管理备份和恢复操作必须的磁带设备。

(4)可用性尽管标准版和企业版的Oracle都有基本的恢复机制,但附加备份仅限于企业版。

Oracle 和 SQL Server 的比较选择

我不得不决定是使用Oracle数据库及其数据库开发系统，还是选择配有Visual Studio的Microsoft SQL Server。

这个决策将决定我们今后Web项目的方向。

这两种组合各有什么优势和劣势呢?Lori: 决定选择哪种方案将取决于你目前的工作平台。

例如,如果你想实现一种基于Web的数据库应用,而且你的工作平台只是Windows,那么SQL Sever和Visual Studio 组件就是一个不错的选择。

但是对于混合平台,则最好选择Oracle解决方案。

还要考虑一些其他的因素,例如你可以获得哪些额外的功能以及需要哪些技术。
