Foreign Literature Translations for Database Papers (English-Chinese Parallel Texts)


Foreign Literature Translation on Databases (English-Chinese)


Database Management Systems

A database (sometimes spelled "data base"), also called an electronic database, is any collection of data or information that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.

A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Using keywords and various sorting commands, users can rapidly search, rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregates of data.

Complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers.

A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users.

Several major trends are emerging that enhance the value and usefulness of database management systems:
Managers: who require more up-to-date information to make effective decisions.
Customers: who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.
Users: who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.
Organizations: that discover information has a strategic value; they utilize their database systems to gain an edge over their competitors.

The Database Model

A data model describes a way to structure and manipulate the data in a database.
The structural part of the model specifies how data should be represented (such as trees, tables, and so on). The manipulative part of the model specifies the operations with which to add, delete, display, maintain, print, search, select, sort, and update the data.

Hierarchical Model

The first database management systems used a hierarchical model; that is, they arranged records into a tree structure. Some records are root records and all others have unique parent records. The structure of the tree is designed to reflect the order in which the data will be used: the record at the root of a tree will be accessed first, then records one level below the root, and so on.

The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you know, an organization chart often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels. Note that within a strict hierarchy, each level of management may have many employees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.

In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field, and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.

Relational Model

A major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management, called the relational model, which uses a table as its data structure.

The relational database is the most widely used database structure. Data is organized into related tables. Each table is made up of rows, called records, and columns, called fields. Each record contains fields of data about some specific item. For example, in a table containing information on employees, a record would contain fields of data such as a person's last name, first name, and street address.

Structured Query Language (SQL) is a query language for manipulating data in a relational database. It is nonprocedural, or declarative: the user need only supply an English-like description of the operation and the desired record or combination of records. A query optimizer translates the description into a procedure to perform the database manipulation.
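To make the declarative style concrete, here is a minimal SQL sketch; the employee table and its columns echo the example above but are otherwise hypothetical:

-- The statement says what is wanted, not how to fetch it;
-- the query optimizer chooses the access procedure.
SELECT last_name, first_name, street_address
FROM employee
WHERE last_name = 'Codd'
ORDER BY first_name;

Nothing in the statement names an index or a scan order; that procedural plan is produced by the optimizer.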
Network Model

The network model creates relationships among data through a linked-list structure in which subordinate records can be linked to more than one parent record. This approach combines records with links, which are called pointers. The pointers are addresses that indicate the location of a record. With the network approach, a subordinate record can be linked to a key record and at the same time itself be a key record linked to other sets of subordinate records. The network model historically has had a performance advantage over other database models. Today, such performance characteristics are important only in high-volume, high-speed transaction processing, such as automatic teller machine networks or airline reservation systems.

Both hierarchical and network databases are application specific. If a new application is developed, maintaining the consistency of databases in different applications can be very difficult. For example, suppose a new pension application is developed. The data are the same, but a new database must be created.

Object Model

The newest approach to database management uses an object model, in which records are represented by entities called objects that can both store data and provide methods or procedures to perform specific tasks. The query language used for the object model is the same object-oriented programming language used to develop the database application. This can create problems because there is no simple, uniform query language such as SQL. The object model is relatively new, and only a few examples of object-oriented databases exist. It has attracted attention because developers who choose an object-oriented programming language want a database based on an object-oriented model.

Distributed Database

Similarly, a distributed database is one in which different parts of the database reside on physically separated computers. One goal of distributed databases is access to information without regard to where the data might be stored. Keep in mind that once the users and their data are separated, communication and networking concepts come into play.

Distributed databases require software that resides partially in the larger computer. This software bridges the gap between personal and large computers and resolves the problems of incompatible data formats. Ideally, it would make the mainframe databases appear to be large libraries of information, with most of the processing accomplished on the personal computer.

A drawback to some distributed systems is that they are often based on what is called a mainframe-centric model, in which the larger host computer is seen as the master and the terminal or personal computer is seen as a slave. There are some advantages to this approach. With databases under centralized control, many of the problems of data integrity that we mentioned earlier are solved. But today's personal computers, departmental computers, and distributed processing require computers and their applications to communicate with each other on a more equal, or peer-to-peer, basis. In a database, the client/server model provides the framework for distributing databases.

One way to take advantage of many connected computers running database applications is to distribute the application into cooperating parts that are independent of one another. A client is an end user or computer program that requests resources across a network. A server is a computer running software that fulfills those requests across a network. When the resources are data in a database, the client/server model provides the framework for distributing databases.

A file server is software that provides access to files across a network. A dedicated file server is a single computer dedicated to being a file server. This is useful, for example, if the files are large and require fast access; in such cases, a minicomputer or mainframe would be used as a file server. A distributed file server spreads the files around on individual computers instead of placing them on one dedicated computer. Advantages of the latter include the ability to store and retrieve files on other computers and the elimination of duplicate files on each computer. A major disadvantage, however, is that individual read/write requests are moved across the network, and problems can arise when updating files. Suppose a user requests a record from a file and changes it while another user requests the same record and changes it too. The solution to this problem is called record locking, which means that the first request makes other requests wait until the first request is satisfied. Other users may be able to read the record, but they will not be able to change it.
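How a program asks for such a lock varies by product. As a sketch in Microsoft SQL Server syntax (the account table and the UPDLOCK hint are illustrative assumptions, not taken from the text):

BEGIN TRANSACTION;
-- Take an update lock: others can still read the row, but not change it.
SELECT balance FROM account WITH (UPDLOCK) WHERE account_id = 42;
UPDATE account SET balance = balance - 100 WHERE account_id = 42;
COMMIT TRANSACTION; -- releases the lock; waiting writers proceed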
A database server is software that services requests to a database across a network. For example, suppose a user types in a query for data on his or her personal computer. If the application is designed with the client/server model in mind, the query-language part on the personal computer simply sends the query across the network to the database server and requests to be notified when the data are found.

Examples of distributed database systems can be found in the engineering world. Sun's Network File System (NFS), for example, is used in computer-aided engineering applications to distribute data among the hard disks in a network of Sun workstations.

Distributing databases is an evolutionary step because it is logical that data should exist at the location where they are being used. Departmental computers within a large corporation, for example, should have data reside locally, yet those data should be accessible by authorized corporate management when they want to consolidate departmental data. DBMS software will protect the security and integrity of the database, and the distributed database will appear to its users as no different from the non-distributed database.

In this information age, the data server has become the heart of a company. This one piece of software controls the rhythm of most organizations and is used to pump the information lifeblood through the arteries of the network. Because of the critical nature of this application, the data server is also one of the most popular targets for hackers. If a hacker owns this application, he can cause the company's "heart" to suffer a fatal arrest.

Ironically, although most users are now aware of hackers, they still do not realize how susceptible their database servers are to hack attacks. Thus, this article presents a description of the primary methods of attacking database servers (also known as SQL servers) and shows you how to protect yourself from these attacks.

You should note that this information is not new. Many technical white papers go into great detail about how to perform SQL attacks, and numerous vulnerabilities have been posted to security lists that describe exactly how certain database applications can be exploited. This article was written for the curious non-SQL experts who do not care to know the details, and as a review for those who do use SQL regularly.

What Is a SQL Server?

A database application is a program that provides clients with access to data. There are many variations of this type of application, ranging from the expensive enterprise-level Microsoft SQL Server to the free and open source MySQL. Regardless of the flavor, most database server applications have several things in common.

First, database applications use the same general programming language known as SQL, or Structured Query Language. This language, also known as a fourth-generation language due to its simple syntax, is at the core of how a client communicates its requests to the server. Using SQL in its simplest form, a programmer can select, add, update, and delete information in a database.
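For instance, the four basic operations might look like this (a minimal sketch; the table name reuses the dbFurniture.tblChair example below, and the columns are invented):

SELECT * FROM dbFurniture.tblChair;                                   -- read
INSERT INTO dbFurniture.tblChair (chair_id, color) VALUES (7, 'red'); -- add
UPDATE dbFurniture.tblChair SET color = 'blue' WHERE chair_id = 7;    -- update
DELETE FROM dbFurniture.tblChair WHERE chair_id = 7;                  -- delete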
However, SQL can also be used to create and design entire databases, perform various functions on the returned information, and even execute other programs. To illustrate how SQL can be used, the following is an example of a simple standard SQL query and a more powerful SQL query:

Simple: "Select * from dbFurniture.tblChair"
This returns all information in the table tblChair from the database dbFurniture.

Complex: "EXEC master..xp_cmdshell 'dir c:\'"
This short SQL command returns to the client the list of files and folders under the c:\ directory of the SQL server. Note that this example uses an extended stored procedure that is exclusive to MS SQL Server.

The second function that database server applications share is that they all require some form of authenticated connection between client and host. Although the SQL language is fairly easy to use, at least in its basic form, any client that wants to perform queries must first provide some form of credentials that will authorize the client; the client also must define the format of the request and response.

This connection is defined by several attributes, depending on the relative location of the client and what operating systems are in use. We could spend a whole article discussing various technologies such as DSN connections, DSN-less connections, RDO, ADO, and more, but these subjects are outside the scope of this article. If you want to learn more about them, a little Googling will provide you with more than enough information. However, the following is a list of the more common items included in a connection request:

Database source
Request type
Database
User ID
Password

Before any connection can be made, the client must define what type of database server it is connecting to. This is handled by a software component that provides the client with the instructions needed to create the request in the correct format. In addition to the type of database, the request type can be used to further define how the client's request will be handled by the server. Next comes the database name, and finally the authentication information.

All the connection information is important, but by far the weakest link is the authentication information, or lack thereof. In a properly managed server, each database has its own users with specifically designated permissions that control what type of activity they can perform. For example, a user account would be set up as read-only for applications that need only to access information. Another account should be used for inserts or updates, and maybe even a third account would be used for deletes. This type of account control ensures that any compromised account is limited in functionality. Unfortunately, many database programs are set up with null or easy passwords, which leads to successful hack attacks.
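A minimal sketch of that least-privilege setup in Microsoft SQL Server syntax (all names and passwords here are invented for illustration):

-- One account per kind of activity, each limited to what it needs.
CREATE LOGIN report_reader WITH PASSWORD = 'Str0ng!GeneratedPwd1';
CREATE USER report_reader FOR LOGIN report_reader;
GRANT SELECT ON tblChair TO report_reader;        -- read-only account

CREATE LOGIN order_writer WITH PASSWORD = 'An0ther!StrongPwd2';
CREATE USER order_writer FOR LOGIN order_writer;
GRANT INSERT, UPDATE ON tblChair TO order_writer; -- no DELETE granted

A compromised report_reader account can then leak data but cannot alter or destroy it, which is exactly the damage limitation described above.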

Foreign Literature Review on Data Mining Technology (English-Chinese, for Graduation Thesis)


English original: Introduction to Data Mining

Abstract: Microsoft® SQL Server™ 2005 provides an integrated environment for creating and working with data mining models. This tutorial uses four scenarios (targeted mailing, forecasting, market basket, and sequence clustering) to demonstrate how to use the mining model algorithms, mining model viewers, and data mining tools that are included in this release of SQL Server.

Introduction

The data mining tutorial is designed to walk you through the process of creating data mining models in Microsoft SQL Server 2005. The data mining algorithms and tools in SQL Server 2005 make it easy to build a comprehensive solution for a variety of projects, including market basket analysis, forecasting analysis, and targeted mailing analysis. The scenarios for these solutions are explained in greater detail later in the tutorial.

The most visible components in SQL Server 2005 are the workspaces that you use to create and work with data mining models. The online analytical processing (OLAP) and data mining tools are consolidated into two working environments: Business Intelligence Development Studio and SQL Server Management Studio. Using Business Intelligence Development Studio, you can develop an Analysis Services project disconnected from the server. When the project is ready, you can deploy it to the server. You can also work directly against the server. The main function of SQL Server Management Studio is to manage the server. Each environment is described in more detail later in this introduction. For more information on choosing between the two environments, see "Choosing Between SQL Server Management Studio and Business Intelligence Development Studio" in SQL Server Books Online.

All of the data mining tools exist in the data mining editor. Using the editor, you can manage mining models, create new models, view models, compare models, and create predictions based on existing models.

After you build a mining model, you will want to explore it, looking for interesting patterns and rules. Each mining model viewer in the editor is customized to explore models built with a specific algorithm. For more information about the viewers, see "Viewing a Data Mining Model" in SQL Server Books Online.

Often your project will contain several mining models, so before you can use a model to create predictions, you need to be able to determine which model is the most accurate. For this reason, the editor contains a model comparison tool called the Mining Accuracy Chart tab. Using this tool you can compare the predictive accuracy of your models and determine the best model.

To create predictions, you will use the Data Mining Extensions (DMX) language. DMX extends SQL, containing commands to create, modify, and predict against mining models. For more information about DMX, see "Data Mining Extensions (DMX) Reference" in SQL Server Books Online. Because creating a prediction can be complicated, the data mining editor contains a tool called Prediction Query Builder, which allows you to build queries using a graphical interface. You can also view the DMX code that is generated by the query builder.
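As a flavor of the DMX that Prediction Query Builder generates, here is a minimal prediction query; the model name [BikeBuyerTree], the data source, and the input table are hypothetical stand-ins, not objects defined in this tutorial:

SELECT
  t.CustomerKey,
  PredictProbability([Bike Buyer]) AS BuyProbability
FROM [BikeBuyerTree]
PREDICTION JOIN
  OPENQUERY([Adventure Works DW],
            'SELECT CustomerKey, Age FROM ProspectiveBuyer') AS t
ON [BikeBuyerTree].Age = t.Age;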
Just as important as the tools that you use to work with and create data mining models are the mechanics by which they are created. The key to creating a mining model is the data mining algorithm. The algorithm finds patterns in the data that you pass it, and it translates them into a mining model; it is the engine behind the process.

Some of the most important steps in creating a data mining solution are consolidating, cleaning, and preparing the data to be used to create the mining models. SQL Server 2005 includes the Data Transformation Services (DTS) working environment, which contains tools that you can use to clean, validate, and prepare your data. For more information on using DTS in conjunction with a data mining solution, see "DTS Data Mining Tasks and Transformations" in SQL Server Books Online.

In order to demonstrate the SQL Server data mining features, this tutorial uses a new sample database called AdventureWorksDW. The database is included with SQL Server 2005, and it supports OLAP and data mining functionality. In order to make the sample database available, you need to select it at installation time in the "Advanced" dialog for component selection.

Adventure Works

AdventureWorksDW is based on a fictional bicycle manufacturing company named Adventure Works Cycles. Adventure Works produces and distributes metal and composite bicycles to North American, European, and Asian commercial markets. The base of operations is located in Bothell, Washington, with 500 employees, and several regional sales teams are located throughout their market base.

Adventure Works sells products wholesale to specialty shops and to individuals through the Internet. For the data mining exercises, you will work with the AdventureWorksDW Internet sales tables, which contain realistic patterns that work well for data mining exercises. For more information on Adventure Works Cycles, see "Sample Databases and Business Scenarios" in SQL Server Books Online.

Database Details

The Internet sales schema contains information about 9,242 customers. These customers live in six countries, which are combined into three regions:
North America (83%)
Europe (12%)
Australia (7%)
The database contains data for three fiscal years: 2002, 2003, and 2004. The products in the database are broken down by subcategory, model, and product.

Business Intelligence Development Studio

Business Intelligence Development Studio is a set of tools designed for creating business intelligence projects. Because Business Intelligence Development Studio was created as an IDE environment in which you can create a complete solution, you work disconnected from the server. You can change your data mining objects as much as you want, but the changes are not reflected on the server until after you deploy the project. Working in an IDE is beneficial for the following reasons:

The Analysis Services project is the entry point for a business intelligence solution. An Analysis Services project encapsulates mining models and OLAP cubes, along with supplemental objects that make up the Analysis Services database. From Business Intelligence Development Studio, you can create and edit Analysis Services objects within a project and deploy the project to the appropriate Analysis Services server or servers.

If you are working with an existing Analysis Services project, you can also use Business Intelligence Development Studio to work connected to the server. In this way, changes are reflected directly on the server without having to deploy the solution.

SQL Server Management Studio

SQL Server Management Studio is a collection of administrative and scripting tools for working with Microsoft SQL Server components.
This workspace differs from Business Intelligence Development Studio in that you are working in a connected environment, where actions are propagated to the server as soon as you save your work.

After the data has been cleaned and prepared for data mining, most of the tasks associated with creating a data mining solution are performed within Business Intelligence Development Studio. Using the Business Intelligence Development Studio tools, you develop and test the data mining solution, using an iterative process to determine which models work best for a given situation. When the developer is satisfied with the solution, it is deployed to an Analysis Services server. From this point, the focus shifts from development to maintenance and use, and thus to SQL Server Management Studio. Using SQL Server Management Studio, you can administer your database and perform some of the same functions as in Business Intelligence Development Studio, such as viewing mining models and creating predictions from them.

Data Transformation Services

Data Transformation Services (DTS) comprises the Extract, Transform, and Load (ETL) tools in SQL Server 2005. These tools can be used to perform some of the most important tasks in data mining: cleaning and preparing the data for model creation. In data mining, you typically perform repetitive data transformations to clean the data before using the data to train a mining model. Using the tasks and transformations in DTS, you can combine data preparation and model creation into a single DTS package.

DTS also provides DTS Designer to help you easily build and run packages containing all of the tasks and transformations. Using DTS Designer, you can deploy the packages to a server and run them on a regularly scheduled basis. This is useful if, for example, you collect data weekly and want to perform the same cleaning transformations each time in an automated fashion.

You can work with a Data Transformation project and an Analysis Services project together as part of a business intelligence solution, by adding each project to a solution in Business Intelligence Development Studio.

Mining Model Algorithms

Data mining algorithms are the foundation from which mining models are created. The variety of algorithms included in SQL Server 2005 allows you to perform many types of analysis. For more specific information about the algorithms and how they can be adjusted using parameters, see "Data Mining Algorithms" in SQL Server Books Online.

Microsoft Decision Trees

The Microsoft Decision Trees algorithm supports both classification and regression, and it works well for predictive modeling. Using the algorithm, you can predict both discrete and continuous attributes.

In building a model, the algorithm examines how each input attribute in the dataset affects the result of the predicted attribute, and then it uses the input attributes with the strongest relationship to create a series of splits, called nodes. As new nodes are added to the model, a tree structure begins to form. The top node of the tree describes the breakdown of the predicted attribute over the overall population. Each additional node is created based on the distribution of states of the predicted attribute as compared to the input attributes. If an input attribute is seen to cause the predicted attribute to favor one state over another, a new node is added to the model. The model continues to grow until none of the remaining attributes create a split that provides an improved prediction over the existing node.
The model seeks to find a combination of attributes and their states that creates a disproportionate distribution of states in the predicted attribute, therefore allowing you to predict the outcome of the predicted attribute.

Microsoft Clustering

The Microsoft Clustering algorithm uses iterative techniques to group records from a dataset into clusters containing similar characteristics. Using these clusters, you can explore the data, learning more about the relationships that exist, which may not be easy to derive logically through casual observation. Additionally, you can create predictions from the clustering model created by the algorithm. For example, consider a group of people who live in the same neighborhood, drive the same kind of car, eat the same kind of food, and buy a similar version of a product. This is a cluster of data. Another cluster may include people who go to the same restaurants, have similar salaries, and vacation twice a year outside the country. Observing how these clusters are distributed, you can better understand how the records in a dataset interact, as well as how that interaction affects the outcome of a predicted attribute.

Microsoft Naïve Bayes

The Microsoft Naïve Bayes algorithm quickly builds mining models that can be used for classification and prediction. It calculates probabilities for each possible state of the input attribute, given each state of the predictable attribute, which can later be used to predict an outcome of the predicted attribute based on the known input attributes. The probabilities used to generate the model are calculated and stored during the processing of the cube. The algorithm supports only discrete or discretized attributes, and it considers all input attributes to be independent. The Microsoft Naïve Bayes algorithm produces a simple mining model that can be considered a starting point in the data mining process. Because most of the calculations used in creating the model are generated during cube processing, results are returned quickly. This makes the model a good option for exploring the data and for discovering how various input attributes are distributed in the different states of the predicted attribute.

Microsoft Time Series

The Microsoft Time Series algorithm creates models that can be used to predict continuous variables over time from both OLAP and relational data sources. For example, you can use the Microsoft Time Series algorithm to predict sales and profits based on the historical data in a cube.

Using the algorithm, you can choose one or more variables to predict, but they must be continuous. You can have only one case series for each model. The case series identifies the location in a series, such as the date when looking at sales over a length of several months or years. A case may contain a set of variables (for example, sales at different stores). The Microsoft Time Series algorithm can use cross-variable correlations in its predictions. For example, prior sales at one store may be useful in predicting current sales at another store.

Microsoft Neural Network

In Microsoft SQL Server 2005 Analysis Services, the Microsoft Neural Network algorithm creates classification and regression mining models by constructing a multilayer perceptron network of neurons. Similar to the Microsoft Decision Trees algorithm provider, given each state of the predictable attribute, the algorithm calculates probabilities for each possible state of the input attribute.
The algorithm provider processes the entire set of cases, iteratively comparing the predicted classification of the cases with the known actual classification of the cases. The errors from the initial classification of the first iteration of the entire set of cases are fed back into the network and used to modify the network's performance for the next iteration, and so on. You can later use these probabilities to predict an outcome of the predicted attribute, based on the input attributes. One of the primary differences between this algorithm and the Microsoft Decision Trees algorithm, however, is that its learning process optimizes network parameters toward minimizing the error, while the Microsoft Decision Trees algorithm splits rules in order to maximize information gain. The algorithm supports the prediction of both discrete and continuous attributes.

Microsoft Linear Regression

The Microsoft Linear Regression algorithm is a particular configuration of the Microsoft Decision Trees algorithm, obtained by disabling splits (the whole regression formula is built in a single root node). The algorithm supports the prediction of continuous attributes.

Microsoft Logistic Regression

The Microsoft Logistic Regression algorithm is a particular configuration of the Microsoft Neural Network algorithm, obtained by eliminating the hidden layer. The algorithm supports the prediction of both discrete and continuous attributes.
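To tie the algorithms back to DMX, the following sketch creates a model with one of the algorithms above; the model and column names are hypothetical:

CREATE MINING MODEL [BikeBuyerTree]
(
    CustomerKey  LONG KEY,
    Age          LONG CONTINUOUS,
    Region       TEXT DISCRETE,
    [Bike Buyer] LONG DISCRETE PREDICT
)
USING Microsoft_Decision_Trees;
-- Substituting Microsoft_Naive_Bayes or Microsoft_Clustering on the last
-- line selects a different algorithm for the same column structure.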

Foreign Literature with Chinese Translation: Databases


English original 2: DBA Survivor: Become a Rock Star DBA, by Thomas LaRock, published by Apress, 2010.

You know that a database is a collection of logically related data elements that may be structured in various ways to meet the multiple processing and retrieval needs of organizations and individuals. There is nothing new about databases; early ones were chiseled in stone, penned on scrolls, and written on index cards. But now databases are commonly recorded on magnetizable media, and computer programs are required to perform the necessary storage and retrieval operations.

You'll see in the following pages that complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers. In a file-oriented system, users needing special information may communicate their needs to a programmer, who, when time permits, will write one or more programs to extract the data and prepare the information[4]. The availability of a DBMS, however, offers users a much faster alternative communications path.

If the DBMS provides a way to interactively interrogate and update the database, this capability allows for managing personal databases. However, it does not automatically leave an audit trail of actions and does not provide the kinds of controls necessary in a multiuser organization. These controls are only available when a set of application programs is customized for each data entry and updating function.

Software packages for personal computers that perform some of the DBMS functions have been very popular. Personal computers were intended for use by individuals for personal information storage and processing. These machines have also been used extensively in small enterprises and by professionals such as doctors, architects, engineers, lawyers, and so on. By the nature of their intended usage, database systems on these machines are exempt from several of the requirements of full-fledged database systems. Since data sharing is not intended, and concurrent operations even less so, the software can be less complex. Security and integrity maintenance are de-emphasized or absent. As data volumes will be small, performance efficiency is also less important. In fact, the one aspect of a database system that remains important is data independence. Data independence, as stated earlier, means that application programs and user queries need not be cognizant of the physical organization of data on secondary storage. The importance of this aspect, particularly for the personal computer user, is that it greatly simplifies database usage.
The user can store, access, and manipulate data at a high level (close to the application) and be totally shielded from the low-level (close to the machine) details of data organization. We will not discuss details of specific PC DBMS software packages here. Let us summarize the strengths and weaknesses of personal computer database software systems:

The most obvious positive factor is the user-friendliness of the software. A user with no prior computer background would be able to use the system to store personal and professional data, retrieve it, and perform related processing. The user should, of course, satisfy himself about the quality of the software and its freedom from errors (bugs) so that investments in data are protected.

For the programmer implementing applications with them, the advantage lies in the support for applications development in terms of input screen generation, output report generation, and so on offered by these systems.

The main negative point concerns the absence of data protection features. Unless encrypted, data can be accessed by whoever has access to the machine. Data can be destroyed through mistakes or malicious intent. The second weakness of many of the PC-based systems is performance. If data volumes grow to a few thousand records, performance could be a bottleneck. For organizations where growth in data volumes is expected, the availability of the same or compatible software on large machines should be considered.

This is one of the most common misconceptions about database management systems that are used on personal computers. Thoroughly comprehensive and sophisticated business systems can be developed in dBASE, Paradox, and other DBMSs. However, they are created by experienced programmers using the DBMS's own programming language. That is not the same as users who create and manage personal files that are not part of the mainstream company system.

Transaction Management of Database

The objective of long-duration transactions is to model long-duration, interactive database access sessions in application environments. The fundamental assumption about the short duration of transactions that underlies the traditional model of transactions is inappropriate for long-duration transactions. The implementation of the traditional model of transactions may cause intolerably long waits when transactions attempt to acquire locks before accessing data, and may also cause a large amount of work to be lost when transactions are backed out in response to user-initiated aborts or system failure situations.

The objective of a transaction model is to provide a rigorous basis for automatically enforcing a criterion for database consistency for a set of multiple concurrent read and write accesses to the database in the presence of potential system failure situations. The consistency criterion adopted for traditional transactions is the notion of serializability. Serializability is enforced in conventional database systems through the use of locking for automatic concurrency control, and logging for automatic recovery from system failure situations. A "transaction" that doesn't provide a basis for automatically enforcing database consistency is not really a transaction. To be sure, a long-duration transaction need not adopt serializability as its consistency criterion. However, there must be some consistency criterion.
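For contrast, the traditional short-duration transaction that the model assumes can be sketched in a few lines of standard SQL (the account table is hypothetical):

-- Locks are acquired as rows are touched; logging makes the work recoverable.
BEGIN TRANSACTION;
UPDATE account SET balance = balance - 100 WHERE account_id = 1;
UPDATE account SET balance = balance + 100 WHERE account_id = 2;
COMMIT; -- or ROLLBACK, which backs out everything since BEGIN

A long-duration, interactive session that held such locks for hours would produce exactly the intolerable waits described above.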
Version System Management of Database

Despite a large number of proposals on version support in the context of computer-aided design and software engineering, the absence of a consensus on version semantics has been a key impediment to version support in database systems. Because of the differences between files and databases, it is intuitively clear that the model of versions in database systems cannot be as simple as that adopted in file systems to support software engineering. For databases, it may be necessary to manage not only versions of single objects (e.g., a software module or a document) but also versions of a collection of objects (e.g., a compound document, a user manual, etc.) and perhaps even versions of the schema of the database (e.g., a table or a class, a collection of tables or classes).

Broadly, there are three directions of research and development in versioning. First is the notion of parameterized versioning, that is, designing and implementing a versioning system whose behavior may be tailored by adjusting system parameters. This may be the only viable approach, in view of the fact that there are various plausible choices for virtually every single aspect of versioning. The second is to revisit these plausible choices for every aspect of versioning, with the view to discarding some of them as either impractical or flawed. The third is the investigation into the semantics and implementation of versioning collections of objects and of versioning the database.

There is no consensus on the definition of the term "management information system". Some writers prefer alternative terminology such as "information processing system", "information and decision system", "organizational information system", or simply "information system" to refer to the computer-based information processing system which supports the operations, management, and decision-making functions of an organization. This text uses "MIS" because it is descriptive and generally understood; it also frequently uses "information system" instead of "MIS" to refer to an organizational information system.

A definition of a management information system, as the term is generally understood, is an integrated, user-machine system for providing information to support operations, management, and decision-making functions in an organization. The system utilizes computer hardware and software; manual procedures; models for analysis, planning, control, and decision making; and a database. The fact that it is an integrated system does not mean that it is a single, monolithic structure; rather, it means that the parts fit into an overall design. The elements of the definition are highlighted below.

Computer-based user-machine system. Conceptually, a management information system can exist without computers, but it is the power of the computer which makes MIS feasible. The question is not whether computers should be used in a management information system, but the extent to which information use should be computerized. The concept of a user-machine system implies that some tasks are best performed by humans, while others are best done by machine. The user of an MIS is any person responsible for entering input data, instructing the system, or utilizing the information output of the system.
For many problems, the user and the computer form a combined system with results obtained through a set of interactions between the computer and the user. User-machine interaction is facilitated by operation in which the user's input-output device (usually a visual display terminal) is connected to the computer. The computer can be a personal computer serving only one user or a large computer that serves a number of users through terminals connected by communication lines. The user input-output device permits direct input of data and immediate output of results. For instance, a person using the computer interactively in financial planning poses "what if" questions by entering input at the terminal keyboard; the results are displayed on the screen in a few seconds.

The computer-based user-machine characteristics of an MIS affect the knowledge requirements of both system developer and system user. "Computer-based" means that the designer of a management information system must have a knowledge of computers and of their use in processing. The "user-machine" concept means the system designer should also understand the capabilities of humans as system components (as information processors) and the behavior of humans as users of information. Information system applications should not require users to be computer experts. However, users need to be able to specify their information requirements; some understanding of computers, the nature of information, and its use in various management functions aids users in this task.

Management information systems typically provide the basis for integration of organizational information processing. Individual applications within information systems are developed for and by diverse sets of users. If there are no integrating processes and mechanisms, the individual applications may be inconsistent and incompatible. Data items may be specified differently and may not be compatible across applications that use the same data. There may be redundant development of separate applications when actually a single application could serve more than one need. A user wanting to perform analysis using data from two different applications may find the task very difficult and sometimes impossible.

The first step in integration of information system applications is an overall information system plan. Even though application systems are implemented one at a time, their design can be guided by the overall plan, which determines how they fit in with other functions. In essence, the information system is designed as a planned federation of small systems.

Information system integration is also achieved through standards, guidelines, and procedures set by the MIS function. The enforcement of such standards and procedures permits diverse applications to share data, meet audit and control requirements, and be shared by multiple users. For instance, an application may be developed to run on a particular small computer. Standards for integration may dictate that the equipment selected be compatible with the centralized database. The trend in information system design is toward separating application processing from the data used to support it. The separate database is the mechanism by which data items are integrated across many applications and made consistently available to a variety of users.
The need for a database in MIS is discussed below.

The terms "information" and "data" are frequently used interchangeably; however, information is generally defined as data that is meaningful or useful to the recipient. Data items are therefore the raw material for producing information.

The underlying concept of a database is that data needs to be managed in order to be available for processing and to have appropriate quality. This data management includes both software and organization. The software to create and manage a database is a database management system. When all access to and use of the database is controlled through a database management system, all applications utilizing a particular data item access the same data item, which is stored in only one place. A single updating of the data item updates it for all uses. Integration through a database management system requires a central authority for the database. The data can be stored in one central computer or dispersed among several computers; the overriding requirement is that there be an organizational function to exercise control.

It is usually insufficient for human recipients to receive only raw data or even summarized data. Data usually needs to be processed and presented in such a way that the result is directed toward the decision to be made. To do this, processing of data items is based on a decision model. For example, an investment decision relative to new capital expenditures might be processed in terms of a capital expenditure decision model.

Decision models can be used to support different stages in the decision-making process. "Intelligence" models can be used to search for problems and/or opportunities. Models can be used to identify and analyze possible solutions. Choice models such as optimization models may be used to find the most desirable solution. In other words, multiple approaches are needed to meet a variety of decision situations, and an MIS may include various types of models to aid analysis in support of decision making. In a comprehensive information system, the decision maker has available a set of general models that can be applied to many analysis and decision situations, plus a set of very specific models for unique decisions. Similar models are available for planning and control. The set of models is the model base for the MIS. Models are generally most effective when the manager can use interactive dialog to build a plan or to iterate through several decision choices under different conditions.
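As a small illustration of processing data toward a decision rather than merely summarizing it, a capital-expenditure screening model might be expressed as the following SQL; the table, columns, and budget figure are all invented for the example:

-- Rank proposed projects by estimated return per dollar, within budget.
SELECT project_name,
       estimated_cost,
       estimated_annual_return / estimated_cost AS return_ratio
FROM proposed_project
WHERE estimated_cost <= 500000       -- budget constraint from the model
ORDER BY return_ratio DESC;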

Foreign Literature on Database Management (for Graduation Project)


1. Database management system

A Database Management System (DBMS) is a set of computer programs that controls the creation, maintenance, and use of a database. It allows organizations to place control of database development in the hands of database administrators (DBAs) and other specialists. A DBMS is a system software package that supports the use of an integrated collection of data records and files known as a database. It allows different user application programs to easily access the same database. DBMSs may use any of a variety of database models, such as the network model or relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way. Instead of having to write computer programs to extract information, a user can ask simple questions in a query language. Thus, many DBMS packages provide fourth-generation programming languages (4GLs) and other application development features. A DBMS helps to specify the logical organization for a database and to access and use the information within the database. It provides facilities for controlling data access, enforcing data integrity, managing concurrency, and restoring the database from backups.
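For example, the kind of simple question a user can pose in a query language, with no extraction program written, might look like this (table and column names are hypothetical):

-- "Which customers placed orders this year, and when?"
SELECT c.customer_name, o.order_date
FROM customer c
JOIN sales_order o ON o.customer_id = c.customer_id
WHERE o.order_date >= DATE '2024-01-01'
ORDER BY o.order_date;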

Foreign Literature Translation on SQL Databases (English-Chinese)


(The document contains the English original and a Chinese translation.)

Working with Databases

This chapter describes how to use SQL statements in embedded applications to control databases. There are three database statements that set up and open databases for access:

SET DATABASE declares a database handle, associates the handle with an actual database file, and optionally assigns operational parameters for the database.

SET NAMES optionally specifies the character set a client application uses for CHAR, VARCHAR, and text Blob data. The server uses this information to transliterate from a database's default character set to the client's character set on SELECT operations, and to transliterate from a client application's character set to the database character set on INSERT and UPDATE operations.

CONNECT opens a database, allocates system resources for it, and optionally assigns operational parameters for the database.

All databases must be closed before a program ends. A database can be closed by using DISCONNECT, or by appending the RELEASE option to the final COMMIT or ROLLBACK in a program.

Declaring a database

Before a database can be opened and used in a program, it must first be declared with SET DATABASE to:

Establish a database handle.
Associate the database handle with a database file stored on a local or remote node.

A database handle is a unique, abbreviated alias for an actual database name. Database handles are used in subsequent CONNECT, COMMIT RELEASE, and ROLLBACK RELEASE statements to specify which databases they should affect. Except in dynamic SQL (DSQL) applications, database handles can also be used inside transaction blocks to qualify, or differentiate, table names when two or more open databases contain identically named tables. Each database handle must be unique among all variables used in a program. Database handles cannot duplicate host-language reserved words, and cannot be InterBase reserved words.

The following statement illustrates a simple database declaration:

EXEC SQL
SET DATABASE DB1 = 'employee.gdb';

This database declaration identifies the database file, employee.gdb, as a database the program uses, and assigns the database a handle, or alias, DB1.

If a program runs in a directory different from the directory that contains the database file, then the file name specification in SET DATABASE must include a full path name, too. For example, the following SET DATABASE declaration specifies the full path to employee.gdb:

EXEC SQL
SET DATABASE DB1 = '/interbase/examples/employee.gdb';

If a program and a database file it uses reside on different hosts, then the file name specification must also include a host name. The following declaration illustrates how a Unix host name is included as part of the database file specification on a TCP/IP network:

EXEC SQL
SET DATABASE DB1 = 'jupiter:/usr/interbase/examples/employee.gdb';

On a Windows network that uses the NetBEUI protocol, specify the path as follows:

EXEC SQL
SET DATABASE DB1 = '//venus/C:/Interbase/examples/employee.gdb';

Declaring multiple databases

An SQL program, but not a DSQL program, can access multiple databases at the same time. In multi-database programs, database handles are required. A handle is used to:

1. Reference individual databases in a multi-database transaction.
2. Qualify table names.
3.
Specify databases to open in CONNECT statements.
4. Indicate databases to close with DISCONNECT, COMMIT RELEASE, and ROLLBACK RELEASE.

DSQL programs can access only a single database at a time, so database handle use is restricted to connecting to and disconnecting from a database.

In multi-database programs, each database must be declared in a separate SET DATABASE statement. For example, the following code contains two SET DATABASE statements:

. . .
EXEC SQL
SET DATABASE DB2 = 'employee2.gdb';
EXEC SQL
SET DATABASE DB1 = 'employee.gdb';
. . .

Using handles for table names

When the same table name occurs in more than one simultaneously accessed database, a database handle must be used to differentiate one table name from another. The database handle is used as a prefix to table names, and takes the form handle.table. For example, in the following code, the database handles, TEST and EMP, are used to distinguish between two tables, each named EMPLOYEE:

. . .
EXEC SQL
DECLARE IDMATCH CURSOR FOR
SELECT TESTNO INTO :matchid FROM TEST.EMPLOYEE
WHERE TESTNO > 100;
EXEC SQL
DECLARE EIDMATCH CURSOR FOR
SELECT EMPNO INTO :empid FROM EMP.EMPLOYEE
WHERE EMPNO = :matchid;
. . .

IMPORTANT: This use of database handles applies only to embedded SQL applications. DSQL applications cannot access multiple databases simultaneously.

Using handles with operations

In multi-database programs, database handles must be specified in CONNECT statements to identify which databases among several to open and prepare for use in subsequent transactions. Database handles can also be used with DISCONNECT, COMMIT RELEASE, and ROLLBACK RELEASE to specify a subset of open databases to close.

To open and prepare a database with CONNECT, see "Opening a database" on page 41. To close a database with DISCONNECT, COMMIT RELEASE, or ROLLBACK RELEASE, see "Closing a database" on page 49. To learn more about using database handles in transactions, see "Accessing an open database" on page 48.

Preprocessing and run time databases

Normally, each SET DATABASE statement specifies a single database file to associate with a handle. When a program is preprocessed, gpre uses the specified file to validate the program's table and column references. Later, when a user runs the program, the same database file is accessed. Different databases can be specified for preprocessing and run time when necessary.

Using the COMPILETIME clause

A program can be designed to run against any one of several identically structured databases. In other cases, the actual database that a program will use at runtime is not available when a program is preprocessed and compiled. In such cases, SET DATABASE can include a COMPILETIME clause to specify a database for gpre to test against during preprocessing. For example, the following SET DATABASE statement declares that employee.gdb is to be used by gpre during preprocessing:

EXEC SQL
SET DATABASE EMP = COMPILETIME 'employee.gdb';

IMPORTANT: The file specification that follows the COMPILETIME keyword must always be a hard-coded, quoted string.

When SET DATABASE uses the COMPILETIME clause, but no RUNTIME clause, and does not specify a different database file specification in a subsequent CONNECT statement, the same database file is used both for preprocessing and run time.
To specify different preprocessing and runtime databases with SET DATABASE, use both the COMPILETIME and RUNTIME clauses.

Using the RUNTIME clause

When a database file is specified for use during preprocessing, SET DATABASE can specify a different database to use at run time by including the RUNTIME keyword and a runtime file specification:

EXEC SQL
SET DATABASE EMP = COMPILETIME 'employee.gdb'
RUNTIME 'employee2.gdb';

The file specification that follows the RUNTIME keyword can be either a hard-coded, quoted string, or a host-language variable. For example, the following C code fragment prompts the user for a database name, and stores the name in a variable that is used later in SET DATABASE:

. . .
char db_name[125];
. . .
printf("Enter the desired database name, including node and path:\n");
gets(db_name);
EXEC SQL
SET DATABASE EMP = COMPILETIME 'employee.gdb'
RUNTIME :db_name;
. . .

Note: host-language variables in SET DATABASE must be preceded, as always, by a colon.

Controlling SET DATABASE scope

By default, SET DATABASE creates a handle that is global to all modules in an application. A global handle is one that may be referenced in all host-language modules comprising the program. SET DATABASE provides two optional keywords to change the scope of a declaration:

STATIC limits declaration scope to the module containing the SET DATABASE statement. No other program modules can see or use a database handle declared STATIC.

EXTERN notifies gpre that a SET DATABASE statement in a module duplicates a globally declared database in another module. If the EXTERN keyword is used, then another module must contain the actual SET DATABASE statement, or an error occurs during compilation.

The STATIC keyword is used in a multi-module program to restrict database handle access to the single module where it is declared. The following example illustrates the use of the STATIC keyword:

EXEC SQL
SET DATABASE EMP = STATIC 'employee.gdb';

The EXTERN keyword is used in a multi-module program to signal that SET DATABASE in one module is not an actual declaration, but refers to a declaration made in a different module. Gpre uses this information during preprocessing. The following example illustrates the use of the EXTERN keyword:

EXEC SQL
SET DATABASE EMP = EXTERN 'employee.gdb';

If an application contains an EXTERN reference, then when it is used at run time, the actual SET DATABASE declaration must be processed first, and the database connected before other modules can access it. A single SET DATABASE statement can contain either the STATIC or EXTERN keyword, but not both. A scope declaration in SET DATABASE applies to both COMPILETIME and RUNTIME databases.

Specifying a connection character set

When a client application connects to a database, it may have its own character set requirements. The server providing database access to the client does not know about these requirements unless the client specifies them. The client application specifies its character set requirement using the SET NAMES statement before it connects to the database.

SET NAMES specifies the character set the server should use when translating data from the database to the client application. Similarly, when the client sends data to the database, the server translates the data from the client's character set to the database's default character set (or the character set for an individual column if it differs from the database's default character set).
For example, the following statements specify that the client is using the DOS437 character set, then connect to the database:

EXEC SQL
SET NAMES DOS437;
EXEC SQL
CONNECT "europe.gdb" USER "JAMES" PASSWORD "U4EEAH";

For more information about character sets, see the Data Definition Guide. For the complete syntax of SET NAMES and CONNECT, see the Language Reference.

Opening a database

After a database is declared, it must be attached with a CONNECT statement before it can be used. CONNECT:
1. Allocates system resources for the database.
2. Determines if the database file is local, residing on the same host where the application itself is running, or remote, residing on a different host.
3. Opens the database and examines it to make sure it is valid.

InterBase provides transparent access to all databases, whether local or remote. If the database structure is invalid, the on-disk structure (ODS) number does not correspond to the one required by InterBase, or the database is corrupt, InterBase reports an error and permits no further access.

Optionally, CONNECT can be used to specify:
• A user name and password combination that is checked against the server's security database before allowing the connect to succeed. User names can be up to 31 characters. Passwords are restricted to 8 characters.
• An SQL role name that the user adopts on connection to the database, provided that the user has previously been granted membership in the role. Regardless of role memberships granted, the user belongs to no role unless one is specified with this ROLE clause. The client can specify at most one role per connection, and cannot switch roles except by reconnecting.
• The size of the database buffer cache to allocate to the application when the default cache size is inappropriate.

Using simple CONNECT statements

In its simplest form, CONNECT requires one or more database parameters, each specifying the name of a database to open. The name of the database can be a:
• Database handle declared in a previous SET DATABASE statement.
• Host-language variable.
• Hard-coded file name.

Using a database handle

If a program uses SET DATABASE to provide database handles, those handles should be used in subsequent CONNECT statements instead of hard-coded names. For example:

. . .
EXEC SQL
SET DATABASE DB1 = "employee.gdb";
EXEC SQL
SET DATABASE DB2 = "employee2.gdb";
EXEC SQL
CONNECT DB1;
EXEC SQL
CONNECT DB2;
. . .

There are several advantages to using a database handle with CONNECT:
1. Long file specifications can be replaced by shorter, mnemonic handles.
2. Handles can be used to qualify table names in multi-database transactions. (DSQL applications do not support multi-database transactions.)
3. Handles can be reassigned to other databases as needed.
4. The number of database cache buffers can be specified as an additional CONNECT parameter. For more information about setting the number of database cache buffers, see "Setting database cache buffers" on page 47.

Using strings or host-language variables

Instead of using a database handle, CONNECT can use a database name supplied at run time. The database name can be supplied as either a host-language variable or a hard-coded, quoted string. The following C code demonstrates how a program accessing only a single database might implement CONNECT using a file name solicited from a user at run time:

. . .
char fname[125];
. . .
printf("Enter the desired database name (including node and path):\n");
gets(fname);
. . .
EXEC SQL
CONNECT :fname;
. . .

Tip: This technique is especially useful for programs that are designed to work with many identically structured databases, one at a time, such as CAD/CAM or architectural databases.

Multiple database implementation

To use a database specified by the user as a host-language variable in a CONNECT statement in multi-database programs, follow these steps:
1. Declare a database handle using the following SET DATABASE syntax:
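The excerpt stops at step 1. As a minimal sketch of the overall multi-database pattern these steps describe, combining the SET DATABASE, CONNECT, and COMMIT RELEASE forms shown earlier (the handle names, file names, host variables, and CACHE value are illustrative assumptions, not part of the original guide):

/* Declare two handles; the run-time file specifications come from host variables. */
EXEC SQL
SET DATABASE DB1 = COMPILETIME "employee.gdb" RUNTIME :db1_name;
EXEC SQL
SET DATABASE DB2 = COMPILETIME "employee2.gdb" RUNTIME :db2_name;

/* Open both databases; CACHE sets the buffer cache size for a connection
   (see "Setting database cache buffers"). */
EXEC SQL
CONNECT DB1 CACHE 50;
EXEC SQL
CONNECT DB2;

/* ... work with DB1.tablename and DB2.tablename inside transactions ... */

/* Commit outstanding work and close all open databases;
   DISCONNECT DB1, DB2; would close them without an implicit commit. */
EXEC SQL
COMMIT RELEASE;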

Database Security: Foreign Literature Translation (English-Chinese)


Foreign Literature Translation with Chinese-English Comparison (the document contains the English original and a Chinese translation)

Database Security in a Web Environment

Introduction

Databases have been common in government departments and commercial enterprises for many years. Today, databases in any organization are increasingly opened up to a multiplicity of suppliers, customers, partners and employees - an idea that would have been unheard of a few years ago. Numerous applications and their associated data are now accessed by a variety of users requiring different levels of access via manifold devices and channels - often simultaneously. For example:
• Online banks allow customers to perform a variety of banking operations - via the Internet and over the telephone - whilst maintaining the privacy of account data.
• E-Commerce merchants and their Service Providers must store customer, order and payment data on their merchant server - and keep it secure.
• HR departments allow employees to update their personal information - whilst protecting certain management information from unauthorized access.
• The medical profession must protect the confidentiality of patient data - whilst allowing essential access for treatment.
• Online brokerages need to be able to provide large numbers of simultaneous users with up-to-date and accurate financial information.

This complex landscape leads to many new demands upon system security. The global growth of complex web-based infrastructures is driving a need for security solutions that provide mechanisms to segregate environments; perform integrity checking and maintenance; enable strong authentication and non-repudiation; and provide for confidentiality. In turn, this necessitates comprehensive business and technical risk assessment to identify the threats, vulnerabilities and impacts, and from this define a security policy. This leads to security definitions throughout the infrastructure - operating system, database management system, middleware and network.

Financial, personal and medical information systems and some areas of government have strict requirements for security and privacy. Inappropriate disclosure of sensitive information to the wrong parties can have severe social, legal and regulatory consequences. Failure to address the basics can result in substantial direct and consequential financial losses - witness the fraud losses through the compromise of several million credit card numbers in merchants' databases [Occf], plus associated damage to brand-image and loss of consumer confidence.

This article discusses some of the main issues in database and web server security, and also considers important architecture and design issues.

A Simple Model

At the simplest level, a web server system consists of front-end software and back-end databases with interface software linking the two. Normally, the front-end software will consist of server software and the network server operating system, and the back-end database will be a relational or object-oriented database fulfilling a variety of functions, including recording transactions, maintaining accounts and inventory. The interface software typically consists of Common Gateway Interface (CGI) scripts used to receive information from forms on web sites, to perform online searches and to update the database. Depending on the infrastructure, middleware may be present; in addition, security management subsystems (with session and user databases) that address the web server's and related applications' requirements for authentication, access control and authorization may be present.
Communications between this subsystem and either the web server, middleware or database are via application program interfaces (APIs). This simple model is depicted in Figure 1. Security can be provided by the following components:
• Web server.
• Middleware.
• Operating system.
• Database and Database Management System.
• Security management subsystem.
(Figure 1: A Simple Model)

The security of such a system addresses aspects of authenticity, integrity and confidentiality and is dependent on the security of the individual components and their interactions. Some of the most common vulnerabilities arise from poor configuration, inadequate change control procedures and poor administration. However, even if these areas are properly addressed, vulnerabilities still arise. The appropriate combination of people, technology and processes holds the key to providing the required physical and logical security. Attention should additionally be paid to the security aspects of planning, architecture, design and implementation.

In the following sections, we consider some of the main security issues associated with databases, database management systems, operating systems and web servers, as well as important architecture and design issues. Our treatment seeks only to outline the main issues, and the interested reader should refer to the references for a more detailed description.

Database Security

Database management systems normally run on top of an operating system and provide the security associated with a database. Typical operating system security features include memory and file protection, resource access control and user authentication. Memory protection prevents the memory of one program interfering with that of another and limits access and use of the objects, employing techniques such as memory segmentation. The operating system also protects access to other objects (such as instructions, input and output devices, files and passwords) by checking access with reference to access control lists. Security mechanisms in common operating systems vary tremendously and, for those that are lacking, there exists special-purpose security software that can be integrated with the existing environment. However, this can be an expensive, time-consuming task and integration difficulties may also adversely impact application behaviors.

Most database management systems consist of a number of modules - including database querying and database and file management - along with authorization, concurrent access and database description tables. These management systems also use a variety of languages: a data definition language supports the logical definition of the database; developers use a data manipulation language; and a query language is used by non-specialist end-users.

Database management systems have many of the same security requirements as operating systems, but there are significant differences since the former are particularly susceptible to the threat of improper disclosure, modification of information and also denial of service. Some of the most important security requirements for database management systems are:
• Multi-Level Access Control.
• Confidentiality.
• Reliability.
• Integrity.
• Recovery.
These requirements, along with security models, are considered in the following sections.

Multi-Level Access Control

In a multi-application and multi-user environment, administrators, auditors, developers, managers and users - collectively called subjects - need access to database objects, such as tables, fields or records.
Access control restricts the operations available to a subject with respect to particular objects and is enforced by the database management system. Mandatory access controls require that each controlled object in the database be labeled with a security level, whereas discretionary access controls may be applied at the choice of a subject.

Access control in database management systems is more complicated than in operating systems since, in the latter, all objects are unrelated whereas in a database the converse is true. Databases are also required to make access decisions based on a finer degree of subject and object granularity. In multi-level systems, access control can be enforced by the use of views - filtered subsets of the database - containing the precise information that a subject is authorized to see.

A general principle of access control is that a subject with high-level security should not be able to write to a lower-level object, and this poses a problem for database management systems that must read all database objects and write new objects. One solution to this problem is to use a trusted database management system.

Confidentiality

Some databases will inevitably contain what is considered confidential data. For example, it could be inherently sensitive or its source may be sensitive, or it may belong to a sensitive table, thus making it difficult to determine what is actually confidential. Disclosure is also difficult to define, as it can be direct, indirect, involve the disclosure of bounds or even of mere existence.

An inference problem exists in database management systems whereby users can infer sensitive information from relatively insensitive queries. A trivial example is a request for the average salary of a group of employees where the number of employees turns out to be just one, thus revealing that employee's salary (a short SQL sketch of this appears below). However, much more sophisticated statistical inference attacks can also be mounted. This highlights the fact that, although the data itself may be properly controlled, confidential information may still leak out.

Controls can take several forms: not divulging sensitive information to unauthorized parties (which depends on the respective subject and object security levels), logging what each user knows, or masking response data. The first control can be implemented fairly easily, the second quickly becomes unmanageable for a large number of users, and the third leads to imprecise responses and also exemplifies the trade-off between precision and security. Polyinstantiation refers to multiple instances of a data object existing in the database; it can provide a partial solution to the inference problem whereby different data values are supplied, depending on the security level, in response to the same query. However, this makes consistency management more difficult.

Another issue arises when the security level of an aggregate amount is different from that of its elements (a problem commonly referred to as aggregation). This can be addressed by defining appropriate access control using views.

Reliability, Integrity and Recovery

Arguably, the most important requirements for databases are to ensure that the database presents consistent information to queries and can recover from any failures.
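Before continuing, here is the promised minimal SQL sketch of the inference problem from the Confidentiality section; the EMPLOYEE table and its DEPT and SALARY columns are illustrative assumptions:

-- Each query looks harmless on its own:
SELECT COUNT(*) FROM EMPLOYEE WHERE DEPT = 'RESEARCH';    -- suppose this returns 1
SELECT AVG(SALARY) FROM EMPLOYEE WHERE DEPT = 'RESEARCH'; -- the "average" of one row

-- Taken together, the two answers reveal the exact salary of the single
-- employee in RESEARCH, even if direct queries against SALARY are forbidden.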
An important aspect of consistency is that transactions execute atomically; that is, they either execute completely or not at all (a brief SQL illustration appears at the end of this passage).

Concurrency control addresses the problem of allowing simultaneous programs access to a shared database, while avoiding incorrect behavior or interference. It is normally addressed by a scheduler that uses locking techniques to ensure that the transactions are serializable and independent. A common technique used in commercial products is two-phase locking (or variations thereof), in which the database management system controls when transactions obtain and release their locks according to whether or not transaction processing has been completed. In a first phase, the database management system collects the necessary data for the update; in a second phase, it updates the database. This means that the database can recover from incomplete transactions by repeating either of the appropriate phases. This technique can also be used in a distributed database system using a distributed scheduler arrangement.

System failures can arise from the operating system and may result in corrupted storage. The main copy of the database is used for recovery from failures and communicates with a cached version that is used as the working version. In association with the logs, this allows the database to recover to a very specific point in the event of a system failure, either by removing the effects of incomplete transactions or applying the effects of completed transactions. Instead of having to recover the entire database after a failure, recovery can be made more efficient by the use of checkpointing. It is used during normal operations to write additional updated information - such as logs, before-images of incomplete transactions, after-images of completed transactions - to the main database, which reduces the amount of work needed for recovery. Recovery from failures in distributed systems is more complicated, since a single logical action is executed at different physical sites and the prospect of partial failure arises.

Logical integrity, at field level and for the entire database, is addressed by the use of monitors to check important items such as input ranges, states and transitions. Error-correcting and error-detecting codes are also used.

Security Models

Various security models exist that address different aspects of security in operating systems and database management systems. For example, the Bell-LaPadula model defines security in terms of mandatory access control and addresses confidentiality only. The Bell-LaPadula model, and other models including the Biba model for integrity, are described more fully in [Cast95] and [Pfle89]. These models are implementation-independent and provide a powerful insight into the properties of secure systems, lead to design policies and principles, and some form the basis for security evaluation criteria.

Web Server Security

Web servers are now one of the most common interfaces between users and back-end databases, and as such, their security becomes increasingly important. Exploitation of vulnerabilities in the web server can lead to unforeseen attacks on middleware and back-end databases, bypassing any controls that may be in place. In this section, we focus on common web server vulnerabilities and on how the authentication requirements of web servers and databases are met.

In general, a web server platform should not be shared with other applications and should be the only machine allowed to access the database.
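Returning to the atomicity point above: a minimal SQL sketch of an all-or-nothing transaction (the STOCK table and the exact transaction-control syntax are illustrative assumptions; details vary by DBMS):

-- Move ten widgets between warehouses; both updates succeed or neither does.
UPDATE STOCK SET QTY = QTY - 10 WHERE WAREHOUSE = 'EAST' AND ITEM = 'WIDGET';
UPDATE STOCK SET QTY = QTY + 10 WHERE WAREHOUSE = 'WEST' AND ITEM = 'WIDGET';
COMMIT;
-- If a failure occurs before COMMIT, the scheduler's locks and the logs let
-- the DBMS roll the incomplete transaction back (ROLLBACK), so no half-done
-- transfer is ever visible to queries.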
Using a firewall can provide additional security - either between the web server and users or between the web server and back-end database - and often the web server is placed on a de-militarized zone (DMZ) of a firewall. While firewalls can be used to block certain incoming connections, they must allow HTTP (and HTTPS) connections through to the web server, and so attacks can still be launched via the ports associated with these connections.

Vulnerabilities

Vulnerabilities appear on a weekly basis and, here, we prefer to focus on some general issues rather than specific attacks. Common web server vulnerabilities include:
• No policy exists.
• The default configuration is on.
• Reusable passwords appear in the clear.
• Unnecessary ports available for network services are not disabled.
• New security holes are not tracked. Even if they are, well-known vulnerabilities are not always fixed, as the source code patches are not applied by the system administrator and old programs are not re-compiled or removed.
• Security tools are not used to scan the network for weaknesses and changes or to detect intrusions.
• Faulty and buggy software - for example, buffer overflow and stack smashing attacks.
• Automatic directory listings - this is of particular concern for the interface software directories.
• Server root files are generally visible or accessible.
• Lack of logs and backups.
• File access is often not explicitly configured by the system administrator according to the security policy. This applies to configuration, client, administration and log files, administration programs, and CGI program sources and executables. CGI scripts allow dynamic web pages and make program development (in, for example, Perl) easy and rapid. However, their successful exploitation may allow execution of malicious programs, launching of denial-of-service attacks and, ultimately, privilege escalation on a server.

Web Server and Database Authentication

While user, browser and web server authentication are relatively well understood [Garf97], [Ghos98] and [Tree98], the introduction of additional components, such as databases and middleware, raises a number of authentication issues. There are a variety of options for authentication in a simple model (Figure 1). Firstly, both the web server and database management system can individually authenticate a user. This option requires the user to authenticate twice, which may be unacceptable in certain applications, although a single sign-on device (which aims to manage authentication in a user-transparent way) may help. Secondly, a common approach is for the database to automatically grant user access based on web server authentication. However, this option should only be used for accessing publicly available information. Finally, the database may grant user access employing the web server authentication credentials as a basis for its own user authentication, using security management subsystems (Figure 1). We consider this last option in more detail.

Web-based communications use the stateless HTTP protocol, with the implication that state, and hence authentication, is not preserved when browsing successive web pages. Cookies - files placed on a user's machine by a web server - were developed as a means of addressing this issue and are often used to provide authentication. However, after initial authentication, there is typically no re-authentication per page in the same realm, only the use of unencrypted cookies (sometimes in association with IP addresses).
This approach provides limited security, as both cookies and IP addresses can be tampered with or spoofed.

A stronger authentication method, commonly used by commercial implementations, uses digitally signed cookies. This allows additional systems, such as databases, to use digitally signed cookie data, including a session ID, as a basis for authentication. When a user has been authenticated by a web server (using a password, for example), a session ID is assigned and is stored in a security management subsystem database. When the user subsequently requests information from a database, the database receives a copy of the session ID, the security management subsystem checks this session ID against its local copy and, if authentication is successful, user access is granted to the database.

The session ID is typically transmitted in the clear between the web server and database, but may be protected by SSL or even by physical security measures. The communications between the browser and web servers, and between the web servers and the security management subsystem (and its databases), are normally protected by SSL and use a web server security API that is used to digitally sign and verify browser cookies. The communications between the back-end databases and the security management subsystem (and its databases) are also normally protected by SSL and use a database security API that verifies session IDs originating from the database and provides additional user authorization credentials. The web server security API is generally proprietary while, for the database security API, many vendors have adopted standards such as the Generic Security Services API (GSS-API) or CORBA [RFC2078] and [Corba].

Architecture and Design

Security requirements for designing, building and implementing databases are important so that the systems, as part of the overall infrastructure, meet their requirements in actual operation. The various security models provide an important insight into the design requirements for databases and their management systems.

Secure Database Management System Architectures

In multi-level database management systems, a variety of architectures are possible: trusted subject, integrity locked, kernels and replicated. Trusted subject is used by most of the leading database management system vendors and can be integrated into existing products. Basically, the trusted subject architecture allows users to access a database via an untrusted front-end, a trusted database management system and a trusted operating system. The operating system provides physical access to the database and the database management system provides multi-level object protection.

The other architectures - integrity locked, kernels and replicated - all vary in detail, but they use a trusted front-end and an untrusted database management system. For details of these architectures and research prototypes, the reader is referred to [Cast95]. Different architectures are suited to different environments: for example, the trusted subject architecture is less integrated with the underlying operating system and is best suited when a trusted path can be assured between applications and the database management system.

Secure Database Management System Design

As discussed above, there are several fundamental differences between operating system and database management system design, including object granularity, multiple data types, data correlations and multi-level transactions.
Other differences include the fact that database management systems include both physical and logical objects and that the database lifecycle is normally longer.

These differences must be reflected in the design requirements, which include:
• Access, flow and inference controls.
• Access granularity and modes.
• Dynamic authorization.
• Multi-level protection.
• Polyinstantiation.
• Auditing.
• Performance.

These requirements should be considered alongside basic information integrity principles (illustrated in the SQL sketch at the end of this article), such as:
• Well-formed transactions - to ensure that transactions are correct and consistent.
• Continuity of operation - to ensure that data can be properly recovered, depending on the extent of a disaster.
• Authorization and role management - to ensure that distinct roles are defined and users are authorized.
• Authenticated users - to ensure that users are authenticated.
• Least privilege - to ensure that users have the minimal privilege necessary to perform their tasks.
• Separation of duties - to ensure that no single individual has access to critical data.
• Delegation of authority - to ensure that the database management system policies are flexible enough to meet the organization's requirements.

Of course, some of these requirements and principles are met not by the database management system but by the operating system and by organizational and procedural measures.

Database Design Methodology

Various approaches to design exist, but most contain the same main stages. The principal aim of a design methodology is to provide a robust, verifiable design process and also to separate policies from how policies are actually implemented. An important requirement during any design process is that different design aspects can be merged, and this applies equally to security.

A preliminary analysis should be conducted that addresses the system risks, environment, existing products and performance. Requirements should then be analyzed with respect to the results of a risk assessment. Security policies should be developed that include specification of granularity, privileges and authority.

These policies and requirements form the input to the conceptual design, which concentrates on subjects, objects and access modes without considering implementation details. Its purpose is to express information and process flows in a complete and consistent way.

The logical design takes into account the operating system and database management system that will be used and which of the security requirements can be provided by which mechanisms. The physical design considers the actual physical realization of the logical design and, indeed, may result in a revision of the conceptual and logical phases due to physical constraints.

Security Assurance

Once a product has been developed, its security assurance can be assessed by a number of methods, including formal verification, validation, penetration testing and certification. For example, if a database is to be certified as TCSEC Class B1, then it must implement the Bell-LaPadula mandatory access control model, in which each controlled object in the database must be labeled with a security level. Most of these methods can be costly and lengthy to perform and are typically specific to particular hardware and software configurations.
However, the international Common Criteria certification scheme provides the added benefit of a mutual recognition arrangement, thus avoiding the prospect of multiple certifications in different countries.

Conclusion

This article has considered some of the security principles that are associated with databases and how these apply in a web-based environment. It has also focused on important architecture and design principles. These principles have focused mainly on the prevention, assurance and recovery aspects, but other aspects, such as detection, are equally important in formulating a total information protection strategy. For example, host-based intrusion detection systems, as well as a robust and tested set of business recovery procedures, should be considered.

Any fit-for-purpose, secure e-business infrastructure should address all the above aspects: prevention, assurance, detection and recovery. Certain industries are now starting to specify their own set of global, secure e-business requirements. International card payment associations have recently started to require minimum information security standards from electronic commerce merchants handling credit card data, to help manage fraud losses and associated impacts such as brand-image damage and loss of consumer confidence.
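As a closing illustration of the least-privilege, role-management, and separation-of-duties principles discussed in the design section, a minimal SQL sketch (the role, user, and table names are illustrative assumptions):

-- A role carrying only the privileges order clerks need.
CREATE ROLE ORDER_CLERK;
GRANT SELECT, INSERT ON ORDERS TO ORDER_CLERK;
GRANT SELECT ON CUSTOMERS TO ORDER_CLERK;   -- read-only: no UPDATE or DELETE

-- Separation of duties: auditors may read orders but not create them.
CREATE ROLE ORDER_AUDITOR;
GRANT SELECT ON ORDERS TO ORDER_AUDITOR;

-- Users receive roles, not direct table privileges.
GRANT ORDER_CLERK TO ALICE;
GRANT ORDER_AUDITOR TO BOB;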

Two Foreign Literature Translations on Databases for an Undergraduate Graduation Project


Original text: Structure of the Relational Database - from "Database System Concepts"

Part 1: Relational Databases

The relational model is the basis for any relational database management system (RDBMS). A relational model has three core components: a collection of objects or relations, operators that act on the objects or relations, and data integrity methods. In other words, it has a place to store the data, a way to create and retrieve the data, and a way to make sure that the data is logically consistent.

A relational database uses relations, or two-dimensional tables, to store the information needed to support a business. Let's go over the basic components of a traditional relational database system and look at how a relational database is designed. Once you have a solid understanding of what rows, columns, tables, and relationships are, you'll be well on your way to leveraging the power of a relational database.

Tables, Rows, and Columns

A table in a relational database, alternatively known as a relation, is a two-dimensional structure used to hold related information. A database consists of one or more related tables.

Note: Don't confuse a relation with relationships. A relation is essentially a table, and a relationship is a way to correlate, join, or associate two tables.

A row in a table is a collection or instance of one thing, such as one employee or one line item on an invoice. A column contains all the information of a single type, and the piece of data at the intersection of a row and a column, a field, is the smallest piece of information that can be retrieved with the database's query language. For example, a table with information about employees might have a column called LAST_NAME that contains all of the employees' last names. Data is retrieved from a table by filtering on both the row and the column.

Primary Keys, Datatypes, and Foreign Keys

The examples throughout this article will focus on the hypothetical work of Scott Smith, database developer and entrepreneur. He just started a new widget company and wants to implement a few of the basic business functions using the relational database to manage his Human Resources (HR) department.

Relation: A two-dimensional structure used to hold related information, also known as a table.

Note: Most of Scott's employees were hired away from one of his previous employers, some of whom have over 20 years of experience in the field. As a hiring incentive, Scott has agreed to keep the new employees' original hire date in the new database.

Row: A group of one or more data elements in a database table that describes a person, place, or thing.

Column: The component of a database table that contains all of the data of the same name and type across all rows.

You'll learn about database design in the following sections, but let's assume for the moment that the majority of the database design is completed and some tables need to be implemented. Scott creates the EMP table to hold the basic employee information, and it looks something like this:

[Table not reproduced in this excerpt: EMP, whose columns discussed below include EMPNO, MGR, COMM, and DEPTNO.]

Notice that some fields in the Commission (COMM) and Manager (MGR) columns do not contain a value; they are blank. A relational database can enforce the rule that fields in a column may or may not be empty. In this case, it makes sense for an employee who is not in the Sales department to have a blank Commission field.
It also makes sense for the president of the company to have a blank Manager field, since that employee doesn't report to anyone.

Field: The smallest piece of information that can be retrieved by the database query language. A field is found at the intersection of a row and a column in a database table.

On the other hand, none of the fields in the Employee Number (EMPNO) column are blank. The company always wants to assign an employee number to an employee, and that number must be different for each employee. One of the features of a relational database is that it can ensure that a value is entered into this column and that it is unique. The EMPNO column, in this case, is the primary key of the table.

Primary Key: A column (or columns) in a table that makes the row in the table distinguishable from every other row in the same table.

Notice the different datatypes that are stored in the EMP table: numeric values, character or alphabetic values, and date values.

As you might suspect, the DEPTNO column contains the department number for the employee. But how do you know what department name is associated with what number? Scott created the DEPT table to hold the descriptions for the department codes in the EMP table.

The DEPTNO column in the EMP table contains the same values as the DEPTNO column in the DEPT table. In this case, the DEPTNO column in the EMP table is considered a foreign key to the same column in the DEPT table.

A foreign key enforces the concept of referential integrity in a relational database. The concept of referential integrity not only prevents an invalid department number from being inserted into the EMP table, but it also prevents a row in the DEPT table from being deleted if there are employees still assigned to that department (a DDL sketch of these constraints appears at the end of this article).

Foreign Key: A column (or columns) in a table that draws its values from a primary or unique key column in another table. A foreign key assists in ensuring the data integrity of a table.

Referential Integrity: A method employed by a relational database system that enforces one-to-many relationships between tables.

Data Modeling

Before Scott created the actual tables in the database, he went through a design process known as data modeling. In this process, the developer conceptualizes and documents all the tables for the database. One of the common methods for modeling a database is called ERA, which stands for entities, relationships, and attributes. The database designer uses an application that can maintain entities, their attributes, and their relationships. In general, an entity corresponds to a table in the database, and the attributes of the entity correspond to columns of the table.

Data Modeling: A process of defining the entities, attributes, and relationships between the entities in preparation for creating the physical database.

The data-modeling process involves defining the entities, defining the relationships between those entities, and then defining the attributes for each of the entities. Once a cycle is complete, it is repeated as many times as necessary to ensure that the designer is capturing what is important enough to go into the database. Let's take a closer look at each step in the data-modeling process.

Defining the Entities

First, the designer identifies all of the entities within the scope of the database application. The entities are the persons, places, or things that are important to the organization and need to be tracked in the database. Entities will most likely translate neatly to database tables.
For example, for the first version of Scott's widget company database, he identifies four entities: employees, departments, salary grades, and bonuses. These will become the EMP, DEPT, SALGRADE, and BONUS tables.

Defining the Relationships Between Entities

Once the entities are defined, the designer can proceed with defining how each of the entities is related. Often, the designer will pair each entity with every other entity and ask, "Is there a relationship between these two entities?" Some relationships are obvious; some are not.

In the widget company database, there is most likely a relationship between EMP and DEPT, but depending on the business rules, it is unlikely that the DEPT and SALGRADE entities are related. If the business rules were to restrict certain salary grades to certain departments, there would most likely be a new entity that defines the relationship between salary grades and departments. This entity would be known as an associative or intersection table and would contain the valid combinations of salary grades and departments.

Associative Table: A database table that stores the valid combinations of rows from two other tables and usually enforces a business rule. An associative table resolves a many-to-many relationship.

In general, there are three types of relationships in a relational database:

One-to-many. The most common type of relationship is one-to-many. This means that for each occurrence in a given entity, the parent entity, there may be one or more occurrences in a second entity, the child entity, to which it is related. For example, in the widget company database, the DEPT entity is a parent entity, and for each department, there could be one or more employees associated with that department. The relationship between DEPT and EMP is one-to-many.

One-to-one. In a one-to-one relationship, a row in a table is related to only one or none of the rows in a second table. This relationship type is often used for subtyping. For example, an EMPLOYEE table may hold the information common to all employees, while the FULLTIME, PARTTIME, and CONTRACTOR tables hold information unique to full-time employees, part-time employees, and contractors, respectively. These entities would be considered subtypes of an EMPLOYEE and maintain a one-to-one relationship with the EMPLOYEE table. These relationships are not as common as one-to-many relationships, because if one entity has an occurrence for a corresponding row in another entity, in most cases, the attributes from both entities should be in a single entity.

Many-to-many. In a many-to-many relationship, one row of a table may be related to many rows of another table, and vice versa. Usually, when this relationship is implemented in the database, a third entity is defined as an intersection table to contain the associations between the two entities in the relationship. For example, in a database used for school class enrollment, the STUDENT table has a many-to-many relationship with the CLASS table - one student may take one or more classes, and a given class may have one or more students. The intersection table STUDENT_CLASS would contain the combinations of STUDENT and CLASS to track which students are in which classes (see the DDL sketch at the end of this article).

Once the designer has defined the entity relationships, the next step is to assign the attributes to each entity.
This is physically implemented using columns, as shown here for the SALGRADE table as derived from the salary grade entity.

[Table not reproduced in this excerpt: SALGRADE.]

After the entities, relationships, and attributes have been defined, the designer may iterate the data modeling many more times. When reviewing relationships, new entities may be discovered. For example, when discussing the widget inventory table and its relationship to a customer order, the need for a shipping restrictions table may arise.

Once the design process is complete, the physical database tables may be created. Logical database design sessions should not involve physical implementation issues, but once the design has gone through an iteration or two, it's the DBA's job to bring the designers "down to earth." As a result, the design may need to be revisited to balance the ideal database implementation versus the realities of budgets and schedules.
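To tie the article's pieces together, here is a minimal DDL sketch of the constraints discussed: primary keys, a foreign key enforcing referential integrity, and an intersection table resolving a many-to-many relationship. The column names follow the text; the datatypes and the STUDENT and CLASS tables are illustrative assumptions:

CREATE TABLE DEPT (
    DEPTNO  INTEGER     PRIMARY KEY,
    DNAME   VARCHAR(30) NOT NULL
);

CREATE TABLE EMP (
    EMPNO   INTEGER     PRIMARY KEY,          -- unique and never blank
    MGR     INTEGER,                          -- blank (NULL) for the president
    COMM    NUMERIC(9,2),                     -- blank for non-Sales employees
    DEPTNO  INTEGER REFERENCES DEPT (DEPTNO)  -- foreign key: referential integrity
);

-- Intersection table for the many-to-many STUDENT/CLASS relationship,
-- assuming STUDENT and CLASS tables exist with the referenced primary keys.
CREATE TABLE STUDENT_CLASS (
    STUDENT_ID INTEGER REFERENCES STUDENT (STUDENT_ID),
    CLASS_ID   INTEGER REFERENCES CLASS (CLASS_ID),
    PRIMARY KEY (STUDENT_ID, CLASS_ID)
);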

Foreign References on Databases, with Translation


SQL ALL-IN-ONE DESK REFERENCE FOR DUMMIES

Data Files and Databases

I. Irreducible complexity

Any software system that performs a useful function is going to be complex. The more valuable the function, the more complex its implementation will be. Regardless of how the data is stored, the complexity remains. The only question is where that complexity resides. Any non-trivial computer application has two major components: the program and the data. Although an application's level of complexity depends on the task to be performed, developers have some control over the location of that complexity. The complexity may reside primarily in the program part of the overall system, or it may reside in the data part.

Operations on the data can be fast. Because the program interacts directly with the data, with no DBMS in the middle, well-designed applications can run as fast as the hardware permits. What could be better? A data organization that minimizes storage requirements and at the same time maximizes speed of operation seems like the best of all possible worlds. But wait a minute. Flat file systems came into use in the 1940s. We have known about them for a long time, and yet today they have been almost entirely replaced by database systems. What's up with that? Perhaps it is the not-so-beneficial consequences…
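As a sketch of where the complexity goes when a DBMS is present: the engine absorbs the searching, sorting, and file-handling work that a flat-file program would have to code by hand, leaving the application a single declarative statement (the table and column names below are illustrative assumptions):

-- One line of intent; the DBMS handles storage layout, scanning, and sorting.
SELECT CUSTOMER_NAME, TOTAL
FROM INVOICES
WHERE INVOICE_DATE >= DATE '2024-01-01'
ORDER BY TOTAL DESC;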

Big Data: Foreign Literature Translation


Foreign Literature Translation on Big Data (the document contains the English original and a Chinese translation)

Original text: What is Data Mining?

Many people treat data mining as a synonym for another popularly used term, "Knowledge Discovery in Databases", or KDD. Alternatively, others view data mining as simply an essential step in the process of knowledge discovery in databases. Knowledge discovery consists of an iterative sequence of the following steps:
· data cleaning: to remove noise or irrelevant data,
· data integration: where multiple data sources may be combined,
· data selection: where data relevant to the analysis task are retrieved from the database,
· data transformation: where data are transformed or consolidated into forms appropriate for mining by performing summary or aggregation operations, for instance,
· data mining: an essential process where intelligent methods are applied in order to extract data patterns,
· pattern evaluation: to identify the truly interesting patterns representing knowledge based on some interestingness measures, and
· knowledge presentation: where visualization and knowledge representation techniques are used to present the mined knowledge to the user.

The data mining step may interact with the user or a knowledge base. The interesting patterns are presented to the user, and may be stored as new knowledge in the knowledge base. Note that according to this view, data mining is only one step in the entire process, albeit an essential one since it uncovers hidden patterns for evaluation.

We agree that data mining is a knowledge discovery process. However, in industry, in media, and in the database research milieu, the term "data mining" is becoming more popular than the longer term of "knowledge discovery in databases". Therefore, in this book, we choose to use the term "data mining". We adopt a broad view of data mining functionality: data mining is the process of discovering interesting knowledge from large amounts of data stored either in databases, data warehouses, or other information repositories.

Based on this view, the architecture of a typical data mining system may have the following major components:

1. Database, data warehouse, or other information repository. This is one or a set of databases, data warehouses, spreadsheets, or other kinds of information repositories. Data cleaning and data integration techniques may be performed on the data.

2. Database or data warehouse server. The database or data warehouse server is responsible for fetching the relevant data, based on the user's data mining request.

3. Knowledge base. This is the domain knowledge that is used to guide the search, or evaluate the interestingness of resulting patterns. Such knowledge can include concept hierarchies, used to organize attributes or attribute values into different levels of abstraction. Knowledge such as user beliefs, which can be used to assess a pattern's interestingness based on its unexpectedness, may also be included. Other examples of domain knowledge are additional interestingness constraints or thresholds, and metadata (e.g., describing data from multiple heterogeneous sources).

4. Data mining engine. This is essential to the data mining system and ideally consists of a set of functional modules for tasks such as characterization, association analysis, classification, evolution and deviation analysis.

5. Pattern evaluation module. This component typically employs interestingness measures and interacts with the data mining modules so as to focus the search towards interesting patterns. It may access interestingness thresholds stored in the knowledge base.
Alternatively, the pattern evaluation module may be integrated with the mining module, depending on the implementation of the data mining method used. For efficient data mining, it is highly recommended to push the evaluation of pattern interestingness as deep as possible into the mining process so as to confine the search to only the interesting patterns.

6. Graphical user interface. This module communicates between users and the data mining system, allowing the user to interact with the system by specifying a data mining query or task, providing information to help focus the search, and performing exploratory data mining based on the intermediate data mining results. In addition, this component allows the user to browse database and data warehouse schemas or data structures, evaluate mined patterns, and visualize the patterns in different forms.

From a data warehouse perspective, data mining can be viewed as an advanced stage of on-line analytical processing (OLAP). However, data mining goes far beyond the narrow scope of summarization-style analytical processing of data warehouse systems by incorporating more advanced techniques for data understanding (a small SQL illustration of this contrast appears at the end of this excerpt).

While there may be many "data mining systems" on the market, not all of them can perform true data mining. A data analysis system that does not handle large amounts of data can at most be categorized as a machine learning system, a statistical data analysis tool, or an experimental system prototype. A system that can only perform data or information retrieval, including finding aggregate values, or that performs deductive query answering in large databases should be more appropriately categorized as either a database system, an information retrieval system, or a deductive database system.

Data mining involves an integration of techniques from multiple disciplines such as database technology, statistics, machine learning, high performance computing, pattern recognition, neural networks, data visualization, information retrieval, image and signal processing, and spatial data analysis. We adopt a database perspective in our presentation of data mining in this book. That is, emphasis is placed on efficient and scalable data mining techniques for large databases. By performing data mining, interesting knowledge, regularities, or high-level information can be extracted from databases and viewed or browsed from different angles. The discovered knowledge can be applied to decision making, process control, information management, query processing, and so on. Therefore, data mining is considered one of the most important frontiers in database systems and one of the most promising new database applications in the information industry.

A classification of data mining systems

Data mining is an interdisciplinary field, the confluence of a set of disciplines, including database systems, statistics, machine learning, visualization, and information science. Moreover, depending on the data mining approach used, techniques from other disciplines may be applied, such as neural networks, fuzzy and/or rough set theory, knowledge representation, inductive logic programming, or high performance computing.
Depending on the kinds of data to be mined or on the given data mining application, the data mining system may also integrate techniques from spatial data analysis, information retrieval, pattern recognition, image analysis, signal processing, computer graphics, Web technology, economics, or psychology.

Because of the diversity of disciplines contributing to data mining, data mining research is expected to generate a large variety of data mining systems. Therefore, it is necessary to provide a clear classification of data mining systems. Such a classification may help potential users distinguish data mining systems and identify those that best match their needs. Data mining systems can be categorized according to various criteria, as follows.

1) Classification according to the kinds of databases mined. A data mining system can be classified according to the kinds of databases mined. Database systems themselves can be classified according to different criteria (such as data models, or the types of data or applications involved), each of which may require its own data mining technique. Data mining systems can therefore be classified accordingly. For instance, if classifying according to data models, we may have a relational, transactional, object-oriented, object-relational, or data warehouse mining system. If classifying according to the special types of data handled, we may have a spatial, time-series, text, or multimedia data mining system, or a World-Wide Web mining system. Other system types include heterogeneous data mining systems and legacy data mining systems.

2) Classification according to the kinds of knowledge mined. Data mining systems can be categorized according to the kinds of knowledge they mine, i.e., based on data mining functionalities such as characterization, discrimination, association, classification, clustering, trend and evolution analysis, deviation analysis, and similarity analysis. A comprehensive data mining system usually provides multiple and/or integrated data mining functionalities. Moreover, data mining systems can also be distinguished based on the granularity or levels of abstraction of the knowledge mined, including generalized knowledge (at a high level of abstraction), primitive-level knowledge (at a raw data level), or knowledge at multiple levels (considering several levels of abstraction). An advanced data mining system should facilitate the discovery of knowledge at multiple levels of abstraction.

3) Classification according to the kinds of techniques utilized. Data mining systems can also be categorized according to the underlying data mining techniques employed. These techniques can be described according to the degree of user interaction involved (e.g., autonomous systems, interactive exploratory systems, query-driven systems), or the methods of data analysis employed (e.g., database-oriented or data warehouse-oriented techniques, machine learning, statistics, visualization, pattern recognition, neural networks, and so on). A sophisticated data mining system will often adopt multiple data mining techniques or work out an effective, integrated technique that combines the merits of a few individual approaches.
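To illustrate the earlier contrast with OLAP: summarization-style analytical processing is the kind of query sketched below, whereas a data mining system searches for patterns that no fixed query states in advance (the table and column names are illustrative assumptions):

-- Summarization-style OLAP: aggregate known facts along known dimensions.
SELECT REGION, PRODUCT, SUM(SALES_AMOUNT) AS TOTAL_SALES
FROM SALES_FACT
GROUP BY REGION, PRODUCT;

-- A data mining system goes further, e.g. discovering that customers who buy
-- product A in region X tend to buy product B within 30 days - an association
-- no predefined GROUP BY query would surface on its own.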

Information System and Database Development: Foreign Literature Translation (English-Chinese)


Chinese-English Translation of Foreign Literature (the document contains the English original and a Chinese translation)

Information System Development and Database Development

In many organizations, database development begins with enterprise data modeling, which determines the scope and general content of organizational databases. This step usually occurs during an organization's information system planning process; its aim is to help the organization create an overall description of its data, not to design a specific database. A specific database provides the data for one or more information systems, whereas an enterprise data model, which may cover many databases, describes the scope of the data maintained by the organization. During enterprise data modeling, you review current systems, analyze the nature of the business areas to be supported, describe the needed data at a high level of abstraction, and plan one or more database development projects. Figure 1 shows part of the enterprise data model of Pine Valley Furniture Company.

1.1 Information System Architecture

A high-level data model is only one part of a general information system architecture (ISA), or blueprint, for an organization's information systems. During information system planning, you can build an enterprise data model as part of the whole information system architecture. According to the views of Zachman (1987) and Sowa and Zachman (1992), an information system architecture consists of the following six key components:
● Data.
● Processes that manipulate data (which can be shown with data flow diagrams, object-model methods, or other notations).
● Networks that transmit data within the organization and between the organization and its main business partners (which can be shown with network topology diagrams).
● People who perform data processing and are the sources and receivers of data and information (shown in process models as the senders and receivers of data).
● Events and points in time at which processes are performed (which can be shown with state-transition diagrams and other means).
● Reasons for events and rules that govern data processing (often shown in text form, though some planning tools, such as decision tables, provide charts).

1.2 Information Engineering

Information system planners develop an information system architecture by following a specific information system planning methodology. Information engineering is a popular and formal methodology: a data-oriented approach to creating and maintaining information systems. Because information engineering is data-oriented, a concise explanation of it is very helpful when you begin to understand how databases are identified and defined. Information engineering follows a top-down planning approach, in which specific information systems are derived from a broad understanding of information needs (for example, we need data about customers, products, suppliers, sales, and processing centers), rather than from merging many detailed information requests (such as an order-entry screen or a report summarizing sales by geography).
Top-down planning enables developers to plan more comprehensive information systems, to consider how system components fit together in an integrated fashion, to improve their understanding of the relationship between information systems and business objectives, and to deepen their understanding of the impact of information systems throughout the organization.

Information engineering includes four steps: planning, analysis, design, and implementation. The planning phase of information engineering produces the information system architecture, including an enterprise data model.

1.3 Information System Planning

The objective of information system planning is to align information technology closely with the organization's business strategy; this alignment is very important for getting the most benefit from investments in information systems and technology. As the table describes, the planning phase of the information engineering approach includes three steps, which we discuss in the following three sections.

1. Identify key planning factors

The key planning factors are organizational objectives, critical success factors, and problem areas. Identifying these factors establishes the purpose and environment of planning and links information system planning to strategic business planning. Table 2 shows several possible key planning factors for Pine Valley Furniture; these factors help the information systems manager set top priorities among demands for new information systems and databases. For example, given the problem area of imprecise sales forecasts, the information systems manager may place additional historical sales data, new market research data, and new product test data in the organization's databases.

2. Identify planning objects

Planning objects define the scope of the business, and the business scope limits the subsequent analysis and the places where information systems may change. Five key planning objects are as follows:
● Organizational units - the various departments of the organization.
● Organizational locations - the places where the organization conducts business operations.
● Business functions - related groups of business processes that support the organization's mission. Note that business functions differ from organizational units; in fact, a function may be assigned to more than one unit (for example, product development may be the joint responsibility of the sales and production departments).
● Entity types - the major categories of data about the people, places, and things managed by the organization.
● Information systems - the application software and supporting procedures for handling sets of data.

3. Develop an enterprise model

A comprehensive enterprise model includes a functional decomposition model of each business function, an enterprise data model, and various planning matrices. Functional decomposition is the process of breaking down the functions of an organization into progressively greater detail; it is the classical approach of simplifying a problem by dividing it into component parts. An example of functional decomposition for Pine Valley Furniture appears in Figure 2 below. Because multiple databases are needed to deal with the full set of business functions and support functions, a specific database will likely support only a subset of those functions (as suggested by Figure 2).
To reduce data redundancy and to make the data more meaningful, a complete, high-level view of the business is very helpful. A specific notation is used to describe the enterprise data model. Besides a graphical description of the entity types, a complete enterprise data model should also include a description of each entity type and a summary of the business rules that govern business operations. Business rules determine the validity of the data.

An enterprise data model includes not only entity types but also the relationships between data entities, as well as the relationships between the various planning objects. A common way to show the relationships between planning objects is a matrix. Planning matrices serve an important function because they can describe business requirements clearly without explicitly modeling the database. Planning matrices are usually derived from business rules; they help sort development activities by priority and order them in a top-down, enterprise-wide view. Many types of planning matrices are available; common ones include:

● Locations-to-functions: shows at which business locations each business function is performed.
● Units-to-functions: shows which business unit is responsible for performing each business function.
● Information-systems-to-data-entities: explains how each information system interacts with each data entity (for example, whether each system creates, retrieves, updates, and deletes the data in each entity).
● Functions-to-data-entities: shows which data entities are captured, used, updated, and deleted by each function.
● Information-systems-to-objectives: indicates which business objectives each information system supports.

A functions-to-data-entities matrix can be used for a variety of purposes, including the following three:

1) Identify gaps: indicate entity types that are not used by any function, and functions that do not use any entity.
2) Discover missing entities: the staff involved in each function can examine the matrix to identify any entities that may have been missed.
3) Prioritize development activities: if a function has top priority for system development (perhaps because it is related to important organizational objectives), then the entities used by that function have a high priority in database development. Hoffer, George, and Valacich (2002) describe more completely how planning matrices are used and completed in information engineering and systems planning.

2 The Database Development Process

Information systems planning based on information engineering is one source of database development projects. These new database development projects are usually undertaken to meet the strategic needs of the organization, such as improving customer support, improving product and inventory management, or producing more accurate sales forecasts. Many other database development projects, however, emerge bottom-up: an information system user needs specific information to complete his or her work and therefore initiates a project request, or information systems experts find that the organization needs improved data management and begin a new project.
Even in bottom-up circumstances, establishing an enterprise data model is still necessary in order to understand whether existing databases can provide the required data; if they cannot, new databases, data entities, and attributes can be added to the organization's current data resources. Whether driven by strategic needs or by operational information needs, each database development project normally concentrates on one database. Some projects concentrate only on defining, designing, and implementing a database as a foundation for subsequent information system development. In most cases, however, the database and the associated information processing functions are developed as parts of a complete information systems development project.

2.1 The Systems Development Life Cycle

The traditional process that guides the development of management information system projects is the systems development life cycle (SDLC). The systems development life cycle is a detailed description, shared by an organization's database designers, programmers, and other information system experts, of the steps for specifying, developing, maintaining, and replacing an entire information system. The process is compared to a waterfall because each step flows into the adjacent next one: the information system is specified piece by piece, and the output of each piece is the input to the next. As the figure shows, however, the steps are not purely linear: they overlap in time (so parallel steps can be managed), and when earlier decisions must be reconsidered, the process can roll back several steps. (So water can be put back at the top of the waterfall!)

Figure 4 gives concise notes on the purpose and deliverable products of each phase of the systems development life cycle. Every phase includes database development activities, so database management concerns run through the entire development process. In Figure 5 we repeat the seven phases of the systems development life cycle and outline the common database development activities in each phase. Note that there is no strict one-to-one correspondence between the SDLC phases and the database development steps; conceptual data modeling, for example, spans two SDLC phases.

1. Enterprise Modeling

The database development process starts with enterprise modeling (part of the project identification and selection phase of the systems development life cycle), which sets the scope and general content of the organization's databases. Enterprise modeling occurs during information systems planning and other activities that determine which parts of the information systems need to change and be strengthened, and that outline the overall scope of the organization's data. In this step, the current databases and information systems are reviewed, the nature of the business area that is the subject of the development project is analyzed, and the data needed by each prospective development project is described in very general terms. A project proceeds to the next step only if it is expected to achieve the organization's goals.

2. Conceptual Data Modeling

Once an information system project has begun, the conceptual data modeling phase identifies all of the data the information system needs. It is divided into two stages. First, during the project planning stage, a model similar to the one in Figure 1 is built.
At the same time, other documents are drawn up that outline the scope of the data required by the specific development project, without considering the existing databases. At this stage the model includes only high-level categories of data (entities) and the main relationships. Then, in the analysis phase of the systems development life cycle, a fully detailed data model must be produced that defines all of the organizational data to be managed by the information system: it defines every data attribute, lists all categories of data, represents every business relationship between data entities, and specifies the data integrity rules in full. In the analysis phase, the conceptual data model (also called the conceptual schema) is also checked for consistency with the other categories of models used to explain other aspects of the target information system, such as processing steps, rules for handling data, and the timing of events. Even such a detailed conceptual data model is only preliminary, however, because subsequent activities in the information system life cycle, such as designing transactions, reports, displays, and queries, may uncover missing elements or mistakes. Conceptual data modeling is therefore said to proceed in a top-down manner: it is driven by a general understanding of the business area, not by specific information processing activities.

3. Logical Database Design

Logical database design approaches database development from two perspectives. First, the conceptual data model is transformed into a notation based on relational database theory: relations. Then, as the information system is designed, every computer program (including its input and output formats) and every database-supported transaction, report, and query is examined in detail. In this so-called bottom-up analysis, exactly which data the database must maintain, and the nature of the data needed by each transaction, report, and so on, are verified.

The analysis of each individual report, transaction, and so on must consider a specific, limited, but complete view of the database. The analysis of reports, transactions, and the like may suggest changes to the conceptual data model. Especially on large projects, different analysts and systems development teams can work independently on different programs or program sets, and the details of their work may not be revealed until the logical design stage. In those circumstances, the logical database design stage must merge the original conceptual data model and these independent user views into a comprehensive design. During logical design, additional information processing requirements may also be identified; these new requirements must then be integrated into the logical database design established earlier.

The final step of logical database design is to transform the combined and reconciled data specifications into basic, atomic elements, following well-established rules for well-structured data specifications. For most databases today, these rules come from relational database theory, and the process is known as normalization. The result of this step is a complete description of the database that does not refer to any particular database management system. With logical database design completed, the detailed specification of the logic of the computer programs that maintain, report on, and query the contents of the database can begin.
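To make the normalization step concrete, here is a minimal SQL sketch (the table and column names are hypothetical, invented for illustration). A flattened order table that repeats customer details in every row violates the rules for well-structured relations; normalization splits it into two tables so that each fact is stored once:

-- Unnormalized: customer name and city depend only on cust_id, so they
-- are repeated in every order row and can drift out of sync.
CREATE TABLE OrderFlat (
    order_id    INT         NOT NULL PRIMARY KEY,
    order_date  DATETIME    NOT NULL,
    cust_id     INT         NOT NULL,
    cust_name   VARCHAR(50) NOT NULL,
    cust_city   VARCHAR(50) NOT NULL
);

-- Normalized: attributes that depend only on cust_id move to their own
-- table, and orders reference the customer through a foreign key.
CREATE TABLE Customer (
    cust_id    INT         NOT NULL PRIMARY KEY,
    cust_name  VARCHAR(50) NOT NULL,
    cust_city  VARCHAR(50) NOT NULL
);

CREATE TABLE CustOrder (
    order_id    INT      NOT NULL PRIMARY KEY,
    order_date  DATETIME NOT NULL,
    cust_id     INT      NOT NULL REFERENCES Customer(cust_id)
);

After the split, a customer's details are updated in exactly one row, which is the practical benefit the normalization rules aim at.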
4. Physical Database Design and Definition

The physical database design and definition phase decides how the database is organized in computer storage (usually disk), defines the physical structure in terms of the facilities of the database management system, outlines the processing programs, and produces the desired management information and decision-support reports. The objective of this phase is a design that processes all data effectively and securely, so physical database design is closely integrated with the design of the other physical aspects of the information system: programs, computer hardware, operating systems, and data communications networks.

5. Database Implementation

In the database implementation phase, the programs that process the database are written, tested, and installed. Designers can use standard programming languages (such as COBOL, C, or Visual Basic), dedicated database processing languages (such as SQL), or non-procedural languages to produce fixed-format reports, displayed results, and possibly charts. During implementation, all database files are completed, users are trained, and the setup procedures of the information system (and database) are put in place. The final step is to load the data from existing information sources (documents, the files and databases of legacy applications, and the new data now needed). Loading is often performed by first transferring data from existing files and databases into an intermediate format (such as binary or text files) and then loading the intermediate data into the new database. Finally, the database and its related applications are put into operation for actual users to maintain and retrieve data. During operation, the database is backed up regularly and is restored whenever it is damaged or otherwise affected.

6. Database Maintenance

The database evolves during database maintenance. In this step, the database structure is extended, reduced, or changed in order to meet changing business conditions, to correct errors in the database design, or to increase the processing speed of database applications. The database may also have to be rebuilt when a program or computer failure affects or damages it. This is usually the longest step of the database development process, because it lasts throughout the life of the database and its associated applications. Each change to the database can be viewed as an abbreviated database development process in which conceptual data modeling, logical and physical database design, and database implementation take place in order to deal with the change.

2.2 Developing Information Systems by Other Means

The systems development life cycle, or slight variants of it, is often used to guide the development of information systems and databases. The life cycle methodology is a highly structured approach: it includes many checks and balances to ensure that every step produces accurate results and that the new or replacement information system is consistent with the existing systems with which it must communicate or share data definitions. The systems development life cycle has been criticized for the long time needed before a working system is produced, because a working system appears only at the end of the whole process. More and more organizations therefore now use rapid application development methods: rapid, iterative processes in which the analysis, design, and implementation steps are repeated until they converge on the system the users want.
Rapid application development methods work best when the database already exists and the system enhancements mainly involve applications that retrieve data, rather than applications that generate and modify the database.

The most widely used rapid application development method is prototyping. Prototyping is an iterative process of systems development in which, through close cooperation between analysts and users, requirements are continually revised into a working system. Figure 6 shows the prototyping process; the notes in the diagram briefly describe the database development activities at each stage. Normally, when an information system problem is identified, only rough conceptual data modeling is attempted. During the development of the initial prototype, the displays and reports the user wants are designed, any new database requirements are identified, and a prototype database is defined. This is usually a new database that copies parts of existing systems, but it may also add some new content. When new content is needed, the elements usually come from external data sources such as market research data, general economic indicators, or industry standards.

The database implementation and maintenance activities are repeated as each new version of the prototype is produced. Usually only a minimal level of security and integrity control is in place, because the focus at this point is to produce a usable prototype version as quickly as possible. Project documentation is also deferred to the end and is used only for user training at delivery. Finally, once an acceptable prototype has been constructed, the developers and users make the final decision about whether to deliver the prototype and its database for use. If the system (including the database) is very inefficient, the system and database are reprogrammed and reorganized to achieve the desired performance.

With the growing popularity of visual programming tools (such as Visual Basic, Java, Visual C++, and fourth-generation languages), which make it easy to change the user interface of a system, prototyping is becoming the system development methodology of choice. With prototyping it is quite easy for customers to request changes to the content and layout of reports and displays. In the process, new database requirements are identified, so the existing database used for development has to be amended; a prototyping project may even need a new database. In such circumstances, as the requirements keep changing during the iterative development process and access to sample data is needed, the prototype database is built or rebuilt.

3 The Three-Schema Architecture of Database Development

The explanation of the database development process earlier in this article mentioned several different but related views, or models, of a database that are established during a system development project:

● the conceptual model (established in the analysis phase);
● the external models, or user views (established in the analysis and logical design phases);
● the physical model, or internal model (established in the physical design phase).

Figure 7 describes the relationships among these three database views. It is important to remember that they are views or models of the same organizational database.
In other words, each organization's database has one physical model, one conceptual model, and one or more user views. The three-schema architecture therefore defines the same set of data in different ways, observed from different perspectives.

The conceptual model is a technology-independent specification of the full database structure. The definition of the conceptual model does not say how the data of the database are stored in the computer's secondary memory. Usually the conceptual model is described in a graphical format such as entity-relationship (E-R) diagrams or object-modeling notation; we call this kind of conceptual model a data model. In addition, the specification of the conceptual model is stored as metadata in the database or in a data dictionary.

The physical model includes the specifications of how the data of the conceptual model are stored in computer memory. Defining the physical database (the physical schema) is as important to analysts and database designers as the conceptual model, because it provides the complete technical specification of how physical storage space is allocated and managed and how the data are accessed.

The division of a database into these three models is the basis of both database development and database technology. A given role on a database development project may involve working with only one of the three views. For example, a beginner may design the external models for one or more programs, while an experienced developer designs the physical model or the conceptual model. The database design issues at the different levels are quite different.

4 The Three-Tiered Database Location Architecture

Obviously, all the good things in databases come in threes!

When you design a database, you have to choose where to store the data; this choice is made during the physical database design stage. Databases can be divided into personal databases, workgroup databases, departmental databases, enterprise databases, and Internet databases. Personal databases are often designed and developed by the end users themselves, with database experts giving only training and advice; such a database contains only the data its end user is interested in. Sometimes a personal database is extracted from a workgroup or enterprise database; in those cases the extraction is performed by routines that experts prepare for regularly creating local databases. Workgroup and departmental databases are usually developed jointly by end users, business experts, and central database systems experts. The collaboration of all of these people is necessary because designing a database to be shared by many users requires weighing a large number of issues: processing speed, ease of use, differences in data definitions, and similar problems. Because of their broad impact and large scale, enterprise databases and Internet databases are normally developed by centralized database development teams of professionally trained database experts.

1. Client tier

Also known as the desktop or notebook tier, this tier manages the user interface and localized system data; Web scripting tasks can also be implemented in this tier.

2. Server / Web server tier

This tier handles the HTTP protocol and scripting tasks, performs computations, and provides data access; it is known as the processing services tier.
3. Enterprise server (minicomputer or mainframe) tier

This tier performs complex computations and manages the integration of data from multiple data sources across organizations; it is also known as the data services tier.

Within an organization, the tiered arrangement of databases and the information system architecture are related to the concepts of distributed computing and client/server architecture. A client/server architecture is based on a local area network environment and includes servers (called database servers or database engines) whose database software executes database commands sent from client workstations, while each client application concentrates on its user interface functions. In fact, the entire database (as well as the application routines that process it) can be distributed, as a distributed database or as separate but related physical databases, across local PC workstations, intermediate (workgroup or departmental) servers, and one central (departmental or enterprise) server. Simply stated, a client/server architecture is used because (see the sketch after this list):

● It can run the same application on multiple processors at the same time, improving application response time and data processing speed.
● It can use the best data processing features of each computer platform (such as the advanced user interfaces of PCs combined with the computing speed of minicomputers and mainframes).
● It can mix various client technologies (personal computers with Intel or Motorola processors, networked computers, information kiosks, and so on) while sharing common data. In addition, the technology at any tier can be changed with only a small influence on the modules of the other tiers.
● It can process data close to the data source, improving response time and reducing network traffic.
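As a concluding sketch of the three-schema idea described in section 3, the following SQL fragment (all object names are hypothetical) shows how one database can carry all three levels at once: a table for the conceptual level, a view as one external user view, and an index as a purely physical, internal-level decision:

-- Conceptual level: the Employee entity as the organization defines it.
CREATE TABLE Employee (
    emp_id    INT         NOT NULL PRIMARY KEY,
    emp_name  VARCHAR(50) NOT NULL,
    dept      VARCHAR(30) NOT NULL,
    salary    MONEY       NOT NULL
);
GO

-- External level: one user view; this audience sees salaries but not
-- the department column, and other user views can differ.
CREATE VIEW PayrollView AS
    SELECT emp_id, emp_name, salary
    FROM Employee;
GO

-- Internal (physical) level: a storage decision that affects only
-- performance, never the meaning of the data.
CREATE INDEX ix_employee_dept ON Employee (dept);

Changing the index, or adding another view, touches one level without disturbing the other two, which is exactly the independence the three-schema architecture is meant to provide.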

Foreign Literature on Data Analysis + Translation

Paper 1: "The Application of Data Analysis in Business Decision-Making"
This paper explores the importance and application of data analysis in business decision-making. The study finds that data analysis can yield accurate business intelligence, helping enterprises better understand market trends and consumer demand. By analyzing large volumes of data, enterprises can discover hidden patterns and correlations and thereby formulate more competitive product and service strategies. Data analysis can also provide decision support, helping enterprises make sound decisions in uncertain environments. Data analysis has therefore become one of the key elements of modern business success.

Paper 2: "The Application of Machine Learning in Data Analysis"
This paper explores the application of machine learning in data analysis. The study finds that machine learning can help enterprises analyze large volumes of data more efficiently and discover valuable information in them. Machine learning algorithms can learn and improve automatically, helping enterprises uncover patterns and trends in their data. Through the application of machine learning, enterprises can predict market demand more accurately, optimize business processes, and make more strategic decisions. The application of machine learning in data analysis is therefore gradually gaining the attention and adoption of enterprises.

Paper 3: "The Application of Data Visualization in Data Analysis"
This paper explores the importance and application of data visualization in data analysis. The study finds that data visualization can present complex data relationships and trends more intuitively. Visualization helps enterprises understand their data better and discover the patterns and regularities within it. Data visualization can also support interactive exploration of data and the sharing of decisions, improving the efficiency and accuracy of decision-making. Data visualization therefore plays a very important role in data analysis.

Translated titles:
Paper 1: The Application of Data Analysis in Business Decision-Making
Paper 2: The Application of Machine Learning in Data Analysis
Paper 3: The Application of Data Visualization in Data Analysis

Translated abstract: These papers examine the application of data analysis in business decision-making, as well as the roles of machine learning and data visualization in data analysis.

Database Foreign Literature Translation (English Literature): Database Security


Database Security

"Why do I need to secure my database server? No one can access it: it's in a DMZ protected by the firewall!" This is often the response when it is recommended that such devices be included within a security health check. In fact, database security is paramount in defending an organization's information, as the database may be indirectly exposed to a wider audience than realized.

This is the first of two articles that will examine database security. In this article we discuss general database security concepts and common problems. In the next article we will focus on specific Microsoft SQL and Oracle security concerns.

Database security has become a hot topic in recent times. With more and more people becoming increasingly concerned with computer security, we are finding that firewalls and Web servers are being secured more than ever (though this does not mean that there are not still a large number of insecure networks out there). As such, the focus is expanding to consider technologies such as databases with a more critical eye.

◆ Common sense security

Before we discuss the issues relating to database security, it is prudent to highlight the necessity of securing the underlying operating system and supporting technologies. It is not worth spending a lot of effort securing a database if a vanilla operating system fails to provide a secure basis for the hardening of the database. There are a large number of excellent documents in the public domain detailing the measures that should be employed when installing various operating systems.

One common problem is the existence of a database on the same server as a Web server hosting an Internet (or intranet) facing application. While this may save the cost of purchasing a separate server, it seriously affects the security of the solution. Where this is identified, it is often the case that the database is openly connected to the Internet. One recent example I can recall is an Apache Web server serving an organization's Internet offering, with an Oracle database available on the Internet on port 1521. When investigating this issue further, it was discovered that access to the Oracle server was not protected (there were not even passwords), which allowed the server to be stopped. The database was not required from an Internet-facing perspective, but the use of default settings and careless security measures rendered the server vulnerable.

The points mentioned above are not strictly database issues, and could also be classified as architectural and firewall protection issues, but ultimately it is the database that is compromised. Security considerations have to be made from all parts of a public-facing network; you cannot rely on someone or something else within your organization protecting your database from exposure.

◆ Attack tools are now available for exploiting weaknesses in SQL and Oracle

I came across one interesting aspect of database security recently while carrying out a security review for a client. We were performing a test against an intranet application that used a database back end (SQL) to store client details. The security review was proceeding well, with access controls based on Windows authentication: only authenticated Windows users were able to see data belonging to them. The application itself seemed to be handling input requests properly, rejecting all attempts to access the database directly.

We then happened to come across a backup of the application in the office in which we were working.
This media contained a backup of the SQL database, which we restored onto our laptop. The security controls that were in place originally were not restored with the database, and we were able to browse the complete database with no restrictions in place to protect the sensitive data. This may seem like a contrived way of compromising the security of the system, but it does highlight an important point: it is often not the direct approach that is taken to attack a target, yet ultimately the endpoint is the same, namely system compromise. A backup copy of the database may be stored on the server, and thus facilitate access to the data indirectly.

There is a simple solution to the problem identified above. SQL 2000 can be configured to use password protection for backups. If the backup is created with password protection, this password must be used when restoring the backup. This is an effective and uncomplicated method of stopping simple capture of backup data. It does, however, mean that the password must be remembered!

◆ Current trends

There are a number of current trends in IT security, several of which are linked to database security.

The focus on database security is now attracting the attention of the attackers. Attack tools are now available for exploiting weaknesses in SQL and Oracle. The emergence of these tools has raised the stakes, and we have seen focused attacks against specific database ports on servers exposed to the Internet.

One common theme running through the security industry is the focus on application security, and in particular on bespoke Web applications. With the functionality of Web applications becoming more and more complex, the potential grows for more security weaknesses in bespoke application code. To deliver that functionality, the back-end data stores are commonly used to format the content of Web pages, which requires more complex coding at the application end. With developers using different styles in code development, some of which are not as security conscious as others, this can be a source of exploitable errors.

SQL injection is one such hot topic within the IT security industry at the moment. Discussions are now commonplace in technical security forums, with more and more ways and means of exploiting databases coming to light all the time. "SQL injection" is a slightly misleading term, as the concept applies to other databases as well, including Oracle, DB2, and Sybase.

◆ What is SQL injection?

SQL injection is simply the method of communicating with a database using code or commands sent via a method or application not intended by the developer. The most common form of this is found in Web applications: any user input that is handled by the application is a common source of attack. One simple example of the mishandling of user input is highlighted in Figure 1.

Many of you will have seen this common error message when accessing Web sites; it often indicates that the user input has not been correctly handled. On getting this type of error, an attacker will focus in with more specific input strings.

The damage done by this type of vulnerability can be far reaching, though it depends on the level of privileges the application has in relation to the database. If the application accesses data with full administrator-type privileges, then maliciously run commands also pick up this level of access, and system compromise is inevitable.
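A minimal sketch of the vulnerability and one common fix, assuming a hypothetical Users table and search procedure (neither comes from the article): the first version splices the user's text into the SQL string, so crafted input changes the query itself; the second passes it as a typed parameter through sp_executesql, so it can never execute as SQL:

-- Vulnerable: input such as  ' OR 1=1 --  rewrites the query.
CREATE PROCEDURE FindUser_Unsafe @name VARCHAR(50) AS
    EXEC('SELECT user_id, user_name FROM Users WHERE user_name = ''' + @name + '''')
GO

-- Safer: the input travels as data, never as SQL text.
CREATE PROCEDURE FindUser_Safe @name NVARCHAR(50) AS
    EXEC sp_executesql
        N'SELECT user_id, user_name FROM Users WHERE user_name = @n',
        N'@n NVARCHAR(50)',
        @n = @name
GO

Parameterization controls what the input can do; the privileges the procedure runs under still determine how much damage a successful attack could cause.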
Again this issue is analogous to operating system security principles, where programs should be run with only the minimum of permissions required: if normal user access is acceptable, then apply that restriction.

The problem of SQL security is not totally a database issue. Specific database commands or requests should not be allowed to pass through the application layer, and this can be prevented by employing a "secure coding" approach. This is veering slightly off-topic, but it is worth detailing a few basic steps that should be employed.

The first step in securing any application should be the validation and control of user input. Strict typing should be used where possible to control specific data (e.g. where numeric data is expected), and where string-based data is required, specific non-alphanumeric characters should be prohibited where possible. Where this cannot be done, consideration should be given to substituting characters (for example single quotes, which are commonly used in SQL commands).

Specific security-related coding techniques should be added to the coding standards in use within your organization. If all developers work from the same baseline standards, with specific security measures, this will reduce the risk of SQL injection compromises.

Another simple method that can be employed is to remove from the database all procedures that are not required. This restricts the extent to which unwanted or superfluous aspects of the database could be maliciously used, and is analogous to removing unwanted services on an operating system, which is common security practice.

◆ Overall

In conclusion, most of the points made above are common sense security concepts and are not specific to databases. However, all of these points do apply to databases, and if these basic security measures are employed, the security of your database will be greatly improved.

The next article on database security will focus on specific SQL and Oracle security problems, with detailed examples and advice for DBAs and developers.

There are a lot of similarities between database security and general IT security, with generic, simple security steps and measures that can be (and should be) easily implemented to dramatically improve security. While these may seem like common sense, it is surprising how many times we have seen that common security measures are not implemented, causing a security exposure.

◆ User account and password security

One of the first basic principles of IT security is "make sure you have a good password". This statement assumes that a password is set in the first place, though this is often not the case.

I touched on common sense security in my last article, but I think it is important to highlight it again. As with operating systems, the focus of attention in database account security is on administration accounts: in SQL Server this is the SA account, and in Oracle it may be the SYSDBA or ORACLE account.

It is very common for SQL SA accounts to have a password of "SA", or even worse a blank password, which is just as common. This password laziness breaks the most basic security principles and should be stamped down on. Users would not be allowed to have a blank password on their own domain accounts, so why should valuable system resources such as databases be left unprotected? For instance, a blank SA password will enable any user with client software (i.e.
Microsoft Query Analyzer or Enterprise Manager) to "manage" the SQL Server and its databases. With databases being used as the back end to Web applications, this lack of password control can result in a total compromise of sensitive information. With system-level access to the database it is possible not only to execute queries against the database and to create, modify, and delete tables, but also to execute what are known as stored procedures.
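Two of the quick wins described above can be sketched in a few lines of SQL Server 2000-era T-SQL (the password values and the Sales database name are placeholders): give the SA login a real password, and password-protect backups so that a copied backup file cannot simply be restored elsewhere:

-- Replace a blank SA password (SQL Server 2000 syntax).
EXEC sp_password @old = NULL, @new = 'Str0ng!SA#Pass', @loginame = 'sa'

-- Create a password-protected backup; the same password is required
-- on RESTORE, so a stolen copy of the file is not enough by itself.
BACKUP DATABASE Sales
    TO DISK = 'D:\backups\sales.bak'
    WITH PASSWORD = 'Backup!Pass1'

RESTORE DATABASE Sales
    FROM DISK = 'D:\backups\sales.bak'
    WITH PASSWORD = 'Backup!Pass1'

Note that the backup password only gates the restore operation; it does not encrypt the contents of the backup file, so the file itself still needs to be protected.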

Computer / Database Foreign Literature Translation (Chinese-English)


Scientific and Technical Foreign Literature

Microsoft's Future "Soul": Exploring the Secrets of SQL Server 2005
Author: CHEN Bao-lin

A brief history of SQL Server

Before beginning this article, let us look at a brief history of Microsoft SQL Server.

1988: SQL Server is developed jointly by Microsoft and Sybase, running on the OS/2 platform.
1993-09-14: SQL Server 4.2, a desktop database system with limited functionality, integrates with Windows and provides an easy-to-use user interface.
1994: Microsoft and Sybase suspend their cooperation on database development.
1995: SQL Server 6.0, code-named "SQL95". Microsoft rewrites most of the core system, providing a low-cost database solution for small business applications.
1996-04-16: SQL Server 6.5. This version brings significant performance improvements and a wide variety of useful functions.
1998-11-16: SQL Server 7.0, code-named "Sphinx". The core database engine is completely rewritten, providing a database solution for small and medium business applications, with initial Web support. SQL Server has been widely used since this version.
2000-08-07: SQL Server 2000, code-named "Shiloh", is born. Microsoft positions the product as an enterprise-class database system comprising three components (DB, OLAP, English Query). Rich front-end tools, improved development tools, and XML support drive the promotion and application of this version. It includes the following editions:
Enterprise Edition: supports giant terabyte-class databases and thousands of concurrent online users through cluster deployment.
Standard Edition: supports small and medium enterprises.
Personal Edition: supports desktop applications.
Developer Edition: for developers building enterprise applications for the enterprise and Windows CE.
Windows CE Edition: can be used on any Windows CE mobile device.
2003-04-24: the 64-bit version of SQL Server 2000, code-named "Liberty", competes with Oracle on Unix/Linux.
2005-11-07: SQL Server 2005, code-named "Yukon", the latest version of Microsoft SQL Server. Microsoft describes this product, five years of major changes in the making, as a landmark release.

From Microsoft SQL Server 4.2 to 2005: since entering the database market in the early 1990s, Microsoft had grown, by the launch of SQL Server 2005, from a follower into a force restructuring and leading the enterprise database market. The sword was sharpened for more than ten years, through many a storm, and Microsoft has extended its perspective on enterprise database management into broader and deeper realms. This article attempts to explore that history and summarize how Microsoft SQL Server took shape.

In 1987 Sybase developed a version of SQL Server running on Unix systems. In 1988, Microsoft invited Sybase, then busy building momentum in the database field, to develop SQL Server jointly. Microsoft's intention to enter the database market was plain for all to see, and the database market was bound to be whipped into action. Sure enough, the following ten years were an intensely contested "Warring States" period for the database market. On 1993-04-12, Microsoft released SQL Server version 4.2. Echoing the earlier introduction of Windows NT, this marked Microsoft's formal entry into the enterprise applications market, in which the database is the most important application. Although SQL Server 4.2 was still only a desktop version, it already showed considerable potential.
In 1994, Microsoft and Sybase formally suspended their database development cooperation, a meaningful event. From 1995 to 2000, Microsoft released four versions: 6.0, 6.5, 7.0, and 2000. Functionally, SQL Server 2000 was able to provide the following services.

Online services (on-line services): "on-line" refers to data services that users access in real time.
On-line transaction processing (OLTP): OLTP services process operations as orderly transactions, which follow the principle of either completing in full or being undone in full; services that are not of this type are also included. This is the most universal and most widely used form of service in the industry.
On-line analytical processing (OLAP): OLAP is a kind of multidimensional data presentation (such as data warehouses, data marts, and data cubes), usually used for data mining. Whereas OLTP uses SQL to operate on and define data, OLAP accesses and defines data with MDX (MultiDimensional Expressions).

From the point of view of technical structure, SQL Server 2000 looks as follows.

Data structure
• Physical structure of the data.
• Logical structure: how tables, rows, columns, and other data objects are defined.

Data processing
• Storage engine: responsible for how data is stored.
• Relational engine: responsible for how data is accessed and related.
• SQL Server Agent: responsible for task scheduling and event management.

Data manipulation
• DB APIs: ADO (ActiveX Data Objects), OLE DB (object linking and embedding for data), DB-Library for C++, ODBC (Open Database Connectivity), ESQL (Embedded SQL).
• URLs (uniform resource locator addresses).
• English Query.
• SQL Server Enterprise Manager.

Tools: Query Analyzer, DTS (Data Transformation Services), backup, restore and replication, metadata services, extended stored procedures, and SQL tracing, which can be used for performance tuning.

From the user's point of view, SQL Server 2000 introduced a number of new characteristics, such as XML support, multiple-instance support, enhanced data warehousing and business intelligence, improved performance and scalability, operation wizards, and enhancements to queries, DTS, and Transact-SQL.

In terms of license price, the price and total cost of ownership (TCO) of Microsoft SQL Server 2000 were only one half to one third of Oracle's or DB2's.

In summary, Microsoft's concept of a high-performance, low-cost product succeeded in the market. SQL Server 2000 could satisfy both OLTP and OLAP application deployments with good performance, at prices relatively lower than Oracle, DB2, and other databases. Meanwhile, SQL Server 2000 also included Enterprise, Standard, and other editions to meet the needs of users at different levels. These factors won SQL Server 2000 a significant share of the SME market and gave Microsoft the opportunity to enter the ranks of mainstream database vendors.

At the same time, we should recognize that SQL Server 2000 was still deficient in the high-end, enterprise-class functions found in Oracle's later releases, so the historic mission of bridging the gap and catching up fell to the new version, code-named "Yukon".

The killer code-named "Yukon"

A full 15 years have passed since the 1989 release of Microsoft SQL Server 1.0. In those 15 years SQL Server went from nothing to something, from small to large, and its story has become something of a legend. It has not only eroded the database market shares of IBM and Oracle, but the next generation of SQL Server has also begun gradually to become the core of the next Windows operating system.
The "seamless computing" that Bill Gates keeps repeating is the core of Yukon. What kind of world will the next-generation database, code-named "Yukon", bring us?

The "soft" pillar of the Internet

In today's network era, searching data, storing data, and classifying data have all become the "soft" pillars that hold up the Internet, and the database system is the most critical of these pillars. Without database support we could never find the information we need on Google or Baidu, nor use the convenience of electronic mail, because the online world is composed of large databases.

According to IDC's latest figures, the global database software market seems to be stirring again: total revenue reached 13.6 billion US dollars in 2003, an increase over 2002's 12.6 billion. Oracle, IBM, and Microsoft now control 75% of the market; last year Oracle held a 39.8% share, IBM 31.3%, and Microsoft 12.1%.

What is a database? University computer textbooks explain it this way: a database is a specialized data resource management system within a computer application system. Data takes many forms, such as text, numbers, symbols, graphics, images, and voice, and all data is the subject of computer system processing. The familiar traditional approach is file-based: a program is written to process documents, the data required by the program is organized into data files, and the files are called by the program, with data files and program files maintaining a fixed relationship. With the rapid development of computer applications, the deficiencies of this approach stand out: the data is poorly defined, inconvenient to port, heavily duplicated across different files (wasting storage space), and inconvenient to update. Database systems solve these problems. A database system separates data management from the specific application programs: all data is stored in a database and organized scientifically, and the database management system acts as an intermediary through which various applications or application interfaces gain convenient access to the data in the database.

This explanation is certainly detailed, but it may well leave you dizzy. Put simply, a database is a set of data that has been collated by a computer and stored in one or more files, and the software that manages the database is called a database management system. A general database system can be divided into two parts, the database (Database) and the database management system (Database Management System, DBMS), and together these form the "soft" pillars of the Internet.

Microsoft's SQL Server database software became mainstream through successive upgrades from 6.5 to 7.0, and SQL Server 2000 proved that the Windows operating system could carry high-end data applications as a mainstream platform for business database management software, breaking the myth that databases are ruled by large Unix systems. What kind of change will the next generation, SQL Server 2005, bring?

The core secrets of Yukon

When planning the next version of SQL Server (code-named "Yukon"), Microsoft gave considerable thought to the future development of the database and to SQL Server's programming capabilities.
Microsoft's internal developers had long been aware that the future required the introduction of a more unified programming model, while providing more flexibility for different data models. A unified programming model means that ordinary data access and manipulation tasks can be carried out through various channels: for example, you can choose to use XML, the .NET Framework, Transact-SQL (T-SQL) code, and so on.

The result of this planning is a new database programming platform, a natural extension in many directions. First, hosting the .NET Framework common language runtime (CLR) extends database programming into the realm of procedural, managed code. Second, hosting the .NET Framework from within SQL Server provides powerful object-database capabilities. Deep XML support is achieved through an XML data type that has all the capabilities of a relational data type; in addition, server-side support is added for the XML Query (XQuery) and XML Schema Definition (XSD) standards. Finally, SQL Server Yukon includes important enhancements to the T-SQL language.

XML's history in SQL Server Yukon really begins with SQL Server 2000, which introduced XML-formatted views of relational data, bulk loading and shredding of XML documents into databases, exposure of the database to XML-based Web services, and other functions. Yukon, however, provides more advanced XML query functionality, and once this is perfected Yukon will bring all the advantages of XML into full play. Why is XML so critical? XML has developed from its beginnings as an alternative to HTML, a wire format, into what is now seen as a storage format. Persistent XML storage has drawn widespread attention, and XML data types are widely applied across the Internet. XML itself is a data format that can cross any platform. It started out as a file format; as XML gained wide recognition in the enterprise, users began to use it to solve thorny business problems, such as data integration. This is what has carried XML, as a data storage format, to where it is today: because XML produces the same results on any platform, it has become a mainstream database storage format, and Yukon's built-in, comprehensive XML support will trigger a new revolution in database technology.

These new programming models and enhanced common language facilities create a series of programmable components that complement and extend the current relational database model. The ultimate aim of this architecture is to build more scalable, more reliable, more robust applications and to raise development efficiency. Another result of these models is a new application framework for delivering messages from asynchronous sources: the distributed application framework called SQL Service Broker.

Yukon: a gamble of the century

After this long string of technical advantages, you may be very curious: why introduce what appears to be such high-end database technology? Perhaps we should begin with the answer.

The world's richest man has made predictions about the future of the computer. He believes that in the world to come, every ordinary computer will have a sufficiently large super hard disk: no longer a mere 80 GB but quite possibly 80 TB. Although this is only a change from GB to TB, it means a full thousand-fold upgrade in disk capacity, and the existing NTFS format that Windows uses to store data on disk is simply unable to cope with searching the data on such a large disk.
To give a vivid example: if your computer had 100 TB of disk space and you were still using Windows XP, defragmenting the disk would probably take two days and two nights, and if you wanted to find a particular document you would wait for hours. The feeling would be like going back to the days of the 286.

To solve this thorny problem, the next-generation Windows operating system, Longhorn, adopts a programming model diametrically different from previous versions of Windows. Its core is Avalon (a development code name), the new Windows GUI library. Longhorn also brings in the new functions of Indigo (Web services) and WinFS (file system). Together with Avalon, these three new functions are called WinFX, the new "native" API created for Longhorn. Although compatibility with the Win32 API is maintained, using the new Longhorn functions normally means using WinFX. WinFX grows out of the present .NET Framework: its class libraries, DLL support, programming mechanisms, and operation are basically the same as .NET.

The .NET Framework arrives in SQL Server with the major version upgrade to Yukon, with a target date of the end of 2004. The .NET Framework runs inside Yukon, and stored procedures can use the .NET Framework class libraries. Yukon runs version 2.0 of the .NET Framework, which supplements .NET Framework 1.1 with the multimedia-related classes it lacked. WinFS uses the Yukon engine; in other words, the Longhorn file system will use the database engine.

Now you can see: in the next-generation Windows operating system, the management of all file data will be brought under SQL Server-style database management, and the data query and data integration capabilities of our computers will be greatly enhanced. This, of course, is a critical step in the "seamless computing" that the world's richest man keeps talking about. Integrating the database software with the operating system is without doubt a gamble of the century: if it succeeds, Microsoft will gradually dominate the database market; if it fails, it could even disrupt the normal schedule for shipping the next generation of Windows.

Microsoft already provides some tools for encrypting the data transmitted over the network between SQL Server and client applications. However, Microsoft product manager Kirsten Ward said that the new SQL Server database to be released next year will encrypt stored data, increasing its defenses against hacker attacks.

Earlier this year Microsoft postponed the release of "SQL Server 2005" to the first half of next year. The launch of this database software will enhance Microsoft's database computing capabilities and allow it to compete better with Oracle and IBM. Microsoft will also introduce a unified storage concept that makes data more convenient to locate and retrieve. Oracle has long held the leading position in the Windows and Unix database markets; however, by adding more advanced functions to SQL Server this year, Microsoft has also made remarkable progress.

In addition, Microsoft will provide a software tool called the "Best Practices Analyzer Tool". Database administrators can use this software, with guides compiled by Microsoft, to debug their database software.
This software tool applies to the current version of Microsoft's database software, "SQL Server 2000", and provides database administrators with operational guidance in various areas, for example how to improve performance and how to perform more effective data backups.

Ward said the software tool also includes an "Upgrade Advisor" program, which can scan database programs and warn "SQL Server 2000" users of the amendments needed to make their programs compatible with the upcoming "SQL Server 2005".

(Source: China Computer Education)

Database Foreign Reference Literature and Translation


Database Management Systems: Enforcing Data Integrity

A database is valuable only when its users have real confidence in it. That is why the server must enforce data integrity rules and business policies. Enforcing data integrity in the SQL Server database itself guarantees that complex business policies are followed and that the mandatory relationships between data elements are respected.

Because SQL Server's client/server architecture allows you to use many different front-end applications to manipulate and present the same data from the server, it would be extremely tedious to encode all the necessary integrity constraints, security permissions, and business rules into every application. If all of the enterprise's policies were encoded in the front-end applications, each application would have to change every time a business policy changed. Even if you tried to encode the business rules into every client application, the danger of an application misbehaving would remain. Most applications cannot be fully trusted; the server must be able to act as the final arbiter, and it must not provide a back door through which a poorly written or malicious program can damage integrity.

SQL Server enforces data integrity with advanced features such as stored procedures, declarative referential integrity (DRI), data types, constraints, rules, defaults, and triggers. Each of these features has its own purpose in the database; by combining them, you can make your database flexible and easy to manage, as well as secure.
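A brief sketch of how several of these features look in T-SQL (the table, its columns, and the business rule are hypothetical, invented for illustration): a DEFAULT supplies a value when none is given, a CHECK constraint declares a simple business rule, and a trigger enforces a policy too complex to declare:

CREATE TABLE Orders (
    order_id    INT      NOT NULL PRIMARY KEY,
    order_date  DATETIME NOT NULL DEFAULT GETDATE(),     -- default
    quantity    INT      NOT NULL CHECK (quantity > 0),  -- check constraint
    status      CHAR(1)  NOT NULL DEFAULT 'N'
)
GO

-- Trigger: a policy the declarative features cannot express, e.g.
-- refuse to mark an order shipped if its date lies in the future.
CREATE TRIGGER trg_no_future_ship ON Orders
FOR UPDATE AS
    IF EXISTS (SELECT 1 FROM inserted
               WHERE status = 'S' AND order_date > GETDATE())
    BEGIN
        RAISERROR('Cannot ship an order dated in the future.', 16, 1)
        ROLLBACK TRANSACTION
    END
GO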

Declarative Data Integrity

When you define a table, you designate the columns that make up the primary key; this is called a PRIMARY KEY constraint. SQL Server uses the PRIMARY KEY constraint to guarantee that the uniqueness of the values in the designated columns is never violated. The table's entity integrity is enforced by ensuring that it has a primary key.

Sometimes more than one column (or combination of columns) in a table can uniquely identify a row. For example, an employee table might have both an employee number (emp_id) column and a social security number (soc_sec_num) column, the values of both of which are considered unique. Such columns are often called alternate keys or candidate keys, and their values must also be unique. Although a table can have only one primary key, it can have multiple candidate keys. SQL Server supports multiple candidate keys through the concept of UNIQUE constraints.
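Using the emp_id and soc_sec_num columns from the example above, a minimal sketch (the remaining column is invented for illustration): the PRIMARY KEY constraint enforces entity integrity on one key, and a UNIQUE constraint covers the alternate (candidate) key:

CREATE TABLE employee (
    emp_id       INT         NOT NULL,
    soc_sec_num  CHAR(11)    NOT NULL,
    emp_name     VARCHAR(50) NOT NULL,  -- hypothetical extra column
    CONSTRAINT pk_employee PRIMARY KEY (emp_id),     -- the single primary key
    CONSTRAINT uq_employee_ssn UNIQUE (soc_sec_num)  -- candidate/alternate key
)

An attempt to insert a second row with an existing emp_id or soc_sec_num fails with a constraint violation, so uniqueness is enforced by the server no matter which front-end application issues the INSERT.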

Computer Methods in Applied Mechanics and Engineering,2020,363.[50]José A. Delgado-Osuna,Carlos García-Martínez,JoséGómez-Barbadillo,Sebastián Ventura. Heuristics for interesting class association rule mining a colorectal cancer database[J]. Information Processing andManagement,2020,57(3).[51]Edival Lima,Thales Vieira,Evandro de Barros Costa. Evaluating deep models for absenteeism prediction of public security agents[J]. Applied Soft Computing Journal,2020,91.[52]S. Fareri,G. Fantoni,F. Chiarello,E. Coli,A. Binda. Estimating Industry 4.0 impact on job profiles and skills using text mining[J]. Computers in Industry,2020,118.[53]Estrela Carlos,Pécora Jesus Djalma,Dami?o Sousa-Neto Manoel. The Contribution of the Brazilian Dental Journal to the Brazilian Scientific Research over 30 Years.[J]. Brazilian dental journal,2020,31(1).[54]van den Oever L B,Vonder M,van Assen M,van Ooijen P M A,de Bock G H,Xie X Q,Vliegenthart R. Application of artificial intelligence in cardiac CT: From basics to clinical practice.[J]. European journal of radiology,2020,128.[55]Li Liu,Deborah Silver,Karen Bemis. Visualizing events in time-varying scientific data[J]. Journal of Visualization,2020,23(2–3).[56]. Information Technology - Database Management; Data on Database Management Discussed by Researchers at Arizona State University (Architecture of a Distributed Storage That Combines File System, Memory and Computation In a Single Layer)[J]. Information Technology Newsweekly,2020.[57]. Information Technology - Database Management; New Findings from Guangzhou Medical University Update Understanding of Database Management (GREG-studying transcriptional regulation using integrative graph databases)[J]. Information Technology Newsweekly,2020.[58]. Technology - Laser Research; Reports from Nicolaus Copernicus University in Torun Add New Data to Findings in Laser Research (Nonlinear optical study of Schiff bases using Z-scan technique)[J]. Journal of Technology,2020.[59]Loeffler Caitlin,Karlsberg Aaron,Martin Lana S,Eskin Eleazar,Koslicki David,Mangul Serghei. Improving the usability and comprehensiveness of microbial databases.[J]. BMC biology,2020,18(1).[60]Caitlin Loeffler,Aaron Karlsberg,Lana S. Martin,Eleazar Eskin,David Koslicki,Serghei Mangul. Improving the usability and comprehensiveness of microbial databases[J]. BMC Biology,2020,18(1).[61]Dean H. Barrett,Aderemi Haruna. Artificial intelligence and machine learningfor targeted energy storage solutions[J]. Current Opinion in Electrochemistry,2020,21.[62]Chenghao Sun. Research on investment decision-making model from the perspective of “Internet of Things + Big data”[J]. Future Generation Computer Systems,2020,107.[63]Sa?a Adamovi?,Vladislav Mi?kovic,Nemanja Ma?ek,Milan Milosavljevi?,Marko ?arac,Muzafer Sara?evi?,Milan Gnjatovi?. An efficient novel approach for iris recognition based on stylometric features and machine learning techniques[J]. Future Generation Computer Systems,2020,107.[64]Olivier Pivert,Etienne Scholly,Grégory Smits,Virginie Thion. Fuzzy quality-Aware queries to graph databases[J]. Information Sciences,2020,521.[65]Javier Fernando Botía Valderrama,Diego José Luis Botía Valderrama. Two cluster validity indices for the LAMDA clustering method[J]. Applied Soft Computing Journal,2020,89.[66]Amer N. Kadri,Marie Bernardo,Steven W. Werns,Amr E. Abbas. TAVR VS. SAVR IN PATIENTS WITH CANCER AND AORTIC STENOSIS: A NATIONWIDE READMISSION DATABASE REGISTRY STUDY[J]. Journal of the American College of Cardiology,2020,75(11).[67]. 
Information Technology; Findings from P. Sjolund and Co-Authors Update Knowledge of Information Technology (Whole-genome sequencing of human remains to enable genealogy DNA database searches - A case report)[J]. Information Technology Newsweekly,2020.[68]. Information Technology; New Findings from P. Yan and Co-Researchers in the Area of Information Technology Described (BrainEXP: a database featuring with spatiotemporal expression variations and co-expression organizations in human brains)[J]. Information Technology Newsweekly,2020.[69]. IDERA; IDERA Database Tools Expand Support for Cloud-Hosted Databases[J]. Information Technology Newsweekly,2020.[70]Adrienne Warner,David A. Hurley,Jonathan Wheeler,Todd Quinn. Proactive chat in research databases: Inviting new and different questions[J]. The Journal of Academic Librarianship,2020,46(2).[71]Chidentree Treesatayapun. Discrete-time adaptive controller based on IF-THEN rules database for novel architecture of ABB IRB-1400[J]. Journal of the Franklin Institute,2020.[72]Tian Fang,Tan Han,Cheng Zhang,Ya Juan Yao. Research and Construction of the Online Pesticide Information Center and Discovery Platform Based on Web Crawler[J]. Procedia Computer Science,2020,166.[73]Dinusha Vatsalan,Peter Christen,Erhard Rahm. Incremental clustering techniques for multi-party Privacy-Preserving Record Linkage[J]. Data & Knowledge Engineering,2020.[74]Ying Xin Liu,Xi Yuan Li. Design and Implementation of a Business Platform System Based on Java[J]. Procedia Computer Science,2020,166.[75]Akhilesh Kumar Bajpai,Sravanthi Davuluri,Kriti Tiwary,Sithalechumi Narayanan,Sailaja Oguru,Kavyashree Basavaraju,Deena Dayalan,Kavitha Thirumurugan,Kshitish K. Acharya. Systematic comparison of the protein-protein interaction databases from a user's perspective[J]. Journal of Biomedical Informatics,2020,103.[76]P. Raveendra,V. Siva Reddy,G.V. Subbaiah. Vision based weed recognition using LabVIEW environment for agricultural applications[J]. Materials Today: Proceedings,2020,23(Pt 3).[77]Christine Rosati,Emily Bakinowski. Preparing for the Implementation of an Agnis Enabled Data Reporting System and Comprehensive Research Level Data Repository for All Cellular Therapy Patients[J]. Biology of Blood and Marrow Transplantation,2020,26(3).[78]Zeiser Felipe André,da Costa Cristiano André,Zonta Tiago,Marques Nuno M C,Roehe Adriana Vial,Moreno Marcelo,da Rosa Righi Rodrigo. Segmentation of Masses on Mammograms Using Data Augmentation and Deep Learning.[J]. Journal of digital imaging,2020.[79]Dhaked Devendra K,Guasch Laura,Nicklaus Marc C. Tautomer Database: A Comprehensive Resource for Tautomerism Analyses.[J]. Journal of chemical information and modeling,2020,60(3).[80]Pian Cong,Zhang Guangle,Gao Libin,Fan Xiaodan,Li Fei. miR+Pathway: the integration and visualization of miRNA and KEGG pathways.[J]. Briefings in bioinformatics,2020,21(2).数据库英文参考文献三:[81]Marcello W. M. Ribeiro,Alexandre A. B. Lima,Daniel Oliveira. OLAP parallel query processing in clouds with C‐ParGRES[J]. Concurrency and Computation: Practice and Experience,2020,32(7).[82]Li Gao,Peng Lin,Peng Chen,Rui‐Zhi Gao,Hong Yang,Yun He,Jia‐Bo Chen,Yi ‐Ge Luo,Qiong‐Qian Xu,Song‐Wu Liang,Jin‐Han Gu,Zhi‐Guang Huang,Yi‐Wu Dang,Gang Chen. A novel risk signature that combines 10 long noncoding RNAs to predict neuroblastoma prognosis[J]. Journal of Cellular Physiology,2020,235(4).[83]Julia Krzykalla,Axel Benner,Annette Kopp‐Schneider. Exploratory identification of predictive biomarkers in randomized trials with normal endpoints[J]. 
Statistics in Medicine,2020,39(7).[84]Jianye Ching,Kok-Kwang Phoon. Measuring Similarity between Site-Specific Data and Records from Other Sites[J]. ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering,2020,6(2).[85]Anne Kelly Knowles,Justus Hillebrand,Paul B. Jaskot,Anika Walke. Integrative, Interdisciplinary Database Design for the Spatial Humanities: the Case of the Holocaust Ghettos Project[J]. International Journal of Humanities and Arts Computing,2020,14(1-2).[86]Sheng-Feng Sung,Pei-Ju Lee,Cheng-Yang Hsieh,Wan-Lun Zheng. Medication Use and the Risk of Newly Diagnosed Diabetes in Patients with Epilepsy: A Data Mining Application on a Healthcare Database[J]. Journal of Organizational and End User Computing (JOEUC),2020,32(2).[87]Rashkovits Rami,Lavy Ilana. Students' Difficulties in Identifying the Use of Ternary Relationships in Data Modeling[J]. International Journal of Information and Communication Technology Education (IJICTE,2020,16(2).[88]Yusuf Akhtar,Dipti Prasad Mukherjee. Context-based ensemble classification for the detection of architectural distortion in a digitised mammogram[J]. IET Image Processing,2020,14(4).[89]Gurpreet Kaur,Sukhwinder Singh,Renu Vig. Medical fusion framework using discrete fractional wavelets and non-subsampled directional filter banks[J]. IET Image Processing,2020,14(4).[90]Qian Liu,Bo Jiang,Jia-lei Zhang,Peng Gao,Zhi-jian Xia. Semi-supervised uncorrelated dictionary learning for colour face recognition[J]. IET Computer Vision,2020,14(3).[91]Yipo Huang,Leida Li,Yu Zhou,Bo Hu. No-reference quality assessment for live broadcasting videos in temporal and spatial domains[J]. IET Image Processing,2020,14(4).[92]Panetta Karen,Wan Qianwen,Agaian Sos,Rajeev Srijith,Kamath Shreyas,Rajendran Rahul,Rao Shishir Paramathma,Kaszowska Aleksandra,Taylor Holly A,Samani Arash,Yuan Xin. A Comprehensive Database for Benchmarking Imaging Systems.[J]. IEEE transactions on pattern analysis and machine intelligence,2020,42(3).[93]Rahnev Dobromir,Desender Kobe,Lee Alan L F,Adler William T,Aguilar-Lleyda David,Akdo?an Ba?ak,Arbuzova Polina,Atlas Lauren Y,Balc? Fuat,Bang Ji Won,Bègue Indrit,Birney Damian P,Brady Timothy F,Calder-Travis Joshua,Chetverikov Andrey,Clark Torin K,Davranche Karen,Denison Rachel N,Dildine Troy C,Double Kit S,Duyan Yaln A,Faivre Nathan,Fallow Kaitlyn,Filevich Elisa,Gajdos Thibault,Gallagher Regan M,de Gardelle Vincent,Gherman Sabina,Haddara Nadia,Hainguerlot Marine,Hsu Tzu-Yu,Hu Xiao,Iturrate I?aki,Jaquiery Matt,Kantner Justin,Koculak Marcin,Konishi Mahiko,Ko? Christina,Kvam Peter D,Kwok Sze Chai,Lebreton Ma?l,Lempert Karolina M,Ming Lo Chien,Luo Liang,Maniscalco Brian,Martin Antonio,Massoni Sébastien,Matthews Julian,Mazancieux Audrey,Merfeld Daniel M,O'Hora Denis,Palser Eleanor R,Paulewicz Borys?aw,Pereira Michael,Peters Caroline,Philiastides Marios G,Pfuhl Gerit,Prieto Fernanda,Rausch Manuel,Recht Samuel,Reyes Gabriel,Rouault Marion,Sackur Jér?me,Sadeghi Saeedeh,Samaha Jason,Seow Tricia X F,Shekhar Medha,Sherman Maxine T,Siedlecka Marta,Skóra Zuzanna,Song Chen,Soto David,Sun Sai,van Boxtel Jeroen J A,Wang Shuo,Weidemann Christoph T,Weindel Gabriel,WierzchońMicha?,Xu Xinming,Ye Qun,Yeon Jiwon,Zou Futing,Zylberberg Ariel. The Confidence Database.[J]. Nature human behaviour,2020,4(3).[94]Taipalus Toni. The Effects of Database Complexity on SQL Query Formulation[J]. Journal of Systems and Software,2020(prepublish).[95]. 
Information Technology; Investigators from Deakin University Target Information Technology (Conjunctive query pattern structures: A relational database model for Formal Concept Analysis)[J]. Computer Technology Journal,2020.[96]. Machine Learning; Findings from Rensselaer Polytechnic Institute Broaden Understanding of Machine Learning (Self Healing Databases for Predictive Risk Analytics In Safety-critical Systems)[J]. Computer Technology Journal,2020.[97]. Science - Library Science; Investigators from Cumhuriyet University Release New Data on Library Science (Scholarly databases under scrutiny)[J]. Computer Technology Journal,2020.[98]. Information Technology; Investigators from Faculty of Computer Science and Engineering Release New Data on Information Technology (FGSA for optimal quality of service based transaction in real-time database systems under different workload condition)[J]. Computer Technology Journal,2020.[99]Muhammad Aqib Javed,M.A. Naveed,Azam Hussain,S. Hussain. Integrated data acquisition, storage and retrieval for glass spherical tokamak (GLAST)[J]. Fusion Engineering and Design,2020,152.[100]Vinay M.S.,Jayant R. Haritsa. Operator implementation of Result Set Dependent KWS scoring functions[J]. Information Systems,2020,89.[101]. Capital One Services LLC; Patent Issued for Computer-Based Systems Configured For Managing Authentication Challenge Questions In A Database And Methods Of Use (USPTO 10,572,653)[J]. Journal of Robotics & Machine Learning,2020.[102]Ikawa Fusao,Michihata Nobuaki. In Reply to Letter to the Editor Regarding "Treatment Risk for Elderly Patients with Unruptured Cerebral Aneurysm from a Nationwide Database in Japan".[J]. World neurosurgery,2020,135.[103]Chen Wei,You Chao. Letter to the Editor Regarding "Treatment Risk for Elderly Patients with Unruptured Cerebral Aneurysm from a Nationwide Database in Japan".[J]. World neurosurgery,2020,135.[104]Zhitao Xiao,Lei Pei,Lei Geng,Ying Sun,Fang Zhang,Jun Wu. Surface Parameter Measurement of Braided Composite Preform Based on Faster R-CNN[J]. Fibers and Polymers,2020,21(3).[105]Xiaoyu Cui,Ruifan Cai,Xiangjun Tang,Zhigang Deng,Xiaogang Jin. Sketch‐based shape‐constrained fireworks simulation in head‐mounted virtual reality[J]. Computer Animation and Virtual Worlds,2020,31(2).[106]Klaus B?hm,Tibor Kubjatko,Daniel Paula,Hans-Georg Schweiger. New developments on EDR (Event Data Recorder) for automated vehicles[J]. Open Engineering,2020,10(1).[107]Ming Li,Ruizhi Chen,Xuan Liao,Bingxuan Guo,Weilong Zhang,Ge Guo. A Precise Indoor Visual Positioning Approach Using a Built Image Feature Database and Single User Image from Smartphone Cameras[J]. Remote Sensing,2020,12(5).[108]Matthew Grewe,Phillip Sexton,David Dellenbach. Use Risk‐Based Asset Prioritization to Develop Accurate Capital Budgets[J]. Opflow,2020,46(3).[109]Jose R. Salvador,D. Mu?oz de la Pe?a,D.R. Ramirez,T. Alamo. Predictive control of a water distribution system based on process historian data[J]. Optimal Control Applications and Methods,2020,41(2).[110]Esmaeil Nourani,Vahideh Reshadat. Association extraction from biomedicalliterature based on representation and transfer learning[J]. Journal of Theoretical Biology,2020,488.[111]Ikram Saima,Ahmad Jamshaid,Durdagi Serdar. Screening of FDA approved drugs for finding potential inhibitors against Granzyme B as a potent drug-repurposing target.[J]. Journal of molecular graphics & modelling,2020,95.[112]Keiron O’Shea,Biswapriya B. Misra. 
Software tools, databases and resources in metabolomics: updates from 2018 to 2019[J]. Metabolomics,2020,16(D1).[113]. Information Technology; Researchers from Virginia Polytechnic Institute and State University (Virginia Tech) Describe Findings in Information Technology (A database for global soil health assessment)[J]. Energy & Ecology,2020.[114]Moosa Johra Muhammad,Guan Shenheng,Moran Michael F,Ma Bin. Repeat-Preserving Decoy Database for False Discovery Rate Estimation in Peptide Identification.[J]. Journal of proteome research,2020,19(3).[115]Huttunen Janne M J,K?rkk?inen Leo,Honkala Mikko,Lindholm Harri. Deep learning for prediction of cardiac indices from photoplethysmographic waveform: A virtual database approach.[J]. International journal for numerical methods in biomedical engineering,2020,36(3).[116]Kunxia Wang,Guoxin Su,Li Liu,Shu Wang. Wavelet packet analysis for speaker-independent emotion recognition[J]. Neurocomputing,2020.[117]Fusao Ikawa,Nobuaki Michihata. In Reply to Letter to the Editor Regarding “Treatment Risk for Elderly Patients with Unruptured Cerebral Aneurysm from a Nationwide Database in Japan”[J]. World Neurosurgery,2020,135.[118]Wei Chen,Chao You. Letter to the Editor Regarding “Treatment Risk for Elderly Patients with Unruptured Cerebral Aneurysm from a Nationwide Database in Japan”[J]. World Neurosurgery,2020,135.[119]Lindsey A. Parsons,Jonathan A. Jenks,Andrew J. Gregory. Accuracy Assessment of National Land Cover Database Shrubland Products on the Sagebrush Steppe Fringe[J]. Rangeland Ecology & Management,2020,73(2).[120]Jing Hua,Yilu Xu,Jianjun Tang,Jizhong Liu,Jihao Zhang. ECG heartbeat classification in compressive domain for wearable devices[J]. Journal of Systems Architecture,2020,104.以上就是关于数据库英文参考文献的全部内容,希望看完后对你有所启发。

电子信息工程数据库管理中英文对照外文翻译文献


中英文对照外文翻译文献(文档含英文原文和中文翻译)
译文:数据库管理
数据库(有时拼成Database)也称为电子数据库,是指由计算机特别组织的、用于快速查找和检索的任意的数据或信息集合。

数据库与其它数据处理操作协同工作,其结构要有助于数据的存储、检索、修改和删除。

数据库可存储在磁盘或磁带、光盘或某些辅助存储设备上。

一个数据库由一个文件或文件集合组成。

这些文件中的信息可分解成一个个记录,每个记录有一个或多个域。

域是数据库存储的基本单位,每个域一般包含与数据库所描述实体的某一方面或某一属性有关的信息。

用户使用关键字和各种排序命令,能够快速查找、重排、分组并在查找的许多记录中选择相应的域,建立特定数据集合上的报表。
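作为示例,下面用Python标准库sqlite3给出一个按域分组、排序并生成简单报表的查询示意;表orders及其数据均为假设:

```python
# 示例:按域(字段)分组、排序并生成一份简单报表。
# 使用Python标准库sqlite3;表orders及其数据均为假设。
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, city TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("张三", "北京", 120.0), ("李四", "上海", 80.0), ("王五", "北京", 200.0)],
)

# 按城市分组汇总金额,并按汇总值降序排列
for city, total in conn.execute(
    "SELECT city, SUM(amount) FROM orders GROUP BY city ORDER BY SUM(amount) DESC"
):
    print(city, total)
```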

数据库记录和文件的组织必须确保能对信息进行检索。

早期的系统是顺序组织的(如:字母顺序、数字顺序或时间顺序);直接访问存储设备的研制成功使得通过索引随机访问数据成为可能。

用户检索数据库信息的主要方法是query(查询)。

通常情况下,用户提供一个字符串,计算机在数据库中寻找相应的字符序列,并且给出字符串在何处出现。
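下面给出一个字符串查询的最小示意:用户给出一个字符串,系统在记录中查找包含该字符序列的行;仍以Python标准库sqlite3为例,表titles及其数据均为假设:

```python
# 示例:最简单的字符串查询——查找包含给定字符序列的记录。
# 表titles及其数据均为假设。
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE titles (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany(
    "INSERT INTO titles (title) VALUES (?)",
    [("Database Management Systems",), ("Operating Systems",), ("Data Mining",)],
)

keyword = "Data"
# LIKE '%...%' 在字段中任意位置匹配该字符序列
for row in conn.execute(
    "SELECT id, title FROM titles WHERE title LIKE ?", (f"%{keyword}%",)
):
    print(row)  # 给出字符串出现在哪些记录中
```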

例如,数据库的众多用户必须能在任意给定时间快速处理其中的数据。

而且,大型企业和其它组织倾向于建立许多独立的文件,其中包含相互关联甚至相互重叠的数据,而数据处理活动又经常需要把这些数据与其它文件中的数据连接起来。

为满足这些要求,开发出各种不同类型的数据库管理系统,如:非结构化的数据库、层次型数据库、网络型数据库、关系型数据库、面向对象型数据库。

在非结构化的数据库中,按照实体的一个简单列表组织记录;很多个人计算机的简易数据库是非结构化的。

层次型数据库按树型组织记录,每一层的记录分解成更小的属性集。

层次型数据库在不同层的记录集之间提供一个单一链接。
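作为示意,层次模型中“每条记录只有一个父记录”的一对多关系,可以用自引用的父记录指针来表达;下面的表employees及其数据均为假设:

```python
# 示例:用自引用的parent_id表达层次(树型)结构中的一对多关系。
# 每条记录只有一个父记录;表employees及其数据均为假设。
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employees ("
    " id INTEGER PRIMARY KEY,"
    " name TEXT,"
    " parent_id INTEGER REFERENCES employees(id))"  # 指向唯一上级的链接
)
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [(1, "总经理", None), (2, "部门经理A", 1), (3, "部门经理B", 1), (4, "职员", 2)],
)

# 自连接:列出每个员工及其唯一的上级
for name, boss in conn.execute(
    "SELECT e.name, b.name FROM employees e "
    "LEFT JOIN employees b ON e.parent_id = b.id"
):
    print(name, "←", boss)
```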

与此不同,网络型数据库在不同记录集之间提供多个链接,这是通过设置指向其它记录集的链或指针来实现的。

网络型数据库的速度及多样性使其在企业中得到广泛应用。

当文件或记录间的关系不能用链表达时,使用关系型数据库。

mysql数据库英文文献


MySQL architecture is best understood in the context of its history. Thus, the two are discussed in the same chapter.

MySQL History

MySQL history goes back to 1979 when Monty Widenius, working for a small company called TcX, created a reporting tool written in BASIC that ran on a 4 MHz computer with 16 KB RAM. Over time, the tool was rewritten in C and ported to run on Unix. It was still just a low-level storage engine with a reporting front end. The tool was known by the name of Unireg.

Working under the adverse conditions of little computational resources, and perhaps building on his God-given talent, Monty developed a habit and ability to write very efficient code naturally. He also developed, or perhaps was gifted from the start, with an unusually acute vision of what needed to be done to the code to make it useful in future development—without knowing in advance much detail about what that future development would be.

In addition to the above, with TcX being a very small company and Monty being one of the owners, he had a lot of say in what happened to his code. While there are perhaps a good number of programmers out there with Monty's talent and ability, for a number of reasons, few get to carry their code around for more than 20 years. Monty did.

Monty's work, talents, and ownership of the code provided a foundation upon which the Miracle of MySQL could be built.

Some time in the 1990s, TcX customers began to push for an SQL interface to their data. Several possibilities were considered. One was to load it into a commercial database. Monty was not satisfied with the speed. He tried borrowing mSQL code for the SQL part and integrating it with his low-level storage engine. That did not work well, either. Then came the classic move of a talented, driven programmer: "I've had enough of those tools that somebody else wrote that don't work! I'm writing my own!"

Thus in May of 1996 MySQL version 1.0 was released to a limited group, followed by a public release in October 1996 of version 3.11.1. The initial public release provided only a binary distribution for Solaris. A month later, the source and the Linux binary were released.

In the next two years, MySQL was ported to a number of other operating systems as the feature set gradually increased. MySQL was originally released under a special license that allowed commercial use to those who were not redistributing it with their software. Special licenses were available for sale to those who wanted to bundle it with their product. Additionally, commercial support was also being sold. This provided TcX with some revenue to justify the further development of MySQL, although the purpose of its original creation had already been fulfilled.

During this period MySQL progressed to version 3.22. It supported a decent subset of the SQL language, had an optimizer a lot more sophisticated than one would expect could possibly be written by one person, was extremely fast, and was very stable. Numerous APIs were contributed, so one could write a client in pretty much any existing programming language. However, it still lacked support for transactions, subqueries, foreign keys, stored procedures, and views. The locking happened only at a table level, which in some cases could slow it down to a grinding halt.
Some programmers unable to get around its limitations still considered it a toy, while others were more than happy to dump their Oracle or SQL Server in favor of MySQL, and deal with the limitations in their code in exchange for improvement in performance and licensing cost savings.

Around 1999–2000 a separate company named MySQL AB was established. It hired several developers and established a partnership with Sleepycat to provide an SQL interface for the Berkeley DB data files. Since Berkeley DB had transaction capabilities, this would give MySQL support for transactions, which it previously lacked. After some changes in the code in preparation for integrating Berkeley DB, version 3.23 was released.

Although the MySQL developers could never work out all the quirks of the Berkeley DB interface and the Berkeley DB tables were never stable, the effort was not wasted. As a result, MySQL source became equipped with hooks to add any type of storage engine, including a transactional one.

By April of 2000, master-slave replication capability had been added, and the old nontransactional storage engine, ISAM, was reworked and released as MyISAM. Among a number of improvements, full-text search capabilities were now supported. A short-lived partnership with NuSphere to add Gemini, a transactional engine with row-level locking, ended in a lawsuit toward the end of 2001. However, around the same time, Heikki Tuuri approached MySQL AB with a proposal to integrate his own storage engine, InnoDB, which was also capable of transactions and row-level locking.

Heikki's contribution integrated much more smoothly with the new table handler interface already polished off by the Berkeley DB integration efforts. The MySQL/InnoDB combination became version 4.0, and was released as alpha in October of 2001. By early 2002 the MySQL/InnoDB combo was stable and instantly took MySQL to another level. Version 4.0 was finally declared production stable in March 2003.

It might be worthy of mention that the version number change was not caused by the addition of InnoDB. MySQL developers have always viewed InnoDB as an important addition, but by no means something that they completely depend on for success. Back then, and even now, the addition of a new storage engine is not likely to be celebrated with a version number change. In fact, compared to previous versions, not much was added in version 4.0. Perhaps the most significant addition was the query cache, which greatly improved performance of a large number of applications. Replication code on the slave was rewritten to use two threads: one for network I/O from the master, and the other to process the updates. Some improvements were added to the optimizer. The client/server protocol became SSL-capable.

Version 4.1 was released as alpha in April of 2003, and was declared beta in June of 2004. Unlike version 4.0, it added a number of significant improvements. Perhaps the most significant was subqueries, a feature long-awaited by many users. Spatial indexing support was added to the MyISAM storage engine. Unicode support was implemented. The client/server protocol saw a number of changes. It was made more secure against attacks, and supported prepared statements.

In parallel with the alpha version of 4.1, work progressed on yet another development branch: version 5.0, which would add stored procedures, server-side cursors, triggers, views, XA transactions, significant improvements in the query optimizer, and a number of other features.
The decision to create a separate development branch was made because MySQL developers felt that it would take a long time to stabilize 4.1 if, on top of all the new features that they were adding to it, they had to deal with the stored procedures. Version 5.0 was finally released as alpha in December 2003. For a while this created quite a bit of confusion—there were two branches in the alpha stage. Eventually 4.1 stabilized (October 2004), and the confusion was resolved. Version 5.0 stabilized a year later, in October of 2005.

The first alpha release of 5.1 followed in November 2005, which added a number of improvements, some of which are table data partitioning, row-based replication, event scheduler, and a standardized plug-in API that facilitates the integration of new storage engines and other plug-ins.

At this point, MySQL is being actively developed. 5.0 is currently the stable version, while 5.1 is in beta and should soon become stable. New features at this point go into version 5.2.

MySQL Architecture

For the large part, MySQL architecture defies a formal definition or specification. When most of the code was originally written, it was not done to be a part of some great system in the future, but rather to solve some very specific problems. However, it was written so well and with enough insight that it reached the point where there were enough quality pieces to assemble a database server.

Core Modules

I make an attempt in this section to identify the core modules in the system. However, let me add a disclaimer that this is only an attempt to formalize what exists. MySQL developers rarely think in those terms. Rather, they tend to think of files, directories, classes, structures, and functions. It is much more common to hear "This happens in mi_open( )" than to hear "This happens on the MyISAM storage engine level." MySQL developers know the code so well that they are able to think conceptually on the level of functions, structures, and classes. They will probably find the abstractions in this section rather useless. However, it would be helpful to a person used to thinking in terms of modules and managers.

With regard to MySQL, I use the term "module" rather loosely. Unlike what one would typically call a module, in many cases it is not something you can easily pull out and replace with another implementation. The code from one module might be spread across several files, and you often find the code from several different modules in the same file. This is particularly true of the older code. The newer code tends to fit into the pattern of modules better. So in our definition, a module is a piece of code that logically belongs together in some way, and performs a certain critical function in the server. The following modules can be identified:

• Server Initialization Module
• Connection Manager
• Thread Manager
• Connection Thread
• User Authentication Module
• Access Control Module
• Parser
• Command Dispatcher
• Query Cache Module
• Optimizer
• Table Manager
• Table Modification Modules
• Table Maintenance Module
• Status Reporting Module
• Abstracted Storage Engine Interface (Table Handler)
• Storage Engine Implementations (MyISAM, InnoDB, MEMORY, Berkeley DB)
• Logging Module
• Replication Master Module
• Replication Slave Module
• Client/Server Protocol API
• Low-Level Network I/O API
• Core API

Interaction of the Core Modules

When the server is started on the command line, the Initialization Module takes control. It parses the configuration file and the command-line arguments, allocates global memory buffers, initializes global variables and structures, loads the access control tables, and performs a number of other initialization tasks.
Once the initialization job is complete, the Initialization Module passes control to the Connection Manager, which starts listening for connections from clients in a loop.

When a client connects to the database server, the Connection Manager performs a number of low-level network protocol tasks and then passes control to the Thread Manager, which in turn supplies a thread to handle the connection (which from now on will be referred to as the Connection Thread). The Connection Thread might be created anew, or retrieved from the thread cache and called to active duty. Once the Connection Thread receives control, it first invokes the User Authentication Module. The credentials of the connecting user are verified, and the client may now issue requests.

The Connection Thread passes the request data to the Command Dispatcher. Some requests, known in the MySQL code terminology as commands, can be accommodated by the Command Dispatcher directly, while more complex ones need to be redirected to another module. A typical command may request the server to run a query, change the active database, report the status, send a continuous dump of the replication updates, close the connection, or perform some other operation.

In MySQL server terminology, there are two types of client requests: a query and a command. A query is anything that has to go through the parser. A command is a request that can be executed without the need to invoke the parser. We will use the term query in the context of MySQL internals. Thus, not only a SELECT but also a DELETE or INSERT in our terminology would be called a query. What we would call a query is sometimes called an SQL statement.

If full query logging is enabled, the Command Dispatcher will ask the Logging Module to log the query or the command to the plain-text log prior to the dispatch. Thus in the full logging configuration all queries will be logged, even the ones that are not syntactically correct and will never be executed, immediately returning an error.

The Command Dispatcher forwards queries to the Parser through the Query Cache Module. The Query Cache Module checks whether the query is of the type that can be cached, and if there exists a previously computed cached result that is still valid. In the case of a hit, the execution is short-circuited at this point, the cached result is returned to the user, and the Connection Thread receives control and is now ready to process another command. If the Query Cache Module reports a miss, the query goes to the Parser, which will make a decision on how to transfer control based on the query type.

One can identify the following modules that could continue from that point: the Optimizer, the Table Modification Module, the Table Maintenance Module, the Replication Module, and the Status Reporting Module. Select queries are forwarded to the Optimizer; updates, inserts, deletes, and table-creation and schema-altering queries go to the respective Table Modification Modules; queries that check, repair, update key statistics, or defragment the table go to the Table Maintenance module; queries related to replication go to the Replication Module; and status requests go to the Status Reporting Module.
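From the client's point of view, the pipeline above is invisible: a client simply connects, authenticates, and issues queries. Below is a minimal sketch of that client-side flow, assuming a locally running server and the third-party mysql-connector-python package; the host, credentials, schema, and table names are hypothetical:

```python
# A minimal sketch of the client-side view of the flow described above:
# connect (Connection Manager / Thread Manager), authenticate (User
# Authentication Module), then send queries (Command Dispatcher -> Parser).
# Assumes a local MySQL server and mysql-connector-python; the host,
# user, password, and `test` schema are hypothetical.
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1", user="app", password="secret", database="test"
)
cursor = conn.cursor()

# This statement goes through the parser, so in MySQL terminology it is
# a "query" even though it returns no result set.
cursor.execute("CREATE TABLE IF NOT EXISTS t (id INT PRIMARY KEY, v VARCHAR(20))")

cursor.execute("SELECT id, v FROM t")  # a select, dispatched to the Optimizer
for row in cursor.fetchall():
    print(row)

conn.close()  # the Quit command ends the session
```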
There also exist a number of Table Modification Modules: Delete Module, Create Module, Update Module, Insert Module, and Alter Module.

At this point, each of the modules that will receive control from the Parser passes the list of tables involved in the query to the Access Control Module and then, upon success, to the Table Manager, which opens the tables and acquires the necessary locks. Now the table operation module is ready to proceed with its specific task and will issue a number of requests to the Abstracted Storage Engine Module for low-level operations such as inserting or updating a record, retrieving the records based on a key value, or performing an operation on the table level, such as repairing it or updating the index statistics.

The Abstracted Storage Engine Module will automatically translate the calls to the corresponding methods of the specific Storage Engine Module via object polymorphism. In other words, when dealing with a Storage Engine object, the caller thinks it is dealing with an abstracted one; the caller does not need to be aware of the exact object type of the Storage Engine object.
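The table handler interface inside the server is C++, but the mechanism described above is plain object polymorphism. The following Python sketch illustrates the pattern only; the class and method names are hypothetical and are not MySQL's actual handler API:

```python
# Illustrative sketch of the abstracted-storage-engine pattern described
# above; class and method names are hypothetical, not MySQL's handler API.
from abc import ABC, abstractmethod

class StorageEngine(ABC):
    @abstractmethod
    def write_row(self, row): ...
    @abstractmethod
    def read_by_key(self, key): ...

class MyISAMLike(StorageEngine):
    def __init__(self):
        self.rows = {}
    def write_row(self, row):
        self.rows[row["id"]] = row          # applied immediately, no transactions
    def read_by_key(self, key):
        return self.rows.get(key)

class InnoDBLike(StorageEngine):
    def __init__(self):
        self.rows, self.pending = {}, []
    def write_row(self, row):
        self.pending.append(row)            # buffered until commit
    def read_by_key(self, key):
        return self.rows.get(key)
    def commit(self):
        for row in self.pending:
            self.rows[row["id"]] = row
        self.pending.clear()

def insert(engine: StorageEngine, row):
    # The caller sees only the abstract interface; the concrete engine's
    # method is selected at runtime via polymorphism.
    engine.write_row(row)

insert(MyISAMLike(), {"id": 1, "v": "a"})
insert(InnoDBLike(), {"id": 1, "v": "a"})
```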
As the query or command is being processed, the corresponding module may send parts of the result set to the client as they become available. It may also send warnings or an error message. If an error message is issued, both the client and the server will understand that the query or command has failed and take the appropriate measures. The client will not accept any more result set, warning, or error message data for the given query, while the server will always transfer control to the Connection Thread after issuing an error. Note that since MySQL does not use exceptions for reasons of implementation stability and portability, all calls on all levels must be checked for errors with the appropriate transfer of control in the case of failure.

If the low-level module has made a modification to the data in some way and if the binary update logging is enabled, the module will be responsible for asking the Logging Module to log the update event to the binary update log, sometimes known as the replication log, or, among MySQL developers and power users, the binlog.

Once the task is completed, the execution flow returns to the Connection Thread, which performs the necessary clean-up and waits for another query or command from the client. The session continues until the client issues the Quit command.

In addition to interacting with regular clients, a server may receive a command from a replication slave to continuously read its binary update log. This command will be handled by the Replication Master Module. If the server is configured as a replication slave, the Initialization Module will call the Replication Slave Module, which in turn will start two threads, called the SQL Thread and the I/O thread. They take care of propagating updates that happened on the master to the slave. It is possible for the same server to be configured as both a master and a slave.

Network communication with a client goes through the Client/Server Protocol Module, which is responsible for packaging the data in the proper format, and depending on the connection settings, compressing it. The Client/Server Protocol Module in turn uses the Low-Level Network I/O module, which is responsible for sending and receiving the data on the socket level in a cross-platform portable way. It is also responsible for encrypting the data using the OpenSSL library calls if the connection options are set appropriately.

As they perform their respective tasks, the core components of the server heavily rely on the Core API. The Core API provides a rich functionality set, which includes file I/O, memory management, string manipulation, implementations of various data structures and algorithms, and many other useful capabilities. MySQL developers are encouraged to avoid direct libc calls, and use the Core API to facilitate ports to new platforms and code optimization in the future.

Writer: Sasha Pachev

译文:深入理解MySQL核心技术
姓名:苗月明 学号:0651135

MySQL的历史与架构
MySQL的架构最好结合它的历史背景来理解。

数据库管理系统的组织技术
顺序的、直接的以及其他的文件处理方式常用于单个文件中数据的组织和构造,而DBMS可综合几个文件的数据项以回答用户对信息的查询,这就意味着DBMS能够访问和检索非关键记录字段的数据,即DBMS能够将几个大文件中逻辑相关的数据组织并连接在一起。
逻辑结构。确定这些逻辑关系是数据管理者的任务,由数据定义语言完成。DBMS在存储、访问和检索操作过程中可选用以下逻辑构造技术:
链表结构。在该逻辑方式中,记录通过指针链接在一起。指针是记录集中的一个数据项,它指出另一个逻辑相关的记录的存储位置,例如,顾客主文件中的记录将包含每个顾客的姓名和地址,而且该文件中的每个记录都由一个账号标识。在记账期间,顾客可在不同时间购买许多东西。公司保存一个发票文件以反映这些交易,这种情况下可使用链表结构,以显示给定时间内未支付的发票。顾客文件中的每个记录都包含这样一个字段,该字段指向发票文件中该顾客的第一个发票的记录位置,该发票记录又依次与该顾客的下一个发票记录相连,此链接的最后一个发票记录由一个作为指针的特殊字符标识。
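下面用一个极简的Python结构示意这种指针链:顾客记录保存指向其第一张发票的“指针”(这里用列表下标代替存储位置),每张发票再指向该顾客的下一张发票;数据与字段均为假设:

```python
# 示例:用“指针”(此处以列表下标代替存储地址)把顾客记录
# 与其发票记录链接成链表;数据与字段均为假设。
invoices = [
    {"no": "A001", "amount": 100.0, "next": 2},     # 指向下标2的发票
    {"no": "B001", "amount": 50.0,  "next": None},  # 链尾:以None作特殊标记
    {"no": "A002", "amount": 75.0,  "next": None},
]
customers = [
    {"account": "1001", "name": "张三", "first_invoice": 0},
    {"account": "1002", "name": "李四", "first_invoice": 1},
]

def unpaid_invoices(customer):
    """沿指针链遍历某顾客的全部未支付发票。"""
    i = customer["first_invoice"]
    while i is not None:
        yield invoices[i]
        i = invoices[i]["next"]

for inv in unpaid_invoices(customers[0]):
    print(inv["no"], inv["amount"])
```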
层次(树型)结构。该逻辑方式中,数据单元的多级结构类似一棵“倒立”的树,该树的树根在顶部,而树枝向下延伸。在层次(树型)结构中存在主-从关系,惟一的根数据下是从属的元或节点,而每个元或树枝都只有一个所有者,这样,一个customer(顾客)拥有一个invoice(发票),而invoice(发票)又有从属项。在树型结构中,树枝不能相连。
网状结构。网状结构不像树型结构那样不允许树枝相连,它允许节点间多个方向连接,这样,每个节点都可能有几个所有者,同时它又可能拥有任意多个其他数据单元。数据管理软件允许从文件的任一记录开始提取该结构中的所需信息。
关系型结构。关系型结构由许多表格组成,数据则以“关系”的形式存储在这些表中。例如,可建立一些关系表,将大学课程同任课教师及上课地点连接起来。为了找到英语课的上课地点和教师名,首先查询课程/教师关系表得到名字(为“Fitt”),再查询课程/地点关系表得到地点(“Main 142”),当然,也可能有其他关系。这是一个相当新颖的数据库组织技术,将来有望得到广泛应用。该查询的代码示意见下文。
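以下示意用Python标准库sqlite3重现上文课程/教师、课程/地点两个关系表的联合查询;表名与数据沿用上文的假设示例:

```python
# 示例:用两个关系表(课程/教师、课程/地点)回答
# “英语课由谁讲授、在哪里上课”;数据取自上文的假设示例。
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE course_teacher (course TEXT, teacher TEXT)")
conn.execute("CREATE TABLE course_location (course TEXT, location TEXT)")
conn.execute("INSERT INTO course_teacher VALUES ('English', 'Fitt')")
conn.execute("INSERT INTO course_location VALUES ('English', 'Main 142')")

# 通过course这一公共域把两个关系连接起来
row = conn.execute(
    "SELECT t.teacher, l.location "
    "FROM course_teacher t JOIN course_location l ON t.course = l.course "
    "WHERE t.course = 'English'"
).fetchone()
print(row)  # ('Fitt', 'Main 142')
```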
物理结构。人们总是为了各自的目的,按逻辑方式设想或组织数据。因此,在一个具体应用中,记录R1和R2是逻辑相连且顺序处理的,但是,在计算机系统中,这些在一个应用中逻辑相邻的记录,物理位置完全可能不在一起。记录在介质和硬件中的物理结构不仅取决于所采用的I/O设备、存储设备及输入输出和存取技术,而且还取决于用户定义的R1和R2中数据的逻辑关系。例如,R1和R2可能是持有信用卡的顾客记录,而顾客要求每两周将货物运送到同一个城市的同一个街区,从运输部门的管理者看,R1和R2是按地理位置组织的运输记录的顺序项,但是在A/R应用中,可找到R1和R2所表示的顾客,并且可根据其完全不同的账号处理他们的账目。简言之,在许多计算机化的信息记录中,存储记录的物理位置用户是看不见的。
Oracle的数据库管理功能
Oracle包括许多使数据库易于管理的功能,分三部分讨论:Oracle企业管理器、附加包、备份和恢复。
很多数据库包含自然语言文本信息,可由个人在家中使用。小型及稍大的数据库在商业领域中占有越来越重要的地位。典型的商业应用包括航班预订、产品管理、医院的医疗记录以及保险公司的合法记录。最大型的数据库通常用于政府部门、企业、大专院校等。这些数据库存有诸如摘要、报表、成文的法规、通讯录、报纸、杂志、百科全书、各式目录等资料。索引数据库包含参考书目或用于找到相关书籍、期刊及其它参考文献的索引。目前有上万种可公开访问的数据库,内容包罗万象,从法律、医学、工程到新闻、时事、游戏、分类广告、指南等。科学家、医生、律师、财经分析师、股票经纪人等专家和各类研究者越来越多地依赖这些数据库从大量的信息中做快速的查找访问。