数据库安全中英文对照外文翻译文献

安全工程毕业论文中英文对照资料外文翻译文献

安全工程毕业论文中英文资料外文翻译文献译文:《关于安全评价的几点论述》安全性评价是综合运用安全工效学、安全系统工程等方法对企事业单位员工的安全意识与排故能力、设备的完好性与事故隐患、环境因素的现状及其存在的不安全因素等进行检查、预测和安全性评估,以确定企业的危险程度。

根据存在隐患的对象和部位,针对性地进行整改,将事故消灭于萌芽状态,防患于未然。

这对安全管理具有重要作用。

文章以机械加工企业为例来阐明安全性评价的原理和操作方法。

其它企业也可按行业特点仿此原理和方法提出自己的评价方案,均可收到安全生产的预期效果。

安全评价是对系统的危险性进行定性或定量分析,评价系统发生事故的可能性及严重度。

安全评价是安全管理和决策科学化的基础。

安全评价的内容包括:安全管理绩效评价,人的行为安全性评价,设备、设施的安全性评价,作业环境安全性评价,化学物品安全性评价等。

本文主要采用固有危险程度的定性定量分析和风险程度的定性定量分析方法。

从而得出分析结果,指出生产过程中可能出现的危险有害因素,进而提出相应的对策措施,为企业消除事故隐患、实现安全生产提供保障。

通过一系列安全评价方法,得出相应的安全评价结果。

例如,运用美国道化学公司的火灾爆炸指数法对供氧装置和供煤装置进行火灾爆炸危险等级评价,并得出相应的安全补偿系数;同时运用预先危险性分析法对厂内常见的伤害事故进行分析,得出事故的潜在危险性。
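这里提到的道化学公司火灾爆炸指数(F&EI)法,其核心计算关系可以概括如下。这只是一个示意性的简化表达,各系数的具体取值必须查阅道化学公司F&EI评价指南,不能直接照搬:

F&EI = MF × F3,其中 F3 = F1 × F2

式中 MF 为物质系数,F1 为一般工艺危险系数,F2 为特殊工艺危险系数,F3 为单元危险系数。根据算得的 F&EI 数值可以划分装置的火灾爆炸危险等级,再结合安全补偿(补偿系数)确定采取防护措施后的实际危险程度。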

一般企业或其它单位在设立时,或运行后都需要进行安全评价。

主要目的是根据企业的生产或拟设立的项目的情况,由相应的安全评价中介公司的评价师进行现场检查,针对安全上的不足,给出整改要求和措施,由企业进行整改,达到安全生产的目的。

评价师根据企业情况编制安全评价报告,经安监局审批后可以作为企业办理各种审批手续的重要资料。

安全评价分为安全预评价(在设立项目前进行),安全现状评价和安全验收评价。

其中生产产品或副产品中有危险化学品的则要进行预评价和验收评价。

安全专业外文文献(中英文对照PDF)

附录A 技术系统中人为因素的动态可靠性与安全性评价:一门植根于人类起源的现代科学。作者:P. Carlo Cacciabue。收稿日期:2010年1月7日;接受日期:2010年2月27日。施普林格出版社,伦敦,2010年。

摘要:本文讨论了可实际用于设计与安全评估中前瞻性和回顾性分析的人机交互模型所应满足的要求。

运用Hollnagel的“联合认知系统”理论,可以进行全面而详尽的分析,鉴定出人为因素的根本原因,以及复杂评价中潜在的偶发情况。

然而,僵化地套用这些方法有时过于武断,甚至无法弥补数据缺乏的缺陷,也难以构建复杂的建模架构。

本文介绍了两种可行的方法:其一是面向整个工厂控制的整体安全性分析。

另一种方法是,当需要研究明确的任务和具体行为时,采用Hollnagel提出的方法,该方法被认为是目前最先进、最准确的可用工具。

关键词:人类认知;可靠性建模;安全评估;根本原因分析

1 介绍

15年前,即1994年,我曾对埃里克·Hollnagel在我的博士学位论文等方面给予的帮助表示感激。

当然,埃里克·Hollnagel后来成为了我的导师,并帮助我摆脱了那种试图把人当作机器加以形式化的权威心理学的影响。

我从一开始就非常尊重Hollnagel博士。多年前初次相遇时,他保护了我最初的想法,使其免于被一些同事否定;这个想法就是试图在工程科学与心理学之间寻找协调的基础,而这也正是我近25年研究活动的方向。

感谢埃里克!我永远不会忘记你:你在世界许多角落陪伴着我,并以你的思想启发、帮助我。

(Cacciabue 1994年)。

在那个年代,人们需要为人所管理的系统建立必要、明确且无误的人的模型,这招致了许多研究人员的严厉批评:他们认为,那些旨在简化人对技术系统的控制及人在事故中作用的方法之间,并不存在调和的可能。

第一点批评是这些方法只集中在行为层面,即实际表现出来的动作。

这种批评的主要依据是缺乏一个认知模型,无法刻画人类心智典型的审思过程与功能,以及情境相关条件对行为表现的影响(Hollnagel,1994);第二,缺乏对审。

同一时期,“第二代人因可靠性方法”(Cacciabue和Hollnagel,1993)与“认知的微观-宏观仿真”(Cacciabue和Hollnagel,1995)等概念相继提出;随着多种技术的发展(其中许多源自航空运输和核医学领域),其目的在于评估人在系统中的作用,并对安全系统和安全组织进行评价。

数据库中英文对照外文翻译文献

中英文对照外文翻译Database Management SystemsA database (sometimes spelled data base) is also called an electronic database , referring to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval , modification, and deletion of data in conjunction with various data-processing operations .Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage , and each field typically contains information pertaining to one aspect or attribute of the entity described by the database . Using keywords and various sorting commands, users can rapidly search , rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregate of data.Complex data relationships and linkages may be found in all but the simplest databases .The system software package that handles the difficult tasks associated with creating ,accessing, and maintaining database records is called a database management system(DBMS).The programs in a DBMS package establish an interface between the database itself and the users of the database.. (These users may be applications programmers, managers and others with information needs, and various OS programs.)A DBMS can organize, process, and present selected data elements form the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren’t available in regular reports. These questions might initially be vague and/or poorly defined ,but people can “browse” through the database until they have the needed information. In short, the DBMS will “manage” the stored data items and assemble the needed items from the common database in response to the queries of those who aren’t programmers.A database management system (DBMS) is composed of three major parts:(1)a storage subsystemthat stores and retrieves data in files;(2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add , delete, maintain, and update the data;(3)and an interface between the DBMS and its users. Several major trends are emerging that enhance the value and usefulness of database management systems;Managers: who require more up-to-data information to make effective decisionCustomers: who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.Users: who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.Organizations : that discover information has a strategic value; they utilize their database systems to gain an edge over their competitors.The Database ModelA data model describes a way to structure and manipulate the data in a database. 
The structural part of the model specifies how data should be represented(such as tree, tables, and so on ).The manipulative part of the model specifies the operation with which to add, delete, display, maintain, print, search, select, sort and update the data.Hierarchical ModelThe first database management systems used a hierarchical model-that is-they arranged records into a tree structure. Some records are root records and all others have unique parent records. The structure of the tree is designed to reflect the order in which the data will be used that is ,the record at the root of a tree will be accessed first, then records one level below the root ,and so on.The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you have known, an organization char often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels. Note that within a strict hierarchy, each level of management may have many employees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.Relational ModelA major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management called relational model ,which uses a table asits data structure.The relational database is the most widely used database structure. Data is organized into related tables. Each table is made up of rows called and columns called fields. Each record contains fields of data about some specific item. For example, in a table containing information on employees, a record would contain fields of data such as a person’s last name ,first name ,and street address.Structured query language(SQL)is a query language for manipulating data in a relational database .It is nonprocedural or declarative, in which the user need only specify an English-like description that specifies the operation and the described record or combination of records. A query optimizer translates the description into a procedure to perform the database manipulation.Network ModelThe network model creates relationships among data through a linked-list structure in which subordinate records can be linked to more than one parent record. This approach combines records with links, which are called pointers. The pointers are addresses that indicate the location of a record. With the network approach, a subordinate record can be linked to a key record and at the same time itself be a key record linked to other sets of subordinate records. The network mode historically has had a performance advantage over other database models. Today , such performance characteristics are only important in high-volume ,high-speed transaction processing such as automatic teller machine networks or airline reservation system.Both hierarchical and network databases are application specific. If a new application is developed ,maintaining the consistency of databases in different applications can be very difficult. 
For example, suppose a new pension application is developed .The data are the same, but a new database must be created.Object ModelThe newest approach to database management uses an object model , in which records are represented by entities called objects that can both store data and provide methods or procedures to perform specific tasks.The query language used for the object model is the same object-oriented programming language used to develop the database application .This can create problems because there is no simple , uniform query language such as SQL . The object model is relatively new, and only a few examples of object-oriented database exist. It has attracted attention because developers who choose an object-oriented programming language want a database based on an object-oriented model. Distributed DatabaseSimilarly , a distributed database is one in which different parts of the database reside on physically separated computers . One goal of distributed databases is the access of informationwithout regard to where the data might be stored. Keeping in mind that once the users and their data are separated , the communication and networking concepts come into play .Distributed databases require software that resides partially in the larger computer. This software bridges the gap between personal and large computers and resolves the problems of incompatible data formats. Ideally, it would make the mainframe databases appear to be large libraries of information, with most of the processing accomplished on the personal computer.A drawback to some distributed systems is that they are often based on what is called a mainframe-entire model , in which the larger host computer is seen as the master and the terminal or personal computer is seen as a slave. There are some advantages to this approach . With databases under centralized control , many of the problems of data integrity that we mentioned earlier are solved . But today’s personal computers, departmental computers, and distributed processing require computers and their applications to communicate with each other on a more equal or peer-to-peer basis. In a database, the client/server model provides the framework for distributing databases.One way to take advantage of many connected computers running database applications is to distribute the application into cooperating parts that are independent of one anther. A client is an end user or computer program that requests resources across a network. A server is a computer running software that fulfills those requests across a network . When the resources are data in a database ,the client/server model provides the framework for distributing database.A file serve is software that provides access to files across a network. A dedicated file server is a single computer dedicated to being a file server. This is useful ,for example ,if the files are large and require fast access .In such cases, a minicomputer or mainframe would be used as a file server. A distributed file server spreads the files around on individual computers instead of placing them on one dedicated computer.Advantages of the latter server include the ability to store and retrieve files on other computers and the elimination of duplicate files on each computer. A major disadvantage , however, is that individual read/write requests are being moved across the network and problems can arise when updating files. 
Suppose a user requests a record from a file and changes it while another user requests the same record and changes it too. The solution to this problems called record locking, which means that the first request makes others requests wait until the first request is satisfied . Other users may be able to read the record, but they will not be able to change it .A database server is software that services requests to a database across a network. For example, suppose a user types in a query for data on his or her personal computer . If the application is designed with the client/server model in mind ,the query language part on the personal computer simple sends the query across the network to the database server and requests to be notified when the data are found.Examples of distributed database systems can be found in the engineering world. Sun’s Network Filing System(NFS),for example, is used in computer-aided engineering applications to distribute data among the hard disks in a network of Sun workstation.Distributing databases is an evolutionary step because it is logical that data should exist at the location where they are being used . Departmental computers within a large corporation ,for example, should have data reside locally , yet those data should be accessible by authorized corporate management when they want to consolidate departmental data . DBMS software will protect the security and integrity of the database , and the distributed database will appear to its users as no different from the non-distributed database .In this information age, the data server has become the heart of a company. This one piece of software controls the rhythm of most organizations and is used to pump information lifeblood through the arteries of the network. Because of the critical nature of this application, the data server is also the one of the most popular targets for hackers. If a hacker owns this application, he can cause the company's "heart" to suffer a fatal arrest.Ironically, although most users are now aware of hackers, they still do not realize how susceptible their database servers are to hack attacks. Thus, this article presents a description of the primary methods of attacking database servers (also known as SQL servers) and shows you how to protect yourself from these attacks.You should note this information is not new. Many technical white papers go into great detail about how to perform SQL attacks, and numerous vulnerabilities have been posted to security lists that describe exactly how certain database applications can be exploited. This article was written for the curious non-SQL experts who do not care to know the details, and as a review to those who do use SQL regularly.What Is a SQL Server?A database application is a program that provides clients with access to data. There are many variations of this type of application, ranging from the expensive enterprise-level Microsoft SQL Server to the free and open source mySQL. Regardless of the flavor, most database server applications have several things in common.First, database applications use the same general programming language known as SQL, or Structured Query Language. This language, also known as a fourth-level language due to its simplistic syntax, is at the core of how a client communicates its requests to the server. Using SQL in its simplest form, a programmer can select, add, update, and delete information in a database. 
However, SQL can also be used to create and design entire databases, perform various functions on the returned information, and even execute other programs.To illustrate how SQL can be used, the following is an example of a simple standard SQL query and a more powerful SQL query:Simple: "Select * from dbFurniture.tblChair"This returns all information in the table tblChair from the database dbFurniture.Complex: "EXEC master..xp_cmdshell 'dir c:\'"This short SQL command returns to the client the list of files and folders under the c:\ directory of the SQL server. Note that this example uses an extended stored procedure that is exclusive to MS SQL Server.The second function that database server applications share is that they all require some form of authenticated connection between client and host. Although the SQL language is fairly easy to use, at least in its basic form, any client that wants to perform queries must first provide some form of credentials that will authorize the client; the client also must define the format of the request and response.This connection is defined by several attributes, depending on the relative location of the client and what operating systems are in use. We could spend a whole article discussing various technologies such as DSN connections, DSN-less connections, RDO, ADO, and more, but these subjects are outside the scope of this article. If you want to learn more about them, a little Google'ing will provide you with more than enough information. However, the following is a list of the more common items included in a connection request.Database sourceRequest typeDatabaseUser IDPasswordBefore any connection can be made, the client must define what type of database server it is connecting to. This is handled by a software component that provides the client with the instructions needed to create the request in the correct format. In addition to the type of database, the request type can be used to further define how the client's request will be handled by the server. Next comes the database name and finally the authentication information.All the connection information is important, but by far the weakest link is the authentication information—or lack thereof. In a properly managed server, each database has its own users with specifically designated permissions that control what type of activity they can perform. For example, a user account would be set up as read only for applications that need to only access information. Another account should be used for inserts or updates, and maybe even a third account would be used for deletes.This type of account control ensures that any compromised account is limited in functionality. Unfortunately, many database programs are set up with null or easy passwords, which leads to successful hack attacks.译文数据库管理系统介绍数据库(database,有时拼作data base)又称为电子数据库,是专门组织起来的一组数据或信息,其目的是为了便于计算机快速查询及检索。
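上文英文原文提到,管理良好的数据库服务器应为不同用途设置权限各异的账户(只读、插入/更新、删除),以便某个账户被攻破后危害仍然有限。下面是一段示意性的T-SQL草稿:数据库名dbFurniture与表tblChair沿用原文示例,登录名report_reader、app_writer及口令均为虚构的占位符,实际命名与口令策略应按自身环境确定:

-- 只读账户:仅授予SELECT权限,供报表类应用使用
CREATE LOGIN report_reader WITH PASSWORD = 'Str0ng!Pa55word';
USE dbFurniture;
CREATE USER report_reader FOR LOGIN report_reader;
GRANT SELECT ON dbo.tblChair TO report_reader;

-- 写入账户:仅授予INSERT和UPDATE权限,不授予DELETE
CREATE LOGIN app_writer WITH PASSWORD = 'An0ther!Pa55word';
CREATE USER app_writer FOR LOGIN app_writer;
GRANT INSERT, UPDATE ON dbo.tblChair TO app_writer;

按这种方式划分权限后,即使report_reader的口令泄露,攻击者也只能读取数据而无法修改或删除记录;这类低权限用户在默认配置下也无法执行诸如xp_cmdshell之类的扩展存储过程。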

外文文献-中文翻译-数据库

外文文献-中文翻译-数据库英文原文2:《DBA Survivor: Become a Rock Star DBA》by Thomas LaRock,Published By Apress.2010You know that a database is a collection of logically related data elements that may be structured in various ways lo meet the multiple processing and retrieval needs of organizations and individuals. There’s nothing new about databases—early ones were chiseled in stone, penned on scrolls, and written on index cards. But now databases are commonly recorded on magnetizable media, and computer programs are required to perform the necessary storage and retrieval operations.Yo u’ll see in the following pages that complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS) .The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions (hat aren't available in regular reports. These questions might initially be vague and / or poorly defined, but peo ple can "browse” through the database until they have the needed information. Inshort, the DBMS will “m anage”the stored data items and assemble the needed items from the common database in response to the queries of those who aren’t10programmers. In a file-oriented system, users needing special information may communicate their needs to a programmer, who, when time permits, will write one or more programs to extract the data and prepare the information[4].The availability of a DBMS, however, offers users a much faster alternative communications path.If the DBMS provides a way to interactively and update the database, as well as interrogate it capability allows for managing personal data-Aces however, it does not automatically leave an audit trail of actions and docs not provide the kinds of control a necessary in a multiuser organization. These-controls arc only available when a set of application programs arc customized for each data entry and updating function.Software for personal computers which perform me of the DBMS functions have been very popular. Personal computers were intended for use by individuals for personal information storage and process- These machines have also been used extensively small enterprises, professionals like doctors, acrylics, engineers, lasers and so on .By the nature of intended usage, database systems on these machines except from several of the requirements of full doge database systems. Since data sharing is not tended, concurrent operations even less so. the fewer can be less complex. Security and integrity maintenance arc de-emphasized or absent. As data limes will be small, performance efficiency is also important. In fact, the only aspect of a database system that is important is data Independence. Data-dependence, as stated earlier, means that applicant programs and user queries need not recognizant physical organization of data on secondary storage. The importance of this aspect, particularly for the personal computer user, is that this greatly simplifies database usage. 
The user can store, access and manipulate data a( a high level (close to (he application) and be totally shielded from the10low level (close to the machine) details of data organization. We will not discuss details of specific PC DBMS software packages here. Let us summarize in the following the strengths and weaknesses of personal computer data-base software systems:The most obvious positive factor is the user friendliness of the software. A user with no prior computer background would be able to use the system to store personal and professional data, retrieve and perform relayed processing. The user should, of course, satiety himself about the quality of software and the freedom from errors (bugs) so that invest-merits in data arc protected.For the programmer implementing applications with them, the advantage lies in the support for applications development in terms of input screen generations, output report generation etc. offered by theses stems.The main negative point concerns absence of data protection features. Unless encrypted, data cane accessed by whoever has access to the machine Data can be destroyed through mistakes or malicious intent. The second weakness of many of the PC-based systems is that of performance. If data volumes grow up to a few thousands of records, performance could be a bottleneck.For organization where growth in data volumes is expected, availability of. the same or compatible software on large machines should be considered.This is one of the most common misconceptions about database management systems that are used in personal computers. Thoroughly comprehensive and sophisticated business systems can be developed in dBASE, Paradox and other DBMSs. However, they are created by experienced programmers using the DBMS's own programming language. Thai is not the same as users who create and manage personal10files that are not part of the mainstream company system.Transaction Management of DatabaseThe objective of long-duration transactions is to model long-duration, interactive Database access sessions in application environments. The fundamental assumption about short-duration of transactions that underlies the traditional model of transactions is inappropriate for long-duration transactions. The implementation of the traditional model of transactions may cause intolerably long waits when transactions aleph to acquire locks before accessing data, and may also cause a large amount of work to be lost when transactions are backed out in response to user-initiated aborts or system failure situations.The objective of a transaction model is to pro-vide a rigorous basis for automatically enforcing criterion for database consistency for a set of multiple concurrent read and write accesses to the database in the presence of potential system failure situations. The consistency criterion adopted for traditional transactions is the notion of scrializability. Scrializa-bility is enforced in conventional database systems through theuse of locking for automatic concurrency control, and logging for automatic recovery from system failure situations. A “transaction’’ that doesn't provide a basis for automatically enforcing data-base consistency is not really a transaction. To be sure, a long-duration transaction need not adopt seri-alizability as its consistency criterion. 
However, there must be some consistency criterion.Version System Management of DatabaseDespite a large number of proposals on version support in the context of computer aided design and software engineering, the absence of a consensus on version semantics10has been a key impediment to version support in database systems. Because of the differences between files and databases, it is intuitively clear that the model of versions in database systems cannot be as simple as that adopted in file systems to support software engineering.For data-bases, it may be necessary to manage not only versions of single objects (e.g. a software module, document, but also versions of a collection of objects (e.g. a compound document, a user manual, etc. and perhaps even versions of the schema of database (c.g. a table or a class, a collection of tables or classes).Broadly, there arc three directions of research and development in versioning. First is the notion of a parameterized versioning", that is, designing and implementing a versioning system whose behavior may be tailored by adjusting system parameters This may be the only viable approach, in view of the fact that there are various plausible choices for virtually every single aspect of versioning.The second is to revisit these plausible choices for every aspect of versioning, with the view to discardingsome of themes either impractical or flawed. The third is the investigation into the semantics and implementation of versioning collections of objects and of versioning the database.There is no consensus of the definition of the te rm “management information system”. Some writers prefer alternative terminology such as “information processing system”, "information and decision syste m, “organizational information syste m”, or simply “i nformat ion system” to refer to the computer-based information processing system which supports the operations, management, and decision-making functions of an organization. This text uses “MIS” because i t is descriptive and generally understood; it also frequently uses "information system”instead of ''MIS” t o refer to an organizational information system.10A definition of a management information system, as the term is generally understood, is an integrated, user-machine system for providing information 丨o support operations, management, and decision-making functions in an organization. The system utilizes computer hardware and software; manual procedures: models for analysis planning, control and decision making; and a database. The fact that it is an integrated system does not mean that it is a single, monolithic structure: rather, ii means that the parts fit into an overall design. The elements of the definition arc highlighted below: Computer-based user-machine system.Conceptually, a management information can exist without computer, but it is the power of the computer which makes MIS feasible. The question is not whether computers should be used in management information system, but the extent to whichinformation use should be computerized. The concept of a user-machine system implies that some (asks are best performed humans, while others are best done by machine. The user of an MIS is any person responsible for entering input da(a, instructing the system, or utilizing the information output of the system. 
For many problems, the user and the computer form a combined system with results obtained through a set of interactions between the computer and the user.User-machine interaction is facilitated by operation in which the user's input-output device (usually a visual display terminal) is connected lo the computer. The computer can be a personal computer serving only one user or a large computer that serves a number of users through terminals connected by communication lines. The user input-output device permits direct input of data and immediate output of results. For instance, a person using The computer interactively in financial planning poses 4t what10if* questions by entering input at the terminal keyboard; the results are displayed on the screen in a few second.The computer-based user-machine characteristics of an MIS affect the knowledge requirements of both system developer and system user, “computer-based” means that the designer of a management information system must have a knowledge of computers and of their use in processing. The “user-machine” concept means the system designer should also understand the capabilities of humans as system components (as information processors) and the behavior of humans as users of information.Information system applications should not require users Co be computer experts. However, users need to be able lo specify(heir information requirements; some understanding of computers, the nature of information, and its use in various management function aids users in this task.Management information system typically provide the basis for integration of organizational information processing. Individual applications within information systems arc developed for and by diverse sets of users. If there are no integrating processes and mechanisms, the individual applications may be inconsistent and incompatible. Data item may be specified differently and may not be compatible across applications that use the same data. There may be redundant development of separate applications when actually a single application could serve more than one need. A user wanting to perform analysis using data from two different applications may find the task very difficult and sometimes impossible.The first step in integration of information system applications is an overall information system plan. Even though application systems are implemented one at a10time, their design can be guided by the overall plan, which determines how they fit in with other functions. In essence, the information system is designed as a planed federation of small systems.Information system integration is also achieved through standards, guidelines, and procedures set by the MIS function. The enforcement of such standards and procedures permit diverse applications to share data, meet audit and control requirements, and be shares by multiple users. For instance, an application may be developed to run on a particular small computer. Standards for integration may dictate that theequipment selected be compatible with the centralized database. The trend in information system design is toward separate application processing form the data used to support it. The separate database is the mechanism by which data items are integrated across many applications and made consistently available to a variety of users. 
The need for a database in MIS is discussed below.The term “information” and “data” are frequently used interchangeably; However, information is generally defined as data that is meaningful or useful to The recipient. Data items are therefore the raw material for producing information.The underlying concept of a database is that data needs to be managed in order to be available for processing and have appropriate quality. This data management includes both software and organization. The software to create and manage a database is a database management system.When all access to any use of database is controlled through a database management system, all applications utilizing a particular data item access the same data item which is stored in only one place. A single updating of the data item updates it for10all uses. Integration through a database management system requires a central authority for the database. The data can be stored in one central computer or dispersed among several computers; the overriding requirement is that there be an organizational function to exercise control.It is usually insufficient for human recipients to receive only raw data or even summarized data. Data usually needs to be processed and presented in such a way that Che result is directed toward the decision to be made. To do this, processing of dataitems is based on a decision model.For example, an investment decision relative to new capital expenditures might be processed in terms of a capital expenditure decision model.Decision models can be used to support different stages in the decision-making process. “Intelligence’’ models can be used to search for problems and/or opportunities. Models can be used to identify and analyze possible solutions. Choice models such as optimization models maybe used to find the most desirable solution.In other words, multiple approaches are needed to meet a variety of decision situations. The following are examples and the type of model that might be included in an MIS to aid in analysis in support of decision-making; in a comprehensive information system, the decision maker has available a set of general models that can be applied to many analysis and decision situations plus a set of very specific models for unique decisions. Similar models are available tor planning and control. The set of models is the model base for the MIS.Models are generally most effective when the manager can use interactive dialog (o build a plan or to iterate through several decision choices under different conditions.10中文译文2:《数据库幸存者:成为一个摇滚名明星》众所周知,数据库是逻辑上相关的数据元的汇集.这些数据元可以按不同的结构组织起来,以满足单位和个人的多种处理和检索的需要。
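上文英文原文谈到,传统事务模型以可串行化作为一致性准则,通过加锁实现自动并发控制、通过日志实现故障后的自动恢复。下面用一段示意性的T-SQL说明“一个事务要么全部生效、要么全部回滚”的含义;其中Account(AccountID, Balance)表及转账金额均为虚构的假设:

-- 把一笔转账作为一个事务执行
SET XACT_ABORT ON;  -- 任一语句出错时自动回滚整个事务
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- 以可串行化隔离级别运行
BEGIN TRANSACTION;
    UPDATE Account SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE Account SET Balance = Balance + 100 WHERE AccountID = 2;
COMMIT TRANSACTION;

也正因为事务在提交前必须持有锁,长事务会让其他事务长时间等待,一旦中途回滚还会损失大量已完成的工作,这正是原文指出传统模型不适用于长事务的原因。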

数据库中英文对照表

DBA词典:数据库设计常用词汇中英文对照表

1. Access method(访问方法):指从文件中存储和检索记录的方法。

2. Alias(别名):某属性的另一个名字。

在SQL中,可以用别名替换表名。

3. Alternate keys(备用键,ER/关系模型):在实体/表中没有被选为主键的候选键。

4. Anomalies(异常):参见更新异常(update anomalies)。

5. Application design(应用程序设计):数据库应用程序生命周期的一个阶段,包括设计用户界面以及使用和处理数据库的应用程序。

6. Attribute(属性)(关系模型):属性是关系中命名的列。

7. Attribute(属性)(ER模型):实体或关系中的一个性质。

8. Attribute inheritance(属性继承):子类成员可以拥有其特有的属性,并且继承那些与超类有关的属性的过程。

9. Base table(基本表):一个命名的表,其记录物理地存储在数据库中。

10. Binary relationship(二元关系):一个ER术语,用于描述两个实体间的关系。

例如,Branch Has Staff。

11. Bottom-up approach(自底向上方法):用于数据库设计的一种设计方法学,它从标识每个设计组件开始,然后将这些组件聚合成一个大的单元。

在数据库设计中,可以从表示属性开始底层设计,然后将这些属性组合在一起构成代表实体和关系的表。

12. Business rules(业务规则):由用户或数据库的管理者指定的附加规则。

13. Candidate key(候选键,ER/关系模型):仅包含唯一标识实体所必需的最小数量的属性/列的超键。

14. Cardinality(基数):描述每个参与实体的可能的关系数目。

15. Centralized approach(集中化方法,用于数据库设计):将每个用户视图的需求合并成新数据库应用程序的一个需求集合。

16. Chasm trap(深坑陷阱):模型假设实体类型之间存在一个关系,但某些实体出现之间并不存在通路。

上述主键、候选键、备用键及二元关系等概念,可结合下面的SQL示例理解。
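下面是一段示意性的SQL建表语句,用来说明主键、备用键(候选键中未被选为主键者,以UNIQUE约束实现)以及二元关系“Branch Has Staff”(以外键实现)的含义。表名和列名均为假设性的示例,并非词典原文内容:

-- 分支机构表:branch_no为主键
CREATE TABLE Branch (
    branch_no   CHAR(4)      NOT NULL PRIMARY KEY,
    street      VARCHAR(50),
    city        VARCHAR(30)
);

-- 员工表:staff_no为主键;身份证号id_card是候选键但未被选为主键,即备用键
CREATE TABLE Staff (
    staff_no    CHAR(5)      NOT NULL PRIMARY KEY,
    id_card     CHAR(18)     NOT NULL UNIQUE,   -- 备用键
    name        VARCHAR(30)  NOT NULL,
    branch_no   CHAR(4)      NOT NULL,
    FOREIGN KEY (branch_no) REFERENCES Branch (branch_no)  -- 二元关系 Branch Has Staff
);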

数据库外文参考文献及翻译

SQL ALL-IN-ONE DESK REFERENCE FOR DUMMIES

Data Files and Databases

I. Irreducible complexity

Any software system that performs a useful function is going to be complex. The more valuable the function, the more complex its implementation will be. Regardless of how the data is stored, the complexity remains. The only question is where that complexity resides. Any non-trivial computer application has two major components: the program and the data. Although an application's level of complexity depends on the task to be performed, developers have some control over the location of that complexity. The complexity may reside primarily in the program part of the overall system, or it may reside in the data part.

Operations on the data can be fast. Because the program interacts directly with the data, with no DBMS in the middle, well-designed applications can run as fast as the hardware permits. What could be better? A data organization that minimizes storage requirements and at the same time maximizes speed of operation seems like the best of all possible worlds. But wait a minute. Flat file systems came into use in the 1940s. We have known about them for a long time, and yet today they have been almost entirely replaced by database systems. What's up with that? Perhaps it is the not-so-beneficial consequences.

大数据外文翻译文献

大数据外文翻译文献(文档含中英文对照即英文原文和中文翻译)原文:What is Data Mining?Many people treat data mining as a synonym for another popularly used term, “Knowledge Discovery in Databases”, or KDD. Alternatively, others view data mining as simply an essential step in the process of knowledge discovery in databases. Knowledge discovery consists of an iterative sequence of the following steps:· data cleaning: to remove noise or irrelevant data,· data integration: where multiple data sources may be combined,·data selection : where data relevant to the analysis task are retrieved from the database,·data transformation : where data are transformed or consolidated into forms appropriate for mining by performing summary or aggregation operations, for instance,·data mining: an essential process where intelligent methods are applied in order to extract data patterns,·pattern evaluation: to identify the truly interesting patterns representing knowledge based on some interestingness measures, and ·knowledge presentation: where visualization and knowledge representation techniques are used to present the mined knowledge to the user .The data mining step may interact with the user or a knowledge base. The interesting patterns are presented to the user, and may be stored as new knowledge in the knowledge base. Note that according to this view, data mining is only one step in the entire process, albeit an essential one since it uncovers hidden patterns for evaluation.We agree that data mining is a knowledge discovery process. However, in industry, in media, and in the database research milieu, the term “data mining” is becoming more popular than the longer term of “knowledge discovery in databases”. Therefore, in this book, we choose to use the term “data mining”. We adop t a broad view of data mining functionality: data mining is the process of discovering interestingknowledge from large amounts of data stored either in databases, data warehouses, or other information repositories.Based on this view, the architecture of a typical data mining system may have the following major components:1. Database, data warehouse, or other information repository. This is one or a set of databases, data warehouses, spread sheets, or other kinds of information repositories. Data cleaning and data integration techniques may be performed on the data.2. Database or data warehouse server. The database or data warehouse server is responsible for fetching the relevant data, based on the user’s data mining request.3. Knowledge base. This is the domain knowledge that is used to guide the search, or evaluate the interestingness of resulting patterns. Such knowledge can include concept hierarchies, used to organize attributes or attribute values into different levels of abstraction. Knowledge such as user beliefs, which can be used to assess a pattern’s interestingness based on its unexpectedness, may also be included. Other examples of domain knowledge are additional interestingness constraints or thresholds, and metadata (e.g., describing data from multiple heterogeneous sources).4. Data mining engine. This is essential to the data mining system and ideally consists of a set of functional modules for tasks such ascharacterization, association analysis, classification, evolution and deviation analysis.5. Pattern evaluation module. This component typically employs interestingness measures and interacts with the data mining modules so as to focus the search towards interesting patterns. It may access interestingness thresholds stored in the knowledge base. 
Alternatively, the pattern evaluation module may be integrated with the mining module, depending on the implementation of the data mining method used. For efficient data mining, it is highly recommended to push the evaluation of pattern interestingness as deep as possible into the mining process so as to confine the search to only the interesting patterns.6. Graphical user interface. This module communicates between users and the data mining system, allowing the user to interact with the system by specifying a data mining query or task, providing information to help focus the search, and performing exploratory data mining based on the intermediate data mining results. In addition, this component allows the user to browse database and data warehouse schemas or data structures, evaluate mined patterns, and visualize the patterns in different forms.From a data warehouse perspective, data mining can be viewed as an advanced stage of on-1ine analytical processing (OLAP). However, data mining goes far beyond the narrow scope of summarization-styleanalytical processing of data warehouse systems by incorporating more advanced techniques for data understanding.While there may be many “data mining systems” on the market, not all of them can perform true data mining. A data analysis system that does not handle large amounts of data can at most be categorized as a machine learning system, a statistical data analysis tool, or an experimental system prototype. A system that can only perform data or information retrieval, including finding aggregate values, or that performs deductive query answering in large databases should be more appropriately categorized as either a database system, an information retrieval system, or a deductive database system.Data mining involves an integration of techniques from mult1ple disciplines such as database technology, statistics, machine learning, high performance computing, pattern recognition, neural networks, data visualization, information retrieval, image and signal processing, and spatial data analysis. We adopt a database perspective in our presentation of data mining in this book. That is, emphasis is placed on efficient and scalable data mining techniques for large databases. By performing data mining, interesting knowledge, regularities, or high-level information can be extracted from databases and viewed or browsed from different angles. The discovered knowledge can be applied to decision making, process control, information management, query processing, and so on. Therefore,data mining is considered as one of the most important frontiers in database systems and one of the most promising, new database applications in the information industry.A classification of data mining systemsData mining is an interdisciplinary field, the confluence of a set of disciplines, including database systems, statistics, machine learning, visualization, and information science. Moreover, depending on the data mining approach used, techniques from other disciplines may be applied, such as neural networks, fuzzy and or rough set theory, knowledge representation, inductive logic programming, or high performance computing. 
Depending on the kinds of data to be mined or on the given data mining application, the data mining system may also integrate techniques from spatial data analysis, Information retrieval, pattern recognition, image analysis, signal processing, computer graphics, Web technology, economics, or psychology.Because of the diversity of disciplines contributing to data mining, data mining research is expected to generate a large variety of data mining systems. Therefore, it is necessary to provide a clear classification of data mining systems. Such a classification may help potential users distinguish data mining systems and identify those that best match their needs. Data mining systems can be categorized according to various criteria, as follows.1) Classification according to the kinds of databases mined.A data mining system can be classified according to the kinds of databases mined. Database systems themselves can be classified according to different criteria (such as data models, or the types of data or applications involved), each of which may require its own data mining technique. Data mining systems can therefore be classified accordingly.For instance, if classifying according to data models, we may have a relational, transactional, object-oriented, object-relational, or data warehouse mining system. If classifying according to the special types of data handled, we may have a spatial, time -series, text, or multimedia data mining system , or a World-Wide Web mining system . Other system types include heterogeneous data mining systems, and legacy data mining systems.2) Classification according to the kinds of knowledge mined.Data mining systems can be categorized according to the kinds of knowledge they mine, i.e., based on data mining functionalities, such as characterization, discrimination, association, classification, clustering, trend and evolution analysis, deviation analysis , similarity analysis, etc.A comprehensive data mining system usually provides multiple and/or integrated data mining functionalities.Moreover, data mining systems can also be distinguished based on the granularity or levels of abstraction of the knowledge mined, includinggeneralized knowledge(at a high level of abstraction), primitive-level knowledge(at a raw data level), or knowledge at multiple levels (considering several levels of abstraction). An advanced data mining system should facilitate the discovery of knowledge at multiple levels of abstraction.3) Classification according to the kinds of techniques utilized.Data mining systems can also be categorized according to the underlying data mining techniques employed. These techniques can be described according to the degree of user interaction involved (e.g., autonomous systems, interactive exploratory systems, query-driven systems), or the methods of data analysis employed(e.g., database-oriented or data warehouse-oriented techniques, machine learning, statistics, visualization, pattern recognition, neural networks, and so on ) .A sophisticated data mining system will often adopt multiple data mining techniques or work out an effective, integrated technique which combines the merits of a few individual approaches.什么是数据挖掘?许多人把数据挖掘视为另一个常用的术语—数据库中的知识发现或KDD的同义词。
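原文列出的知识发现步骤中,“数据选择”和“数据变换”常常可以直接用SQL表达:先从数据库中取出与分析任务相关的数据,再通过汇总或聚集操作把数据整理成适合挖掘算法使用的形式。下面是一段示意性的SQL草稿,sales表、列名和日期都是假设的示例,并非原文内容:

-- 数据选择:只取指定日期之后、金额有效的销售记录
-- 数据变换:按客户汇总,生成可供挖掘算法使用的特征
SELECT customer_id,
       COUNT(*)    AS order_count,     -- 购买次数
       SUM(amount) AS total_amount,    -- 累计金额
       AVG(amount) AS avg_amount       -- 平均金额
FROM   sales
WHERE  order_date >= '2015-01-01'
  AND  amount > 0
GROUP BY customer_id;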

信息系统和数据库开发中英文对照外文翻译文献

中英文对照外文翻译文献(文档含英文原文和中文翻译)Information System Development and DatabaseDevelopmentIn many organizations, database development from the beginning of enterprise data modeling, data modeling enterprises determine the scope of the database and the general content. This step usually occurs in an organization's information system planning process, it aims to help organizations create an overall data description or explanation, and not the design of a specific database. A specific database for one or more information systems provide data and the corporate data model (which may involve a number of databases) described by the organization maintaining the scope of the data. Data modeling in the enterprise, you review of the current system, the need to support analysis of the nature of the business areas, the need for further description of the abstract data, and planning one or more database developmentproject. Figure 1 shows Pine Valley furniture company's enterprise data model of a part.1.1 Information System ArchitectureSenior data model is only general information system architecture (ISA) or a part of an organization's information system blueprint. In the information system planning, you can build an enterprise data model as a whole information system architecture part. According to Zachman (1987), Sowa and Zachman (1992) views of an information system architecture consists of the following six key components:DataManipulation of data processing (of a data flow diagram can be used, with the object model methods, or other symbols that).Networks, which organizations and in organizations with its main transmission of data between business partners (it can connect through the network topology map and to demonstrate).People who deal with the implementation of data and information and is the source and receiver (in the process model for the data shows that the sender and the receiver).Implementation of the events and time points (they can use state transition diagram and other means.)The reasons for the incident and data processing rules (often in the form of text display, but there are also a number of charts for the planning tools such as decision tables).1.2 Information EngineeringInformation systems planners in accordance with the specific information system planning methods developed information system architecture. Information engineering is a popular and formal methods. Information engineering is a data-oriented creation and maintenance of the information system. Information engineering is because the data-oriented, so when you begin to understand how the database is defined by the logo and when information engineering a concise explanation is very helpful. Information Engineering follow top-down planning approach, in which specific information systems from a wide range of informationneeds in the understanding derived from (for example, we need about customers, products, suppliers, sales and processing of the data center), rather than merging many detailed information requested ( orders such as a screen or in accordance with the importation of geographical sales summary report). 
Top-down planning will enable developers to plan more comprehensive information system, consider system components provide an integrated approach to enhance the information system and the relationship between the business objectives of the understanding, deepen their understanding of information systems throughout the organization in understanding the impact.Information Engineering includes four steps: planning, analysis, design and implementation. The planning stage of project information generated information system architecture, including enterprise data model.1.3 Information System PlanningInformation systems planning objective is to enable IT organizations and the business strategy closely integrated, such integration for the information systems and technology to make the most of the investment interest is very important. As the table as a description, information engineering approach the planning stage include three steps, we in the follow-up of three sections they discussed.1. Critical factors determining the planningPlanning is the key factor that organizational objectives, critical success factors and problem areas. These factors determine the purpose of the establishment of planning and environment planning and information systems linked to strategic business planning. Table 2 shows the Pine Valley furniture company's key planning a number of possible factors, these factors contribute to the information systems manager for the new information systems and databases clubs top priority to deal with the demand. For example, given the imprecise sales forecasts this problem areas, information systems managers in the organization may be stored in the database additional historical sales data, new market research data and new product test data.2. The planning organizations set targetsOrganizations planning targets defined scope of business, and business scope will limit the subsequent analysis and information systems may change places. Five key planning targets as follows:● organizational units in the various sectors.● organizations location of the place of business operations.● functions of the business support organizations handling mission of the relevant group. Unlike business organizations function modules, in fact a function can be assigned to various organizations modules (for example, product development function is the production and sale of the common responsibility of the Ministry).● types of entities managed by the organization on the people, places and things of the major types of data.● Information System data set processing software applications and support procedures.3. To set up a business modelA comprehensive business model including the functions of each enterprise functional decomposition model, the enterprise data model and the various planning matrix. Functional decomposition is the function of the organization for a more detailed decomposition process, the functional decomposition is to simplify the analysis of the issue, distracted and identify components and the use of the classical approach. Pine Valley furniture company in order to function in the functional decomposition example in figure 2 below. In dealing with business functions and support functions of the full set, multiple databases, is essential to a specific database therefore likely only to support functions (as shown in Figure 2) provide a subset of support. 
In order to reduce data redundancy and to make data more meaningful, has a complete, high-level business view is very helpful.The use of specific enterprise data model to describe the symbol. Apart from the graphical description of this type of entity, a complete enterprise data model should also include a description of each entity type description of business operations and a summary of that business rules. Business rules determine the validity of the data.An enterprise data model includes not only the types of entities, including the link between the data entities, as well as various other objects planning links. Showed that the linkage between planning targets a common form of matrix. Because of planning matrix need not be explicit modeling database can be clearly described business needs, planning matrix is an important function. Regular planning matrix derived from theoperational rules, it will help social development activities that top priority will be sorting and development activities under the top-down view through an enterprise-wide approach for the development of these activities. There are many types of planning matrix is available, their commonalities are:● locations - features show business function in which the implementation of operational locations.● unit - functions which showed that business function or business unit responsible for implementation.● Information System - data entities to explain how each information system interact with each data entity (for example, whether or not each system in each entity have the data to create, retrieve, update and delete).● support functions - data in each functional entities in the data set for the acquisition, use, update and delete.● Information System - target indication for each information system to support business objectives.Data entities matrix. Such a matrix can be used for a variety of purposes, including the following three objectives:1) identify gaps in the data entities to indicate the types of entities not use any function or functions which do not use any entity.2) found that the loss of each functional entities involved in the inspection staff through the matrix to identify any possible loss of the entity.3) The distinction between development activities if the priority to the top of a system development function for a high-priority (probably because it important organizational objectives related), then this area used by entities in the development of the database has a high priority. Hoffer, George and Valacich (2002) are the works of the matrix on how to use the planning and completion of the Information Engineering.The planning system more complete description.2 database development processBased on information engineering information systems planning database is a source of development projects. These new database development projects is usuallyin order to meet the strategic needs of organizations, such as improving customer support, improve product and inventory management, or a more accurate sales forecast. However, many more database development project is the bottom-up approach emerging, such as information system user needs specific information to complete their work, thus beginning a project request, and as other information systems experts found that organizations need to improve data management and begin new projects. 
Even when projects arise bottom-up, an enterprise data model is still needed so that the organization can tell whether existing databases already provide the required data, or whether new databases, data entities, and attributes must be added to its current data resources. Whether it is driven by strategic needs or by operational information needs, each database development project normally concentrates on a single database. Some projects concentrate only on defining, designing, and implementing a database as a foundation for later information system development; in most cases, however, the database and the associated information processing functions are developed together as parts of a complete information systems development project.

2.1 System Development Life Cycle
The traditional process that guides management information system development projects is the systems development life cycle (SDLC). The SDLC is the complete set of steps that an organization's team of specialists, including database designers and programmers, follows to specify, develop, maintain, and replace its information systems. The process is often likened to a waterfall because each step flows into the adjacent next one: the information system is specified and built piece by piece, and the output of each piece becomes the input to the next. As the figure shows, however, the steps are not purely linear; they overlap in time (so parallel steps can be managed), and when earlier decisions have to be reconsidered it is possible to roll back several steps. (So water can, after all, be put back at the top of the waterfall!)

Figure 4 annotates the systems development life cycle with the purpose of each stage and the product it delivers. Every stage includes database development activities, so database management concerns run through the entire development process. Figure 5 repeats the seven SDLC stages and outlines the database development activities common to each stage. Note that the correspondence between SDLC stages and database development steps is not strictly one-to-one; conceptual data modeling, for example, spans two life cycle stages.

Enterprise Modeling
The database development process starts with enterprise modeling (part of the project identification, feasibility study, and selection stage of the SDLC), which sets the range and general content of the organization's databases. Enterprise modeling takes place during information systems planning and related activities that determine which parts of the organization's information systems need to change or be strengthened, and that outline the overall scope of the data. In this step the current databases and information systems are reviewed, the business areas that are the subject of the development project are analyzed, and the data needed by each prospective information system are described in very general terms. A project moves on to the next step only if it matches the organization's expected goals.

Conceptual Data Modeling
Once an information system project is under way, the conceptual data modeling phase determines all of the data that the information system will need. It is carried out in two stages. First, during the project planning stage, a model similar to Figure 1 is built. 
At the same time outlining the establishment of other documents to the existing database without considering the circumstances specific development projects in the scope of the required data. This category only includes high-level data (entities), and main contact. Then in the system development life-cycle analysis stage must have a management information system set the entire organization Details of the data model definition of all data attributes, listing all data types that all data inter-entity business linkages, defining description of the full data integrity rules. In the analysis phase, but also the concept of inspection data model (also called the concept behind the model) and the goal of information systems used to explain other aspects of the model of consistency categories, such as processing steps, rules and data processing time of timing. However, even if the concept is such detailed data model is only preliminary, because follow-up information system life cycle activities in the design of services, statements, display and inquiries may find that missing element or mistakes. Therefore, the concept of data often said that modeling is atop-down manner, its areas of operation from the general understanding of the driver, rather than the specific information processing activities by the driver.3. Logical Database DesignLogical database design from two perspectives database development. First, the concept of data model transform into relational database theory based on the criteria that means - between. Then, as the design of information systems, every computer procedures (including procedures for the input and output format), database support services, statements, and inquiries revealed that a detailed examination. In this so-called Bottom-up analysis, accurate verification of the need to maintain the database and the data in each affairs, statements and so on the needs of those in the nature of the data.For each separate statements, services, and so on the analysis must take into account a specific, limited but complete database view. When statements, services, and other analysis might be necessary to change the concept of data model. Especially in large-scale projects, the different analytical systems development staff and the team can work independently in different procedures or in a centralized, the details of their work until all the logic design stage may be displayed. In these circumstances, logic database design stage must be the original concept of data model and user view these independent or merged into a comprehensive design. In logic design information systems also identify additional information processing needs of these new demands at this time must be integrated into the logic of earlier identified in the database design.Logical database design is based on the final step for the formation of good data specifications and determine the rules, the combination, the data after consultation specifications or converted into basic atomic element. Most of today's database, these rules from the relational database theory and the process known as standardization. This step is the result of management of these data have not cited any database management system for a complete description of the database map. Logical database design completed, we began to identify in detail the logic of the computer program and maintenance, the report contents of the database for inquiries.4. 
Physical database design and definitionPhysical database design and definition phase decisions computer memory (usuallydisk) database in the organization, definition of According to the library management system for physical structure, the procedures outlined processing services, produce the desired management information and decision support statements. The objective of this stage is to design an effective and safe management of all data-processing database, the physical database design to closely integrate the information systems of other physical aspects of the design, including procedures, computer hardware, operating systems and data communications networks.5. Database ImplementationThe database prepared by the realization stage, testing and installation procedures for handling databases. Designers can use the standard programming language (such as COBOL, C or Visual Basic), the dedicated database processing languages (such as SQL), or the process of the non-exclusive language programming in order to produce a statement of the fixed format, the result will be displayed, and may also include charts. In achieving stage, but also the completion of all the database files, training users for information systems (database) user setup program. The final step is to use existing sources of information (documents legacy applications and databases and now needs new data) loading data. Loading data is often the first step in data from existing files and databases to an intermediate format (such as binary or text files) and then to turn intermediate loading data to a new database. Finally, running databases and related applications for the actual user maintenance and retrieval of data. In operation, the regular backup database and the database when damaged or affected resume database.6. Database maintenanceDuring the database in the progressive development of database maintenance. In this step, in order to meet changing business conditions, in order to correct the erroneous database design, database applications or processing speed increase, delete or change the structure of the database. When a procedure or failure of the computer database affect or damage the database may also be reconstruction. This step usually is the longest in the database development process step, as it continued to databases and related applications throughout the life cycle, the development of each database can be seen as a brief database development process and data modeling concepts arise, logical and physical database design and database to achieve dealing with the changes.2.2 Information System developed by other meansSystem Development Life Cycle minor changes in law or its variant of the often used to guide information systems and database development. Information System is a life-cycle methodology, it is highly structured approach, which includes many checks and balances to ensure that every step of produce accurate results, and new or alternative information system and it must communications or data definitions consistent existing system needs consistency. System development life cycle because of the regular need to have a working system for a long time been criticized because only work in the system until the end of the whole process generated. More and more organizations now use rapid application development method, it is a includes analysis, design and implementation of steps to repeat the rapid iterative process until convergence to users the system so far. 
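Returning for a moment to the physical design and implementation steps described above, here is a minimal sketch of the kinds of statements they typically produce: an index chosen for performance reasons, and a bulk load of data exported from an existing system into the new database via an intermediate text file. The table name, index name, and file path are hypothetical.

-- Physical design decision: an index to speed retrieval of orders by customer.
CREATE INDEX IX_SalesOrder_CustomerID ON SalesOrder (CustomerID);

-- Implementation step: load data extracted from a legacy source into the new table.
-- The legacy extract is assumed to be a comma-delimited intermediate file.
BULK INSERT SalesOrder
FROM 'C:\staging\orders.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');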
Rapid Application Development Act required the database has been in existence, and enhance system is mainly to the application of data retrieval application, but not to those who generate and modify database applications.The most widely used method of rapid application development is one of the prototype. The prototype system is a method of iterative development process, analysts and users through close co-operation, continuing to revise the system will eventually convert all the needs of a working system. Figure 6 shows prototype of the process. In this diagram we contains notes, briefly describes each stage of the prototype of the database development activities. Normally, when information systems problems were identified, tried only a rough concept of data modeling. In the development of the initial prototype, the design of the user wants to display and statements, and that any new database needs and define a term prototype database. This is usually a new database, copy the part of the existing system, but might also added some new content. When the need for new content, these elements are usually from external data sources, such as market research data, the general economic indicators or industry standards.When a prototype of a new version to repeat the achievement and maintenance of database activities. Usually only a minimum level of security and integrity control, because at this time the focus is as soon as possible to produce a prototype version can be used. But document management project also deferred to the final, only be used in the delivery of user training. Finally, once constructed an acceptable prototype,developers, and users will be the final decision of whether to prototype delivery and the use of the database. If the system (including database) efficiency is very low, then the system and database will be re-programming and re-organization in order to achieve the desired performance.Along with visual programming tools (such as Visual Basic, Java, Visual C + + and fourth generation language) increasingly popular use of visual programming tools can easily change the user interface with the system, the prototype is becoming the choice of system development methodology. Customers using the prototype method statements and show changes to the content and layout is quite easy. In the process, the new database needs were identified, so it is the development of the use of the existing database should be amended. There is even the possibility of a need for a new database system prototype method, in such circumstances, when the system demand in the iterative process of development in the ever-changing needs access to sample data, the construction or reconstruction of the database prototype.3 database development of the three-tier architecture modelIn this article on the front of the database development process mentioned in the interpretation of a system development project on the establishment of the several different, but related database view or model:● conceptual model (in the analysis stage of the establishment).● external model or user view (in the analysis phase and the establishment of logical design phase).● physical model or internal model (in the physical design phase of the establishment).Figure 7 describes the database view that the relationship between the three, it is important to remember that they are the same organizations database view or model. 
In other words, each organization has a database of the physical model, a concept model and one or more users view.Therefore, the three-tier architecture model using the same data set observe the different ways definition database.Concept models on the full database structure, has nothing to do with the technical specifications. Conceptual model definition do not involve the entire database datastored in the computer how the secondary memory. Usually, the conceptual model by entities - links (E-R) map or object modeling symbols such a graphical format to describe, we have this type of concept model called the data model. In addition, the conceptual model specification as a metadata stored in the database or data dictionary.Physical models including conceptual model of how data stored in computer memory in the two specifications. Analysts and the database design is as important to the physical database (physical mode) definition, it provides information on the distribution and management of data storage and access of the physical memory space of two full database technology specifications.Database development and database technology database is among the three models divided into basis. Database development projects may have a role to only deal with these three views of a related work. For example, a beginner may be designed for one or more procedures external model, and an experienced developer will design the physical model or conceptual model. Database design issues at different levels are quite different.4 three-tier structure of the database positioning systemObviously, all the good things in the database are, and the "three"!When designing a database, you have to choose where to store data. This option in the physical database design stage. Database is divided into individual databases, the Working Group database, departmental databases, corporate databases and the Internet database. Individuals often by the end-user database design and development of their own, just by database experts to give training and advice to help, it only contains individual end-users interested in the data. Sometimes, personal database from the database or enterprise Working Group extracted from the database, such circumstances database prepared by some experts from the regular routine to create local database. Sector Working Group database and the database is often the end-user, business experts and the central database system experts development. The collaborative work of these officers is necessary because in the design of the database to be shared by a large number of issues weigh: processing speed, ease of use, data definition differences and other similar problems. Due to corporate databases and the Internet database broad impact, large-scale, it is normally concentrated in the database development team has received professional training to develop a database of experts.1. Customers layerA desktop or notebook also known as that layer, which specialized management user interface and system localization data in this layer can be implemented on the Web scripting tasks.2. Server / Web serverHTTP protocol handling, scripting tasks, the implementation of computing and provide data access, the layer known as processing services layer.3. 
Enterprise server (minicomputer or mainframe) layer: performs complex computation and integrates data drawn from multiple data sources across the organization; it is also known as the data services layer.

Within an organization, this layered arrangement of databases and information systems is closely related to the concepts of distributed computing and client/server architecture. A client/server architecture, typically built on a LAN, includes servers (referred to as database servers or database engines) whose database software carries out the database requests issued from client workstations, while each client application concentrates on its own user interface functions. In effect, the organization's databases (together with the routines that process them) can be treated as one distributed database, or as separate but related physical databases, spread across local PC workstations, intermediate (workgroup or departmental) servers, and a central (departmental or enterprise) server. Briefly, a client/server architecture is used because:
● it lets several processors work on the same application at the same time, improving application response time and data processing speed;
● it lets each computer platform do what it does best (for example, PCs providing friendly user interfaces and mainframes providing raw computing speed);
● it can mix various client technologies (personal computers built on Intel or Motorola processors, network computers, information kiosks, and so on) while still sharing common data, and the technology at any one layer can be changed with only a small effect on the modules in the other layers;
● it can place processing close to the data source, improving response times and reducing network traffic.
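A minimal sketch of how the three schema levels described above can show up in SQL, assuming a base table such as the hypothetical SalesOrder used in the earlier sketches: the base table belongs to the conceptual/logical model, a view is one possible external model (user view), and an index is part of the physical (internal) model. The view name and date cutoff are invented for illustration.

-- Conceptual/logical model: the base table definition (see the earlier SalesOrder DDL).

-- External model: a user view exposing only what one group of users needs to see.
CREATE VIEW RecentOrderSummary AS
SELECT CustomerID, OrderID, OrderDate, OrderTotal
FROM SalesOrder
WHERE OrderDate >= '2005-01-01';

-- Physical (internal) model: a storage and access decision invisible to users of the view.
CREATE INDEX IX_SalesOrder_OrderDate ON SalesOrder (OrderDate);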

数据库外文翻译外文文献英文文献数据库安全

Database Security“Why do I need to secure my database server? No one can access it —it’s in a DMZ protected by the firewall!” This is often the response when it is recommended that such devices are included within a security health check. In fact, database security is paramount in defending an organizations information, as it may be indirectly exposed to a wider audience than realized.This is the first of two articles that will examine database security. In this article we will discuss general database security concepts and common problems. In the next article we will focus on specific Microsoft SQL and Oracle security concerns.Database security has become a hot topic in recent times. With more and more people becoming increasingly concerned with computer security, we are finding that firewalls and Web servers are being secured more than ever(though this does not mean that there are not still a large number of insecure networks out there). As such, the focus is expanding to consider technologies such as databases with a more critical eye.◆Common sense securityBefore we discuss the issues relating to database security it is prudent to high- light the necessity to secure the underlying operating system and supporting technologies. It is not worth spending a lot of effort securing a database if a vanilla operating system is failing to provide a secure basis for the hardening of the data- base. There are a large number of excellent documents in the public domain detailing measures that should be employed when installing various operating systems.One common problem that is often encountered is the existence of a database on the same server as a web server hosting an Internet (or Intranet) facing application. Whilst this may save the cost of purchasing a separate server, it does seriously affect the security of the solution. Where this is identified, it is often the case that the database is openly connected to the Internet. One recent example I can recall is an Apache Web server serving an organizations Internet offering, with an Oracle database available on the Internet on port 1521. When investigating this issue further it was discovered that access to the Oracle server was not protected (including lack of passwords), which allowed the server to be stopped. The database was not required from an Internet facing perspective, but the use of default settings and careless security measures rendered the server vulnerable.The points mentioned above are not strictly database issues, and could be classified as architectural and firewall protection issues also, but ultimately it is the database that is compromised. Security considerations have to be made from all parts of a public facing net- work. You cannot rely on someone or something else within your organization protecting your database fr om exposur e.◆ Attack tools are now available for exploiting weaknesses in SQL and OracleI came across one interesting aspect of database security recently while carrying out a security review for a client. We were performing a test against an intranet application, which used a database back end (SQL) to store client details. The security review was proceeding well, with access controls being based on Windows authentication. Only authenticated Windows users were able to see data belonging to them. The application itself seemed to be handling input requests, rejecting all attempts to access the data- base directly.We then happened to come across a backup of the application in the office in which we were working. 
This media contained a backup of the SQL database, which we restored onto our laptop. All security controls which were in place originally were not restored with the database and we were able to browse the complete database, with no restrictions in place to protect the sensitive data. This may seem like a contrived way of compromising the security of the system, but does highlight an important point. It is often not the direct approach that is taken to attack a target, and ultimately the endpoint is the same; system compromise. A backup copy of the database may be stored on the server, and thus facilitates access to the data indirectly.There is a simple solution to the problem identified above. SQL 2000 can be configured to use password protection for backups. If the backup is created with password protection, this password must be used when restoring the password. This is an effective and uncomplicated method of stopping simple capture of backup data. It does however mean that the password must be remembered!◆Curr ent tr endsThere are a number of current trends in IT security, with a number of these being linked to database security.The focus on database security is now attracting the attention of the attackers. Attack tools are now available for exploiting weaknesses in SQL and Oracle. The emergence of these tools has raised the stakes and we have seen focused attacks against specific data- base ports on servers exposed to the Internet.One common theme running through the security industry is the focus on application security, and in particular bespoke Web applications. With he functionality of Web applications becoming more and more complex, it brings the potential for more security weaknesses in bespoke application code. In order to fulfill the functionality of applications, the backend data stores are commonly being used to format the content of Web pages. This requires more complex coding at the application end. With developers using different styles in code development, some of which are not as security conscious as other, this can be the source of exploitable errors.SQL injection is one such hot topic within the IT security industry at the moment. Discussions are now commonplace among technical security forums, with more and more ways and means of exploiting databases coming to light all the time. SQL injection is a misleading term, as the concept applies to other databases, including Oracle, DB2 and Sybase.◆ What is SQL Injection?SQL Injection is simply the method of communication with a database using code or commands sent via a method or application not intended by the developer. The most common form of this is found in Web applications. Any user input that is handled by the application is a common source of attack. One simple example of mishandling of user input is highlighted in Figure 1.Many of you will have seen this common error message when accessing web sites, and often indicates that the user input has not been correctly handled. On getting this type of error, an attacker will focus in with more specific input strings.Specific security-related coding techniques should be added to coding standard in use within your organization. The damage done by this type of vulnerability can be far reaching, though this depends on the level of privileges the application has in relation to the database.If the application is accessing data with full administrator type privileges, then maliciously run commands will also pick up this level of access, and system compromise is inevitable. 
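A minimal sketch of the injection problem in Transact-SQL terms, with invented table, column, and parameter names (in a real Web application the concatenation usually happens in the application code, but the effect on the database is the same). The first form builds the query by string concatenation, so input such as ' OR '1'='1 changes the meaning of the statement; the second binds the value as a parameter, so it is treated purely as data.

-- Vulnerable pattern: user input concatenated straight into the SQL text.
DECLARE @UserInput NVARCHAR(50), @Sql NVARCHAR(500);
SET @UserInput = N'anything'' OR ''1''=''1';   -- attacker-controlled value
SET @Sql = N'SELECT * FROM Customer WHERE CustomerName = ''' + @UserInput + N'''';
EXEC (@Sql);   -- executes whatever the concatenation produced

-- Safer pattern: the same lookup with the value bound as a parameter.
EXEC sp_executesql
     N'SELECT * FROM Customer WHERE CustomerName = @Name',
     N'@Name NVARCHAR(50)',
     @Name = @UserInput;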
Again this issue is analogous to operating system security principles, where programs should only be run with the minimum of permissions that is required. If normal user access is acceptable, then apply this restriction.Again the problem of SQL security is not totally a database issue. Specific database command or requests should not be allowed to pass through theapplication layer. This can be prevented by employing a “secure coding” approach.Again this is veering off-topic, but it is worth detailing a few basic steps that should be employed.The first step in securing any application should be the validation and control of user input. Strict typing should be used where possible to control specific data (e.g. if numeric data is expected), and where string based data is required, specific non alphanumeric characters should be prohibited where possible. Where this cannot be performed, consideration should be made to try and substitute characters (for example the use of single quotes, which are commonly used in SQL commands).Specific security-related coding techniques should be added to coding standard in use within your organization. If all developers are using the same baseline standards, with specific security measures, this will reduce the risk of SQL injection compromises.Another simple method that can be employed is to remove all procedures within the database that are not required. This restricts the extent that unwanted or superfluous aspects of the database could be maliciously used. This is analogous to removing unwanted services on an operating system, which is common security practice.◆ OverallIn conclusion, most of the points I have made above are common sense security concepts, and are not specific to databases. However all of these points DO apply to databases and if these basic security measures are employed, the security of your database will be greatly improved.The next article on database security will focus on specific SQL and Oracle security problems, with detailed examples and advice for DBAs and developers.There are a lot of similarities between database security and general IT security, with generic simple security steps and measures that can be (and should be) easily implemented to dramatically improve security. While these may seem like common sense, it is surprising how many times we have seen that common security measures are not implemented and so causea security exposure.◆User account and password securityOne of the basic first principals in IT security is “make su re you have a good password”. Within this statement I have assumed that a password is set in the first place, though this is often not the case.I touched on common sense security in my last article, but I think it is important to highlight this again. As with operating systems, the focus of attention within database account security is aimed at administrationaccounts. Within SQL this will be the SA account and within Oracle it may be the SYSDBA or ORACLE account.It is very common for SQL SA accounts to have a password of ‘SA’ or even worse a blank password, which is just as common. This password laziness breaks the most basic security principals, and should be stamped down on. Users would not be allowed to have a blank password on their own domain account, so why should valuable system resources such as databases be allowed to be left unprotected. For instance, a blank ‘SA’password will enable any user with client software (i.e. 
Microsoft query analyser or enterprise manager to ‘manage’ the SQL server and databases).With databases being used as the back end to Web applications, the lack of password control can result in a total compromise of sensitive information. With system level access to the database it is possible not only to execute queries into the database, create/modify/delete tables etc, but also to execute what are known as Stored Procedures.数据库安全“为什么要确保数据库服务安全呢?任何人都不能访问-这是一个非军事区的保护防火墙”,当我们被建议使用一个带有安全检查机制的装置时,这是通常的反应。
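As noted above, leaving the 'sa' account with a blank password hands full control of the server to anyone with a client tool. A minimal hardening sketch in SQL Server 2000 syntax follows; the password value is only an example, and later versions use ALTER LOGIN instead.

-- Replace the blank 'sa' password with a strong one.
-- The first argument is the old password (NULL when it was blank), the second the new one.
EXEC sp_password NULL, N'N3w!Str0ng#Passw0rd', 'sa';

-- A commonly cited follow-up is to drop extended procedures that are not needed,
-- for example the operating-system command shell:
EXEC sp_dropextendedproc 'xp_cmdshell';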

数据库管理系统毕业论文中英文资料对照外文翻译文献综述

数据库管理系统毕业论文中英文资料对照外文翻译文献综述

英文翻译数据库管理系统的介绍Raghu Ramakrishnan数据库(database,有时被拼作data base)又称为电子数据库,是专门组织起来的一组数据或信息,其目的是为了便于计算机快速查询及检索。

数据库的结构是专门设计的,在各种数据处理操作命令的支持下,可以简化数据的存储、检索、修改和删除。

数据库可以存储在磁盘、磁带、光盘或其他辅助存储设备上。

数据库由一个或一套文件组成,其中的信息可以分解为记录,每一条记录又包含一个或多个字段(或称为域)。

字段是数据存取的基本单位。

数据库用于描述实体,其中的一个字段通常表示与实体的某一属性相关的信息。

通过关键字以及各种分类(排序)命令,用户可以对多条记录的字段进行查询,重新整理,分组或选择,以实体对某一类数据的检索,也可以生成报表。

所有数据库(除最简单的)中都有复杂的数据关系及其链接。

处理与创建,访问以及维护数据库记录有关的复杂任务的系统软件包叫做数据库管理系统(DBMS)。

DBMS软件包中的程序在数据库与其用户间建立接口。

(这些用户可以是应用程序员,管理员及其他需要信息的人员和各种操作系统程序)DBMS可组织、处理和表示从数据库中选出的数据元。

该功能使决策者能搜索、探查和查询数据库的内容,从而对正规报告中没有的,不再出现的且无法预料的问题做出回答。

这些问题最初可能是模糊的并且(或者)是定义不恰当的,但是人们可以浏览数据库直到获得所需的信息。

简言之,DBMS将“管理”存储的数据项和从公共数据库中汇集所需的数据项用以回答非程序员的询问。

DBMS由3个主要部分组成:(1)存储子系统,用来存储和检索文件中的数据;(2)建模和操作子系统,提供组织数据以及添加、删除、维护、更新数据的方法;(3)用户和DBMS之间的接口。

在提高数据库管理系统的价值和有效性方面正在展现以下一些重要发展趋势:1.管理人员需要最新的信息以做出有效的决策。

数据采集系统中英文对照外文翻译文献

数据采集系统中英文对照外文翻译文献

中英文对照外文翻译(文档含英文原文和中文翻译)Data Acquisition SystemsData acquisition systems are used to acquire process operating data and store it on,secondary storage devices for later analysis. Many or the data acquisition systems acquire this data at very high speeds and very little computer time is left to carry out any necessary, or desirable, data manipulations or reduction. All the data are stored on secondary storage devices and manipulated subsequently to derive the variables ofin-terest. It is very often necessary to design special purpose data acquisition systems and interfaces to acquire the high speed process data. This special purpose design can be an expensive proposition.Powerful mini- and mainframe computers are used to combine the data acquisition with other functions such as comparisons between the actual output and the desirable output values, and to then decide on the control action which must be taken to ensure that the output variables lie within preset limits. The computing power required will depend upon the type of process control system implemented. Software requirements for carrying out proportional, ratio or three term control of process variables are relatively trivial, and microcomputers can be used to implement such process control systems. It would not be possible to use many of the currently available microcomputers for the implementation of high speed adaptive control systems which require the use of suitable process models and considerable online manipulation of data.Microcomputer based data loggers are used to carry out intermediate functions such as data acquisition at comparatively low speeds, simple mathematical manipulations of raw data and some forms of data reduction. The first generation of data loggers, without any programmable computing facilities, was used simply for slow speed data acquisition from up to one hundred channels. All the acquired data could be punched out on paper tape or printed for subsequent analysis. Such hardwired data loggers are being replaced by the new generation of data loggers which incorporate microcomputers and can be programmed by the user. They offer an extremely good method of collecting the process data, using standardized interfaces, and subsequently performing the necessary manipulations to provide the information of interest to the process operator. The data acquired can be analyzed to establish correlations, if any, between process variables and to develop mathematical models necessary for adaptive and optimal process control.The data acquisition function carried out by data loggers varies from one to 9 in system to another. Simple data logging systems acquire data from a few channels while complex systems can receive data from hundreds, or even thousands, of input channels distributed around one or more processes. The rudimentary data loggers scan the selected number of channels, connected to sensors or transducers, in a sequential manner and the data are recorded in a digital format. A data logger can be dedicated in the sense that it can only collect data from particular types of sensors and transducers. It is best to use a nondedicated data logger since any transducer or sensor can be connected to the channels via suitable interface circuitry. This facility requires the use of appropriate signal conditioning modules.Microcomputer controlled data acquisition facilitates the scanning of a large number of sensors. 
The scanning rate depends upon the signal dynamics which means that some channels must be scanned at very high speeds in order to avoid aliasing errors while there is very little loss of information by scanning other channels at slower speeds. In some data logging applications the faster channels require sampling at speeds of up to 100 times per second while slow channels can be sampled once every five minutes. The conventional hardwired, non-programmable data loggers sample all the channels in a sequential manner and the sampling frequency of all the channels must be the same. This procedure results in the accumulation of very large amounts of data, some of which is unnecessary, and also slows down the overall effective sampling frequency. Microcomputer based data loggers can be used to scan some fast channels at a higher frequency than other slow speed channels.The vast majority of the user programmable data loggers can be used to scan up to 1000 analog and 1000 digital input channels. A small number of data loggers, with a higher degree of sophistication, are suitable for acquiring data from up to 15, 000 analog and digital channels. The data from digital channels can be in the form of Transistor- Transistor Logic or contact closure signals. Analog data must be converted into digital format before it is recorded and requires the use of suitable analog to digital converters (ADC).The characteristics of the ADC will define the resolution that can be achieved and the rate at which the various channels can be sampled. An in-crease in the number of bits used in the ADC improves the resolution capability. Successive approximation ADC's arefaster than integrating ADC's. Many microcomputer controlled data loggers include a facility to program the channel scanning rates. Typical scanning rates vary from 2 channels per second to 10, 000 channels per second.Most data loggers have a resolution capability of ±0.01% or better, It is also pos-sible to achieve a resolution of 1 micro-volt. The resolution capability, in absolute terms, also depends upon the range of input signals, Standard input signal ranges are 0-10 volt, 0-50 volt and 0-100 volt. The lowest measurable signal varies form 1 t, volt to 50, volt. A higher degree of recording accuracy can be achieved by using modules which accept data in small, selectable ranges. An alternative is the auto ranging facil-ity available on some data loggers.The accuracy with which the data are acquired and logged-on the appropriate storage device is extremely important. It is therefore necessary that the data acquisi-tion module should be able to reject common mode noise and common mode voltage. Typical common mode noise rejection capabilities lie in the range 110 dB to 150 dB. A decibel (dB) is a tern which defines the ratio of the power levels of two signals. Thus if the reference and actual signals have power levels of N, and Na respectively, they will have a ratio of n decibels, wheren=10 Log10(Na /Nr)Protection against maximum common mode voltages of 200 to 500 volt is available on typical microcomputer based data loggers.The voltage input to an individual data logger channel is measured, scaled and linearised before any further data manipulations or comparisons are carried out.In many situations, it becomes necessary to alter the frequency at which particu-lar channels are sampled depending upon the values of data signals received from a particular input sensor. Thus a channel might normally be sampled once every 10 minutes. 
If, however, the sensor signals approach the alarm limit, then it is obviously desirable to sample that channel once every minute or even faster so that the operators can be informed, thereby avoiding any catastrophes. Microcomputer controlledintel-ligent data loggers may be programmed to alter the sampling frequencies depending upon the values of process signals. Other data loggers include self-scanning modules which can initiate sampling.The conventional hardwired data loggers, without any programming facilities, simply record the instantaneous values of transducer outputs at a regular samplingin-terval. This raw data often means very little to the typical user. To be meaningful, this data must be linearised and scaled, using a calibration curve, in order to determine the real value of the variable in appropriate engineering units. Prior to the availability of programmable data loggers, this function was usually carried out in the off-line mode on a mini- or mainframe computer. The raw data values had to be punched out on pa-per tape, in binary or octal code, to be input subsequently to the computer used for analysis purposes and converted to the engineering units. Paper tape punches are slow speed mechanical devices which reduce the speed at which channels can be scanned. An alternative was to print out the raw data values which further reduced the data scanning rate. It was not possible to carry out any limit comparisons or provide any alarm information. Every single value acquired by the data logger had to be recorded eventhough it might not serve any useful purpose during subsequent analysis; many data values only need recording when they lie outside the pre-set low and high limits.If the analog data must be transmitted over any distance, differences in ground potential between the signal source and final location can add noise in the interface design. In order to separate common-mode interference form the signal to be recorded or processed, devices designed for this purpose, such as instrumentation amplifiers, may be used. An instrumentation amplifier is characterized by good common-mode- rejection capability, a high input impedance, low drift, adjustable gain, and greater cost than operational amplifiers. They range from monolithic ICs to potted modules, and larger rack-mounted modules with manual scaling and null adjustments. When a very high common-mode voltage is present or the need for extremely-lowcom-mon-mode leakage current exists(as in many medical-electronics applications),an isolation amplifier is required. Isolation amplifiers may use optical or transformer isolation.Analog function circuits are special-purpose circuits that are used for a variety of signal conditioning operations on signals which are in analog form. When their accu-racy is adequate, they can relieve the microprocessor of time-consuming software and computations. Among the typical operations performed are multiplications, division, powers, roots, nonlinear functions such as for linearizing transducers, rimsmeasure-ments, computing vector sums, integration and differentiation, andcurrent-to-voltage or voltage- to-current conversion. 
Many of these operations can be purchased in available devices as multiplier/dividers, log/antilog amplifiers, and others.When data from a number of independent signal sources must be processed by the same microcomputer or communications channel, a multiplexer is used to channel the input signals into the A/D converter.Multiplexers are also used in reverse, as when a converter must distribute analog information to many different channels. The multiplexer is fed by a D/A converter which continually refreshes the output channels with new information.In many systems, the analog signal varies during the time that the converter takes to digitize an input signal. The changes in this signal level during the conversion process can result in errors since the conversion period can be completed some time after the conversion command. The final value never represents the data at the instant when the conversion command is transmitted. Sample-hold circuits are used to make an acquisition of the varying analog signal and to hold this signal for the duration of the conversion process. Sample-hold circuits are common in multichannel distribution systems where they allow each channel to receive and hold the signal level.In order to get the data in digital form as rapidly and as accurately as possible, we must use an analog/digital (A/D) converter, which might be a shaft encoder, a small module with digital outputs, or a high-resolution, high-speed panel instrument. These devices, which range form IC chips to rack-mounted instruments, convert ana-log input data, usually voltage, into an equivalent digital form. The characteristics of A/D converters include absolute and relative accuracy, linearity, monotonic, resolu-tion, conversion speed, and stability. A choice of input ranges, output codes, and other features are available. The successive-approximation technique is popular for a large number ofapplications, with the most popular alternatives being the counter-comparator types, and dual-ramp approaches. The dual-ramp has been widely-used in digital voltmeters.D/A converters convert a digital format into an equivalent analog representation. The basic converter consists of a circuit of weighted resistance values or ratios, each controlled by a particular level or weight of digital input data, which develops the output voltage or current in accordance with the digital input code. A special class of D/A converter exists which have the capability of handling variable reference sources. These devices are the multiplying DACs. Their output value is the product of the number represented by the digital input code and the analog reference voltage, which may vary form full scale to zero, and in some cases, to negative values.Component Selection CriteriaIn the past decade, data-acquisition hardware has changed radically due to ad-vances in semiconductors, and prices have come down too; what have not changed, however, are the fundamental system problems confronting the designer. Signals may be obscured by noise, rfi,ground loops, power-line pickup, and transients coupled into signal lines from machinery. Separating the signals from these effects becomes a matter for concern.Data-acquisition systems may be separated into two basic categories:(1)those suited to favorable environments like laboratories -and(2)those required for hostile environments such as factories, vehicles, and military installations. 
The latter group includes industrial process control systems where temperature information may be gathered by sensors on tanks, boilers, wats, or pipelines that may be spread over miles of facilities. That data may then be sent to a central processor to provide real-time process control. The digital control of steel mills, automated chemical production, and machine tools is carried out in this kind of hostile environment. The vulnerability of the data signals leads to the requirement for isolation and other techniques.At the other end of the spectrum-laboratory applications, such as test systems for gathering information on gas chromatographs, mass spectrometers, and other sophis-ticated instruments-the designer's problems are concerned with the performing of sen-sitive measurements under favorable conditions rather than with the problem ofpro-tecting the integrity of collected data under hostile conditions.Systems in hostile environments might require components for wide tempera-tures, shielding, common-mode noise reduction, conversion at an early stage, redun-dant circuits for critical measurements, and preprocessing of the digital data to test its reliability. Laboratory systems, on the other hand, will have narrower temperature ranges and less ambient noise. But the higher accuracies require sensitive devices, and a major effort may be necessary for the required signal /noise ratios.The choice of configuration and components in data-acquisition design depends on consideration of a number of factors:1. Resolution and accuracy required in final format.2. Number of analog sensors to be monitored.3. Sampling rate desired.4. Signal-conditioning requirement due to environment and accuracy.5. Cost trade-offs.Some of the choices for a basic data-acquisition configuration include:1 .Single-channel techniques.A. Direct conversion.B. Preamplification and direct conversion.C. Sample-hold and conversion.D. Preamplification, sample-hold, and conversion.E. Preamplification, signal-conditioning, and direct conversion.F. Preamplification, signal-conditioning, sample-hold, and conversion.2. Multichannel techniques.A. Multiplexing the outputs of single-channel converters.B. Multiplexing the outputs of sample-holds.C. Multiplexing the inputs of sample-holds.D. Multiplexing low-level data.E. More than one tier of multiplexers.Signal-conditioning may include:1. Radiometric conversion techniques.B. Range biasing.D. Logarithmic compression.A. Analog filtering.B. Integrating converters.C. Digital data processing.We shall consider these techniques later, but first we will examine some of the components used in these data-acquisition system configurations.MultiplexersWhen more than one channel requires analog-to-digital conversion, it is neces-sary to use time-division multiplexing in order to connect the analog inputs to a single converter, or to provide a converter for each input and then combine the converter outputs by digital multiplexing.Analog MultiplexersAnalog multiplexer circuits allow the timesharing of analog-to-digital converters between a numbers of analog information channels. An analog multiplexer consists of a group of switches arranged with inputs connected to the individual analog channels and outputs connected in common(as shown in Fig. 1).The switches may be ad-dressed by a digital input code.Many alternative analog switches are available in electromechanical and solid-state forms. 
Electromechanical switch types include relays, stepper switches,cross-bar switches, mercury-wetted switches, and dry-reed relay switches. The best switching speed is provided by reed relays(about 1 ms).The mechanical switches provide high do isolation resistance, low contact resistance, and the capacity to handle voltages up to 1 KV, and they are usually inexpensive. Multiplexers using mechanical switches are suited to low-speed applications as well as those having high resolution requirements. They interface well with the slower A/D converters, like the integrating dual-slope types. Mechanical switches have a finite life, however, usually expressed innumber of operations. A reed relay might have a life of 109 operations, which wouldallow a 3-year life at 10 operations/second.Solid-state switch devices are capable of operation at 30 ns, and they have a life which exceeds most equipment requirements. Field-effect transistors(FETs)are used in most multiplexers. They have superseded bipolar transistors which can introduce large voltage offsets when used as switches.FET devices have a leakage from drain to source in the off state and a leakage from gate or substrate to drain and source in both the on and off states. Gate leakage in MOS devices is small compared to other sources of leakage. When the device has a Zener-diode-protected gate, an additional leakage path exists between the gate and source.Enhancement-mode MOS-FETs have the advantage that the switch turns off when power is removed from the MUX. Junction-FET multiplexers always turn on with the power off.A more recent development, the CMOS-complementary MOS-switch has the advantage of being able to multiplex voltages up to and including the supply voltages. A±10-V signal can be handled with a ±10-V supply.Trade-off Considerations for the DesignerAnalog multiplexing has been the favored technique for achieving lowest system cost. The decreasing cost of A/D converters and the availability of low-cost, digital integrated circuits specifically designed for multiplexing provide an alternative with advantages for some applications. A decision on the technique to use for a givensys-tem will hinge on trade-offs between the following factors:1. Resolution. The cost of A/D converters rises steeply as the resolution increases due to the cost of precision elements. At the 8-bit level, the per-channel cost of an analog multiplexer may be a considerable proportion of the cost of a converter. At resolutions above 12 bits, the reverse is true, and analog multiplexing tends to be more economical.2. Number of channels. This controls the size of the multiplexer required and the amount of wiring and interconnections. Digital multiplexing onto a common data bus reduces wiring to a minimum in many cases. Analog multiplexing is suited for 8 to 256 channels; beyond this number, the technique is unwieldy and analog errors be-come difficult to minimize. Analog and digital multiplexing is often combined in very large systems.3. Speed of measurement, or throughput. High-speed A/D converters can add a considerable cost to the system. If analog multiplexing demands a high-speedcon-verter to achieve the desired sample rate, a slower converter for each channel with digital multiplexing can be less costly.4. Signal level and conditioning. Wide dynamic ranges between channels can be difficult with analog multiplexing. 
Signals less than 1V generally require differential low-level analog multiplexing which is expensive, with programmable-gain amplifiers after the MUX operation. The alternative of fixed-gain converters on each channel, with signal-conditioning designed for the channel requirement, with digital multi-plexing may be more efficient.5. Physical location of measurement points. Analog multiplexing is suitedfor making measurements at distances up to a few hundred feet from the converter, since analog lines may suffer from losses, transmission-line reflections, and interference. Lines may range from twisted wire pairs to multiconductor shielded cable, depending on signal levels, distance, and noise environments. Digital multiplexing is operable to thousands of miles, with the proper transmission equipment, for digital transmission systems can offer the powerful noise-rejection characteristics that are required for29 Data Acquisition Systems long-distance transmission.Digital MultiplexingFor systems with small numbers of channels, medium-scale integrated digital multiplexers are available in TTL and MOS logic families. The 74151 is a typical example. Eight of these integrated circuits can be used to multiplex eight A/D con-verters of 8-bit resolution onto a common data bus.This digital multiplexing example offers little advantages in wiring economy, but it is lowest in cost, and the high switching speed allows operation at sampling rates much faster than analog multiplexers. The A/D converters are required only to keep up with the channel sample rate, and not with the commutating rate. When large numbers of A/D converters are multiplexed, the data-bus technique reduces system interconnections. This alone may in many cases justify multiple A/D converters. Data can be bussed onto the lines in bit-parallel or bit-serial format, as many converters have both serial and parallel outputs. A variety of devices can be used to drive the bus, from open collector and tristate TTL gates to line drivers and optoelectronic isolators. Channel-selection decoders can be built from 1-of-16 decoders to the required size. This technique also allows additional reliability in that a failure of one A/D does not affect the other channels. An important requirement is that the multiplexer operate without introducing unacceptable errors at the sample-rate speed. For a digital MUX system, one can determine the speed from propagation delays and the time required to charge the bus capacitance.Analog multiplexers can be more difficult to characterize. Their speed is a func-tion not only of internal parameters but also external parameters such as channel, source impedance, stray capacitance and the number of channels, and the circuit lay-out. The user must be aware of the limiting parameters in the system to judge their ef-fect on performance.The nonideal transmission and open-circuit characteristics of analog multiplexers can introduce static and dynamic errors into the signal path. These errors include leakage through switches, coupling of control signals into the analog path, and inter-actions with sources and following amplifiers. Moreover, the circuit layout can com-pound these effects.Since analog multiplexers may be connected directly to sources which may have little overload capacity or poor settling after overloads, the switches should have a break-before-make action to prevent the possibility of shorting channels together. 
It may be necessary to avoid shorted channels when power is removed and a chan-nels-off with power-down characteristic is desirable. In addition to the chan-nel-addressing lines, which are normally binary-coded, it is useful to have inhibited or enable lines to turn all switches off regardless of the channel being addressed. This simplifies the external logic necessary to cascade multiplexers and can also be useful in certain modes of channeladdressing. Another requirement for both analog and digital multiplexers is the tolerance of line transients and overload conditions, and the ability to absorb the transient energy and recover without damage.数据采集系统数据采集系统是用来获取数据处理和存储在二级存储设备,为后来的分析。

计算机 数据库 外文文献翻译 中英文

计算机 数据库 外文文献翻译 中英文

科技外文文献
Microsoft's Future "Soul": SQL Server 2005 Explored
Author: CHEN Bao-lin
A brief history of SQL Server development
Before going further, let us first look at a brief history of Microsoft SQL Server development.
1988: SQL Server is jointly developed by Microsoft and Sybase, running on the OS/2 platform.
1993-09-14: SQL Server 4.2, a desktop database system with limited functionality, is released. It integrates with Windows and provides an easy-to-use user interface.
1994: Microsoft and Sybase suspend their cooperation on database development.
1995: SQL Server 6.0, code-named "SQL95", is released. Microsoft rewrites most of the core system and offers a low-cost database for small business applications.
1996-04-16: SQL Server 6.5 brings significant performance improvements and a wide variety of useful functions.
1998-11-16: SQL Server 7.0, code-named "Sphinx", is released. The core database engine is completely rewritten, targeting small and medium business applications and containing initial Web support. SQL Server has been widely used from this version onward.
2000-08-07: SQL Server 2000, code-named "Shiloh", is born. Microsoft positions the product as an enterprise-class database system comprising three components (DB, OLAP, English Query). Rich front-end tools, improved development tools, and XML support promote the adoption of this version. It ships in several editions:
Enterprise Edition: supports terabyte-scale databases and thousands of concurrent online users through cluster deployment.
Standard Edition: supports small and medium enterprises.
Personal Edition: supports desktop applications.
Developer Edition: for developers building enterprise and Windows CE applications.
Windows CE Edition: can be used on any Windows CE mobile device.
2003-04-24: the 64-bit version of SQL Server 2000, code-named "Liberty", is released to compete with Oracle on Unix/Linux.
2005-11-07: SQL Server 2005, code-named "Yukon", becomes the latest version of Microsoft SQL Server. Microsoft describes this product, five years of major changes in the making, as a landmark release.
From Microsoft SQL Server 4.2 to 2005: Microsoft entered the database market in the early 1990s, and by the launch of SQL Server 2005 it had grown from a follower into a force reshaping the enterprise database market. The sword was sharpened for ten years; through many storms, Microsoft extended its view of enterprise data management into a broader and deeper realm. This paper attempts to trace that formative history of Microsoft SQL Server.
In 1987 Sybase developed a version of SQL Server running on Unix systems. In 1988 Microsoft invited Sybase, then busy building momentum in the database field, to develop SQL Server jointly. Microsoft's intention to enter the database market was plain for all to see, and the database market was bound to be stirred up; sure enough, the following ten years became an intensely contested "Warring States" period for database products. On 1993-04-12 Microsoft released SQL Server version 4.2, which echoed the earlier introduction of Windows NT and marked Microsoft's formal entry into the enterprise applications market, in which the database is the most important component. Although SQL Server 4.2 was still only a desktop version, it already showed considerable potential. 
1994, Microsoft and Sybase formal suspension of the database development cooperation This meaningfully.From 1995 to 2000, Microsoft has adopted 6.0, 6.5,7.0, 2000 Version 4. From the perspective view, SQL Server 2000 version has been able to provide the following services.Online Services (On-line services) : "On-Line" refers to real-time online users use data services.Online transaction processing OLTP (On-Line Transaction Processing) : OLTP operation by the order-processing services transactions, or transactions follow completion or undoes all the principles. It also did not include the type of services. This is a sector that is the most universal and most widely forms of service. Analysis of online services OLAP (On-Line Analytical Processing) : OLAP is a kind of multidimensional data display (such as data warehousing, data mart, data cube), usually to do data mining. As OLTP used to operate and SQL data definition, OLAP is used and MDX (MultiDimensional Expressions) visit and definitions of data. From the technical structure of SQL Server 2000, as follows.Data structure•physical structure of data structure.•logical framework : how to define Tables, ro ws, columns, and other data objectsData Processing• data processing storage engine : it is responsible for dealing with how the data retention.• engine : it is responsible for how the data for the visit and relations.• SQL Server Agent : it is respo nsible for task scheduling and events management.Data manipulation• DB APIs : ADO (ActiveX Data Objects).OLE DB (linking and embedding data objects).DB-Library for C + +.ODBC (Open Data Internet).ESQL (Embedded SQL.)• URLs (uniform resource locat or address).• English inquiries (English Query).SQL Server Enterprise Manager.Tools : Inquiry analyzers, DTS (Data Transformation Services), Backup and restore and replication, metadata services, storage expansion process, SQL tracking, can be used for performance tuning.Experiences from users, SQL Server 2000 version of a number of new characteristics, such as XML support, many examples of support, data warehouse and business intelligence to enhance performance and scalability will improve, operating guide, and the inquiries, DTS, Transact SQL enhancements.From the license price, Microsoft SQL Server 2000, the price and total cost of ownership (TCO) only to the Oracle or D B2 2 / 1 to 1 / 3.In summary, Microsoft high-performance low-cost access to the product concept on the market success SQL Server 2000 database can meet the OLTP and OLAP application deployment, and better performance, and prices relative Oracle, DB2 and other databases low. Meanwhile, SQL Server 2000 Enterprise Edition also includes the standard version and other versions to meet different levels of user demand, These factors prompted the SQL Server 2000 was a significant part of the SME market share Microsoft has the opportunity to enter the mainstream database vendors ranks.At the same time, we should realize that SQL Server 2000 and Oracle launched late in the G 10 high-end enterprise-level functions in surviving deficient, so bridging the gap to catch up on the historic mission to the code-named "Yukon," the new version.Killer code-named "Yukon"From the 1989 release of Microsoft SQL Server 1.0 is now a full 15 years. In that 15 years of SQL Server fromscratch, from small to large, experiencing a once legendary. It has not only eroded with IBM, Oracle database market share, and the next generation of SQL Server has begun to gradually become the next Windows operating system core. 
China and the Bill Gates mouth • The constant repetition of "seamless calculation" is the core of Yukon, The code-named "Yukon," the next generation of our database will be brought into what kind of world? Internet "soft" pillarIn today's era of the network, data searching,data storage, classification of data, etc. All this has become the Internet network constitutes the "soft" pillars, and the database system is the pillar of the most critical. If there is no database support, we would never be able to Google or Baidu in the search for the information they need. can not use the convenient electronic mailbox, but that Network World because it is a large database consisting of.According to IDC's latest data show that the global database software market seems to be stirring Tension 2003 total revenue reached 13.6 billion U.S. dollars, compared with 2002's 12.6 billion U.S. dollars have increased. Oracle, IBM and Microsoft now controls 75% market share. Oracle last year for a market share of 39.8%, 31.3% for IBM, Microsoft to 12.1%.What is the database? In the University's computer textbooks, the database is being interpreted in this way : The database is the computer application system in a specialized data resource management system. There are many forms of data, such as text, digital, symbols, graphics, images and voices, and so on. All computer data system to deal with the subject. People familiar approach of a document is produced, will soon compile a program processing documents, will be covered by the procedural requirements of data organized into data files, documentation of procedures to call. Data files and program files maintain a certain relationship. Computer Application in the rapid development of the situation, by means of such a document will highlight deficiencies. For example, it allows poor definitive data, facilitate transplantation, in different documents stored information much duplication and waste of storage space, Update inconvenience. Database system will solve this problem. Database systems from the application of specific procedures, but based on the data management, All data will be stored in a database, scientific organizations, and by means of the database management system, using it as an intermediary, with a variety of applications or application interface to make it easy access to the data in the database.This note describes is indeed very detailed, but you may not always seem dizziness, In fact, a simple database that is after a group of computer collation of data stored in one or more documents, and the management of the database software called on the database management system. A general database system (104217) can be divided into the database (Database ) and Data Management System (Database Management System, DBMS) in two parts, all of these constitute the Internet is a "soft" pillars all.Microsoft's SQL Server database software, as many of the upgrade from 6.5 to the 7.0 version, gradually become mainstream database software, and SQL Server 2000 also proved that the Windows operating system can bear the same high-end data application, as the mainstream business application of database management software. It broke the rule by the large Unix database software myth and the next generation of SQL Server 2005 there will be what kind of change?Live Yukon core secretsMicrosoft in the next version of SQL Server (codenamed "Yukon") at the planning stage , considered more of the future development of the database, and SQL Server programming capabilities. 
Microsoft's internal development staff had long been aware that the future must introduce a more unified programming model but for a different data model to provide more flexibility. The unified programming model means that the ordinary data access and operation tasks can be carried out through various channels. For example, you can choose to use XML or Framework, or Transact-S QL (T-SQL) code, and so on.Such planning will result is a new database programming platform, which in many ways a natural extension. First, host. NET Framework common language runtime (CLR) to the function of the process of expansion of database programming and managed code area. Secondly,. NET framework provides a host integration from within SQL Server powerful object database functions. XML is the in-depth support functions through the XML data typeto achieve, and It has a data type of relationship between all the functions. In addition, also added a pair of XML Query (XQuery) and XML structure definition language (XSD) standard server support. Finally, SQL Server Yukon includes T-SQL language to enhance the important function.XML in SQL Server Yukon's history really began with SQL Server 2000. SQL Server 2000 with the introduction of the XML format to relational data. large load and segmentation XML documents and databases will be open targets for XML-based Web services, and other functions, However Yukon provide a more senior XML Query function, After perfecting the Y ukon will be full play all of the advantages of XML. XML Why so critical? In fact, from the initial XML an alternative HTML said the technical development of a line format, now be seen as a storage format. XML lasting memory has drawn widespread attention, the Internet has also been a lot of XML data type applications. XML itself can be an across any platform data format, It started as a file format for use, as XML in the enterprise has been widely recognized, Users began to use XML to solve thorny business problems, such as data integration. This makes as a data storage format XML development today, Because XML can be displayed on any platform to produce the same results, XML has become a mainstream database storage format. This built-in the Yukon comprehensive XML support will trigger a new database technology revolution.These new programming models and enhanced common language to create a series of programmable, They complement and expand the current relational database model. This architecture has the ultimate aim is to build more scalable, more reliable, more robust applications, and to enhance the development of efficiency. These models Another result is a service called SQL Agent new application framework -- for Asynchronous sources delivering the Distributed Application Framework.Yukon joining century gambleConstantly talking before we say a string of technology advantages, then you may very curious, Why should we introduce this appears to be a high-end database application software technologies? Perhaps we should kick the answer.The richest on Earth doing computer predictions for the future, he believes, in the next world, every one ordinary computer will have a large enough super hard disks, At that time the hard disk is no longer simply an 80 GB is likely to be 80 TB, Although it is only a change GB TB, but that means hard disk capacity of a full upgrade of 1000 times. And the existing Windows disk data storage NTFS format, simply unable to cope with such a large capacity hard disk data search. 
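As a rough illustration of the XML data type and XQuery support described above, the following Python sketch stores an XML document in a SQL Server 2005 (or later) table and queries it with the xml type's query() method. The connection string, table name and element names are assumptions made for the example, not details taken from the original text.

```python
import pyodbc

# Assumed connection string - adjust driver, server, database and credentials to your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=Demo;Trusted_Connection=yes;"
)
cur = conn.cursor()

# A table with a native xml column (available from SQL Server 2005 onwards).
cur.execute("CREATE TABLE dbo.Orders (OrderID int PRIMARY KEY, OrderDoc xml)")
cur.execute(
    "INSERT INTO dbo.Orders VALUES (?, ?)",
    1,
    "<order><item sku='A100' qty='2'/><item sku='B200' qty='1'/></order>",
)

# XQuery against the xml column: extract every item element of the stored document.
cur.execute("SELECT OrderDoc.query('/order/item') FROM dbo.Orders WHERE OrderID = 1")
print(cur.fetchone()[0])

conn.commit()
conn.close()
```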
Said an image of the example, if the 100 TB of disk space on your computer, At that time, or you use Windows XP, You collate debris disk of the time required is likely to be for two days and two nights, if you want to find a particular document, You will have waited for several hours. That feeling is like to return to 286 times.In order to solve this thorny problem, the next generation Windows operating system Longhorn decided with the previous non-Windows diametrically with the programming model. The core is Avalon (development code). Avalon is the new Windows GUI library. New Longhorn into the Indigo (Web services) and WinFS (file system) of the new function. Including Avalon, these three new function called hell. Longhorn is the founder of a new "local" API. Although now is to the Win32 API compatibility and grow, However, to use the new Longhorn functions, under normal circumstances the use of hell. Max belongs to the present. NET Framework in the city. Present. NET Framework used in the category, which has hell, DLL support for the procedural mechanisms and the operation. NET basically the same.. NET Framework in SQL Server Yukon Availability when major version upgrade ( Major VersionUp), the specific date is the end of 2004. In the Yukon. NET Framework to run. In the storage process (Stored Procedures) use. NET Framework The class library. Yukon operations. NET Framework version 2.0. Supplementary to the present. NET Framework 1.1 is no relevant category of multimedia. WinFS use Yukon engines. In other words, Longhorn, the file system will use database engine.This time you understand, the next generation Windows operating system, the whole document data management will be introduced SQL Server configuration management, when Our computer data querycapabilities, data integration capability will be greatly enhanced. This of course, that the rich keep saying that the "seamless calculation" is a critical step on Microsoft, Let database software and operating systems integration projects century is undoubtedly a gamble, which, if successful, Microsoft will gradually become the dominant database, but if it fails, The almost even harden the next generation Windows listing of the normal schedule.Microsoft has provided some tools for SQL server and client applications on the network between the transmission of data increases secret. However, the Microsoft product manager said Kirsten Ward, plans to release next year a new SQL Server database will be stored in the data encryption, Hacker attacks increase defense capabilities.Microsoft earlier this year "SQL Server 2005" release time postponed until the first half of next year. The database software will enhance the launch of Microsoft database computing power and better with Oracle and IBM compete. Microsoft will also introduce a unified storage concept, locating and retrieving data more convenient. Oracle in Windows and Unix database market has been in a leading position. However, the recently adopted this year, Microsoft SQL Server to increase more advanced functions have also made remarkable progress.In addition, Microsoft will also provide a service called "Best Practices Analyzer Tool" (best practice analyzer tool) software. Database administrators can use the software using Microsoft editor of the Guide database software debugging. 
This applies to software tools for Microsoft database software current version "SQL Server 2000" and to provide a database administrator in various fields Operations Guide, For example, how to improve performance and how to conduct more effective data backup and so on.Ward said that the software tool also includes an "Upgrade Advisor" procedure. This procedure can scan database programs and warned "SQL Server 2000" users to make the necessary amendments changed so that the procedures compatible with the upcoming launch of the "SQL Server 2005."(Source : China Computer Education)中文译文微软未来的“灵魂”—SQL Server 2005探密作者:陈宝林SQL Server的发展“简史”在开始本文之前,先让我们来看一下微软SQL Server的发展“简史”。

计算机安全漏洞中英文对照外文翻译文献

计算机安全漏洞中英文对照外文翻译文献(文档含英文原文和中文翻译)Talking about security loopholesreference to the core network security business objective is to protect the sustainability of the system and data security, This two of the main threats come from the worm outbreaks, hacking attacks, denial of service attacks, Trojan horse. Worms, hacker attacks problems and loopholes closely linked to, if there is major security loopholes have emerged, the entire Internet will be faced with a major challenge. While traditional Trojan and little security loopholes, but recently many Trojan are clever use of the IE loophole let you browse the website at unknowingly were on the move.Security loopholes in the definition of a lot, I have here is a popular saying: can be used to stem the "thought" can not do, and are safety-related deficiencies. Thisshortcoming can be a matter of design, code realization of the problem.Different perspective of security loo phole sIn the classification of a specific procedure is safe from the many loopholes in classification.1. Classification from the user groups:● Public loopholes in the software category. If the loopholes in Windows, IEloophole, and so on.● specialized software loophole. If Oracl e loopholes, Apache, etc. loopholes.2. Data from the perspective include :● could not reasonably be read and read data, including the memory of thedata, documents the data, Users input data, the data in the database, network,data transmission and so on.● designated can be written into the designated places (including the localpaper, memory, databases, etc.)● Input data can be implemented (including native implementation,according to Shell code execution, by SQL code execution, etc.)3. From the point of view of the scope of the role are :● Remote loopholes, an attacker could use the network and directly throughthe loopholes in the attack. Such loopholes great harm, an attacker can createa loophole through other people's computers operate. Such loopholes and caneasily lead to worm attacks on Windows.● Local loopholes, the attacker must have the machine premise accesspermissions can be launched to attack the loopholes. Typical of the localauthority to upgrade loopholes, loopholes in the Unix system are widespread,allow ordinary users to access the highest administrator privileges.4. Trigger conditions from the point of view can be divided into:● Initiative trigger loopholes, an attacker can take the initiative to use theloopholes in the attack, If direct access to computers.● Passive trigger loopholes must be computer operators can be carried outattacks with the use of the loophole. For example, the attacker made to a mailadministrator, with a special jpg image files, if the administrator to open image files will lead to a picture of the software loophole was triggered, thereby system attacks, but if managers do not look at the pictures will not be affected by attacks.5. On an operational perspective can be divided into:● File operation type, mainly for the operation of the target file path can be controlled (e.g., parameters, configuration files, environment variables, the symbolic link HEC), this may lead to the following two questions: ◇Content can be written into control, the contents of the documents can be forged. Upgrading or authority to directly alter the important data (such as revising the deposit and lending data), this has many loopholes. 
If history Oracle TNS LOG document can be designated loopholes, could lead to any person may control the operation of the Oracle computer services;◇information content can be output Print content has been contained to a screen to record readable log files can be generated by the core users reading papers, Such loopholes in the history of the Unix system crontab subsystem seen many times, ordinary users can read the shadow of protected documents;● Memory coverage, mainly for memory modules can be specified, write content may designate such persons will be able to attack to enforce the code (buffer overflow, format string loopholes, PTrace loopholes, Windows 2000 history of the hardware debugging registers users can write loopholes), or directly alter the memory of secrets data.● logic errors, such wide gaps exist, but very few changes, so it is difficult to discern, can be broken down as follows : ◇loopholes competitive conditions (usually for the design, typical of Ptrace loopholes, The existence of widespread document timing of competition) ◇wrong tactic, usually in design. If the history of the FreeBSD Smart IO loopholes. ◇Algorithm (usually code or design to achieve), If the history of Microsoft Windows 95/98 sharing passwordcan easily access loopholes. ◇Imperfections of the design, such as TCP / IP protocol of the three-step handshake SYN FLOOD led to a denial of service attack. ◇realize the mistakes (usually no problem for the design, but the presence of coding logic wrong, If history betting system pseudo-random algorithm)● External orders, Typical of external commands can be controlled (via thePATH variable, SHELL importation of special characters, etc.) and SQL injection issues.6. From time series can be divided into:● has long found loopholes: manufacturers already issued a patch or repairmethods many people know already. Such loopholes are usually a lot of people have had to repair macro perspective harm rather small.● recently discovered loophole: manufacturers just made patch or repairmethods, the people still do not know more. Compared to greater danger loopholes, if the worm appeared fool or the use of procedures, so will result in a large number of systems have been attacked.● 0day: not open the loophole in the private transactions. Usually such loopholesto the public will not have any impact, but it will allow an attacker to the target by aiming precision attacks, harm is very great.Different perspective on the use of the loopholesIf a defect should not be used to stem the "original" can not do what the (safety-related), one would not be called security vulnerability, security loopholes and gaps inevitably closely linked to use.Perspective use of the loopholes is:● Data Perspective: visit had not visited the data, including reading and writing.This is usually an attacker's core purpose, but can cause very serious disaster (such as banking data can be written).● Competence Perspective: Major Powers to bypass or permissions. Permissionsare usually in order to obtain the desired data manipulation capabilities.● Usability p erspective: access to certain services on the system of controlauthority, this may lead to some important services to stop attacks and lead to a denial of service attack.● Authentication bypass: usually use certification system and the loopholes willnot authorize to access. 
Authentication is usually bypassed for permissions or direct data access services.● Code execution perspective: mainly procedures for the importation of thecontents as to implement the code, obtain remote system access permissions or local system of higher authority. This angle is SQL injection, memory type games pointer loopholes (buffer overflow, format string, Plastic overflow etc.), the main driving. This angle is usually bypassing the authentication system, permissions, and data preparation for the reading.Loopholes explore methods mustFirst remove security vulnerabilities in software BUG in a subset, all software testing tools have security loopholes to explore practical. Now that the "hackers" used to explore the various loopholes that there are means available to the model are:● fuzz testing (black box testing), by constructing procedures may lead toproblems of structural input data for automatic testing.● FOSS audit (White Box), now have a series of tools that can assist in thedetection of the safety procedures BUG. The most simple is your hands the latest version of the C language compiler.● IDA anti-compilation of the audit (gray box testing), and above the sourceaudit are very similar. The only difference is that many times you can obtain software, but you can not get to the source code audit, But IDA is a very powerful anti-Series platform, let you based on the code (the source code is in fact equivalent) conducted a safety audit.● dynamic tracking, is the record of proceedings under different conditions andthe implementation of all security issues related to the operation (such as file operations), then sequence analysis of these operations if there are problems, it is competitive category loopholes found one of the major ways. Other tracking tainted spread also belongs to this category.● patch, the software manufacturers out of the question usually addressed in thepatch. By comparing the patch before and after the source document (or the anti-coding) to be aware of the specific details of loopholes.More tools with which both relate to a crucial point: Artificial need to find a comprehensive analysis of the flow path coverage. Analysis methods varied analysis and design documents, source code analysis, analysis of the anti-code compilation, dynamic debugging procedures.Grading loopholesloopholes in the inspection harm should close the loopholes and the use of the hazards related Often people are not aware of all the Buffer Overflow Vulnerability loopholes are high-risk. 
A long-distance loophole example and better delineation:●Remote access can be an OS, application procedures, version information.●open unnecessary or dangerous in the service, remote access to sensitiveinformation systems.● Remote can be restricted for the documents, data reading.●remotely important or restricted documents, data reading.● may be limited for long-range document, data revisions.● Remote can be restricted for important documents, data changes.● Remote c an be conducted without limitation in the important documents, datachanges, or for general service denial of service attacks.● Remotely as a normal user or executing orders for system and network-leveldenial of service attacks.● may be remote managem ent of user identities to the enforcement of the order(limited, it is not easy to use).● can be remote management of user identities to the enforcement of the order(not restricted, accessible).Almost all local loopholes lead to code execution, classified above the 10 points system for:●initiative remote trigger code execution (such as IE loophole).● passive trigger remote code execution (such as Word gaps / charting softwareloopholes).DEMOa firewall segregation (peacekeeping operation only allows the Department of visits) networks were operating a Unix server; operating systems only root users and users may oracle landing operating system running Apache (nobody authority), Oracle (oracle user rights) services.An attacker's purpose is to amend the Oracle database table billing data. Its possible attacks steps:● 1. Access peacekeeping operation of the network. Access to a peacekeepingoperation of the IP address in order to visit through the firewall to protect the UNIX server.● 2. Apache s ervices using a Remote Buffer Overflow Vulnerability direct accessto a nobody's competence hell visit.● 3. Using a certain operating system suid procedure of the loophole to upgradetheir competence to root privileges.● 4. Oracle sysdba landing into t he database (local landing without a password).● 5. Revised target table data.Over five down for process analysis:●Step 1: Authentication bypass●Step 2: Remote loopholes code execution (native), Authentication bypassing● Step 3: permissions, auth entication bypass● Step 4: Authentication bypass● Step 5: write data安全漏洞杂谈网络安全的核心目标是保障业务系统的可持续性和数据的安全性,而这两点的主要威胁来自于蠕虫的暴发、黑客的攻击、拒绝服务攻击、木马。
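上文英文原文把"SQL代码执行/SQL注入"列为重要的漏洞类型和利用方式之一。下面用一个可直接运行的Python/SQLite小例子做示意(表名和数据均为虚构),对比拼接字符串的危险写法与参数化查询的安全写法:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("alice", "secret1"), ("bob", "secret2")],
)

malicious_input = "nobody' OR '1'='1"

# 危险写法:直接拼接用户输入,构成SQL注入,条件永远为真
unsafe_sql = "SELECT name FROM users WHERE name = '" + malicious_input + "'"
print("拼接查询返回:", conn.execute(unsafe_sql).fetchall())   # 返回所有用户

# 安全写法:参数化查询,用户输入只被当作数据而不是SQL代码
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious_input,)
).fetchall()
print("参数化查询返回:", safe_rows)                            # 返回空列表

conn.close()
```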

数据库外文参考文献及翻译.

数据库管理系统——实施数据完整性

一个数据库,只有在用户对它充满信心的时候才是有用的。

这就是为什么服务器必须实施数据完整性规则和商业政策的原因。

在数据库本身中执行SQL Server的数据完整性,可以保证复杂的业务政策得以遵循,并保证数据元素之间的强制性关系得到遵守。

由于SQL Server的客户机/服务器体系结构允许你使用各种不同的前端应用程序去操纵并呈现服务器上的同一份数据,把所有必要的完整性约束、安全权限和业务规则编码到每一个应用程序中将是非常繁琐的。

如果企业的所有政策都在前端应用程序中被编码,那么各种应用程序都将随着每一次业务的政策的改变而改变。

即使您试图把业务规则编码为每个客户端应用程序,其应用程序失常的危险性也将依然存在。

大多数应用程序都不能被完全信任,因此服务器必须充当最后的仲裁者,并且不能留下后门,使编写拙劣的或恶意的程序得以破坏数据的完整性。

SQL Server使用了先进的数据完整性功能,如存储过程,声明引用完整性(DRI),数据类型,限制,规则,默认和触发器来执行数据的完整性。

所有这些功能在数据库里都有各自的用途;通过把这些完整性功能结合起来,可以使您的数据库既灵活、易于管理,又很安全,如下面的示例所示。
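作为上述多种完整性手段的一个简化示意,下面的Python/SQLite示例演示了默认值、CHECK限制和声明引用完整性(外键)如何在数据库内部强制业务规则。示例中的表结构和规则均为虚构;SQL Server中对应的T-SQL写法类似,这里仅用SQLite便于直接运行。

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite需要显式打开外键检查

conn.execute("""
    CREATE TABLE departments (
        dept_id   INTEGER PRIMARY KEY,
        dept_name TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE employees (
        emp_id   INTEGER PRIMARY KEY,
        salary   REAL NOT NULL CHECK (salary > 0),                  -- CHECK限制:业务规则
        status   TEXT NOT NULL DEFAULT 'active',                    -- 默认值
        dept_id  INTEGER NOT NULL REFERENCES departments(dept_id)   -- 声明引用完整性(外键)
    )""")

conn.execute("INSERT INTO departments VALUES (10, 'Engineering')")
conn.execute("INSERT INTO employees (emp_id, salary, dept_id) VALUES (1, 5000, 10)")

# 违反外键:引用不存在的部门,由数据库本身拒绝,而不依赖前端应用检查
try:
    conn.execute("INSERT INTO employees (emp_id, salary, dept_id) VALUES (2, 4000, 99)")
except sqlite3.IntegrityError as e:
    print("被数据库拒绝:", e)

conn.close()
```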

声明数据完整性

在定义一个表时,指定构成主键的列。

这就是所谓的主键约束。

SQL Server使用主键约束以保证所有值的唯一性在指定的列从未侵犯。

通过确保这个表有一个主键来实现这个表的实体完整性。

有时,在一个表中一个以上的列(或列的组合)可以唯一标志一行,例如,雇员表可能有员工编号( emp_id )列和社会安全号码( soc_sec_num )列,两者的值都被认为是唯一的。

这种列经常被称为替代键或候选键。

这些项也必须是唯一的。

虽然一个表只能有一个主键,但是它可以有多个候选键。

SQL Server通过唯一性约束来支持多候选键的概念。
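下面用一个可直接运行的Python/SQLite小例子示意上文所说的主键约束与唯一性约束,沿用正文中 emp_id 与 soc_sec_num 的例子(数据为虚构;SQL Server中的约束写法类似):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        emp_id      INTEGER PRIMARY KEY,        -- 主键约束:保证实体完整性
        soc_sec_num TEXT    NOT NULL UNIQUE,    -- 候选键:用唯一性约束保证不重复
        name        TEXT    NOT NULL
    )""")

conn.execute("INSERT INTO employee VALUES (1, '123-45-6789', 'Alice')")

# 重复的社会安全号码违反唯一性约束,被数据库拒绝
try:
    conn.execute("INSERT INTO employee VALUES (2, '123-45-6789', 'Bob')")
except sqlite3.IntegrityError as e:
    print("插入被拒绝:", e)

conn.close()
```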

电子信息工程数据库管理中英文对照外文翻译文献

译文:数据库管理

数据库(有时拼成Database)也称为电子数据库,是指由计算机特别组织的、用于快速查找和检索的任意的数据或信息集合。

数据库与其它数据处理操作协同工作,其结构要有助于数据的存储、检索、修改和删除。

数据库可存储在磁盘或磁带、光盘或某些辅助存储设备上。

一个数据库由一个文件或文件集合组成。

这些文件中的信息可分解成一个个记录,每个记录有一个或多个域。

域是数据库存储的基本单位,每个域一般含有由数据库描述的属于实体的一个方面或一个特性的信息。

用户使用关键字和各种排序命令,能够快速查找、重排、分组,并在查找到的许多记录中选择相应的域,以建立关于特定数据集合的报表。

数据库记录和文件的组织必须确保能对信息进行检索。

早期的系统是顺序组织的(如:字母顺序、数字顺序或时间顺序);直接访问存储设备的研制成功使得通过索引随机访问数据成为可能。

用户检索数据库信息的主要方法是query(查询)。

通常情况下,用户提供一个字符串,计算机在数据库中寻找相应的字符序列,并且给出字符串在何处出现。

比如,用户必须能在任意给定时间快速处理内部数据。

而且,大型企业和其它组织倾向于建立许多独立的文件,其中包含相互关联的甚至重叠的数据,这些数据、处理活动经常需要和其它文件的数据相连。

为满足这些要求,开发出了各种不同类型的数据库管理系统,如:非结构化的数据库、层次型数据库、网络型数据库、关系型数据库、面向对象型数据库。

在非结构化的数据库中,按照实体的一个简单列表组织记录;很多个人计算机的简易数据库是非结构的。

层次型数据库按树型组织记录,每一层的记录分解成更小的属性集。

层次型数据库在不同层的记录集之间提供一个单一链接。

与此不同,网络型数据库在不同记录集之间提供多个链接,这是通过设置指向其它记录集的链或指针来实现的。

网络型数据库的速度及多样性使其在企业中得到广泛应用。

当文件或记录间的关系不能用链表达时,使用关系型数据库。
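为说明上文提到的关系型数据库如何在不依赖指针链的情况下表达表与表之间的关系,下面给出一个可直接运行的Python/SQLite查询小例子(表名和数据均为虚构):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (cust_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (order_id INTEGER PRIMARY KEY,
                            cust_id  INTEGER,
                            amount   REAL);
    INSERT INTO customers VALUES (1, '张三'), (2, '李四');
    INSERT INTO orders    VALUES (101, 1, 99.5), (102, 1, 20.0), (103, 2, 35.0);
""")

# 关系型数据库通过共同的字段值(cust_id)把两张表联系起来,
# 在查询时临时建立关系,而不是事先用指针或链把记录连在一起
rows = conn.execute("""
    SELECT c.name, SUM(o.amount)
    FROM customers AS c JOIN orders AS o ON c.cust_id = o.cust_id
    GROUP BY c.name
""").fetchall()
print(rows)

conn.close()
```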

关于计算机网络数据库安全技术方案浅探中英文对照

关于计算机网络数据库安全技术方案浅探

On Security Technology Schemes for Computer Network Databases

论文摘要:随着因特网和数据库技术的迅速发展,网络数据库的安全性问题显得尤为重要,并已经成为现今网络信息系统建设中的一个最为关键的问题。

本文简要概述了现今网络数据库技术所面临的安全性威胁,以此为出发点,对计算机网络数据库安全技术方案进行了相关探讨。

Abstract: With the rapid development of the Internet and database technology, the security of network databases has become particularly important and is now one of the most critical issues in the construction of network information systems. Taking a brief overview of the security threats currently facing network database technology as its starting point, this article discusses security technology schemes for computer network databases.

计算机网络环境中的信息存储和管理都是由网络数据库来实现的,而随着计算机网络技术的广泛普及和快速发展,网络数据库的安全性已经成为整个计算机网络安全领域中的一个极为重要的问题。

网络数据库是一种开放环境下的信息仓库,存储着大量非常重要的数据信息,一旦遭受各个方面的不可预测的安全攻击,就将给用户带来不可估量的损失,如此大的安全隐患不得不让我们纳入考虑范畴并加以防范。


中英文对照外文翻译文献(文档含英文原文和中文翻译)Database Security in a Web Environment IntroductionDatabases have been common in government departments and commercial enterprises for many years. Today, databases in any organization are increasingly opened up to a multiplicity of suppliers, customers, partners and employees - an idea that would have been unheard of a few years ago. Numerous applications and their associated data are now accessed by a variety of users requiring different levels of access via manifold devices and channels – often simultaneously. For example:• Online banks allow customers to perform a variety of banking operations - via the Internet and over the telephone – whilst maintaining the privacy of account data.• E-Commerce merchants and their Service Providers must store customer, order and payment data on their merchant server - and keep it secure.• HR departments allow employees to update their personal information –whilst protecting certain management information from unauthorized access.• The medical profession must protect the confidentiality of patient data –whilst allowing essential access for treatment.• Online brokerages need to be able to provide large numbers of simultaneous users with up-to-date and accurate financial information.This complex landscape leads to many new demands upon system security. The global growth of complex web-based infrastructures is driving a need for security solutions that provide mechanisms to segregate environments; perform integrity checking and maintenance; enable strong authentication andnon-repudiation; and provide for confidentiality. In turn, this necessitates comprehensive business and technical risk assessment to identify the threats,vulnerabilities and impacts, and from this define a security policy. This leads to security definitions throughout the infrastructure - operating system, database management system, middleware and network.Financial, personal and medical information systems and some areas of government have strict requirements for security and privacy. Inappropriate disclosure of sensitive information to the wrong parties can have severe social, legal and regulatory consequences. Failure to address the basics can result in substantial direct and consequential financial losses - witness the fraud losses through the compromise of several million credit card numbers in merchants’ databases [Occf], plus associated damage to brand-image and loss of consumer confidence.This article discusses some of the main issues in database and web server security, and also considers important architecture and design issues.A Simple ModelAt the simplest level, a web server system consists of front-end software and back-end databases with interface software linking the two. Normally, the front-end software will consist of server software and the network server operating system, and the back-end database will be a relational orobject-oriented database fulfilling a variety of functions, including recording transactions, maintaining accounts and inventory. The interface software typically consists of Common Gateway Interface (CGI) scripts used to receive information from forms on web sites to perform online searches and to update the database.Depending on the infrastructure, middleware may be present; in addition, security management subsystems (with session and user databases) that address the web server’s and related applications’ requirements for authentication, accesscontrol and authorization may be present. 
Communications between this subsystem and either the web server, middleware or database are via application program interfaces (APIs)..This simple model is depicted in Figure 1.Security can be provided by the following components:• Web server.• Middleware.• Operating system.. Figure 1: A Simple Model.• Database and Database Management System.• Security management subsystem.The security of such a system addressesAspects of authenticity, integrity and confidentiality and is dependent on the security of the individual components and their interactions. Some of the most common vulnerabilities arise from poor configuration, inadequate change control procedures and poor administration. However, even if these areas are properlyaddressed, vulnerabilities still arise. The appropriate combination of people, technology and processes holds the key to providing the required physical and logical security. Attention should additionally be paid to the security aspects of planning, architecture, design and implementation.In the following sections, we consider some of the main security issues associated with databases, database management systems, operating systems and web servers, as well as important architecture and design issues. Our treatment seeks only to outline the main issues and the interested reader should refer to the references for a more detailed description.Database SecurityDatabase management systems normally run on top of an operating system and provide the security associated with a database. Typical operating system security features include memory and file protection, resource access control and user authentication. Memory protection prevents the memory of one program interfering with that of another and limits access and use of the objects employing techniques such as memory segmentation. The operating system also protects access to other objects (such as instructions, input and output devices, files and passwords) by checking access with reference to access control lists. Security mechanisms in common operating systems vary tremendously and, for those that are lacking, there exists special-purpose security software that can be integrated with the existing environment. However, this can be an expensive, time-consuming task and integration difficulties may also adversely impact application behaviors.Most database management systems consist of a number of modules - including database querying and database and file management - along with authorization, concurrent access and database description tables. Thesemanagement systems also use a variety of languages: a data definition language supports the logical definition of the database; developers use a data manipulation language; and a query language is used by non-specialist end-users.Database management systems have many of the same security requirements as operating systems, but there are significant differences since the former are particularly susceptible to the threat of improper disclosure, modification of information and also denial of service. Some of the most important security requirements for database management systems are: • Multi-Level Access Control.• Confidentiality.• Reliability.• Integrity.• Recovery.These requirements, along with security models, are considered in the following sections.Multi-Level Access ControlIn a multi-application and multi-user environment, administrators, auditors, developers, managers and users – collectively called subjects - need access to database objects, such as tables, fields or records. 
Access control restricts the operations available to a subject with respect to particular objects and is enforced by the database management system. Mandatory access controls require that each controlled object in the database must be labeled with a security level, whereas discretionary access controls may be applied at the choice of a subject.Access control in database management systems is more complicated than in operating systems since, in the latter, all objects are unrelated whereas in a database the converse is true. Databases are also required to make accessdecisions based on a finer degree of subject and object granularity. In multi-level systems, access control can be enforced by the use of views - filtered subsets of the database - containing the precise information that a subject is authorized to see.A general principle of access control is that a subject with high level security should not be able to write to a lower level object, and this poses a problem for database management systems that must read all database objects and write new objects. One solution to this problem is to use a trusted database management system.ConfidentialitySome databases will inevitably contain what is considered confidential data. For example, it could be inherently sensitive or its source may be sensitive, or it may belong to a sensitive table, thus making it difficult to determine what is actually confidential. Disclosure is also difficult to define, as it can be direct, indirect, involve the disclosure of bounds or even mere existence.An inference problem exists in database management systems whereby users can infer sensitive information from relatively insensitive queries. A trivial example is a request for information about the average salary of an employee and the number of employees turns out to be just one, thus revealing the employee’s salary. However, much more sophisticated statistical inference attacks can also be mounted. This highlights the fact that, although the data itself may be properly controlled, confidential information may still leak out.Controls can take several forms: not divulging sensitive information to unauthorized parties (which depends on the respective subject and object security levels), logging what each user knows or masking response data. The first control can be implemented fairly easily, the second quickly becomesunmanageable for a large number of users and the third leads to imprecise responses, and also exemplifies the trade-off between precision and security. Polyinstantiation refers to multiple instances of a data object existing in the database and it can provide a partial solution to the inference problem whereby different data values are supplied, depending on the security level, in response to the same query. However, this makes consistency management more difficult.Another issue that arises is when the security level of an aggregate amount is different to that of its elements (a problem commonly referred to as aggregation). This can be addressed by defining appropriate access control using views.Reliability, Integrity and RecoveryArguably, the most important requirements for databases are to ensure that the database presents consistent information to queries and can recover from any failures. 
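The salary-inference example above can be made concrete with a short, self-contained Python/SQLite sketch (the table and figures are invented for illustration): two seemingly harmless aggregate queries are enough to reveal an individual's salary when the selected group contains only one person.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO staff VALUES (?, ?, ?)",
    [("Alice", "Research", 70000.0),
     ("Bob", "Sales", 40000.0),
     ("Carol", "Sales", 45000.0)],
)

# Two "statistical" queries that look innocent on their own...
count = conn.execute(
    "SELECT COUNT(*) FROM staff WHERE department = 'Research'").fetchone()[0]
avg = conn.execute(
    "SELECT AVG(salary) FROM staff WHERE department = 'Research'").fetchone()[0]

# ...but when the count is 1, the average IS the individual's salary.
if count == 1:
    print("Inferred individual salary:", avg)

conn.close()
```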
An important aspect of consistency is that transactions execute atomically; that is, they either execute completely or not at all.Concurrency control addresses the problem of allowing simultaneous programs access to a shared database, while avoiding incorrect behavior or interference. It is normally addressed by a scheduler that uses locking techniques to ensure that the transactions are serial sable and independent. A common technique used in commercial products is two-phase locking (or variations thereof) in which the database management system controls when transactions obtain and release their locks according to whether or not transaction processing has been completed. In a first phase, the database management system collects the necessary data for the update: in a second phase, it updates the database. This means that the database can recover from incomplete transactions by repeatingeither of the appropriate phases. This technique can also be used in a distributed database system using a distributed scheduler arrangement.System failures can arise from the operating system and may result in corrupted storage. The main copy of the database is used for recovery from failures and communicates with a cached version that is used as the working version. In association with the logs, this allows the database to recover to a very specific point in the event of a system failure, either by removing the effects of incomplete transactions or applying the effects of completed transactions. Instead of having to recover the entire database after a failure, recovery can be made more efficient by the use of check pointing. It is used during normal operations to write additional updated information - such as logs, before-images of incomplete transactions, after-images of completed transactions - to the main database which reduces the amount of work needed for recovery. Recovery from failures in distributed systems is more complicated, since a single logical action is executed at different physical sites and the prospect of partial failure arises.Logical integrity, at field level and for the entire database, is addressed by the use of monitors to check important items such as input ranges, states and transitions. Error-correcting and error-detecting codes are also used.Security ModelsVarious security models exist that address different aspects of security in operating systems and database management systems. For example, theBell-LaPadula model defines security in terms of mandatory access control and addresses confidentiality only. The Bell LaPadula models, and other models including the Biba model for integrity, are described more fully in [Cast95] and [Pfle89]. These models are implementation-independent and provide a powerfulinsight into the properties of secure systems, lead to design policies and principles, and some form the basis for security evaluation criteria.Web Server SecurityWeb servers are now one of the most common interfaces between users and back-end databases, and as such, their security becomes increasingly important. Exploitation of vulnerabilities in the web server can lead to unforeseen attacks on middleware and backend databases, bypassing any controls that may be in place. In this section, we focus on common web server vulnerabilities and how the authentication requirements of web servers and databases are met.In general, a web server platform should not be shared with other applications and should be the only machine allowed to access the database. 
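A minimal Python/SQLite sketch of the atomic, all-or-nothing transaction behaviour described above (account names and amounts are invented; a commercial DBMS would additionally apply the locking, logging and checkpointing techniques discussed in the text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("savings", 100.0), ("checking", 0.0)])
conn.commit()

def transfer(amount):
    """Debit one account and credit the other as a single atomic unit of work."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = 'savings'",
                     (amount,))
        if amount > 100.0:
            raise ValueError("insufficient funds")    # simulate a failure mid-transaction
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = 'checking'",
                     (amount,))
        conn.commit()                                  # both updates become visible together
    except Exception:
        conn.rollback()                                # neither update survives

transfer(500.0)   # fails part-way through; the rollback undoes the debit
print(conn.execute("SELECT * FROM accounts").fetchall())
# -> [('savings', 100.0), ('checking', 0.0)]  - the database is left consistent

conn.close()
```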
Using a firewall can provide additional security - either between the web server and users or between the web server and back-end database - and often the web server is placed on a de-militarized zone (DMZ) of a firewall. While firewalls can be used to block certain incoming connections, they must allow HTTP (and HTTPS) connections through to the web server, and so attacks can still be launched via the ports associated with these connections.VulnerabilitiesVulnerabilities appear on a weekly basis and, here, we prefer to focus on some general issues rather than specific attacks. Common web server vulnerabilities include:• No policy exists.• The default configuration is on.• Reusable passwords appear in clear.• Unnecessary ports available for network services are not disabled.• New security holes are not tracked. Even if they are, well-known vulnerabilities are not always fixed as the source code patches are not applied by system administrator and old programs are not re-compiled or removed.• Security tools are not used to scan the network for weaknesses and changes or to detect intrusions.• Faulty and buggy software - for example, buffer overflow and stack smashingAttacks• Automatic directory listings - this is of particular concern for the interface software directories.• Server root files are generally visible or accessible.• Lack of logs and bac kups.• File access is often not explicitly configured by the system administrator according to the security policy. This applies to configuration, client, administration and log files, administration programs, and CGI program sources and executables. CGI scripts allow dynamic web pages and make program development (in, for example, Perl) easy and rapid. However, their successful exploitation may allow execution of malicious programs, launching ofdenial-of-service attacks and, ultimately, privilege escalation on a server.Web Server and Database AuthenticationWhile user, browser and web server authentication are relatively well understood [Garf97], [Ghos98] and [Tree98], the introduction of additional components, such as databases and middleware, raise a number of authentication issues. There are a variety of options for authentication in a simple model (Figure 1). Firstly, both the web server and database management system can individually authenticate a user. This option requires the user to authenticatetwice which may be unacceptable in certain applications, although a singlesign-on device (which aims to manage authentication in a user-transparent way) may help. Secondly, a common approach is for the database to automatically grant user access based on web server authentication. However, this option should only be used for accessing publicly available information. Finally, the database may grant user access employing the web server authentication credentials as a basis for its own user authentication, using security management subsystems (Figure 1). We consider this last option in more detail.Web-based communications use the stateless HTTP protocol with the implication that state, and hence authentication, is not preserved when browsing successive web pages. Cookies, or files placed on user’s machine by a web server, were developed as a means of addressing this issue and are often used to provide authentication. However, after initial authentication, there is typically no re authentication per page in the same realm, only the use of unencrypted cookies (sometimes in association with IP addresses). 
This approach provides limited security as both cookies and IP addresses can be tampered with or spoofed.A stronger authentication method, commonly used by commercial implementations, uses digitally signed cookies. This allows additional systems, such as databases, to use digitally signed cookie data, including a session ID, as a basis for authentication. When a user has been authenticated by a web server (using a password, for example), a session ID is assigned and is stored in a security management subsystem database. When a user subsequently requests information from a database, the database receives a copy of the session ID, the security management subsystem checks this session ID against its local copy and, if authentication is successful, user access is granted to the database.The session ID is typically transmitted in the clear between the web server and database, but may be protected by SSL or even by physical security measures. The communications between the browser and web servers, and the web servers and security management subsystem (and its databases), are normally protected by SSL and use a web server security API that is used to digitally sign and verify browser cookies. The communications between the back-end databases and security management subsystem (and its databases) are also normally protected by SSL and use a database security API that verifies session Ids originating from the database and provides additional user authorization credentials. The web server security API is generally proprietary while, for the database security API, many vendors have adopted standards such as the Generic Security Services API (GSS-API) or CORBA [RFC2078] and [Corba].Architecture and DesignSecurity requirements for designing, building and implementing databases are important so that the systems, as part of the overall infrastructure, meet their requirements in actual operation. The various security models provide an important insight into the design requirements for databases and their management systems.Secure Database Management System ArchitecturesIn multi-level database management systems, a variety of architectures are possible: trusted subject, integrity locked, kernels and replicated. Trusted subject is used by most of the leading database management system vendors and can be integrated in existing products. Basically, the trusted subject architecture allows users to access a database via an un trusted front-end, a trusted database management system and trusted operating system. The operating systemprovides physical access to the database and the database management system provides multilevel object protection.The other architectures - integrity locked, kernels and replicated - all vary in detail, but they use a trusted front-end and an un trusted database management system. For details of these architectures and research prototypes, the reader is referred to [Cast95]. Different architectures are suited to different environments: for example, the trusted subject architecture is less integrated with the underlying operating system and is best suited when a trusted path can be assured between applications and the database management system.Secure Database Management System DesignAs discussed above, there are several fundamental differences between operating system and database management system design, including object granularity, multiple data types, data correlations and multi-level transactions. 
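The digitally signed cookie scheme outlined above can be sketched with Python's standard library. The secret key, session ID format and cookie layout below are assumptions made for illustration, not a description of any particular commercial product; the point is only that a keyed signature lets the security management subsystem detect tampered cookies.

```python
import hashlib
import hmac

# Assumed server-side secret shared by the web server and the security subsystem.
SECRET_KEY = b"example-shared-secret"

def make_cookie(session_id):
    """Return 'session_id.signature', where the signature is an HMAC over the session ID."""
    sig = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id + "." + sig

def verify_cookie(cookie):
    """Return the session ID if the signature checks out, otherwise None."""
    session_id, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(sig, expected) else None

cookie = make_cookie("SID-12345")
print(verify_cookie(cookie))                    # 'SID-12345' - accepted
print(verify_cookie("SID-99999." + "0" * 64))   # None - tampered cookie rejected
```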
Other differences include the fact that database management systems include both physical and logical objects and that the database lifecycle is normally longer.These differences must be reflected in the design requirements which include:• Access, flow and infer ence controls.• Access granularity and modes.• Dynamic authorization.• Multi-level protection.• Polyinstantiation.• Auditing.• Performance.These requirements should be considered alongside basic information integrity principles, such as:• Well-formed transactions - to ensure that transactions are correct and consistent.• Continuity of operation - to ensure that data can be properly recovered, depending on the extent of a disaster.• Authorization and role management – to ensure that distinct roles are defined and users are authorized.• Authenticated users - to ensure that users are authenticated.• Least privilege - to ensure that users have the minimal privilege necessary to perform their tasks.• Separation of duties - to ensure that no single individual has access to critical data.• Delegation of authority - to ensure that the database management system policies are flexible enough to meet the organization’s requirements.Of course, some of these requirements and principles are not met by the database management system, but by the operating system and also by organizational and procedural measures.Database Design MethodologyVarious approaches to design exist, but most contain the same main stages. The principle aim of a design methodology is to provide a robust, verifiable design process and also to separate policies from how policies are actually implemented. An important requirement during any design process is that different design aspects can be merged and this equally applies to security.A preliminary analysis should be conducted that addresses the system risks, environment, existing products and performance. Requirements should then beanalyzed with respect to the results of a risk assessment. Security policies should be developed that include specification of granularity, privileges and authority.These policies and requirements form the input to the conceptual design that concentrates on subjects, objects and access modes without considering implementation details. Its purpose is to express information and process flows in a complete and consistent way.The logical design takes into account the operating system and database management system that will be used and which of the security requirements can be provided by which mechanisms. The physical design considers the actual physical realization of the logical design and, indeed, may result in a revision of the conceptual and logical phases due to physical constraints.Security AssuranceOnce a product has been developed, its security assurance can be assessed by a number of methods including formal verification, validation, penetration testing and certification. For example, if a database is to be certified as TCSEC Class B1, then it must implement the Bell-LaPadula mandatory access control model in which each controlled object in the database must be labeled with a security level.Most of these methods can be costly and lengthy to perform and are typically specific to particular hardware and software configurations. 
However, the international Common Criteria certification scheme provides the added benefit of a mutual recognition arrangement, thus avoiding the prospect of multiple certifications in different countries.ConclusionThis article has considered some of the security principles that are associated with databases and how these apply in a web based environment. Ithas also focused on important architecture and design principles. These principles have focused mainly on the prevention, assurance and recovery aspects, but other aspects, such as detection, are equally important in formulating a total information protection strategy. For example, host-based intrusion detection systems as well as a robust and tested set of business recovery procedures should be considered.Any fit-for-purpose, secure e-business infrastructure should address all the above aspects: prevention, assurance, detection and recovery. Certain industries are now starting to specify their own set of global, secure e-business requirements. International card payment associations have recently started to require minimum information security standards from electronic commerce merchants handling credit card data, to help manage fraud losses and associated impacts such as brand-image damage and loss of consumer confidence.网络环境下的数据库安全简介数据库在政府部门和商业机构得到普遍应用已经很多年了。
