Database Graduation Project: Foreign Literature Translation

Graduation Thesis (Project): Foreign Literature Translation and Original Text

Financial Systems, Financing Constraints and Investment: Empirical Analysis of OECD Countries
R. Semenov, Department of Economics, University of Nijmegen, Nijmegen, the Netherlands

This paper examines the effect of cash flow on firm investment in eleven OECD countries. We find that the sensitivity of investment to internally available funds differs significantly across countries, and that it is lower in countries with close bank-firm relationships than in countries with arm's-length bank-firm relationships. At the same time, we find no relationship between financing constraints and aggregate indicators of financial development. Our results are consistent with the view that information and incentive problems in capital markets play an important role in firm investment, and that close bank-firm relationships mitigate these problems and thereby improve firms' access to external finance.

I. Introduction

Firms in different countries operate under markedly different financial systems.

Differences in the level of financial development (for example, credit relative to GDP and stock market capitalization relative to GDP), in the patterns of owner-manager and firm-creditor relationships, and in the level of activity in the market for corporate control are well documented. In a perfect capital market, a firm with positive net present value investment opportunities would always obtain funding.

However, economic theory suggests that market frictions such as information asymmetries and incentive problems make external capital more expensive, so that firms with profitable investment opportunities may not always be able to obtain the capital they need. This implies that financing factors, such as the amount of internally generated funds and the availability of new debt and equity, help determine firms' investment decisions. There is now a large body of empirical work examining the effect of the availability of external funds on investment decisions (see, for example, Fazzari (1998), Hoshi (1991), Chapman (1996), Samuel (1998)). Most of these studies find that financial variables such as cash flow help explain firms' investment levels.
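The studies cited here typically estimate a reduced-form investment equation and read the cash-flow coefficient as the degree of financing constraint. A representative specification (our summary of the Fazzari-style approach, not a formula reproduced from this paper) is:

```latex
\[
\left(\frac{I}{K}\right)_{it} = \alpha_i + \beta\, Q_{it} + \gamma \left(\frac{CF}{K}\right)_{it} + \varepsilon_{it}
\]
```

where I is investment, K the capital stock, Q a proxy for investment opportunities (such as Tobin's q), and CF cash flow. A larger estimate of the cash-flow coefficient is interpreted as greater sensitivity of investment to internal funds, and hence tighter financing constraints.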

These findings have been interpreted as evidence that firm investment is constrained by the availability of external funds.

Many models emphasize that well-functioning financial intermediaries and financial markets help reduce information asymmetries and transaction costs, mitigating these problems, channeling savings into long-term, high-return projects, and improving the efficiency of resource allocation (see the survey by Levine (1997)).

We therefore expect firms in countries with more developed financial systems to have easier access to external finance. Several authors have pointed out that established relationships between firms and financial intermediaries can further mitigate financial market frictions.

Graduation Project Foreign Literature Translation (English Version)

A Design and Implementation of Active Network Socket Programming

K.L. Eddie Law, Roy Leung
The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, Canada
eddie@, roy.leung@utoronto.ca

Abstract: The concept of programmable nodes and active networks introduces programmability into communication networks. Code and data can be sent and modified on their ways to destinations. Recently, various research groups have designed and implemented their own design platforms. Each design has its own benefits and drawbacks. Moreover, there exists an interoperability problem among platforms. As a result, we introduce a concept that is similar to network socket programming. We intentionally establish a set of simple interfaces for programming active applications. This set of interfaces, known as Active Network Socket Programming (ANSP), will work on top of all other execution environments in future. Therefore, ANSP offers a concept that is similar to "write once, run everywhere." It is an open programming model in which active applications can work on all execution environments. It solves the heterogeneity within active networks. This is especially useful when active applications need to access all regions within a heterogeneous network to deploy special services at critical points or to monitor the performance of the entire network. Instead of introducing a new platform, our approach provides a thin, transparent layer on top of existing environments that can be easily installed for all active applications.

Keywords: active networks; application programming interface; active network socket programming

I. INTRODUCTION

In 1990, Clark and Tennenhouse [1] proposed a design framework for introducing new network protocols for the Internet. Since the publication of that position paper, the active network design framework [2, 3, 10] slowly took shape in the late 1990s. The active network paradigm allows program code and data to be delivered simultaneously on the Internet. Moreover, they may get executed and modified on their ways to their destinations. At the moment, there is a global active network backbone, the ABone, for experiments on active networks. Apart from the immaturity of the executing platforms, the primary hindrance to the deployment of active networks on the Internet lies in commercially related issues. For example, a vendor may hesitate to allow network routers to run unknown programs that may affect their expected routing performance. As a result, alternatives were proposed to allow the active network concept to operate on the Internet, such as the application layer active networking (ALAN) project [4] from the European research community. In the ALAN project, active server systems are located at different places in the networks, and active applications are allowed to run in these servers at the application layer. Another potential approach for a network service provider is to offer active network service as a premium service class in its networks. This service class should provide the best Quality of Service (QoS) and allow access to the computing facilities in routers. With this approach, network service providers can create a new source of income.

Research in active networks has been progressing steadily. Since active networks introduce programmability on the Internet, appropriate executing platforms on which active applications can execute should be established.
These operating platforms are known as execution environments (EEs), and a few of them have been created, e.g., the Active Signaling Protocol (ASP) [12] and the Active Network Transport System (ANTS) [11]. Hence, different active applications can be implemented to test the active networking concept.

With these EEs, some experiments have been carried out to examine the active network concept, for example, mobile networks [5], web proxies [6], and multicast routers [7]. Active networks introduce a lot of program flexibility and extensibility in networks. Several research groups have proposed various designs of execution environments to offer network computation within routers. Their performance and potential benefits to existing infrastructure are being evaluated [8, 9]. Unfortunately, they seldom address the interoperability problems that arise when active networks consist of multiple execution environments. For example, there are three EEs in the ABone. Active applications written for one particular EE cannot operate on other platforms. This introduces another problem of partitioning resources for the different EEs to operate. Moreover, there are always some critical network applications that need to run on all network routers, such as collecting information and deploying services at critical points to monitor the networks.

In this paper, a framework known as the Active Network Socket Programming (ANSP) model is proposed to work with all EEs. It offers the following primary objectives.

• One single programming interface is introduced for writing active applications.
• Since ANSP offers the programming interface, the design of an EE can be made independent of ANSP. This enables transparency in developing and enhancing future execution environments.
• ANSP addresses the interoperability issues among different execution environments.
• Through the design of ANSP, insight into the pros and cons of different EEs will be gained. This may help design a better EE with improved performance in future.

The primary objective of ANSP is to enable all active applications written in ANSP to operate in the ABone testbed. While the proposed ANSP framework is essential in unifying the network environments, we believe that the availability of different environments is beneficial to the development of a better execution environment in future. ANSP is not intended to replace all existing environments, but to enable the study of new network services that are orthogonal to the designs of execution environments. Therefore, ANSP is designed to be a thin and transparent layer on top of all execution environments. Currently, its deployment relies on automatic code loading with the underlying environments. As a result, the deployment of ANSP at a router is optional and does not require any change to the execution environments.

II. DESIGN ISSUES IN ANSP

ANSP unifies existing programming interfaces among all EEs. Conceptually, the design of ANSP is similar to a middleware design that offers proper translation mechanisms to different EEs. The provisioning of a unified interface is only one part of the whole ANSP platform. There are many other issues that need to be considered.
Apart from translating a set of programming interfaces to the corresponding executable calls in different EEs, other design issues should be covered:

• a unified thread library that handles thread operations regardless of the thread libraries used in the EEs;
• a global soft-store that allows information sharing among capsules that may execute over different environments at a given router;
• a unified addressing scheme used across different environments; more importantly, a routing information exchange mechanism should be designed across EEs to obtain a global view of the unified networks;
• a programming model that is independent of any programming language in active networks;
• and finally, a translation mechanism that hides the heterogeneity of capsule header structures.

A. Heterogeneity in the Programming Model

Each execution environment provides various abstractions for its services and resources in the form of program calls. The model consists of a set of well-defined components, each of which has its own programming interfaces. Among these abstractions, the capsule-based programming model [10] is the most popular design in active networks. It is used in ANTS [11] and ASP [12], both of which are supported in the ABone. Although they are developed based on the same capsule model, their respective components and interfaces are different. Therefore, programs written for one EE cannot run in another EE. The conceptual views of the programming models in ANTS and ASP are shown in Figure 1.

There are three distinct components in ANTS: application, capsule, and execution environment. User interfaces for the active applications exist only at the source and destination routers, where users can specify their customized actions to the networks. According to the program function, the applications send one or more capsules to carry out the operations. Both applications and capsules operate on top of an execution environment that exports an interface to its internal programming resources. A capsule executes its program at each router it visits. When it arrives at its destination, the application at the destination may either reply with another capsule or present the arrival event to the user. One drawback of ANTS is that it only allows "bootstrap" applications.

Figure 1. Programming Models in ASP and ANTS.

In contrast, ASP does not limit its users to running "bootstrap" applications. Its program interfaces are different from those of ANTS, but there are also three components in ASP: application client, environment, and AAContext. The application client can run on an active or non-active host. It can start an active application by simply sending a request message to the EE. The client presents information to users and allows its users to trigger actions at a nearby active router. AAContext is the core of the network service, and its specification is divided into two parts. One part specifies its actions at its source and destination routers. Its role is similar to that of the application in ANTS, except that it does not provide a direct interface with the user. The other part defines its actions when it runs inside the active network, similar to the functional behavior of a capsule in ANTS.

In order to deal with the heterogeneity of these two models, ANSP needs to introduce a new set of programming interfaces and map its interfaces and execution model to those within the routers' EEs.
B. Unified Thread Library

Each execution environment must ensure the isolation of instance executions, so that they do not affect each other or access each other's information. There are various ways to enforce this access control. One simple way is to have one virtual machine per instance of an active application, relying on the security design of the virtual machine to isolate services. ANTS is one example that uses this method. Nevertheless, the use of multiple virtual machines requires a relatively large amount of resources and may be inefficient in some cases. Therefore, certain environments, such as ASP, allow network services to run within one virtual machine but restrict the use of their services to a limited set of libraries in their packages. For instance, ASP provides its own thread library to enforce access control. Because of the differences between these types of thread mechanisms, ANSP devises a new thread library to allow uniform access to the different thread mechanisms.

C. Soft-Store

A soft-store allows a capsule to insert and retrieve information at a router, thus allowing more than one capsule to exchange information within a network. However, a problem arises when a network service can execute under different environments within a router. The problem occurs especially when a network service inserts its soft-store information in one environment and retrieves its data at a later time in another environment at the same router. Because execution environments are not allowed to exchange information, the network service cannot retrieve its previous data. Therefore, the ANSP framework needs to take this problem into account and provide a soft-store mechanism that allows universal access to its data at each router.

D. Global View of a Unified Network

When an active application is written with ANSP, it can execute on different environments seamlessly. The previously smaller networks, partitioned according to their EEs, can now merge into one large active network. It is then necessary to advertise the network topology across the networks. However, different execution environments have different addressing schemes and proprietary routing protocols. In order to merge these partitions together, ANSP must provide a new unified addressing scheme. This new scheme should be interpretable by any environment through appropriate translations within ANSP. Upon defining the new addressing scheme, a new routing protocol should be designed to operate among environments to exchange topology information. This allows each environment in a network to have a complete view of its network topology.

E. Language-Independent Model

An execution environment can be programmed in any programming language. One of the most commonly used languages is Java [13], due to its dynamic code loading capability. In fact, both ANTS and ASP are developed in Java. Nevertheless, the active network architecture shown in Figure 2 does not restrict the use of additional environments developed in other languages. For instance, the active network daemon, anted, in the ABone provides a workspace for executing multiple execution environments within a router. PLAN, for example, is implemented in OCaml and will be deployable on the ABone in future.
Although the current active network is designed to deploy multiple environments that can be written in any programming language, there is no tool that allows active applications to run seamlessly across these environments. Hence, one of the issues that ANSP needs to address is the design of a programming model that can work with different programming languages. Although our current prototype only considers ANTS and ASP in its design, PLAN will be the next target for addressing the programming language issue and improving the design of ANSP.

Figure 2. ANSP Framework Model.

F. Heterogeneity of Capsule Header Structures

The structures of capsule headers differ across EEs. They carry capsule-related information, for example, the capsule types, sources, and destinations. This information is important when certain decisions need to be made within the target environment. A unified model should allow its program code to be executed on different environments. However, the capsule header prevents different environments from interpreting its information successfully. Therefore, ANSP should carry out appropriate translation of the header information before the target environment receives the capsule.

III. ANSP PROGRAMMING MODEL

We have outlined the design issues encountered with ANSP. In the following, the design of the programming model in ANSP is discussed. The proposed framework provides a set of unified programming interfaces that allows active applications to work on all execution environments. The framework is shown in Figure 2. It is composed of two layers integrated within the active network architecture. These two layers can operate independently of each other. The upper layer provides a unified programming model to active applications. The lower layer provides an appropriate translation procedure for ANSP applications when they are processed by different environments. This service is necessary because each environment has its own header definition.

The ANSP framework provides a set of programming calls that are abstractions of ANSP services and resources. A capsule-based model is used for ANSP, and it is currently extended to map to the other capsule-based models used in ANTS and ASP. The possibility of mapping to other models remains future work. The mapping technique in ANSP allows any ANSP application to access the same programming resources in different environments through a single set of interfaces. The mapping has to be done in a consistent and transparent manner. Therefore, ANSP appears as an execution environment that provides a complete set of functionalities to active applications, while in fact it is an overlay structure that makes use of the services provided by the underlying environments. In the following, the high-level functional descriptions of the ANSP model are described, followed by a discussion of the implementation. The ANSP programming model is based upon the interactions between four components: application client, application stub, capsule, and active service base.

Figure 3. Information Flow with the ANSP.

• Application Client: In a typical scenario, an active application requires some means to present information to its users, e.g., the state of the networks. A graphical user interface (GUI) is designed to operate with the application client if ANSP runs on a non-active host.

• Application Stub: When an application starts, it activates the application client to create a new instance of an application stub at its nearby active node.
There are two responsibilities for the application stub. One is to receive users' instructions from the application client. The other is to receive incoming capsules from the networks and to perform appropriate actions. Typically, there are two types of actions: to reply to or relay capsules through the networks, or to notify the users of the incoming capsule.

• Capsule: An active application may contain several capsule types. Each of them carries program code (also referred to as a forwarding routine). The application defines a protocol that specifies the interactions among capsules as well as with the application stubs. Every capsule executes its forwarding routine at each router it visits along the path between the source and destination.

• Active Service Base: An active service base is designed to export the routers' environments' services and to execute program calls from application stubs and capsules from different EEs. The base is loaded automatically at each router whenever a capsule arrives.

The interactions among components within ANSP are shown in Figure 3. The designs of some key components of ANSP are discussed in the following subsections.

A. Capsule (ANSPCapsule)

New types of capsule are created by extending the abstract class ANSPCapsule. New extensions are required to define their own forwarding routines as well as their serialization procedures. These methods are listed below:

ANSPXdr decode()
ANSPXdr encode()
int length()
boolean execute()

The execution of a capsule in ANSP proceeds as follows; it is similar to the process in ANTS.

1. A capsule is in serial binary representation before it is sent to the network. When an active router receives a byte sequence, it invokes decode() to convert the sequence into a capsule.
2. The router invokes the forwarding routine of the capsule, execute().
3. When the capsule has finished its job, it forwards itself to its next hop by calling send(); this call implicitly invokes encode() to convert the capsule into a new serial byte representation. length() is used inside the call of encode() to determine the length of the resulting byte sequence. ANSP provides an XDR library called ANSPXdr to ease the jobs of encoding and decoding.

B. Active Service Base (ANSPBase)

In an active node, the Active Service Base provides a unified interface that exports the available resources in EEs to the rest of the ANSP components. The services may include thread management, node query, and soft-store operation, as shown in Table I.

TABLE I. ACTIVE SERVICE BASE FUNCTION CALLS

boolean send(Capsule, Address): Transmits a capsule towards its destination using the routing table of the underlying environment.
ANSPAddress getLocalHost(): Returns the address of the local host as an ANSPAddress structure. This is useful when a capsule wants to check its current location.
boolean isLocal(ANSPAddress): Returns true if its input argument matches the local host's address, and false otherwise.
createThread(): Creates a new thread that is a class of ANSPThreadInterface (discussed later under "Unified Thread Abstraction").
putSStore(key, Object), Object getSStore(key), removeSStore(key): The soft-store operations put, retrieve, and remove data, respectively.
forName(PathName): Retrieves a class object corresponding to the given path name. This code retrieval may rely on the code loading mechanism in the environment when necessary.
C. Application Client (ANSPClient)

boolean start(args[])
boolean start(args[], runningEEs)
boolean start(args[], startClient)
boolean start(args[], startClient, runningEE)

The application client is the interface between users and the nearby active source router. It has the following responsibilities.

1. Code registration: It may be necessary to specify the location and name of the application code in some execution environments, e.g., ANTS.
2. Application initialization: This includes selecting an execution environment on which to execute the application, among those available at the source router.

Each active application can create an application client instance by extending the abstract class ANSPClient. The extension inherits a method, start(), that automatically handles both the registration and initialization processes. All overloaded versions of start() accept a list of arguments, args, that are passed to the application stub during its initialization. An optional argument called runningEEs allows an application client to select a particular set of environments, specified by a list of standardized numerical environment IDs, the ANEP IDs, on which to perform code registration. If this argument is not specified, the default setting includes only ANTS and ASP.

D. Application Stub (ANSPApplication)

receive(ANSPCapsule)

Application stubs reside at the source and destination routers to initialize the ANSP application after the application clients complete the initialization and registration processes. A stub is responsible for receiving and serving capsules from the networks, as well as actions requested from the clients. A new instance is created by extending the abstract class ANSPApplication. This extension includes the definition of a handling routine called receive(), which is invoked when a stub receives a new capsule.
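Before turning to the example application, the sketch below shows what a minimal capsule written against these interfaces might look like. ANSPCapsule, ANSPBase, ANSPXdr, execute(), encode(), decode(), length(), send(), isLocal(), and deliverToApp() are names taken from the paper; the hop-counting payload, the exact method parameterizations, and the getServiceBase() and getDestination() accessors are our assumptions, since the paper does not give complete signatures.

```java
// A sketch only: a capsule that counts the routers it traverses.
// The ANSP class and method names follow the paper; parameterizations
// and accessors marked below are assumptions.
public class HopCountCapsule extends ANSPCapsule {

    private int hops;                          // routers traversed so far

    // Forwarding routine, invoked at each router after decode().
    public boolean execute() {
        hops++;
        ANSPBase base = getServiceBase();      // assumed accessor to the base
        if (base.isLocal(getDestination())) {  // getDestination() is assumed
            base.deliverToApp(this);           // hand over to the stub
            return true;
        }
        return base.send(this, getDestination()); // send() re-invokes encode()
    }

    // Serialization: one XDR integer for the hop count.
    public ANSPXdr encode() {
        ANSPXdr xdr = new ANSPXdr(length());   // assumed constructor
        xdr.writeInt(hops);                    // assumed ANSPXdr call
        return xdr;
    }

    public void decode(ANSPXdr xdr) {          // parameterization assumed
        hops = xdr.readInt();                  // assumed ANSPXdr call
    }

    public int length() {
        return 4;                              // one 4-byte XDR integer
    }
}
```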
IV. ANSP EXAMPLE: TRACEROUTE

A testbed has been created to verify the design correctness of ANSP in heterogeneous environments. There are three types of router setting on this testbed:

1. routers that contain ANTS and an ANSP daemon running on behalf of ASP;
2. routers that contain ASP and an ANSP daemon running on behalf of ANTS;
3. routers that contain both ASP and ANTS.

The prototype is written in Java [11] with a traceroute testing program. The program records the execution environments of all intermediate routers that it visits between the source and destination. It also measures the RTT between them. Figure 4 shows the GUI from the application client; it finds three execution environments along the path: ASP, ANTS, and ASP. The execution sequence of the traceroute program is shown in Figure 5.

Figure 4. The GUI for the TRACEROUTE Program.

The TraceCapsule program code is created by extending the ANSPCapsule abstract class. When execute() starts, it checks the Boolean value returning to determine whether it is returning from the destination. It is set to true if TraceCapsule is traveling back to the source router; otherwise it is false. When traveling towards the destination, TraceCapsule keeps track of the environments and addresses of the routers it has visited in two arrays, path and trace, respectively. When it arrives at a new router, it calls addHop() to append the router address and its environment to these two arrays. When it finally arrives at the destination, it sets returning to true and forwards itself back to the source by calling send(). When it returns to the source, it invokes deliverToApp() to deliver itself to the application stub that has been running at the source. TraceCapsule carries information in its data field through the networks by executing encode() and decode(), which encapsulate and de-capsulate its data using External Data Representation (XDR), respectively. The syntax of the ANSP XDR library follows that of the XDR library in ANTS. length() in TraceCapsule returns the data length, or it can be calculated by using the primitive types in the XDR library.

Figure 5. Flow of the TRACEROUTE Capsules.

V. CONCLUSIONS

In this paper, we present a new unified layered architecture for active networks. The new model is known as Active Network Socket Programming (ANSP). It allows each active application to be written once and run on multiple environments in active networks. Our experiments successfully verify the design of the ANSP architecture, and it has been successfully deployed to work harmoniously with ANTS and ASP without making any changes to their architectures. In fact, the unified programming interface layer is lightweight and can be dynamically deployed upon request.

ACKNOWLEDGMENT

The authors appreciate the Nortel Institute for Telecommunications (NIT) at the University of Toronto for allowing them to access the computing facilities.

REFERENCES

[1] D.D. Clark and D.L. Tennenhouse, "Architectural Considerations for a New Generation of Protocols," in Proc. ACM SIGCOMM '90, pp. 200-208, 1990.
[2] D. Tennenhouse, J.M. Smith, W.D. Sincoskie, D.J. Wetherall, and G.J. Minden, "A survey of active network research," IEEE Communications Magazine, pp. 80-86, Jan. 1997.
[3] D. Wetherall, U. Legedza, and J. Guttag, "Introducing new Internet services: Why and how," IEEE Network Magazine, July/August 1998.
[4] M. Fry and A. Ghosh, "Application Layer Active Networking," Computer Networks, Vol. 31, No. 7, pp. 655-667, 1999.
[5] K.W. Chin, "An Investigation into The Application of Active Networks to Mobile Computing Environments," Curtin University of Technology, March 2000.
[6] S. Bhattacharjee, K.L. Calvert, and E.W. Zegura, "Self Organizing Wide-Area Network Caches," in Proc. IEEE INFOCOM '98, San Francisco, CA, 29 March-2 April 1998.
[7] L.H. Lehman, S.J. Garland, and D.L. Tennenhouse, "Active Reliable Multicast," in Proc. IEEE INFOCOM '98, San Francisco, CA, 29 March-2 April 1998.
[8] D. Decasper, G. Parulkar, and B. Plattner, "A Scalable, High Performance Active Network Node," IEEE Network, January/February 1999.
[9] E.L. Nygren, S.J. Garland, and M.F. Kaashoek, "PAN: a high-performance active network node supporting multiple mobile code systems," in Proc. 2nd IEEE Conference on Open Architectures and Network Programming (OpenArch '99), March 1999.
[10] D.L. Tennenhouse and D.J. Wetherall, "Towards an Active Network Architecture," in Proc. Multimedia Computing and Networking, January 1996.
[11] D.J. Wetherall, J.V. Guttag, and D.L. Tennenhouse, "ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols," in IEEE OPENARCH '98, 1998, pp. 117-129.
[12] B. Braden, A. Cerpa, T. Faber, B. Lindell, G. Phillips, and J. Kann, "Introduction to the ASP Execution Environment": /active-signal/ARP/index.html.
[13] "The Java language: A white paper," Tech. Rep., Sun Microsystems, 1998.

Database Graduation Project: Foreign Literature Translation

Appendix A: Foreign Literature Translation, Original Text

CUSTOMER TARGETING

The earliest determinant of success in the development of a profitable card scheme will lie in the quality of applicants that are attracted by the marketing effort. Not only must there be sufficient creditworthy applicants to avoid fruitless and expensive application processing, but it is critical that the overall mix of new accounts meets the standard necessary to ensure ultimate profitability. For example, the marketing initiatives may attract a sufficient volume of applicants that are assessed as above the scorecard cut-off, but the proportion of acceptances in the upper bands may be insufficient to deliver the level of profit, and the lower level of bad debt, required to achieve the financial objectives of the scheme.

This chapter considers the range of data sources available to support the development of a credit card scheme and the tools that can be applied to maximize the flow of applications from the required categories.

Data availability

The data that makes up the ingredients from which marketing campaigns can be constructed can come from many diverse sources. Typically, it will fall into four categories:

1. the national or regional register of voters;
2. the national or regional register of court judgments that records the outcome of creditor-debtor litigation;
3. any national or regional pooled information showing the credit history of clients of the participating lenders; and
4. commercially compiled data, including data culled from name and address lists, survey results, and other market analysis data, e.g. neighborhood and lifestyle categorization through geo-demographic information systems.

The availability and quality of this data will vary from country to country and bureau to bureau. Availability is not only governed by the extent to which the responsible agency has undertaken to record it, but also by the feasibility of accessing the data and the extent (if any) to which local consumer legislation or other considerations (e.g. religious principles) will allow it to be used. Other limitations on the use of available data may lie in the simple impossibility or expense of accessing the information sources, perhaps because necessary consumer consent for divulgence has been withheld or because the records are not yet stored electronically. The local credit information bureaux will be able to provide guidance on all of these matters, as will many local trade or professional associations or the relevant government departments.

Data segmentation and analysis

The following remarks deal with the ways in which lawfully obtained data may then be processed and analyzed in order to maximize its value as the basis of a marketing prospect list. Examples of the types and uses of data that will play a role in the credit decision area are discussed later in the chapter, within the context of application processing. The key categories into which prospects may be segmented include lifestyle, propensity to purchase specific products (financial or otherwise), and levels of risk. The leading international information bureaux will be able to provide segmentation systems that are able to correlate each of these data categories to provide meaningful prospect lists in rank order. Additionally, many bureaux will have the capability to further enhance the strength and value of the data.
Through the selective purchasing of data from bona fide market sources, and by overlaying generic factors deduced from the analysis of the broad mass of industry information that routinely passes through their systems, the best international operators are now able to offer marketing and credit information support that can add significantly to the quality of new applicants.

The importance of the role and standard of this data in influencing the quality of the target population for mailings, etc. should not be underestimated. Information that is dated or inaccurate may not only lead a marketer and the organization into embarrassment and damage their reputations, but it will also open the credit card scheme to applicants from outside the target sector or, worse still, applicants outside the lender's view of an acceptable credit risk. From this, it follows that you should seek to use an information bureau whose business principles and operating practices comply with the highest levels of both competence and integrity.

Developing the prospect database

This is the process by which the raw data streams are brought together and subjected to progressive refinement, with the output representing the refined base from which prospecting can begin in earnest. Wide experience, often across many different markets and countries, in the sourcing, handling and analysis of data inevitably improves the quality of the ideas and systems that a bureau can offer for the development of the prospect database. In summary, the typical shape of the service available from the very best bureaux will support a process that runs as follows:

1. collect and consolidate all data to be screened for inclusion;
2. merge the various streams;
3. sort and classify the data by market and credit categories;
4. screen the data using predetermined marketing and credit criteria; and
5. consolidate and output the refined list.

Bureaux will charge for the use of their expertise and systems. Therefore, consideration should be given to the volumes of data that are to be processed and the costs involved at each stage. The most cost-effective approach to constructing prospect databases undertakes only the lowest-cost screening processes in the earlier stages; the more expensive screening processes are not employed until the mass of the data has been reduced by earlier filtering. It is impossible to be prescriptive about the range and levels of service that are available, but reference to one of the major bureaux operating in the region could certainly be a good starting point.

Campaign management and analysis

Again, this is an area where excellent support is available from the best-of-breed bureaux. They will provide both the operational support and software capabilities to mount, monitor and analyse your marketing campaign, should you so wish. Their depth of experience and capabilities in the credit sector will often open up income:cost possibilities from the solicitation exercise that would not otherwise be available to the new entrant.

The First Important Applications of DBMSs

Data items include names and addresses of customers, accounts, loans and their balances, and the connection between customers and their accounts and loans, e.g., who has signature authority over which accounts.
Queries for account balances are common, but far more common are modifications representing a single payment from, or deposit to, an account.

As with the airline reservation system, we expect that many tellers and customers (through ATMs) will be querying and modifying the bank's data at once. It is vital that simultaneous accesses to an account not cause the effect of an ATM transaction to be lost. Failures cannot be tolerated. For example, once the money has been ejected from an ATM, the bank must record the debit, even if the power immediately fails. On the other hand, it is not permissible for the bank to record the debit and then not deliver the money because the power fails. The proper way to handle this operation is far from obvious and can be regarded as one of the significant achievements in DBMS architecture.

Database systems changed significantly when Codd proposed that database systems should present the user with a view of data organized as tables called relations. Behind the scenes, there might be a complex data structure that allows rapid response to a variety of queries. But unlike the user of earlier database systems, the user of a relational system would not be concerned with the storage structure. Queries could be expressed in a very high-level language, which greatly increased the efficiency of database programmers. Relations are tables. Their columns are headed by attributes.

Client-Server Architecture

Many varieties of modern software use a client-server architecture, in which requests by one process (the client) are sent to another process (the server) for execution. Database systems are no exception, and it is common to divide the work of the components into a server process and one or more client processes. In the simplest client-server architecture, the entire DBMS is a server, except for the query interfaces that the user uses to send queries or other commands across to the server. For example, relational systems generally use the SQL language for representing requests from the client to the server. The database server then sends the answer, in the form of a table or relation, back to the client. The relationship between client and server can get more complex, especially when answers are extremely large. We shall have more to say about this matter in Section 1.3.3. There is also a trend to put more work in the client, since the server will be a bottleneck if there are many simultaneous database users.

Appendix B: Foreign Literature Translation, Translated Text

Customer Targeting: The earliest determinant of success in the development of a profitable card scheme will lie in the quality of applicants attracted by the marketing effort.
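The ATM scenario in the original text above is the textbook case for atomic transactions: the debit must be made durable exactly when the cash is dispensed, never recorded without it. As a concrete sketch (ours, not the book's; the JDBC URL, table, and column names are hypothetical), the database side of such an update can be expressed through standard JDBC like this:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AtmDebit {
    // Records a withdrawal atomically: either the balance update commits,
    // or the transaction rolls back and no debit is recorded.
    public static void debit(String account, long cents) throws SQLException {
        String url = "jdbc:postgresql://localhost/bank"; // hypothetical DB
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);                    // start a transaction
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE accounts SET balance = balance - ? " +
                    "WHERE acct_no = ? AND balance >= ?")) {
                ps.setLong(1, cents);
                ps.setString(2, account);
                ps.setLong(3, cents);
                if (ps.executeUpdate() != 1) {            // unknown account or
                    conn.rollback();                      // insufficient funds
                    throw new SQLException("debit refused");
                }
                conn.commit();   // durable once commit returns, even if power fails
            } catch (SQLException e) {
                conn.rollback();                          // undo a partial update
                throw e;
            }
        }
    }
}
```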

Graduation Project Foreign Literature Translation (Original Text + Translation)

Environmental Problems Caused by Istanbul Subway Excavation and Suggestions for Remediation

Ibrahim Ocak

Abstract: Many environmental problems caused by subway excavations have inevitably become an important issue in city life. These problems can be categorized as transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust and mud, and lack of supplies. Although these problems cause many difficulties, the most pressing for a big city like Istanbul is excavation, since the other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway and disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length; over 200 km is to be constructed in the near future. The amount of material excavated from ongoing construction projects is approximately 12 million m3. In this study, problems caused by subway excavation, primarily the problem of excavation waste (EW), are analyzed and suggestions for remediation are offered.

Abstract (translation): Many environmental problems caused by subway excavation have inevitably become an important part of city life.

Graduation Project Foreign Literature Translation (sc-pdf)

Graduation Project Foreign Literature Translation
Title: Modification of Silver Catalysts for Methanol Oxidation to Formaldehyde
College: College of Chemistry and Chemical Engineering. Major: Chemical Engineering and Technology. Class: 0803. Student: Xu Jimeng. Student ID: 20080207167. Supervisor: Ni Xianzhi. March 15, 2012.

Catalysis Today, 1996, (28): 239-244.

Modification of Silver Catalysts for Methanol Oxidation to Formaldehyde
A.N. Pestryakov

Abstract: The properties of silver catalysts can be modified with oxides of Zr, La, Rb, and Cs. The physicochemical and catalytic properties of the modified silver catalysts have been studied in the process of selective methanol oxidation. In the process of methanol oxidation to formaldehyde, modifying additives at mass fractions of 1-10% alter the effective charge and redox properties of the supported silver, the metal dispersion and its surface diffusion, the surface acidity of the catalyst, and the degree of coking.

In changing the catalytic properties of silver, the modifiers mainly affect the active sites of the catalyst (Ag+ and Agn δ+).

Keywords: silver catalysts; methanol oxidation to formaldehyde

1 Introduction

Supported silver catalysts are widely used in the process of formaldehyde production by selective oxidation of methanol [1-3].
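For context (this is the standard chemistry of the silver process, added here for orientation rather than taken from the translated text): over silver, methanol is converted to formaldehyde by two parallel routes, an exothermic partial oxidation and an endothermic dehydrogenation:

```latex
\begin{align*}
\mathrm{CH_3OH} + \tfrac{1}{2}\,\mathrm{O_2} &\longrightarrow \mathrm{HCHO} + \mathrm{H_2O} \\
\mathrm{CH_3OH} &\longrightarrow \mathrm{HCHO} + \mathrm{H_2}
\end{align*}
```

The balance between these routes depends on the operating conditions and the state of the silver surface, which is what the modifiers discussed below act upon.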

Modification of silver catalysts with various inorganic additives is one of the most promising ways of improving their properties.

Ever since silver catalysts were discovered, efforts have been made to improve them, with the aims of increasing their catalytic activity and service life, reducing the amount of silver used, and extending the range of process operating conditions.

The wide use of supports to reduce silver consumption and to prevent sintering of the silver under "severe" conditions (600-700 °C) is also one of the modification approaches.

However, what supports alone can accomplish is limited, whereas small additions of various modifying compounds (0.1-10% by mass) can produce considerable differences in the otherwise variable catalytic properties of silver.

Many different additives that improve and promote the catalytic properties of silver have been mentioned in the scientific and patent literature [3-14].

Among these, researchers have invoked different mechanisms of the modifying action: changes in the electron work function and electron density of the metal silver [7-9], differences in O2 adsorption [3,10], catalyst surface acidity [11], mechanical blocking of the catalyst surface [12], and the intrinsic catalytic properties of the additives themselves [13,14].

However, all of these describe only a few scattered aspects of catalyst modification and do not address the differences in how the various additives affect silver catalysts.

Nor has the influence of the modifiers on the electronic state of the active sites of silver catalysts been considered.

In this work, we have studied several aspects of the effect of modifiers on the properties of silver [15-18]; the aim is a comprehensive study of the effect of oxides of rare and rare-earth metals on the electronic, physicochemical, and catalytic properties of silver catalysts in the process of methanol oxidation to formaldehyde.

Graduation Thesis: Foreign Literature Translation

Graduation Project (Thesis) Foreign Reference Translation
Department (College) of Computer Science and Information Engineering, Class of 2008
Title: Instant Messaging for Enterprises
Project type: Technical development. Project source: Self-selected.
Student: Xu Shuai. Major and class: Computer Science and Technology, Class 04. Supervisor: Wang Zhanzhong (Engineer). Date of completion: April 6, 2008

Contents

Instant Messaging for Enterprise
1. Tips
2. Introduction
3. First things first
4. The While-Accept loop
5. Per-Thread class
6. The Client class

Instant Messaging for Enterprise

1. Tips

If Java is, in fact, yet another computer programming language, you may question why it is so important and why it is being promoted as a revolutionary step in computer programming. The answer isn't immediately obvious if you're coming from a traditional programming perspective. Although Java is very useful for solving traditional standalone programming problems, it is also important because it will solve programming problems on the World Wide Web.

What is the Web?

The Web can seem a bit of a mystery at first, with all this talk of "surfing," "presence," and "home pages." It's helpful to step back and see what it really is, but to do this you must understand client/server systems, another aspect of computing that is full of confusing issues. The primary idea of a client/server system is that you have a central repository of information, some kind of data, often in a database.
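The contents above name a while-accept loop and a per-thread class, which is the standard Java pattern for serving many chat clients at once. The article's own listings are not included in this excerpt, so the following is a generic reconstruction of that pattern rather than the author's code; the port number and class names are ours, and the handler merely echoes instead of relaying messages between clients.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// A generic reconstruction of the while-accept pattern: the main thread
// blocks in accept(), and each accepted client gets its own thread.
public class ChatServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8765)) { // port is arbitrary
            while (true) {                        // the "while-accept" loop
                Socket client = server.accept();  // blocks until a client connects
                new Thread(new ClientHandler(client)).start();
            }
        }
    }
}

// The "per-thread" class: one instance per connected client.
class ClientHandler implements Runnable {
    private final Socket socket;

    ClientHandler(Socket socket) { this.socket = socket; }

    public void run() {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line);   // a real IM server would relay
            }                                   // the message to other clients
        } catch (Exception e) {
            // client disconnected; drop the connection
        }
    }
}
```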

Graduation Project (Thesis) Foreign Literature Translation (Student Copy)

Graduation Project Foreign Literature Translation
College: College of Information Science and Engineering. Major: Software Engineering. Name: XXXXX. Student ID: XXXXXXXXX.
Source of the foreign text: Think In Java.
Attachments: 1. Translated text; 2. Original text.

Attachment 1: Translated Text

Network Programming

Historically, network programming has tended to be difficult, complex, and extremely error-prone.

Programmers had to master a great many details related to the network, sometimes even needing a deep understanding of the hardware.

Generally, we need to understand the different "layers" of the networking protocol.

Moreover, each networking library typically contains a large number of functions dealing with connecting, packing, and unpacking blocks of information; shipping those blocks back and forth; and handshaking, among other things.

It is a painful job.

However, the underlying concept of networking is not that difficult.

We want to get information that lives on a machine somewhere else and move it here, or vice versa.

This is quite similar to reading and writing a file, except that the file lives on a remote machine, and the remote machine gets to decide what to do with the data we request or send.

One of Java's great strengths is its concept of "painless networking."

The low-level details of networking have been abstracted away as much as possible and are hidden within, and managed by, the JVM and Java's local machine installation.

The programming model we use is that of a file; in fact, a network connection (a "socket") is encapsulated in a system object, so it can be accessed with the same method calls as any other data stream.

In addition, Java's built-in multithreading is extremely convenient when we deal with another networking issue: handling multiple network connections at once.

This chapter explains Java's networking support through a series of easy-to-understand examples.
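As a taste of that file-like model (an illustration of ours, not an example from the book; the host, port, and request are placeholders), a client can wrap a socket's input and output streams exactly as it would wrap a file's:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Ours, not the book's: a socket read and written like any other stream.
public class StreamClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80); // placeholder host/port
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(socket.getInputStream()))) {
            out.println("HEAD / HTTP/1.0");    // write to the "file"
            out.println();
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);      // read replies like file lines
            }
        }
    }
}
```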

15.1 Identifying a Machine

Of course, in order to tell one machine from another, and to make sure you are connected to the machine you want, there must be some way of uniquely identifying every machine on a network.

Early networks solved only the problem of giving machines unique names within a local network.

But Java is aimed at the whole Internet, which requires a mechanism for identifying machines anywhere in the world.

For this purpose, the concept of the IP (Internet Protocol) address is used.

IP addresses exist in two forms: (1) the form everyone is most familiar with, the DNS (Domain Name System) form.

My own domain name is .

So, assuming I have a machine called Opus inside my domain, its domain name could be .
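In Java, a DNS name like these is resolved with the standard java.net.InetAddress class; the snippet and its placeholder host name are our illustration:

```java
import java.net.InetAddress;

// Our illustration: resolve a DNS name to its numeric IP address.
public class WhoAmI {
    public static void main(String[] args) throws Exception {
        InetAddress addr = InetAddress.getByName("example.com"); // placeholder
        System.out.println("Host: " + addr.getHostName());
        System.out.println("IP:   " + addr.getHostAddress());
    }
}
```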

University Graduation Project: Two Foreign Literature Translations on Databases

Original text: Structure of the Relational Database, from Database System Concepts

Part 1: Relational Databases

The relational model is the basis for any relational database management system (RDBMS). A relational model has three core components: a collection of objects or relations, operators that act on the objects or relations, and data integrity methods. In other words, it has a place to store the data, a way to create and retrieve the data, and a way to make sure that the data is logically consistent.

A relational database uses relations, or two-dimensional tables, to store the information needed to support a business. Let's go over the basic components of a traditional relational database system and look at how a relational database is designed. Once you have a solid understanding of what rows, columns, tables, and relationships are, you'll be well on your way to leveraging the power of a relational database.

Tables, Rows, and Columns

A table in a relational database, alternatively known as a relation, is a two-dimensional structure used to hold related information. A database consists of one or more related tables.

Note: Don't confuse a relation with relationships. A relation is essentially a table, and a relationship is a way to correlate, join, or associate two tables.

A row in a table is a collection or instance of one thing, such as one employee or one line item on an invoice. A column contains all the information of a single type, and the piece of data at the intersection of a row and a column, a field, is the smallest piece of information that can be retrieved with the database's query language. For example, a table with information about employees might have a column called LAST_NAME that contains all of the employees' last names. Data is retrieved from a table by filtering on both the row and the column.

Primary Keys, Datatypes, and Foreign Keys

The examples throughout this article will focus on the hypothetical work of Scott Smith, database developer and entrepreneur. He just started a new widget company and wants to implement a few of the basic business functions using the relational database to manage his Human Resources (HR) department.

Relation: A two-dimensional structure used to hold related information, also known as a table.

Note: Most of Scott's employees were hired away from one of his previous employers, some of whom have over 20 years of experience in the field. As a hiring incentive, Scott has agreed to keep the new employees' original hire dates in the new database.

Row: A group of one or more data elements in a database table that describes a person, place, or thing.

Column: The component of a database table that contains all of the data of the same name and type across all rows.

You'll learn about database design in the following sections, but let's assume for the moment that the majority of the database design is completed and some tables need to be implemented. Scott creates the EMP table to hold the basic employee information, and it looks something like this:

Notice that some fields in the Commission (COMM) and Manager (MGR) columns do not contain a value; they are blank. A relational database can enforce the rule that fields in a column may or may not be empty. In this case, it makes sense for an employee who is not in the Sales department to have a blank Commission field.
It also makes sense for the president of the company to have a blank Manager field, since that employee doesn't report to anyone.

Field: The smallest piece of information that can be retrieved by the database query language. A field is found at the intersection of a row and a column in a database table.

On the other hand, none of the fields in the Employee Number (EMPNO) column are blank. The company always wants to assign an employee number to an employee, and that number must be different for each employee. One of the features of a relational database is that it can ensure that a value is entered into this column and that it is unique. The EMPNO column, in this case, is the primary key of the table.

Primary Key: A column (or columns) in a table that makes the row in the table distinguishable from every other row in the same table.

Notice the different datatypes that are stored in the EMP table: numeric values, character or alphabetic values, and date values.

As you might suspect, the DEPTNO column contains the department number for the employee. But how do you know what department name is associated with what number? Scott created the DEPT table to hold the descriptions for the department codes in the EMP table. The DEPTNO column in the EMP table contains the same values as the DEPTNO column in the DEPT table. In this case, the DEPTNO column in the EMP table is considered a foreign key to the same column in the DEPT table.

A foreign key enforces the concept of referential integrity in a relational database. The concept of referential integrity not only prevents an invalid department number from being inserted into the EMP table, but it also prevents a row in the DEPT table from being deleted if there are employees still assigned to that department.

Foreign Key: A column (or columns) in a table that draws its values from a primary or unique key column in another table. A foreign key assists in ensuring the data integrity of a table.

Referential Integrity: A method employed by a relational database system that enforces one-to-many relationships between tables.

Data Modeling

Before Scott created the actual tables in the database, he went through a design process known as data modeling. In this process, the developer conceptualizes and documents all the tables for the database. One of the common methods for modeling a database is called ERA, which stands for entities, relationships, and attributes. The database designer uses an application that can maintain entities, their attributes, and their relationships. In general, an entity corresponds to a table in the database, and the attributes of the entity correspond to columns of the table.

Data Modeling: A process of defining the entities, attributes, and relationships between the entities in preparation for creating the physical database.

The data-modeling process involves defining the entities, defining the relationships between those entities, and then defining the attributes for each of the entities. Once a cycle is complete, it is repeated as many times as necessary to ensure that the designer is capturing what is important enough to go into the database. Let's take a closer look at each step in the data-modeling process.

Defining the Entities

First, the designer identifies all of the entities within the scope of the database application. The entities are the persons, places, or things that are important to the organization and need to be tracked in the database. Entities will most likely translate neatly to database tables.
For example, for the first version of Scott's widget company database, he identifies four entities: employees, departments, salary grades, and bonuses. These will become the EMP, DEPT, SALGRADE, and BONUS tables.

Defining the Relationships Between Entities

Once the entities are defined, the designer can proceed with defining how each of the entities is related. Often, the designer will pair each entity with every other entity and ask, "Is there a relationship between these two entities?" Some relationships are obvious; some are not.

In the widget company database, there is most likely a relationship between EMP and DEPT, but depending on the business rules, it is unlikely that the DEPT and SALGRADE entities are related. If the business rules were to restrict certain salary grades to certain departments, there would most likely be a new entity that defines the relationship between salary grades and departments. This entity would be known as an associative or intersection table and would contain the valid combinations of salary grades and departments.

Associative Table: A database table that stores the valid combinations of rows from two other tables and usually enforces a business rule. An associative table resolves a many-to-many relationship.

In general, there are three types of relationships in a relational database:

One-to-many: The most common type of relationship is one-to-many. This means that for each occurrence in a given entity, the parent entity, there may be one or more occurrences in a second entity, the child entity, to which it is related. For example, in the widget company database, the DEPT entity is a parent entity, and for each department, there could be one or more employees associated with that department. The relationship between DEPT and EMP is one-to-many.

One-to-one: In a one-to-one relationship, a row in a table is related to only one or none of the rows in a second table. This relationship type is often used for subtyping. For example, an EMPLOYEE table may hold the information common to all employees, while the FULLTIME, PARTTIME, and CONTRACTOR tables hold information unique to full-time employees, part-time employees, and contractors, respectively. These entities would be considered subtypes of an EMPLOYEE and maintain a one-to-one relationship with the EMPLOYEE table. These relationships are not as common as one-to-many relationships, because if one entity has an occurrence for a corresponding row in another entity, in most cases, the attributes from both entities should be in a single entity.

Many-to-many: In a many-to-many relationship, one row of a table may be related to many rows of another table, and vice versa. Usually, when this relationship is implemented in the database, a third entity is defined as an intersection table to contain the associations between the two entities in the relationship. For example, in a database used for school class enrollment, the STUDENT table has a many-to-many relationship with the CLASS table: one student may take one or more classes, and a given class may have one or more students. The intersection table STUDENT_CLASS would contain the combinations of STUDENT and CLASS to track which students are in which classes.

Once the designer has defined the entity relationships, the next step is to assign the attributes to each entity.
This is physically implemented using columns, as shown here for the SALGRADE table as derived from the salary grade entity. After the entities, relationships, and attributes have been defined, the designer may iterate the data modeling many more times. When reviewing relationships, new entities may be discovered. For example, when discussing the widget inventory table and its relationship to a customer order, the need for a shipping restrictions table may arise.

Once the design process is complete, the physical database tables may be created. Logical database design sessions should not involve physical implementation issues, but once the design has gone through an iteration or two, it's the DBA's job to bring the designers "down to earth." As a result, the design may need to be revisited to balance the ideal database implementation against the realities of budgets and schedules.

Translated text: Structure of the Relational Database, from Database System Concepts. Part 1: Relational Databases. The relational model is the basis for any relational database management system (RDBMS).
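To tie the EMP and DEPT discussion to something executable, the sketch below creates the two tables with the primary and foreign keys described above, issued through standard JDBC. The column list is our abbreviation of the classic schema, and the connection URL is a placeholder.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Creates DEPT and EMP with the primary/foreign keys described in the text.
// Column types are our guesses at the classic schema; the JDBC URL is a
// placeholder for whatever database is in use.
public class CreateHrSchema {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:h2:mem:hr";   // placeholder in-memory database
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE dept (" +
                       "  deptno INT PRIMARY KEY," +
                       "  dname  VARCHAR(14))");
            st.execute("CREATE TABLE emp (" +
                       "  empno  INT PRIMARY KEY," +       // unique, never blank
                       "  ename  VARCHAR(10) NOT NULL," +
                       "  mgr    INT," +                   // blank for the president
                       "  comm   DECIMAL(7,2)," +          // blank outside Sales
                       "  deptno INT REFERENCES dept)");   // referential integrity
            // The foreign key now rejects an invalid department number in EMP
            // and blocks deleting a DEPT row that still has employees.
        }
    }
}
```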

Database Graduation Project Foreign Translation: Introduction to Imaging Systems

Appendix 1: Original Text

COLOR SYSTEM OVERVIEW

In the age of office automation and electronic imaging, office documents are being processed, transported, and displayed in a variety of ways. The scope of document processing is enormous; it encompasses page layout, document length, collation, simplex/duplex, color, image quality, finishing, and binding. If the office system is networked, then another dimension of network-related issues (protocol, file format, page description language, compression/decompression, job management, error handling, user interface, and device driver) has to be addressed. Digital color-imaging systems process electronic information from various sources; images may come from a local-area network, a remote-sensing device, different color workstations, or a local scanner. After processing, a document is usually compressed and transmitted to several places via a computer network for viewing, editing, or printing. Moreover, the trend in the industry is moving toward an open environment. This means that various devices such as scanners, computers, workstations, modems, and printers from multiple vendors are assembled into one system. Implementations should be based on public-domain technology rather than proprietary standards. This will allow vendors equal access to the market for system components and give users the widest choice in selecting components. It is a vastly large task to enable the communication of all system components regardless of differences in the operating system, file format, page description language, and information content. Ideally, the exchange should not cause information loss or alteration. A closer look at a document may reveal that it consists of different types of images, primarily text, graphs, and pictorial images. These all have different image characteristics and representations, such as ASCII (American Standard Code for Information Interchange) for text, vector for graphs, and raster for pictorial images. Each type of image and its associated attributes, like the font, font size, halftone, gray level, resolution, and color, have to be dealt with differently. In such a complex environment, there is no doubt that many compatibility problems occur when an image is acquired, transmitted, displayed, and rendered.

With the fast development of Internet technology, large volumes of data arrive in the form of electronic documents from the Web. For the purposes of data integration and data exchange, more and more existing sources, such as relational databases, support public XML export, and an increasing amount of public and private data is described in a semi-structured way. A number of issues need to be addressed when we integrate data from different sources, including heterogeneous and duplicate data, multiple divisions and partners, and changes.

Data heterogeneity results from the use of different information management systems to store data; each system has its own data structure and access methods. Relational database management systems benefit from the universal acceptance of Structured Query Language (SQL) as the primary means of getting answers, whilst document and email repositories are generally accessed using text search engines with varying interfaces and capabilities. Because these systems were not designed with interoperability in mind, each must generally be accessed using source-specific applications or application programming interfaces (APIs).
Another difficulty in data integration is data duplication: different systems represent the same piece of data in different ways. For example, customers may be identified by name in one database but by account number in a second repository, while a third may identify the same customer by email address. Frequently, a required piece of information is derived from multiple data points. Data integration is further complicated when customers do business with multiple divisions within a large company, or with other partners. Similarly, answering questions about the state of a company's supply chain requires access to vendor and distributor information sources. Doing business electronically across the firewall gives rise to security and data ownership issues. Finally, data integration has to deal with different types of changes: changes in business requirements and strategies, in IT systems, mergers and acquisitions, and new product launches. This demands that a data integration solution be sufficiently flexible and adaptable.

One possible solution for the data integration problems mentioned above is to provide XML Web services that break down the barriers between different computing platforms, development environments, and communications networks, allowing organizations to work together electronically without the expense and delay of agreeing on semantics, schemas, interfaces, and other application integration. XML provides the flexibility for handling data with differing structures. As XML is becoming the principal medium for data exchange over the Web and for information integration in general, increasing amounts of public and private data are described in XML. XML data is usually defined in a tree- or graph-based, self-describing object instance model (Boncz and Kersten, 1999). However, semi-structured data is incompatible with the flat structure of relational database tables, and so the growth of XML data requires new and complex query optimization techniques.

Creating XML files with a text editor would be a lot easier if you didn't have to close all those HTML tags. First you have to add the XML declaration and the root opening and closing HTML tags. Next, you start adding element opening and closing tags one at a time. Of course, once you have the initial sequence completed, you can just copy and paste to repeat the required elements. After doing this hundreds of times, you'll be looking for a faster way to create XML files. Some XML editors will automatically add the closing tag after you have finished typing the opening tag, but you still have to type the brackets around the opening tag. I kept thinking this process should be easier. So, I came up with a solution that allows you to create XML files without using HTML tags.

This console application will create an XML file based on user input. Just enter the file name, how many element fields you want, and the name of each field. Optionally, you can include a data type separated by a comma after the field name. You can just enter the field name, because the data type is not required. The structure of the XML file that is created will be compatible with the .NET DataSet and can be easily added to a database. In addition to creating the XML file, an XSL file and an HTML file are also created. The HTML file uses client-side JavaScript to transform the XML file using the XSL file. This provides an easy way to view the new XML file by displaying it in a table layout. The download includes both the source code and the already compiled application.
You can start using the executable right away or customize it to meet your needs. All you will need is the .NET Framework and a text editor, like Notepad, to build this application.

Improving ASP Performance with Data Caching

One of the nicest features of ASP.NET is the ability to cache page content. This can be used to substantially reduce the load on a website's database, which is an obvious attraction if the site uses Microsoft's Access to store data rather than SQL Server. Unfortunately there is no built-in caching system in classic ASP, but it is easy to build one by using the Application object to store data.

When to use ASP caching: caching is most useful for data that changes, but not too often. For example, an e-commerce store could display a list of popular products, or an information site could display a list of press releases. Don't forget that it is also possible to build functionality into the admin part of the site so that the cache is flushed when new content is added to the database. That way the website administrator does not have to wait until the cache times out for new content to appear on the website. Remember that data stored in Application variables is visible to all the users of the website.
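The tool described above is a .NET console application whose source is in the download. Purely as an illustration of the same idea in another language, here is a minimal Java sketch (the file name, field names, and sample rows are invented for the example, and no XML escaping is performed) that writes a DataSet-style XML file:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

public class XmlFileGenerator {

    // Writes a simple <root><row><field>...</field></row></root> document:
    // one <row> per record, one child element per field name.
    public static void writeXml(String fileName, List<String> fields,
                                List<String[]> rows) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(fileName))) {
            out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
            out.println("<root>");
            for (String[] row : rows) {
                out.println("  <row>");
                for (int i = 0; i < fields.size(); i++) {
                    // Values are assumed not to need XML escaping in this sketch.
                    out.printf("    <%s>%s</%s>%n", fields.get(i), row[i], fields.get(i));
                }
                out.println("  </row>");
            }
            out.println("</root>");
        }
    }

    public static void main(String[] args) throws IOException {
        writeXml("students.xml",
                 List.of("name", "id"),
                 List.of(new String[]{"Alice", "1001"},
                         new String[]{"Bob", "1002"}));
    }
}
```

A full generator in the spirit of the article would also emit the companion XSL and HTML viewer files; this sketch covers only the XML step.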

Database Foreign Literature Translation for Graduation Project

Database Management Systems (3rd Edition), Wiley, 2004, 5-12

An Introduction to Database Management Systems
Raghu Ramakrishnan

A database (sometimes spelled data base) is also called an electronic database, referring to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.

A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Using keywords and various sorting commands, users can rapidly search, rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregates of data.

Complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they find the needed information. In short, the DBMS will assemble the needed items from the common database in response to the queries of those who aren't programmers.

A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users. Several major trends are emerging that enhance the value and usefulness of database management systems:

Managers, who require more up-to-date information to make effective decisions.
Customers, who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.
Users, who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.
Organizations, which discover that information has a strategic value and utilize their database systems to gain an edge over their competitors.

The Database Model
A data model describes a way to structure and manipulate the data in a database. The structural part of the model specifies how data should be represented (such as trees, tables, and so on). The manipulative part of the model specifies the operations with which to add, delete, display, maintain, print, search, select, sort, and update the data.
Hierarchical Model
The first database management systems used a hierarchical model; that is, they arranged records into a tree structure. Some records are root records and all others have unique parent records. The structure of the tree is designed to reflect the order in which the data will be used: the record at the root of a tree will be accessed first, then records one level below the root, and so on.

The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you know, an organization chart often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels. Note that within a strict hierarchy, each level of management may have many employees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.

In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field, and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.

Relational Model
A major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management, called the relational model, which uses a table as its data structure.

The relational database is the most widely used database structure. Data is organized into related tables. Each table is made up of rows called records and columns called fields. Each record contains fields of data about some specific item. For example, in a table containing information on employees, a record would contain fields of data such as a person's last name, first name, and street address.

Structured Query Language (SQL) is a query language for manipulating data in a relational database. It is nonprocedural, or declarative: the user need only give an English-like description that specifies the operation and the desired record or combination of records. A query optimizer translates the description into a procedure to perform the database manipulation.

Network Model
The network model creates relationships among data through a linked-list structure in which subordinate records can be linked to more than one parent record. This approach combines records with links, which are called pointers. The pointers are addresses that indicate the location of a record. With the network approach, a subordinate record can be linked to a key record and at the same time itself be a key record linked to other sets of subordinate records. The network model historically has had a performance advantage over other database models. Today, such performance characteristics are only important in high-volume, high-speed transaction processing such as automatic teller machine networks or airline reservation systems.

Both hierarchical and network databases are application specific. If a new application is developed, maintaining the consistency of databases in different applications can be very difficult.
For example, suppose a new pension application is developed. The data are the same, but a new database must be created.

Object Model
The newest approach to database management uses an object model, in which records are represented by entities called objects that can both store data and provide methods or procedures to perform specific tasks. The query language used for the object model is the same object-oriented programming language used to develop the database application. This can create problems because there is no simple, uniform query language such as SQL. The object model is relatively new, and only a few examples of object-oriented databases exist. It has attracted attention because developers who choose an object-oriented programming language want a database based on an object-oriented model.

Distributed Database
Similarly, a distributed database is one in which different parts of the database reside on physically separated computers. One goal of distributed databases is the access of information without regard to where the data might be stored. Keep in mind that once the users and their data are separated, communication and networking concepts come into play.

Distributed databases require software that resides partially in the larger computer. This software bridges the gap between personal and large computers and resolves the problems of incompatible data formats. Ideally, it would make the mainframe databases appear to be large libraries of information, with most of the processing accomplished on the personal computer.

A drawback to some distributed systems is that they are often based on what is called a mainframe-centric model, in which the larger host computer is seen as the master and the terminal or personal computer is seen as a slave. There are some advantages to this approach. With databases under centralized control, many of the problems of data integrity that we mentioned earlier are solved. But today's personal computers, departmental computers, and distributed processing require computers and their applications to communicate with each other on a more equal, peer-to-peer basis. In a database, the client/server model provides the framework for distributing databases.

One way to take advantage of many connected computers running database applications is to distribute the application into cooperating parts that are independent of one another. A client is an end user or computer program that requests resources across a network. A server is a computer running software that fulfills those requests across a network. When the resources are data in a database, the client/server model provides the framework for distributing databases.

A file server is software that provides access to files across a network. A dedicated file server is a single computer dedicated to being a file server. This is useful, for example, if the files are large and require fast access. In such cases, a minicomputer or mainframe would be used as a file server. A distributed file server spreads the files around on individual computers instead of placing them on one dedicated computer.

Advantages of the latter server include the ability to store and retrieve files on other computers and the elimination of duplicate files on each computer. A major disadvantage, however, is that individual read/write requests are being moved across the network, and problems can arise when updating files.
Suppose a user requests a record from a file and changes it while another user requests the same record and changes it too. The solution to this problem is called record locking, which means that the first request makes other requests wait until the first request is satisfied. Other users may be able to read the record, but they will not be able to change it.

A database server is software that services requests to a database across a network. For example, suppose a user types in a query for data on his or her personal computer. If the application is designed with the client/server model in mind, the query-language part on the personal computer simply sends the query across the network to the database server and requests to be notified when the data are found. Examples of distributed database systems can be found in the engineering world. Sun's Network Filing System (NFS), for example, is used in computer-aided engineering applications to distribute data among the hard disks in a network of Sun workstations.

Distributing databases is an evolutionary step because it is logical that data should exist at the location where they are being used. Departmental computers within a large corporation, for example, should have data reside locally, yet those data should be accessible by authorized corporate management when they want to consolidate departmental data. DBMS software will protect the security and integrity of the database, and the distributed database will appear to its users as no different from the non-distributed database.
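To make the relational-model discussion above concrete, here is a minimal JDBC sketch. The employees table, its columns, and the JDBC_URL environment variable are assumptions made for the example, not part of the text above; the SQL string is the nonprocedural, English-like description, and the database's query optimizer turns it into a procedure:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class EmployeeQuery {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection string; any JDBC-compliant database works here.
        String url = System.getenv("JDBC_URL");
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement stmt = conn.prepareStatement(
                 // Declarative: we state WHAT rows we want, not HOW to find them.
                 "SELECT last_name, first_name, street_address "
                 + "FROM employees WHERE last_name = ?")) {
            stmt.setString(1, "Smith");
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("first_name") + " "
                        + rs.getString("last_name") + ", "
                        + rs.getString("street_address"));
                }
            }
        }
    }
}
```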

Database Design Foreign Literature Translation

Original Text

As information technology advances, various management systems have emerged to make daily routines more coherent; wherever possible, using network resources can reasonably reduce the inconvenience and wasted time of manual management. With the accelerating modernization of the 21st century and continuously rising scientific and cultural levels, the rapid growth in student numbers inevitably increases the burden of student information management, and inefficient manual retrieval is wholly incompatible with society's needs. The student information management system is one kind of information management system. With the continuous development of information technology, network techniques have been applied extensively in every trade around us. Schools now use computers to manage academic affairs: the tedious tasks formerly handled by hand can be completed quickly and efficiently. Marks management in particular plays a large role in a school, letting students and teachers consult and understand all relevant information conveniently, quickly, and accurately.

Abstract
Managing a bulky database by manpower is heavy, dull work. Manual data entry, querying, and modification suffer from a great volume of work, low efficiency, and long turnaround, so a computer-based management system brings a substantial change. Because there are so many students in a school, the volume of student information is huge, which makes managing that information complicated and tedious. Aimed at such schools, and following a practical requirements analysis, this system was developed with Visual Basic 6.0. The whole design process follows the principles of simple operation, an attractive and lively interface, and practicality. The student information management system includes functions for system management, basic information management, study management, prize and punishment management, report printing, and so on. Use has proved that the student information management system designed here satisfies a school's demands for managing student information. The thesis introduces the background of development, the functions required, and the design process, and mainly explains the key points of the system design, the design ideas, the difficult techniques, and their solutions.
The student management system greatly reduces the manual effort consumed and makes the management of student data more scientific and reasonable. The most distinctive feature of this system is that the back-end database manages student information in a unified way. The system is divided into system management, student profession management, student file management, school fees management, course management, result management, and report printing. The interface was built with VB, and the modules connect to the back-end database through bound controls. The back-end database consists of roughly the following tables: professional information, fee categories, student jobs, student information, student political status, and user logins.

The system uses a client/server design: the data reside on one server, and a number of LAN workstations access it. Users with different permissions submit personal data, and the back-end database quickly returns the content each user is authorized to see.

Marks management is important work for a school, and the original manual management had many shortcomings: the student population is large, each student's information is complex, the workload is therefore extremely heavy, and statistics and queries are inconvenient. Solving these shortcomings, so that marks management becomes more convenient, faster, and more efficient, has become a key question. With the rapid development of science and technology, school automation is ever more urgent, so it is essential to develop marks-registration software to assist teaching management, improve marks management, and enhance the efficiency of management.

"We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement that holds throughout our speech community and is codified in the patterns of our language ... we cannot talk at all except by subscribing to the organization and classification of data which the agreement decrees." Benjamin Lee Whorf (1897-1941)

The genesis of the computer revolution was in a machine. The genesis of our programming languages thus tends to look like that machine. But computers are not so much machines as they are mind amplification tools ("bicycles for the mind," as Steve Jobs is fond of saying) and a different kind of expressive medium. As a result, the tools are beginning to look less like machines and more like parts of our minds, and also like other forms of expression such as writing, painting, sculpture, animation, and filmmaking. Object-oriented programming (OOP) is part of this movement toward using the computer as an expressive medium.

This chapter will introduce you to the basic concepts of OOP, including an overview of development methods. This chapter, and this book, assumes that you have some programming experience, although not necessarily in C. If you think you need more preparation in programming before tackling this book, you should work through the Thinking in C multimedia seminar, downloadable from . This chapter is background and supplementary material.
Many people do not feel comfortable wading into object-oriented programming without understanding the big picture first. Thus, there are many concepts that are introduced here to give you a solid overview of OOP. However, other people may not get the big-picture concepts until they've seen some of the mechanics first; these people may become bogged down and lost without some code to get their hands on. If you're part of this latter group and are eager to get to the specifics of the language, feel free to jump past this chapter; skipping it at this point will not prevent you from writing programs or learning the language. However, you will want to come back here eventually to fill in your knowledge so you can understand why objects are important and how to design with them.

All programming languages provide abstractions. It can be argued that the complexity of the problems you're able to solve is directly related to the kind and quality of abstraction. By "kind" I mean, "What is it that you are abstracting?" Assembly language is a small abstraction of the underlying machine. Many so-called "imperative" languages that followed (such as FORTRAN, BASIC, and C) were abstractions of assembly language. These languages are big improvements over assembly language, but their primary abstraction still requires you to think in terms of the structure of the computer rather than the structure of the problem you are trying to solve. The programmer must establish the association between the machine model (in the "solution space," which is the place where you're implementing that solution, such as a computer) and the model of the problem that is actually being solved (in the "problem space," which is the place where the problem exists, such as a business).

The object-oriented approach goes a step further by providing tools for the programmer to represent elements in the problem space. This representation is general enough that the programmer is not constrained to any particular type of problem. We refer to the elements in the problem space and their representations in the solution space as "objects." (You will also need other objects that don't have problem-space analogs.) The idea is that the program is allowed to adapt itself to the lingo of the problem by adding new types of objects, so when you read the code describing the solution, you're reading words that also express the problem. This is a more flexible and powerful language abstraction than what we've had before. Thus, OOP allows you to describe the problem in terms of the problem, rather than in terms of the computer where the solution will run. There's still a connection back to the computer: each object looks quite a bit like a little computer; it has a state, and it has operations that you can ask it to perform. However, this doesn't seem like such a bad analogy to objects in the real world; they all have characteristics and behaviors.

Java is making possible the rapid development of versatile programs for communicating and collaborating on the Internet. We're not just talking word processors and spreadsheets here, but also applications to handle sales, customer service, accounting, databases, and human resources: the meat and potatoes of corporate computing.
Java is also making possible a controversial new class of cheap machines called network computers, or NCs, which Sun, IBM, Oracle, Apple, and others hope will proliferate in corporations and our homes.

The way Java works is simple. Unlike ordinary software applications, which take up megabytes on the hard disk of your PC, Java applications, or "applets," are little programs that reside on the network in centralized servers; the network delivers them to your machine only when you need them. Because the applets are so much smaller than conventional programs, they don't take forever to download. Say you want to check out the sales results from the southwest region. You'll use your Internet browser to find the corporate Internet website that dishes up financial data and, with a mouse click or two, ask for the numbers. The server will zap you not only the data, but also the sales-analysis applet you need to display it. The numbers will pop up on your screen in a Java spreadsheet, so you can noodle around with them immediately rather than hassle with importing them to your own spreadsheet program.
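To illustrate the earlier point that each object is like a little computer, with state and operations you can ask it to perform, here is a small hypothetical Java class; the class, its fields, and its numbers are invented for the example and are not from the original text:

```java
// A minimal object: private state plus the operations you can "ask it to perform".
public class Account {
    private long balanceCents;            // state, hidden from the outside

    public Account(long openingCents) {
        this.balanceCents = openingCents;
    }

    public void deposit(long cents) {     // operation that changes the state
        if (cents <= 0) throw new IllegalArgumentException("deposit must be positive");
        balanceCents += cents;
    }

    public long balance() {               // operation that reports the state
        return balanceCents;
    }

    public static void main(String[] args) {
        Account a = new Account(10_000);
        a.deposit(2_500);
        System.out.println(a.balance()); // prints 12500
    }
}
```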

Database Graduation Project Foreign Literature Translation--Selecting the Right Data Acquisition System

Selecting the Right Data Acquisition System

Engineers often must monitor a handful of signals over extended periods of time, and then graph and analyze the resulting data. The need to monitor, record and analyze data arises in a wide range of applications, including the design-verification stage of product development, environmental chamber monitoring, component inspection, benchtop testing and process troubleshooting.

This application note describes the various methods and devices you can use to acquire, record and analyze data, from the simple pen-and-paper method to today's sophisticated data acquisition systems. It discusses the advantages and disadvantages of each method and provides a list of questions that will guide you in selecting the approach that best suits your needs.

Introduction
In geotechnical engineering, we sometimes encounter difficulties such as monitoring instruments distributed over a large area, or dangerous working sites that are hard to access. In such cases, operators may adopt remote control, by which a large amount of measured data is transmitted to an observation room where the data are collected, stored and processed. The automatic data acquisition control system is able to complete tasks such as regular automatic data monitoring, acquisition and storage, featuring high automation, large data storage capacity and reliable performance.

The system is composed of an acquisition control system and a display system, with the following features:
1. Number of channels: 32 (can be increased or decreased according to the user's real needs)
2. Scanning duration: decided by the user, fastest 32 points/second
3. Storage capacity: 20 GB (may be increased or decreased)
4. Display: (a) parameter tables, (b) history trends, (c) column graphics
5. Functions: real-time monitoring, control, and warning
6. Overall dimensions: 50 cm x 50 cm x 72 cm

Data acquisition systems, as the name implies, are products and/or processes used to collect information to document or analyze some phenomenon. In the simplest form, a technician logging the temperature of an oven on a piece of paper is performing data acquisition. As technology has progressed, this type of process has been simplified and made more accurate, versatile, and reliable through electronic equipment. Equipment ranges from simple recorders to sophisticated computer systems. Data acquisition products serve as a focal point in a system, tying together a wide variety of products, such as sensors that indicate temperature, flow, level, or pressure.

Data acquisition technology has taken giant leaps forward over the last 30 to 40 years. For example, 40 years ago, in a typical college lab, the apparatus for tracking the temperature rise in a crucible of sodium tungsten bronze consisted of a thermocouple, a bridge, a lookup table, a pad of paper and a pencil. Today's college students are much more likely to use an automated process and analyze the data on a PC. Today, numerous options are available for gathering data. The optimal choice depends on several factors, including the complexity of the task, the speed and accuracy you require, and the documentation you want. Data acquisition systems range from the simple to the complex, with a range of performance and functionality.

Pencil and paper
The old pencil-and-paper approach is still viable for some situations, and it is inexpensive, readily available, quick and easy to get started.
All you need to do is hook up a digital multimeter (DMM) and begin recording data by hand. Unfortunately, this method is error-prone, tends to be slow, and requires extensive manual analysis. In addition, it works only for a single channel of data; while you can use multiple DMMs, the system quickly becomes bulky and awkward. Accuracy is dependent on the transcriber's level of fastidiousness, and you may need to scale the input manually. For example, if the DMM is not set up to handle temperature sensors, manual scaling will be required. Taking these limitations into account, this is often an acceptable method when you need to perform a quick experiment.

Strip chart recorder
Modern versions of the venerable strip chart recorder allow you to capture data from several inputs. They provide a permanent paper record of the data, and because this data is in graphical format, they allow you to easily spot trends. Once set up, most recorders have sufficient internal intelligence to run unattended, without the aid of either an operator or a computer. Drawbacks include a lack of flexibility and relatively low accuracy, which is often constrained to a few percentage points. You can typically perceive only small changes in the pen plots. While recorders perform well when monitoring a few channels over a long period of time, their value can be limited. For example, they are unable to turn another device on or off. Other concerns include pen and paper maintenance, paper supply and data storage, all of which translate into paper overuse and waste. Still, recorders are fairly easy to set up and operate, and offer a permanent record of the data for quick and simple analysis.

Scanning digital multimeter
Some benchtop DMMs offer an optional scanning capability. A slot in the rear of the instrument accepts a scanner card that can multiplex between multiple inputs, with 8 to 10 channels of mux being fairly common. DMM accuracy and the functionality inherent in the instrument's front panel are retained. Flexibility is limited in that it is not possible to expand beyond the number of channels available in the expansion slot. An external PC usually handles data acquisition and analysis.

PC plug-in cards
PC plug-in cards are single-board measurement systems that take advantage of the ISA or PCI-bus expansion slots in a PC. They often have reading rates as high as 100,000 readings per second. Counts of 8 to 16 channels are common, and acquired data is stored directly in the computer, where it can then be analyzed. Because the card is essentially part of the computer, it is easy to set up tests. PC cards are also relatively inexpensive, in part because they rely on the host PC to provide power, the mechanical enclosure and the user interface.
On the downside, PC plug-in cards often have only 12 bits of resolution, so you can't perceive small variations in the input signal. Furthermore, the electrical environment inside a PC tends to be noisy, with high-speed clocks and bus noise radiated throughout. Often, this electrical interference limits the accuracy of the PC plug-in card to that of a handheld DMM. These cards also measure a fairly limited range of DC voltage. To measure other input signals, such as AC voltage, temperature or resistance, you may need some sort of external signal conditioning. Additional concerns include problematic calibration and overall system cost, especially if you need to purchase additional signal conditioning accessories or a PC to accommodate the cards. Taking that into consideration, PC plug-in cards offer an attractive approach to data acquisition if your requirements fall within the capabilities and limitations of the card.

Data loggers
Data loggers are typically stand-alone instruments that, once set up, can measure, record and display data without operator or computer intervention. They can handle multiple inputs, in some instances up to 120 channels. Accuracy rivals that found in standalone bench DMMs, with performance in the 22-bit, 0.004-percent accuracy range. Some data loggers have the ability to scale measurements, check results against user-defined limits, and output signals for control.

One advantage of using data loggers is their built-in signal conditioning. Most are able to directly measure a number of different inputs without the need for additional signal conditioning accessories. One channel could be monitoring a thermocouple, another a resistive temperature device (RTD), and still another could be looking at voltage. Thermocouple reference compensation for accurate temperature measurement is typically built into the multiplexer cards. A data logger's built-in intelligence helps you set up the test routine and specify the parameters of each channel. Once you have completed the setup, data loggers can run as standalone devices, much like a recorder. They store data locally in internal memory, which can accommodate 50,000 readings or more.

PC connectivity makes it easy to transfer data to your computer for in-depth analysis. Most data loggers are designed for flexibility and simple configuration and operation, and many provide the option of remote site operation via battery packs or other methods. Depending on the A/D conversion technique used, certain data loggers take readings at a relatively slow rate, especially compared to many PC plug-in cards. Still, reading speeds of 250 readings/second are not uncommon. Keep in mind that many of the phenomena being monitored are physical in nature, such as temperature, pressure and flow, and change at a fairly slow rate. Additionally, because of a data logger's superior measurement accuracy, multiple readings and averaging are not necessary, as they often are with PC plug-in solutions.

Data acquisition front ends
Data acquisition front ends are often modular and are typically connected to a PC or controller. They are used in automated test applications for gathering data and for controlling and routing signals in other parts of the test setup. Front-end performance can be very high, with speed and accuracy rivaling the best standalone instruments. Data acquisition front ends are implemented in a number of formats, including VXI versions, such as the Agilent E1419A multifunction measurement and control VXI module, and proprietary card cages. Although front-end cost has been decreasing, these systems can be fairly expensive, and unless you require the high performance they provide, you may find their price to be prohibitive. On the plus side, they do offer considerable flexibility and measurement capability.

Data logger applications
A good, low-cost data logger with a moderate channel count (20 to 60 channels) and a relatively slow scan rate is more than sufficient for many of the applications engineers commonly face.
Some key applications include:
• Product characterization
• Thermal profiling of electronic products
• Environmental testing; environmental monitoring
• Component characterization
• Battery testing
• Building and computer room monitoring
• Process monitoring, evaluation and troubleshooting

No single data acquisition system works for all applications. Answering the following questions may help you decide which will best meet your needs:

1. Does the system match my application? What is the measurement resolution, accuracy and noise performance? How fast does it scan? What transducers and measurement functions are supported? Is it upgradeable or expandable to meet future needs? How portable is it? Can it operate as a standalone instrument?

2. How much does it cost? Is software included, or is it extra? Does it require signal conditioning add-ons? What is the warranty period? How easy and inexpensive is it to calibrate?

3. How easy is it to use? Can the specifications be understood? What is the user interface like? How difficult is it to reconfigure for new applications? Can data be transferred easily to new applications? Which application packages are supported?

Conclusion
Data acquisition can range from pencil, paper and a measuring device to a highly sophisticated system of hardware instrumentation and software analysis tools. The first step for users contemplating the purchase of a data acquisition device or system is to determine the tasks at hand and the desired output, and then select the type and scope of equipment that meets their criteria. All of the sophisticated equipment and analysis tools that are available are designed to help users understand the phenomena they are monitoring. The tools are merely a means to an end.
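As a rough sketch of the data-logger behavior described above (scan at a fixed rate, timestamp each reading, check it against a user-defined limit, and store it), the following Java fragment logs one simulated channel to a CSV file. The sensor read and the 75.0-unit alarm limit are placeholders invented for the example, not part of the article:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.time.Instant;

public class SimpleDataLogger {
    // Placeholder for a real acquisition call (DMM, plug-in card, front end...).
    static double readChannel() {
        return 20.0 + Math.random() * 60.0;   // simulated reading
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        double alarmLimit = 75.0;             // user-defined limit, as data loggers support
        try (PrintWriter log = new PrintWriter(new FileWriter("readings.csv"))) {
            log.println("timestamp,value,alarm");
            for (int scan = 0; scan < 10; scan++) {
                double value = readChannel();
                boolean alarm = value > alarmLimit;   // limit check
                log.printf("%s,%.3f,%b%n", Instant.now(), value, alarm);
                Thread.sleep(1000);                   // one reading per second
            }
        }
    }
}
```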

Graduation Project (Thesis) Foreign Literature Translation (for Student Use)

Nanjing University of Science and Technology, Zijin College
Graduation Project (Thesis) Foreign Literature Translation
Department: Computer Science
Major: Computer Science and Technology
Name: Shen Junnan
Student ID: 060601239
Source: E. Jimenez-Ruiz, R. Berlanga. The Management and Integration of Biomedical Knowledge [M/OL]. Castellon: Spanish Ministry of Education and Science project, 2004 [2005-09098]. /ftp/cs/papers/0609/0609144.pdf
Attachments: 1. Translated text; 2. Original text.

Note: bind this cover page together with the attachments into one volume.

Attachment 1: Translated Text

The Management and Integration of Biomedical Knowledge: Application in the Health-e-Child Project

Abstract: The aim of the Health-e-Child project is to develop an integrated healthcare platform for European paediatrics. To achieve a comprehensive view of children's health, the integration of complex biomedical data, information, and knowledge is required. Ontologies will be used to formally define this domain expertise and will form the basis of the medical knowledge management system. The paper introduces a novel approach to the vertical integration of biomedical knowledge. The approach is clinician-centred and makes it possible to define ontology fragments, to connect those fragments (semantic bridges), and to enrich them (views). The strategy for specifying and capturing fragments, bridges, and views is outlined, with preliminary examples demonstrating the harvesting of biomedical information from hospital databases, biomedical ontologies, and public biomedical databases.

Keywords: vertical knowledge integration, approximate queries, ontology views, semantic bridges

1.1 The medical data integration problem
The integration of data sources has long been a traditional research topic in the database community. The main goal of an integrated database system is to allow users uniform access to distributed, heterogeneous databases. The key element of data integration is the definition of a global schema, but it is worth pointing out that three kinds of global schema must be distinguished: database schemas, conceptual schemas, and domain ontologies. The first describes the types of information stored and serves local queries; the second generalizes such schemas using a more expressive data model, such as the Unified Modeling Language (UML) (both TAMBIS and SEMEDA follow this pattern).

Sample Graduation Project Foreign Literature Translation

Dalian University of Science and Technology
Graduation Project (Thesis) Foreign Literature Translation
Student name:    Major and class:    Supervisor:    Title:    Unit:    Head of teaching section:    Completion date: April 15, 2016

Translation Equivalence

Despite the fact that the world is becoming a global village, translation remains a major way for languages and cultures to interact and influence each other. And name translation, especially government name translation, occupies a quite significant place in international exchange.

Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text. While interpreting (the facilitating of oral or sign-language communication between users of different languages) antedates writing, translation began only after the appearance of written literature. There exist partial translations of the Sumerian Epic of Gilgamesh (ca. 2000 BCE) into Southwest Asian languages of the second millennium BCE. Translators always risk inappropriate spill-over of source-language idiom and usage into the target-language translation. On the other hand, spill-overs have imported useful source-language calques and loanwords that have enriched the target languages. Indeed, translators have helped substantially to shape the languages into which they have translated. Due to the demands of business documentation consequent to the Industrial Revolution that began in the mid-18th century, some translation specialties have become formalized, with dedicated schools and professional associations. Because of the laboriousness of translation, since the 1940s engineers have sought to automate translation (machine translation) or to mechanically aid the human translator (computer-assisted translation). The rise of the Internet has fostered a world-wide market for translation services and has facilitated language localization.

It is generally accepted that translation, not as a separate entity, blooms into flower under such circumstances as culture, societal functions, politics and power relations. Nowadays, the field of translation studies is immersed in abundantly diversified translation standards, some of which are presented by renowned figures and are rather authoritative. In translation practice, however, how should we select the so-called translation standards to serve as our guidelines in the translation process, and how should we adopt translation standards to evaluate a translation product?

In the macro-context of the flourishing of linguistic theories, theorists in the translation circle keep to the golden law of the principle of equivalence. The theory of translation equivalence is the central issue in Western translation theories, and the presentation of this theory gave great impetus to the development and improvement of translation theory. It is not difficult to discover that it is the theory of translation equivalence that serves as the guideline in government name translation in China. Name translation, as defined, is the replacement of the name in the source language by an equivalent name or other words in the target language. Translating Chinese government names into English, similarly, is replacing the Chinese government name with an equivalent in English.

Metaphorically speaking, translation is often described as a moving trajectory going from A to B along a path, or as a container carrying something across from A to B. This view is commonly held by both translation practitioners and theorists in the West. In this view, they do not expect that this trajectory or something will change its identity as it moves or as it is carried.
In China, to translate is also understood by many people normally as "to translate the whole text sentence by sentence and paragraph by paragraph, without any omission, addition, or other changes." In both views, the source text and the target text must be "the same". This helps explain the etymological source of the term "translation equivalence". It is in essence a word which describes the relationship between the ST and the TT.

Equivalence means the state, fact or property of being equivalent. It is widely used in several scientific fields such as chemistry and mathematics. Therefore, it has come to have a strong scientific meaning that is rather absolute and concise. Influenced by this, translation equivalence has also come to have an absolute denotation, though it was first applied in translation study as a general word. From a linguistic point of view, it can be divided into three sub-types, i.e., formal equivalence, semantic equivalence, and pragmatic equivalence. In actual translation, it frequently happens that they cannot be obtained at the same time, thus forming a kind of relative translation equivalence in terms of quality. In terms of quantity, sometimes the ST and TT are not equivalent either. Absolute translation equivalence in both quality and quantity, even though obtainable, is limited to a few cases.

The following is a brief discussion of translation equivalence study conducted by three influential Western scholars: Eugene Nida, Andrew Chesterman and Peter Newmark. It is expected that their studies can instruct GNT study in China and provide translators with insightful methods.

Nida's definition of translation is: "Translation consists in reproducing in the receptor language the closest natural equivalent of the source language message, first in terms of meaning and secondly in terms of style." It is a replacement of textual material in one language (SL) by equivalent textual material in another language (TL). The translator must strive for equivalence rather than identity. In a sense, this is just another way of emphasizing the reproduction of the message rather than the conservation of the form of the utterance. The message in the receptor language should match as closely as possible the different elements in the source language, to reproduce as literally and meaningfully as possible the form and content of the original. Translation equivalence is an empirical phenomenon discovered by comparing SL and TL texts, and it is a useful operational concept like the term "unit of translation".

Nida argues that there are two different types of equivalence, namely formal equivalence and dynamic equivalence. Formal correspondence focuses attention on the message itself, in both form and content, whereas dynamic equivalence is based upon "the principle of equivalent effect". Formal correspondence consists of a TL item which represents the closest equivalent of an ST word or phrase. Nida and Taber make it clear that there are not always formal equivalents between language pairs. Therefore, formal equivalents should be used wherever possible if the translation aims at achieving formal rather than dynamic equivalence. The use of formal equivalents might at times have serious implications in the TT, since the translation will not be easily understood by the target readership.
According to Nida and Taber, formal correspondence distorts the grammatical and stylistic patterns of the receptor language, and hence distorts the message, causing the receptor to misunderstand or to labor unduly hard.

Dynamic equivalence is based on what Nida calls "the principle of equivalent effect", where the relationship between receptor and message should be substantially the same as that which existed between the original receptors and the message. The message has to be modified to the receptor's linguistic needs and cultural expectations, and aims at complete naturalness of expression. Naturalness is a key requirement for Nida. He defines the goal of dynamic equivalence as seeking the closest natural equivalent to the SL message. This receptor-oriented approach considers adaptations of grammar, of lexicon and of cultural references to be essential in order to achieve naturalness; the TL should not show interference from the SL, and the "foreignness" of the ST setting is minimized.

Nida is in favor of the application of dynamic equivalence as a more effective translation procedure. Thus, the product of the translation process, that is, the text in the TL, must have the same impact on the different readers it addresses. Only in Nida and Taber's edition is it clearly stated that dynamic equivalence in translation is far more than mere correct communication of information.

As Andrew Chesterman points out in his recent book Memes of Translation, equivalence is one of the five elements of translation theory, standing shoulder to shoulder with source-target, untranslatability, free-vs-literal, and all-writing-is-translating in importance. Pragmatically speaking, observed Chesterman, "the only true examples of equivalence (i.e., absolute equivalence) are those in which an ST item X is invariably translated into a given TL as Y, and vice versa. Typical examples would be words denoting numbers (with the exception of contexts in which they have culture-bound connotations, such as 'magic' or 'unlucky'), certain technical terms (oxygen, molecule) and the like. From this point of view, the only true test of equivalence would be invariable back-translation. This, of course, is unlikely to occur except in the case of a small set of lexical items, or perhaps simple isolated syntactic structures".

Peter Newmark, departing from Nida's receptor-oriented line, argues that the success of equivalent effect is "illusory" and that the conflict of loyalties and the gap between emphasis on source and target language will always remain the overriding problem in translation theory and practice. He suggests narrowing the gap by replacing the old terms with those of semantic and communicative translation. The former attempts to render, as closely as the semantic and syntactic structures of the second language allow, the exact contextual meaning of the original, while the latter "attempts to produce on its readers an effect as close as possible to that obtained on the readers of the original." Newmark's description of communicative translation resembles Nida's dynamic equivalence in the effect it is trying to create on the TT reader, while semantic translation has similarities to Nida's formal equivalence.

Meanwhile, Newmark points out that only by combining both semantic and communicative translation can we achieve the goal of keeping the "spirit" of the original.
Semantic translation requires the translator to retain the aesthetic value of the original, trying his best to keep the linguistic features and characteristic style of the author. According to semantic translation, the translator should always retain the semantic and syntactic structures of the original. Deletion and abridgement lead to distortion of the author's intention and his writing style.

Graduation Project (Thesis) Foreign Literature and Translation (Template)

Dalian Neusoft University of Information
Graduation Project (Thesis) Foreign Literature and Translation
Department:
Major:
Class:
Name:
Student ID:
Dalian Neusoft University of Information

Formatting Requirements for the Foreign Literature and Translation

I. Binding requirements
1. The original foreign-language material (photocopy or printout) comes first, the translation second, and the supervisor's grade assessment last.
2. The translation must be typed on a computer and printed.
3. Print on A4 paper and bind along the left edge.

II. Writing requirements
1. The foreign literature must be relevant to the chosen topic.
2. Undergraduate students' translations must contain at least 4,000 Chinese characters; higher-vocational students' translations must contain at least 2,000.

III. Format requirements
1. Font: Chinese text in small-four SimSun; English text in small-four Times New Roman, consistent throughout, with a first-line indent of two Chinese characters and 1.5 line spacing.
2. Page numbers: consecutive Arabic numerals in small-five Times New Roman, centered at the bottom of the page.
3. Header: a single rule, with the header note in five-point SimSun, centered: "Dalian Neusoft University of Information Undergraduate Graduation Project (Thesis) Translation".


University Graduation Project, Warehouse Management System Database: Computer Science Foreign Reference, Original and Translation

Hebei University of Engineering Graduation Thesis (Project): English Reference (Photocopy of the Original) and Translation

Data Warehouse

A data warehouse provides architectures and tools for business executives to systematically organize, understand, and use their data for decision making. A great number of organizations have found that, in today's competitive and fast-developing world, data warehouses are a valuable tool. In the last several years, many companies have spent millions of dollars building enterprise-wide data warehouses. Many people feel that, as competition intensifies in every industry, data warehousing has become the latest must-have marketing weapon: a way to retain customers by learning more about their needs.

"Then," you may ask, full of intrigue, "what exactly is a data warehouse?" Data warehouses have been defined in many ways, making a rigorous definition difficult. Loosely speaking, a data warehouse is a database that is maintained separately from an organization's operational databases. Data warehouse systems allow the integration of a variety of application systems, provide a solid platform for the unified analysis of historical data, and support information processing.

According to W. H. Inmon, a leading architect in the construction of data warehouse systems, "a data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision making." This short but comprehensive definition points out the major features of a data warehouse. The four keywords, subject-oriented, integrated, time-variant, and nonvolatile, distinguish data warehouses from other data repository systems. A data warehouse is kept physically separate from the application data of the operational environment. Due to this separation, a data warehouse does not require transaction processing, recovery, and concurrency control mechanisms. It usually requires only two kinds of data access: the initial loading of data and the access of data.

In sum, a data warehouse is a semantically consistent data store that serves as a physical implementation of a decision-support data model and stores the information an enterprise needs to make strategic decisions. A data warehouse is also often viewed as an architecture, constructed by integrating data from heterogeneous sources, that supports structured and ad hoc queries, analytical reporting, and decision making.

"Fine," you now ask, "then what is data warehousing?" Based on the discussion above, we view data warehousing as the process of constructing and using data warehouses. The construction of a data warehouse requires data integration, data cleaning, and data consolidation. The utilization of a data warehouse often requires a collection of decision-support technologies. This allows "knowledge workers" (for example, managers, analysts, and executives) to use the warehouse to obtain an overview of the data quickly and conveniently, and to make sound decisions based on the information it contains. Some authors use the term "data warehousing" to refer to the process of constructing a data warehouse, and the term "warehouse DBMS" to refer to the management and use of data warehouses.
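As a toy illustration of the data integration and cleaning steps just mentioned, the sketch below copies rows from a hypothetical operational table into a hypothetical warehouse table, unifying one inconsistently encoded field along the way; all table, column, and environment-variable names are invented for the example:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TinyEtlStep {
    public static void main(String[] args) throws SQLException {
        try (Connection src = DriverManager.getConnection(System.getenv("SRC_URL"));
             Connection dw = DriverManager.getConnection(System.getenv("DW_URL"));
             PreparedStatement read = src.prepareStatement(
                 "SELECT customer_id, country FROM orders");
             PreparedStatement write = dw.prepareStatement(
                 "INSERT INTO dw_customer_country (customer_id, country_iso) VALUES (?, ?)")) {
            try (ResultSet rs = read.executeQuery()) {
                while (rs.next()) {
                    write.setInt(1, rs.getInt("customer_id"));
                    // "Cleaning": unify the inconsistent encodings used by the source.
                    String c = rs.getString("country").trim().toUpperCase();
                    write.setString(2, c.startsWith("UNITED S") ? "US" : c);
                    write.addBatch();
                }
            }
            write.executeBatch();   // consolidated load into the warehouse table
        }
    }
}
```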

Database Design Foreign Literature Translation--Java development 2.0: Sharding with Hibernate Shards

Undergraduate Graduation Project (Thesis) Foreign Literature Translation (Class of 2011)
Translation title: Java development 2.0: Sharding with Hibernate Shards

Notes on translation requirements:
I. Requirements for the translated text
1. The translation must contain at least 2,000 Chinese characters.
2. The format of the translated text follows the conventions of the thesis body (headings, fonts, sizes, figures and tables, source information, and so on).
3. The source information of the original is listed at the end of the translation, corresponding to the references section of the thesis body, under the heading "Source information of the original text", including: 1) the authors of the original; 2) the book or paper title; 3) the source of the original: publisher or journal name, publication date or issue number, and the page numbers covered by the translated part, or the web address.

II. Original foreign-language materials (electronic text or digitized images)
1. The original must contain at least 10,000 printed characters (figures and tables excluded).
2. If the original is on paper, digitize it (as images) and paste it after the translation in the original-materials section, but attach a paper photocopy of the original after the translation when binding.

Supervisor's comments:
Supervisor's signature:    Date:

1. Translation

Java development 2.0: Sharding with Hibernate Shards
Scaling relational databases horizontally
Andrew Glover, author and developer, Beacon50

Abstract: Sharding isn't for every website, but it is one way of meeting the demands of big data. For some shops, sharding means being able to keep a trusted RDBMS without sacrificing data scalability or system performance. In this installment of the Java development 2.0 series, you can find out when sharding works and when it doesn't, and then get hands-on sharding a simple application capable of handling terabytes of data.

Date: 31 August 2010
Level: Intermediate

When a relational database tries to store terabytes of data in a single table, overall performance usually degrades. Indexing all of that data is clearly time-consuming, for reads and for writes alike. NoSQL datastores are especially suited to storing large data, but NoSQL is a non-relational database approach. For developers who prefer the ACID-ity and entity structure of a relational database, and for projects that require that structure, sharding is an exciting alternative.
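As a minimal sketch of what configuring Hibernate Shards looks like, the fragment below builds a shard strategy from the library's documented strategy interfaces: round-robin selection for placing new objects, query-all-shards resolution for lookups, and sequential shard access. The class names follow the Hibernate Shards tutorial; treat them as approximate if your version differs:

```java
import java.util.List;

import org.hibernate.shards.ShardId;
import org.hibernate.shards.loadbalance.RoundRobinShardLoadBalancer;
import org.hibernate.shards.strategy.ShardStrategy;
import org.hibernate.shards.strategy.ShardStrategyFactory;
import org.hibernate.shards.strategy.ShardStrategyImpl;
import org.hibernate.shards.strategy.access.SequentialShardAccessStrategy;
import org.hibernate.shards.strategy.resolution.AllShardsShardResolutionStrategy;
import org.hibernate.shards.strategy.selection.RoundRobinShardSelectionStrategy;

public class ShardStrategyConfig {
    // New objects are spread round-robin across shards; lookups ask every shard.
    public static ShardStrategyFactory buildShardStrategyFactory() {
        return new ShardStrategyFactory() {
            public ShardStrategy newShardStrategy(List<ShardId> shardIds) {
                RoundRobinShardLoadBalancer balancer =
                        new RoundRobinShardLoadBalancer(shardIds);
                return new ShardStrategyImpl(
                        new RoundRobinShardSelectionStrategy(balancer),
                        new AllShardsShardResolutionStrategy(shardIds),
                        new SequentialShardAccessStrategy());
            }
        };
    }
}
```

A ShardedSessionFactory built from such a strategy then behaves like an ordinary Hibernate SessionFactory while spreading data across the configured shards.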

Graduation Project Foreign Literature Translation: English Original

Harmonic source identification and current separation in distribution systems

Yong Zhao a,b, Jianhua Li a, Daozhi Xia a,*
a Department of Electrical Engineering, Xi'an Jiaotong University, 28 West Xianning Road, Xi'an, Shaanxi 710049, China
b Fujian Electric Power Dispatch and Telecommunication Center, 264 Wusi Road, Fuzhou, Fujian 350003, China

Abstract
To effectively diminish harmonic distortions, the locations of harmonic sources have to be identified and their currents have to be separated from those absorbed by conventional linear loads connected to the same CCP. In this paper, based on the intrinsic difference between linear and nonlinear loads in their V-I characteristics, and by utilizing a new simplified harmonic source model, a new principle for harmonic source identification and harmonic current separation is proposed. By using this method, not only can the existence of a harmonic source be determined, but the contributions of the harmonic source and the linear loads to harmonic voltage distortion can also be distinguished. The detailed procedure based on least squares approximation is given. The effectiveness of the approach is illustrated by test results on a composite load.
(c) 2004 Elsevier Ltd. All rights reserved.

Keywords: Distribution system; Harmonic source identification; Harmonic current separation; Least squares approximation

1. Introduction
Harmonic distortion has experienced a continuous increase in distribution systems owing to the growing use of nonlinear loads. Many studies have shown that harmonics may cause serious effects on power systems, communication systems, and various apparatus [1-3]. Harmonic voltages at each point on a distribution network are not only determined by the harmonic currents produced by harmonic sources (nonlinear loads), but are also related to all linear loads (harmonic current sinks) as well as to the structure and parameters of the network. To effectively evaluate and diminish the harmonic distortion in power systems, the locations of harmonic sources have to be identified and the responsibility for the distortion caused by the related individual customers has to be separated.

As to harmonic source identification, most commonly the negative harmonic power is considered as essential evidence of an existing harmonic source [4-7]. Several approaches aiming at evaluating the contribution of an individual customer can also be found in the literature. Schemes based on power factor measurement to penalize a customer's harmonic currents are discussed in Ref. [8]. However, it would be unfair to use economic penalization if we could not distinguish whether the measured harmonic current comes from a nonlinear load or from a linear load.

In fact, the intrinsic difference between linear and nonlinear loads lies in their V-I characteristics. Harmonic currents of a linear load are in linear proportion to the supply harmonic voltages of the same order, whereas the harmonic currents of a nonlinear load are complex nonlinear functions of the supply fundamental and harmonic voltage components of all orders.
To successfully identify and isolate a harmonic source in an individual customer, or in several customers connected at the same point in the network, the V-I characteristics should be involved, and measurements of voltages and currents under several different supply conditions should be carried out. As the existing approaches based on measurements of voltage and current spectra or harmonic power at a certain instant cannot reflect the V-I characteristics, they may not provide reliable information about the existence and contribution of harmonic sources, which has been substantiated by theoretical analysis and experimental research [9,10].

In this paper, to approximate the nonlinear characteristics and to facilitate the work of harmonic source identification and harmonic current separation, a new simplified harmonic source model is proposed. Then, based on the difference between linear and nonlinear loads in their V-I characteristics, and by utilizing the harmonic source model, a new principle for harmonic source identification and harmonic current separation is presented. By using the method, not only can the existence of a harmonic source be determined, but the contributions of the harmonic sources and the linear loads can also be separated. A detailed procedure for harmonic source identification and harmonic current separation based on least squares approximation is presented. Finally, test results on a composite load containing linear and nonlinear loads are given to illustrate the effectiveness of the approach.

2. New principle for harmonic source identification and current separation
Consider a composite load to be studied in a distribution system, which may represent an individual consumer or a group of customers supplied by a common feeder in the system. To identify whether it contains any harmonic source, and to separate the harmonic currents generated by the harmonic sources from those absorbed by conventional linear loads in the measured total harmonic currents of the composite load, the following assumptions are made.

(a) The supply voltage and the load currents are both periodic waveforms with period $T$, so that they can be expressed by the Fourier series

$$v(t)=\sum_{h=1}^{\infty}V_h\sin\!\left(\frac{2\pi h t}{T}+\theta_h\right),\qquad i(t)=\sum_{h=1}^{\infty}I_h\sin\!\left(\frac{2\pi h t}{T}+\phi_h\right)\tag{1}$$

The fundamental-frequency and harmonic components can further be represented by the corresponding phasors

$$\dot V_h=V_{hr}+jV_{hi}=V_h\angle\theta_h,\qquad \dot I_h=I_{hr}+jI_{hi}=I_h\angle\phi_h,\qquad h=1,2,3,\ldots,n\tag{2}$$

(b) During the period of identification, the composite load is stationary, i.e. both its composition and the circuit parameters of all individual loads remain unchanged.

Under the above assumptions, the relationship between the total harmonic currents of the harmonic sources (denoted by subscript N) in the composite load and the supply voltage, i.e. the V-I characteristics, can be described by the nonlinear equation

$$i_N(t)=f\big(v(t)\big)\tag{3}$$

and can also be represented in terms of phasors as

$$\dot I_{Nh}=\begin{bmatrix}I_{Nhr}(V_1,V_{2r},V_{2i},\ldots,V_{nr},V_{ni})\\ I_{Nhi}(V_1,V_{2r},V_{2i},\ldots,V_{nr},V_{ni})\end{bmatrix},\qquad h=2,3,\ldots,n\tag{4}$$

Note that in Eq. (4) the initial (reference) time of the voltage waveform has been selected such that the phase angle $\theta_1$ becomes 0, so that $V_{1i}=0$ and $V_{1r}=V_1$ in Eq. (2) for simplicity.

The V-I characteristics of the linear part (denoted by subscript L) of the composite load can be represented by its equivalent harmonic admittance $Y_{Lh}=G_{Lh}+jB_{Lh}$, and the total harmonic currents absorbed by the linear part can be described as

$$\begin{bmatrix}I_{Lhr}\\ I_{Lhi}\end{bmatrix}=\begin{bmatrix}G_{Lh}&-B_{Lh}\\ B_{Lh}&G_{Lh}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix},\qquad h=2,3,\ldots,n\tag{5}$$
From Eqs. (4) and (5), the total harmonic currents absorbed by the composite load can be expressed as

$$\dot I_h=\begin{bmatrix}I_{hr}\\ I_{hi}\end{bmatrix}=\begin{bmatrix}I_{Lhr}\\ I_{Lhi}\end{bmatrix}-\begin{bmatrix}I_{Nhr}(V_1,V_{2r},V_{2i},\ldots,V_{nr},V_{ni})\\ I_{Nhi}(V_1,V_{2r},V_{2i},\ldots,V_{nr},V_{ni})\end{bmatrix},\qquad h=2,3,\ldots,n\tag{6}$$

As the V-I characteristics of a harmonic source are nonlinear, Eq. (6) can be used directly neither for harmonic source identification nor for harmonic current separation. To facilitate the work in practice, simplified methods are needed. The common practice in harmonic studies is to represent nonlinear loads by means of current harmonic sources or equivalent Norton models [11,12]. However, these models are not of sufficient precision, and a new simplified model is needed.

From the engineering point of view, the variations of $V_{hr}$ and $V_{hi}$ ordinarily fall within a ±3% bound of the rated bus voltage, while the change of $V_1$ is usually less than ±5%. Within such a range of supply voltages, the following simplified linear relation is used in this paper to approximate the harmonic source characteristics of Eq. (4):

$$\dot I_{Nh}=\begin{bmatrix}a_{h0}+a_{h1}V_1+a_{h2r}V_{2r}+a_{h2i}V_{2i}+\cdots+a_{hnr}V_{nr}+a_{hni}V_{ni}\\ b_{h0}+b_{h1}V_1+b_{h2r}V_{2r}+b_{h2i}V_{2i}+\cdots+b_{hnr}V_{nr}+b_{hni}V_{ni}\end{bmatrix},\qquad h=2,3,\ldots,n\tag{7}$$

The precision and superiority of this simplified model will be illustrated in Section 4 by test results on several kinds of typical harmonic sources.

The total harmonic current, Eq. (6), then becomes

$$\dot I_h=\begin{bmatrix}G_{Lh}&-B_{Lh}\\ B_{Lh}&G_{Lh}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}-\begin{bmatrix}a_{h0}+a_{h1}V_1+a_{h2r}V_{2r}+\cdots+a_{hni}V_{ni}\\ b_{h0}+b_{h1}V_1+b_{h2r}V_{2r}+\cdots+b_{hni}V_{ni}\end{bmatrix},\qquad h=2,3,\ldots,n\tag{8}$$

It can be seen from the above equations that the harmonic currents of the harmonic sources (nonlinear loads) and of the linear loads differ from each other intrinsically in their V-I characteristics. The harmonic current component drawn by the linear loads is uniquely determined by the harmonic voltage component of the same order in the supply voltage. The harmonic current component of the nonlinear loads, on the other hand, contains not only a term caused by the same-order harmonic voltage, but also a constant term and terms caused by the fundamental and harmonic voltages of all other orders. This property will be used for identifying the existence of harmonic sources in the composite load.

As the test results in Section 4 demonstrate, the summation of the constant term and the component related to the fundamental-frequency voltage is dominant in the harmonic current of nonlinear loads, whereas the other components are negligible; a further approximation of Eq. (7) can therefore be made. Let

$$\dot I'_{Nh}=\begin{bmatrix}a_{h0}+a_{h1}V_1+\sum_{k=2,\,k\neq h}^{n}(a_{hkr}V_{kr}+a_{hki}V_{ki})\\[2pt] b_{h0}+b_{h1}V_1+\sum_{k=2,\,k\neq h}^{n}(b_{hkr}V_{kr}+b_{hki}V_{ki})\end{bmatrix},\qquad \dot I''_{Nh}=\begin{bmatrix}a_{hhr}&a_{hhi}\\ b_{hhr}&b_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}$$

$$\dot I''_{Lh}=\dot I_{Lh}-\dot I''_{Nh}=\begin{bmatrix}a''_{hhr}&a''_{hhi}\\ b''_{hhr}&b''_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix},\qquad \begin{bmatrix}a''_{hhr}&a''_{hhi}\\ b''_{hhr}&b''_{hhi}\end{bmatrix}=\begin{bmatrix}G_{Lh}&-B_{Lh}\\ B_{Lh}&G_{Lh}\end{bmatrix}-\begin{bmatrix}a_{hhr}&a_{hhi}\\ b_{hhr}&b_{hhi}\end{bmatrix},\qquad h=2,3,\ldots,n$$

The total harmonic current of the composite load then becomes

$$\dot I_h=\dot I''_{Lh}-\dot I'_{Nh}=\begin{bmatrix}a''_{hhr}&a''_{hhi}\\ b''_{hhr}&b''_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}-\begin{bmatrix}a_{h0}+a_{h1}V_1+\sum_{k=2,\,k\neq h}^{n}(a_{hkr}V_{kr}+a_{hki}V_{ki})\\[2pt] b_{h0}+b_{h1}V_1+\sum_{k=2,\,k\neq h}^{n}(b_{hkr}V_{kr}+b_{hki}V_{ki})\end{bmatrix},\qquad h=2,3,\ldots,n\tag{9}$$

By removing $\dot I''_{Nh}$ from the harmonic current of the nonlinear load and adding it to the harmonic current of the linear load, $\dot I'_{Nh}$ can then be deemed the harmonic current of the nonlinear load, while $\dot I''_{Lh}$ can be taken as the harmonic current of the linear load.
Thus \dot I'_{Nh} = 0 means that the composite load contains no harmonic sources, while \dot I'_{Nh} \ne 0 signifies that harmonic sources may exist in the composite load. As the neglected term \dot I''_{Nh} is not dominant, this simplification clearly does not introduce a significant error into the total harmonic current of the nonlinear load; it is, however, what makes the harmonic source identification and current separation possible.

3. Identification procedure

In order to identify the existence of harmonic sources in a composite load, the parameters in Eq. (9) should first be determined, i.e.

C_{hr} = [a_{h0} \; a_{h1} \; a_{h2r} \; a_{h2i} \; \cdots \; a'_{hhr} \; a'_{hhi} \; \cdots \; a_{hnr} \; a_{hni}]
C_{hi} = [b_{h0} \; b_{h1} \; b_{h2r} \; b_{h2i} \; \cdots \; b'_{hhr} \; b'_{hhi} \; \cdots \; b_{hnr} \; b_{hni}]

For this purpose, measurements of different supply voltages and of the corresponding harmonic currents of the composite load should be performed repeatedly, several times within a short period, while keeping the composite load stationary. The change of supply voltage can, for example, be obtained by switching some shunt capacitors in or out, disconnecting a parallel transformer, or changing the tap position of transformers with OLTC. The least squares approach can then be used to estimate the parameters from the measured voltages and currents. The identification procedure is as follows.

(1) Perform the test m (m \ge 2n) times to obtain the measured fundamental frequency and harmonic voltage and current phasors V_h^{(k)} \angle \theta_h^{(k)}, I_h^{(k)} \angle \phi_h^{(k)}, k = 1, 2, \ldots, m, h = 1, 2, \ldots, n.

(2) For k = 1, 2, \ldots, m, shift the phasors so that they correspond to a zero fundamental voltage phase angle (\theta_1^{(k)} = 0) and decompose them into orthogonal components, i.e.

V_{1r}^{(k)} = V_1^{(k)}, \qquad V_{1i}^{(k)} = 0
V_{hr}^{(k)} = V_h^{(k)} \cos(\theta_h^{(k)} - h \theta_1^{(k)}), \qquad V_{hi}^{(k)} = V_h^{(k)} \sin(\theta_h^{(k)} - h \theta_1^{(k)})
I_{hr}^{(k)} = I_h^{(k)} \cos(\phi_h^{(k)} - h \theta_1^{(k)}), \qquad I_{hi}^{(k)} = I_h^{(k)} \sin(\phi_h^{(k)} - h \theta_1^{(k)}), \qquad h = 2, 3, \ldots, n

(3) Let

V^{(k)} = [1 \; V_1^{(k)} \; V_{2r}^{(k)} \; V_{2i}^{(k)} \; \cdots \; V_{hr}^{(k)} \; V_{hi}^{(k)} \; \cdots \; V_{nr}^{(k)} \; V_{ni}^{(k)}]^T, \qquad k = 1, 2, \ldots, m
X = [V^{(1)} \; V^{(2)} \; \cdots \; V^{(m)}]^T
W_{hr} = [I_{hr}^{(1)} \; I_{hr}^{(2)} \; \cdots \; I_{hr}^{(m)}]^T
W_{hi} = [I_{hi}^{(1)} \; I_{hi}^{(2)} \; \cdots \; I_{hi}^{(m)}]^T

Minimize \sum_{k=1}^{m} (I_{hr}^{(k)} - C_{hr} V^{(k)})^2 and \sum_{k=1}^{m} (I_{hi}^{(k)} - C_{hi} V^{(k)})^2, and determine the parameters C_{hr} and C_{hi} by the least squares approach as [13]

C_{hr}^T = (X^T X)^{-1} X^T W_{hr}, \qquad C_{hi}^T = (X^T X)^{-1} X^T W_{hi}    (10)

(4) Using Eq. (9), calculate \dot I''_{Lh} and \dot I'_{Nh} with the obtained C_{hr} and C_{hi}; the existence of harmonic sources is thereby identified and the harmonic current separated.

It can be seen that in the course of model construction, harmonic source identification, and harmonic current separation, the operating condition of the supply system must be changed, and the harmonic voltages and currents measured, m times; the more accurate the model, the more manipulations are necessary. To compromise between the number of switching operations needed and the accuracy of the results, the proposed models of the nonlinear load (Eq. (7)) and of the composite load (Eq. (9)) can be further simplified by considering only the dominant terms of Eq. (7), i.e.

\dot I_{Nh} = \begin{bmatrix} I_{Nhr} \\ I_{Nhi} \end{bmatrix} = \begin{bmatrix} a_{h0} + a_{h1} V_1 \\ b_{h0} + b_{h1} V_1 \end{bmatrix} + \begin{bmatrix} a_{hhr} & a_{hhi} \\ b_{hhr} & b_{hhi} \end{bmatrix} \begin{bmatrix} V_{hr} \\ V_{hi} \end{bmatrix}, \qquad h = 2, 3, \ldots, n    (11)

\dot I'_{Nh} = \begin{bmatrix} a_{h0} + a_{h1} V_1 \\ b_{h0} + b_{h1} V_1 \end{bmatrix}

\dot I_h = \begin{bmatrix} I_{hr} \\ I_{hi} \end{bmatrix} = \dot I''_{Lh} - \dot I'_{Nh} = \begin{bmatrix} a'_{hhr} & a'_{hhi} \\ b'_{hhr} & b'_{hhi} \end{bmatrix} \begin{bmatrix} V_{hr} \\ V_{hi} \end{bmatrix} - \begin{bmatrix} a_{h0} + a_{h1} V_1 \\ b_{h0} + b_{h1} V_1 \end{bmatrix}, \qquad h = 2, 3, \ldots, n    (12)

In this case the corresponding quantities in the previous procedure become

C_{hr} = [a_{h0} \; a_{h1} \; a'_{hhr} \; a'_{hhi}], \qquad C_{hi} = [b_{h0} \; b_{h1} \; b'_{hhr} \; b'_{hhi}], \qquad V^{(k)} = [1 \; V_1^{(k)} \; V_{hr}^{(k)} \; V_{hi}^{(k)}]^T

Similarly, \dot I'_{Nh} and \dot I''_{Lh} can still be taken as the harmonic currents caused by the nonlinear load and by the linear load, respectively.
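The procedure above is straightforward to prototype. The following Python sketch works with the simplified regressor of Eq. (12) for the real component of a single harmonic order (the imaginary component is handled identically): it generates synthetic measurements over the same voltage ranges used in the tests of Section 4, estimates the coefficients by the least squares formula of Eq. (10), and then separates the nonlinear-load and linear-load shares. The values of a_h0 and a_h1 are borrowed from the Table 1 figures quoted in Section 4, while the linear-part coefficients, the noise level, and the final detection threshold are invented for the demonstration.

import numpy as np

# Hedged prototype of the identification with the simplified model,
# Eqs. (10)-(12), for one harmonic order. Synthetic measurements stand
# in for the m switching tests.
rng = np.random.default_rng(0)
m = 12                                    # number of tests (m >= 2n in general)
a_h0, a_h1 = -0.0074, 0.3939              # from Table 1 (fifth harmonic)
ap_hhr, ap_hhi = 0.9, -0.4                # a'_hhr, a'_hhi: hypothetical values

V1 = rng.uniform(0.95, 1.05, m)           # fundamental magnitude, as in Section 4
Vhr = rng.uniform(-0.03, 0.03, m)         # same-order harmonic voltage components
Vhi = rng.uniform(-0.03, 0.03, m)

# Eq. (12): I_hr = a'_hhr*Vhr + a'_hhi*Vhi - (a_h0 + a_h1*V1), plus noise
I_hr = ap_hhr * Vhr + ap_hhi * Vhi - (a_h0 + a_h1 * V1)
I_hr = I_hr + rng.normal(0.0, 1e-4, m)

# Eq. (10): least squares fit against the regressor [1, V1, Vhr, Vhi]
X = np.column_stack([np.ones(m), V1, Vhr, Vhi])
c, *_ = np.linalg.lstsq(X, I_hr, rcond=None)

I_N = -(c[0] + c[1] * V1)                 # separated nonlinear-load share, I'_Nhr
I_L = c[2] * Vhr + c[3] * Vhi             # separated linear-load share, I''_Lhr
print("recovered a_h0, a_h1:", -c[0], -c[1])
# Crude illustrative detection check: the source term dominates when present
print("harmonic source present:", abs(I_N).mean() > 10 * abs(I_L).mean())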
4. Experimental validation

4.1. Model accuracy

To demonstrate the validity of the proposed harmonic source models, simulations are performed on the following three kinds of typical nonlinear loads: a three-phase six-pulse rectifier, a single-phase capacitor-filtered rectifier, and an ac arc furnace under stationary operating conditions. Diagrams of the three-phase six-pulse rectifier and of the single-phase capacitor-filtered rectifier are shown in Figs. 1 and 2 [14,15], respectively; the V–I characteristic of the arc furnace is simplified as shown in Fig. 3 [16]. The harmonic currents used in the simulation tests are precisely calculated from the loads' mathematical models. As to the supply voltage, V_1^{(k)} is assumed to be uniformly distributed between 0.95 and 1.05, and V_{hr}^{(k)} and V_{hi}^{(k)} (k = 1, 2, \ldots, m) are uniformly distributed between -0.03 and 0.03, with base voltage 10 kV and base power 1 MVA.

Fig. 1. Diagram of three-phase six-pulse rectifier.
Fig. 2. Diagram of single-phase capacitor-filtered rectifier.
Fig. 3. Approximate V–I characteristics of arc furnace.

Three different models, namely the harmonic current source (constant current) model, the Norton model, and the proposed simplified model, are simulated and estimated by the least squares approach for comparison. For the three-phase six-pulse rectifier with fundamental current I_1 = 1.7621, the parameters of the simplified model for the fifth and seventh harmonic currents are listed in Table 1.

To compare the accuracy of the three different models, the means and standard deviations of the errors in I_{hr}, I_{hi}, and I_h between the estimated values and the simulated actual values are calculated for each model. The error comparison of the three models on the three-phase six-pulse rectifier is shown in Table 2, where \mu_{hr}, \mu_{hi}, and \mu_{ha} denote the means and \sigma_{hr}, \sigma_{hi}, and \sigma_{ha} represent the standard deviations. Note that I_1 and I_h in Table 2 are the current values caused by the rated, purely sinusoidal supply voltage. Error comparisons on the single-phase capacitor-filtered rectifier and on the arc furnace load are listed in Tables 3 and 4, respectively.

It can be seen from the above test results that the accuracy of the proposed model differs for different nonlinear loads, and that for a given load the accuracy decreases as the harmonic order increases. The proposed model is, however, always more accurate than the other two models. It can also be seen from Table 1 that the components a_{50} + a_{51} V_1 and b_{50} + b_{51} V_1 are around -0.0074 + 0.3939 = 0.3865 and 0.0263 + 0.0623 = 0.0886, while the components a_{55} V_{5r} and b_{55} V_{5i} will not exceed 0.2676 × 0.03 ≈ 0.008 and 0.9675 × 0.03 ≈ 0.029, respectively. These results show that the fifth harmonic current caused by the summation of the constant term and the fundamental voltage term is about ten times that caused by the harmonic voltage of the same order, so the former is dominant in the harmonic current of the three-phase six-pulse rectifier. The same situation holds for the other harmonic orders and for the other nonlinear loads.
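The dominance argument in the last paragraph can be verified by direct arithmetic on the quoted Table 1 coefficients, as the following few lines show.

# Direct check of the dominance argument, using the coefficient values
# quoted from Table 1 for the fifth harmonic of the six-pulse rectifier.
a50, a51 = -0.0074, 0.3939
b50, b51 = 0.0263, 0.0623
a55, b55 = 0.2676, 0.9675
V1, V5_bound = 1.0, 0.03                # rated fundamental, +/-3% harmonic bound

print(a50 + a51 * V1, b50 + b51 * V1)   # about 0.3865 and 0.0886
print(a55 * V5_bound, b55 * V5_bound)   # at most about 0.008 and 0.029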
4.2. Effectiveness of harmonic source identification and current separation

To show the effectiveness of the proposed harmonic source identification method, simulations are performed on a composite load consisting of a linear load (30%) and of nonlinear loads: a three-phase six-pulse rectifier (30%), a single-phase capacitor-filtered rectifier (20%), and an ac arc furnace load (20%). For simplicity, only the errors of the third order harmonic currents of the linear and nonlinear loads are listed in Table 5, where I_{N3} denotes the third order harmonic current corresponding to the rated, purely sinusoidal supply voltage; \mu_{N3r}, \mu_{N3i}, \mu_{N3a} and \mu_{L3r}, \mu_{L3i}, \mu_{L3a} are the error means of I_{N3r}, I_{N3i}, I_{N3} and I_{L3r}, I_{L3i}, I_{L3} between the simulated actual values and the estimated values; \sigma_{N3r}, \sigma_{N3i}, \sigma_{N3a} and \sigma_{L3r}, \sigma_{L3i}, \sigma_{L3a} are the corresponding standard deviations.

Table 2. Error comparison on the three-phase six-pulse rectifier.
Table 3. Error comparison on the single-phase capacitor-filtered rectifier.

It can be seen from Table 5 that the current errors of the linear load are smaller than those of the nonlinear loads. This is because the errors in the nonlinear load currents are due both to the model error and to neglecting the components related to the harmonic voltages of the same order, whereas only the latter components introduce errors into the linear load currents. Moreover, it can be found that the more precise the composite load model is, the smaller the error introduced. However, even with the very simple model of Eq. (12), the existence of harmonic sources can be correctly identified, and the harmonic currents of the linear and nonlinear loads can be effectively separated.

Table 4. Error comparison on the arc furnace.
Table 5. Error comparison on the composite load.

5. Conclusions

In this paper, from an engineering point of view, a new linear model is first presented for representing harmonic sources. On the basis of the intrinsic difference between linear and nonlinear loads in their V–I characteristics, and by using the proposed harmonic source model, a new concise principle for identifying harmonic sources and for separating the harmonic source currents from those of the linear loads is proposed. A detailed modeling and identification procedure is also developed, based on the least squares approximation approach. Test results on several kinds of typical harmonic sources reveal that the simplified model is of sufficient precision and is superior to other existing models. The effectiveness of the harmonic source identification approach is illustrated on a composite nonlinear load.

Acknowledgements

The authors wish to acknowledge the financial support of the National Natural Science Foundation of China for this project, under Research Program Grant No. 59737140.

References

[1] IEEE Working Group on Power System Harmonics. The effects of power system harmonics on power system equipment and loads. IEEE Trans Power Apparatus Syst 1985;9:2555–63.
[2] IEEE Working Group on Power System Harmonics. Power line harmonic effects on communication line interference. IEEE Trans Power Apparatus Syst 1985;104(9):2578–87.
[3] IEEE Task Force on the Effects of Harmonics. Effects of harmonics on equipment. IEEE Trans Power Deliv 1993;8(2):681–8.
[4] Heydt GT. Identification of harmonic sources by a state estimation technique. IEEE Trans Power Deliv 1989;4(1):569–75.
[5] Ferach JE, Grady WM, Arapostathis A. An optimal procedure for placing sensors and estimating the locations of harmonic sources in power systems. IEEE Trans Power Deliv 1993;8(3):1303–10.
[6] Ma H, Girgis AA. Identification and tracking of harmonic sources in a power system using Kalman filter. IEEE Trans Power Deliv 1996;11(3):1659–65.
[7] Hong YY, Chen YC.
Application of algorithms and artificial intelligence approach for locating multiple harmonics in distribution systems. IEE Proc Gener Transm Distrib 1999;146(3):325–9.
[8] Mceachern A, Grady WM, Moncerief WA, Heydt GT, Mcgranaghan M. Revenue and harmonics: an evaluation of some proposed rate structures. IEEE Trans Power Deliv 1995;10(1):474–82.
[9] Xu W. Power direction method cannot be used for harmonic source detection. Power Engineering Society Summer Meeting, IEEE; 2000. p. 873–6.
[10] Sasdelli R, Peretto L. A VI-based measurement system for sharing the customer and supply responsibility for harmonic distortion. IEEE Trans Instrum Meas 1998;47(5):1335–40.
[11] Arrillaga J, Bradley DA, Bodger PS. Power system harmonics. New York: Wiley; 1985.
[12] Thunberg E, Soder L. A Norton approach to distribution network modeling for harmonic studies. IEEE Trans Power Deliv 1999;14(1):272–7.
[13] Giordano AA, Hsu FM. Least squares estimation with applications to digital signal processing. New York: Wiley; 1985.
[14] Xia D, Heydt GT. Harmonic power flow studies. Part I. Formulation and solution. IEEE Trans Power Apparatus Syst 1982;101(6):1257–65.
[15] Mansoor A, Grady WM, Thallam RS, Doyle MT, Krein SD, Samotyj MJ. Effect of supply voltage harmonics on the input current of single-phase diode bridge rectifier loads. IEEE Trans Power Deliv 1995;10(3):1416–22.
[16] Varadan S, Makram EB, Girgis AA. A new time domain voltage source model for an arc furnace using EMTP. IEEE Trans Power Deliv 1996;11(3):1416–22.

Computer Graduation Project Foreign Literature Translation --- Data Warehouse


DATA WAREHOUSE

Data warehousing provides architectures and tools for business executives to systematically organize, understand, and use their data to make strategic decisions. A large number of organizations have found that data warehouse systems are valuable tools in today's competitive, fast evolving world. In the last several years, many firms have spent millions of dollars in building enterprise-wide data warehouses. Many people feel that with competition mounting in every industry, data warehousing is the latest must-have marketing weapon: a way to keep customers by learning more about their needs.

“So,” you may ask, full of intrigue, “what exactly is a data warehouse?”

Data warehouses have been defined in many ways, making it difficult to formulate a rigorous definition. Loosely speaking, a data warehouse refers to a database that is maintained separately from an organization's operational databases. Data warehouse systems allow for the integration of a variety of application systems. They support information processing by providing a solid platform of consolidated, historical data for analysis.

According to W. H. Inmon, a leading architect in the construction of data warehouse systems, “a data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision making process.” This short but comprehensive definition presents the major features of a data warehouse. The four keywords, subject-oriented, integrated, time-variant, and nonvolatile, distinguish data warehouses from other data repository systems, such as relational database systems, transaction processing systems, and file systems. Let's take a closer look at each of these key features.

(1) Subject-oriented: A data warehouse is organized around major subjects, such as customer, vendor, product, and sales. Rather than concentrating on the day-to-day operations and transaction processing of an organization, a data warehouse focuses on the modeling and analysis of data for decision makers. Hence, data warehouses typically provide a simple and concise view around particular subject issues by excluding data that are not useful in the decision support process.

(2) Integrated: A data warehouse is usually constructed by integrating multiple heterogeneous sources, such as relational databases, flat files, and on-line transaction records. Data cleaning and data integration techniques are applied to ensure consistency in naming conventions, encoding structures, attribute measures, and so on.

(3) Time-variant: Data are stored to provide information from a historical perspective (e.g., the past 5-10 years). Every key structure in the data warehouse contains, either implicitly or explicitly, an element of time.

(4) Nonvolatile: A data warehouse is always a physically separate store of data transformed from the application data found in the operational environment. Due to this separation, a data warehouse does not require transaction processing, recovery, and concurrency control mechanisms. It usually requires only two operations in data accessing: initial loading of data and access of data.

In sum, a data warehouse is a semantically consistent data store that serves as a physical implementation of a decision support data model and stores the information on which an enterprise needs to make strategic decisions.
A data warehouse is also often viewed as an architecture, constructed by integrating data from multiple heterogeneous sources to support structured and/or ad hoc queries, analytical reporting, and decision making.

“OK,” you now ask, “what, then, is data warehousing?”

Based on the above, we view data warehousing as the process of constructing and using data warehouses. The construction of a data warehouse requires data integration, data cleaning, and data consolidation. The utilization of a data warehouse often necessitates a collection of decision support technologies. This allows “knowledge workers” (e.g., managers, analysts, and executives) to use the warehouse to quickly and conveniently obtain an overview of the data, and to make sound decisions based on information in the warehouse. Some authors use the term “data warehousing” to refer only to the process of data warehouse construction, while the term warehouse DBMS is used to refer to the management and utilization of data warehouses. We will not make this distinction here.

“How are organizations using the information from data warehouses?” Many organizations are using this information to support business decision making activities, including:

(1) increasing customer focus, which includes the analysis of customer buying patterns (such as buying preference, buying time, budget cycles, and appetites for spending).
(2) repositioning products and managing product portfolios by comparing the performance of sales by quarter, by year, and by geographic regions, in order to fine-tune production strategies.
(3) analyzing operations and looking for sources of profit.
(4) managing the customer relationships, making environmental corrections, and managing the cost of corporate assets.

Data warehousing is also very useful from the point of view of heterogeneous database integration. Many organizations typically collect diverse kinds of data and maintain large databases from multiple, heterogeneous, autonomous, and distributed information sources. To integrate such data, and provide easy and efficient access to it, is highly desirable, yet challenging. Much effort has been spent in the database industry and research community towards achieving this goal.

The traditional database approach to heterogeneous database integration is to build wrappers and integrators (or mediators) on top of multiple, heterogeneous databases. A variety of data joiner and data blade products belong to this category. When a query is posed to a client site, a metadata dictionary is used to translate the query into queries appropriate for the individual heterogeneous sites involved. These queries are then mapped and sent to local query processors. The results returned from the different sites are integrated into a global answer set. This query-driven approach requires complex information filtering and integration processes, and competes for resources with processing at local sources. It is inefficient and potentially expensive for frequent queries, especially for queries requiring aggregations.

Data warehousing provides an interesting alternative to the traditional approach of heterogeneous database integration described above. Rather than using a query-driven approach, data warehousing employs an update-driven approach in which information from multiple, heterogeneous sources is integrated in advance and stored in a warehouse for direct querying and analysis. Unlike on-line transaction processing databases, data warehouses do not contain the most current information.
However, a data warehouse brings high performance to the integrated heterogeneous database system since data are copied, preprocessed, integrated, annotated, summarized, and restructured into one semantic data store. Furthermore, query processing in data warehouses does not interfere with the processing at local sources. Moreover, data warehouses can store and integrate historical information and support complex multidimensional queries. As a result, data warehousing has become very popular in industry.

1. Differences between operational database systems and data warehouses

Since most people are familiar with commercial relational database systems, it is easy to understand what a data warehouse is by comparing these two kinds of systems.

The major task of on-line operational database systems is to perform on-line transaction and query processing. These systems are called on-line transaction processing (OLTP) systems. They cover most of the day-to-day operations of an organization, such as purchasing, inventory, manufacturing, banking, payroll, registration, and accounting. Data warehouse systems, on the other hand, serve users or “knowledge workers” in the role of data analysis and decision making. Such systems can organize and present data in various formats in order to accommodate the diverse needs of the different users. These systems are known as on-line analytical processing (OLAP) systems.

The major distinguishing features between OLTP and OLAP are summarized as follows.

(1) Users and system orientation: An OLTP system is customer-oriented and is used for transaction and query processing by clerks, clients, and information technology professionals. An OLAP system is market-oriented and is used for data analysis by knowledge workers, including managers, executives, and analysts.

(2) Data contents: An OLTP system manages current data that, typically, are too detailed to be easily used for decision making. An OLAP system manages large amounts of historical data, provides facilities for summarization and aggregation, and stores and manages information at different levels of granularity. These features make the data easier to use in informed decision making.

(3) Database design: An OLTP system usually adopts an entity-relationship (ER) data model and an application-oriented database design. An OLAP system typically adopts either a star or snowflake model, and a subject-oriented database design.

(4) View: An OLTP system focuses mainly on the current data within an enterprise or department, without referring to historical data or data in different organizations. In contrast, an OLAP system often spans multiple versions of a database schema, due to the evolutionary process of an organization. OLAP systems also deal with information that originates from different organizations, integrating information from many data stores. Because of their huge volume, OLAP data are stored on multiple storage media.

(5) Access patterns: The access patterns of an OLTP system consist mainly of short, atomic transactions. Such a system requires concurrency control and recovery mechanisms.
However, accesses to OLAP systems are mostly read-only operations (since most data warehouses store historical rather than up-to-date information), although many could be complex queries.

Other features which distinguish between OLTP and OLAP systems include database size, frequency of operations, performance metrics, and so on.

2. But, why have a separate data warehouse?

“Since operational databases store huge amounts of data,” you observe, “why not perform on-line analytical processing directly on such databases instead of spending additional time and resources to construct a separate data warehouse?”

A major reason for such a separation is to help promote the high performance of both systems. An operational database is designed and tuned from known tasks and workloads, such as indexing and hashing using primary keys, searching for particular records, and optimizing “canned” queries. On the other hand, data warehouse queries are often complex. They involve the computation of large groups of data at summarized levels, and may require the use of special data organization, access, and implementation methods based on multidimensional views. Processing OLAP queries in operational databases would substantially degrade the performance of operational tasks.

Moreover, an operational database supports the concurrent processing of several transactions. Concurrency control and recovery mechanisms, such as locking and logging, are required to ensure the consistency and robustness of transactions. An OLAP query often needs read-only access of data records for summarization and aggregation. Concurrency control and recovery mechanisms, if applied for such OLAP operations, may jeopardize the execution of concurrent transactions and thus substantially reduce the throughput of an OLTP system.

Finally, the separation of operational databases from data warehouses is based on the different structures, contents, and uses of the data in these two systems. Decision support requires historical data, whereas operational databases do not typically maintain historical data. In this context, the data in operational databases, though abundant, is usually far from complete for decision making. Decision support requires consolidation (such as aggregation and summarization) of data from heterogeneous sources, resulting in high quality, cleansed and integrated data. In contrast, operational databases contain only detailed raw data, such as transactions, which need to be consolidated before analysis. Since the two systems provide quite different functionalities and require different kinds of data, it is necessary to maintain separate databases.
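To make the OLTP/OLAP contrast above concrete, the following small Python sketch (the table schema and figures are invented for illustration) runs a short OLTP-style update transaction and an OLAP-style read-only aggregation against an in-memory SQLite database:

import sqlite3

# Hedged sketch contrasting OLTP and OLAP access patterns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, year INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("north", 2022, 120.0), ("north", 2023, 150.0),
     ("south", 2022, 90.0), ("south", 2023, 110.0)],
)

# OLTP-style access: a short, record-level transaction.
conn.execute(
    "UPDATE sales SET amount = amount + 5 WHERE region = ? AND year = ?",
    ("north", 2023),
)
conn.commit()

# OLAP-style access: a read-only aggregation over historical data.
for region, total in conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"
):
    print(region, total)
conn.close()

The first statement touches one record under transaction control, while the second scans and summarizes history, which is exactly the difference in access patterns described above.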


Appendix A

(1) Overview of Web databases

The rise and development of Internet technology has created enormous information flows and data flows in the larger social system, accompanied by the vigorous growth of Web technology, which quickly took the mainstream position among Internet technologies. Database technology, for its part, has by now matured considerably and is particularly suited to organizing and managing large amounts of data. Because of the huge volume of information on the Internet, and with the Internet's further development, the combination of Web technology and database technology, that is, Web database technology, has become the most popular new generation of information dissemination technology on today's Internet and has profoundly changed the face of network applications. Network libraries, networked information retrieval systems, networked information publishing, and information management systems based on the client/server model all depend on Web database technology.

For governments, developing e-government has become a focus of worldwide attention. The rapid development of information technology in the 1990s, and especially the popularization of Internet technology, has made e-government one of the most important fields of contemporary informatization. Web database technology not only brings together all the advantages of the Web and of databases, but also makes full use of the large body of existing database information resources, enabling users to retrieve and browse database content conveniently from a Web browser. Combining Web technology with databases and developing dynamic Web database applications has therefore become an indispensable part of building e-government systems.

(2) The emergence of Web databases

Database technology has long been the means by which people store all kinds of information. Today the Internet has spread to every corner of the world and has knit the whole world together, and Web database technology has penetrated to every corner of the globe along with it. The Web is one of the fastest-developing technologies on the Internet, but a Web that serves merely as an information-sharing platform with publishing capability is static: the server responds to a user's request by sending a file, and the user receives the file and displays it. This mode of operation cannot exchange dynamic information in real time, the interaction between client and server is very limited, and it cannot satisfy the needs of modern business activity. Later, the introduction of CGI technology, and especially of the Java and JavaScript languages, made it possible for Web pages to convey dynamic information conveniently and to interact with users. By applying Java and JavaScript, and later languages such as VBScript and Perl, Web pages with animation, sound, graphics and images, and all kinds of special effects can easily be designed. Implementing such interactive, dynamic Web pages requires a large foundation of data resources. For these data resources to be accessed efficiently, database systems naturally had to step onto the Internet stage, and so the Web database came into being.

(3) Advantages of accessing databases through the Web

An important aspect of database applications is data access. The access methods currently provided by many database systems, however, are either a character-mode query interface or programmatic access, and both are rather difficult to use. The RAD tools developed in recent years, such as VB, Delphi, and PowerBuilder, make it convenient to build graphical database access software, but such tools require programming skill from the developer, and the resulting programs cannot run across platforms. Furthermore, as user requirements change, software developed with RAD tools may need new functions or changes to its interface; if the software is in wide use, updating it becomes a very large task. The development of Internet technology offers a solution to these problems: once a Web server has been set up, the database can be accessed through the Web server, and the problems mentioned above are resolved. Compared with the traditional approach, accessing databases through the Web has the following advantages.

1) Ready-made browser software is used, so no database front end needs to be developed. If the database can be accessed through the Web, no client-side program has to be written; all database applications can be delivered through the browser. The interface is uniform, training costs are reduced, and a broad user base can access database information with ease.

2) The standards are unified and the development process is simple. HTML, the organizing format of Web information, is an international standard; developers need learn little more than the HTML language, while users need learn only one interface, the browser interface.

3) Cross-platform support. Ready-made browsers are available on almost every operating system, and an HTML document written for one Web server can be viewed by browsers on all platforms, achieving cross-platform operation.

(4) The basic model of a Web database system

Owing to its ease of use and practicality, the Web quickly took the dominant position and has become the most widely used, most promising, and most attractive information dissemination technology today. Web service, however, only provides a platform for information interaction on the Internet; to realize a true Internet, people, enterprises, and society must be merged with the Internet, and that depends on the realization of informatized applications. Electronic commerce rests on Web network technology and database technology, with Web database technology as its core; supporting e-commerce has become one of the focal points of competition among the major vendors, and the development of Web databases has become a new hot topic and a difficult problem. A Web database is one that fuses database technology well with Web technology, making the database system an important organic component of the Web and achieving a seamless, organic combination of the database and network technology.

Early Internet database systems adopted a two-tier client/server structure, which developed greatly in the early period of Internet applications. As Internet applications became widespread, the complexity and irregularity of information resources on the Internet left this two-tier database structure unequal to the development of various online applications: it could not manage the network's complex document-type and multimedia data resources, it lacked open standards, and it generally could not run across platforms. Databases therefore had to be adapted, for example by adding object-oriented components to increase the capability of handling many complex data types, adding various kinds of middleware to extend Internet-based application capability, and using application servers to interpret and execute the scripts embedded in HTML so as to handle the display, maintenance, and output of database data in Internet applications and its conversion to HTML format. The Internet-oriented application model of the database thus typically appears as a three-tier or four-tier multi-layer structure. Under this multi-tier architecture, the methodology of applying databases on the Internet is settled, and ordinary applications such as publishing, retrieving, maintaining, and managing database data on the network become easier and simpler.

(5) Development trends of Web databases

In recent years the database market has developed at high speed, and e-commerce has become one of the development priorities of enterprises of every kind; some have even predicted that e-commerce is very likely to establish a new form of virtual commerce, and even virtual industry. E-commerce is supported by database technology and network technology, and database technology is its core. More and more users now assign the database a position of great importance, mainly because users put application software and application requirements first, and application software development depends directly on database development tools. In addition, as chip technology develops, hardware is becoming less and less distinctive, and hardware specifications will become a secondary consideration. For industry applications, it may now be a matter of choosing the database vendor first and the hardware vendor afterwards. It is precisely this change in user demand that offers database vendors new opportunities for development. It can be predicted that in the near future the Web database will become a hot research topic in the database field.

1) Non-structured databases. Information can be divided into two broad classes. One class can be represented by numbers or by a uniform structure and is called structured data, for example digits and symbols; the other class cannot be represented by numbers or a uniform structure, for example text, images, sound, and web pages, and is called non-structured data. Structured data belongs to non-structured data as a special case of it. With the development of network technology, and especially the rapid development of the Internet and intranets, the quantity of non-structured data grows daily, and the limitations of the relational database, which is mainly intended for managing structured data, are exposed ever more clearly. Database technology has accordingly entered the “post-relational database era”: the era of non-structured databases based on network applications. A non-structured database is, simply put, a database with variable fields.

2) Heterogeneous database systems. Interrelated databases can easily be brought together to create a single virtual database, also called a heterogeneous database system. A heterogeneous database system is a collection of related database systems that can realize data sharing and transparent access. Each member database system already existed before joining the heterogeneous database system and has its own DBMS. The heterogeneity is mainly reflected in the following respects: heterogeneous computer architectures; heterogeneous underlying operating systems; and heterogeneity of the DBMSs themselves. Its goal is the merging and sharing of the data and information resources, hardware resources, and human resources of different databases. At present, the integration of heterogeneous database systems, and the data warehousing and data mining built upon it, has become one of the key topics of network database research. Well-known database vendors at home and abroad likewise treat heterogeneous database systems as a focus of competition, studying how to integrate multiple traditional relational databases, possibly distributed in different places, and to improve and develop them into virtual heterogeneous database systems and data warehouses that better serve enterprise informatization and e-commerce.

(6) A brief introduction to Web database technologies

From the viewpoint of technical development, CGI was once the only channel for accessing databases through a browser. Technical solutions such as SAPI and JDBC appeared subsequently, and recently the ASP and JSP technologies have become popular. These technologies are introduced one by one below.

1) CGI. CGI is the specification for external programs run by a Web server. Programs written according to CGI can extend the server's functions and accomplish work that the server itself cannot do; when an external program executes, it can generate an HTML document and return the document to the Web server. A CGI application can interact with the browser, and it can also communicate with external data sources such as database servers through a database API. For example, a CGI program can fetch data from a database server, format it into an HTML document, and send the document to the browser; it can also take data obtained from the browser and put it into the database. Almost all server software supports CGI, and a developer can write CGI programs in any of the languages the Web server supports, including the popular C, C++, VB, and Delphi. According to the application environment, CGI can further be divided into standard CGI and indirect CGI.

A CGI application runs as an independent external program, competing for processor resources with the other programs on the server, which slows execution down. Developing Web-enabled applications with CGI is also relatively difficult, and in the course of database access the management of connection state is important: without state management, every request from the browser entails the establishment and release of a connection, which is inefficient. CGI provides no state management function. In addition, the database interface must be written by hand in the proprietary SQL of a particular database server, so its portability is poor.
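As a hedged illustration of the CGI pattern just described (the database file, table, and column names are invented for the example, and the database is assumed to exist already), the following minimal Python script reads a query parameter passed by the Web server, fetches matching rows from a local SQLite database, and writes an HTML document back to the browser:

#!/usr/bin/env python3
# Minimal CGI sketch: query a local SQLite database and emit HTML.
import html
import os
import sqlite3
from urllib.parse import parse_qs

# CGI passes the request's query string through an environment variable.
params = parse_qs(os.environ.get("QUERY_STRING", ""))
name = params.get("name", [""])[0]

conn = sqlite3.connect("customers.db")    # hypothetical database file
rows = conn.execute(
    "SELECT id, name FROM customers WHERE name LIKE ?",
    (f"%{name}%",),                       # parameterized query
).fetchall()
conn.close()

# A CGI program writes the HTTP headers, a blank line, then the body
# to standard output; the Web server relays it to the browser.
print("Content-Type: text/html")
print()
print("<html><body><table>")
for rid, rname in rows:
    print(f"<tr><td>{rid}</td><td>{html.escape(rname)}</td></tr>")
print("</table></body></html>")

Each request runs the whole script, including the connect and close calls, which is precisely the per-request connection overhead criticized in the paragraph above.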

2) JDBC. The launch of Java brought vitality and dynamism to the Web. Internet users can download Java applets from a Web server and run them in a local browser. Such downloaded applets behave like local programs and can independently access local resources and the resources of other servers. The original Java language, however, had no database access capability, and as applications deepened, the call for Java to provide database access functions grew louder and louder. To prevent the emergence of divergent, mutually incompatible extensions of Java for database access, JavaSoft defined JDBC as the database access API of the Java language. JDBC is the first standard API supporting Java database access, and it makes connecting Java programs to databases much easier. JDBC is functionally equivalent to ODBC and gives developers a unified database access interface. JDBC has by now won the support of many vendors, including Borland, Oracle, and Sybase, and most of the currently popular database systems provide their own JDBC drivers.

3) JSP. JSP, short for Java Server Pages, is a dynamic web page technology developed by Sun on the basis of the Java language. Combining the Servlet and JavaBean technologies, JSP separates page logic from page design and display, and supports reusable, component-based design, making the development of Web-based applications fast and simple.
