Computer Science_Foreign Literature_Translation
Computer Science Foreign Literature Translation 6
Foreign literature translation (to be translated into about 2,000 Chinese characters):

As research laboratories become more automated, new problems are arising for laboratory managers. Rarely does a laboratory purchase all of its automation from a single equipment vendor. As a result, managers are forced to spend money training their users on numerous different software packages while purchasing support contracts for each. This suggests a problem of scalability. In the ideal world, managers could use the same software package to control systems of any size, from single instruments such as pipettors or readers to large robotic systems with up to hundreds of instruments. If such a software package existed, managers would only have to train users on one platform and would be able to source software support from a single vendor.

If automation software is written to be scalable, it must also be flexible. Having a platform that can control systems of any size is far less valuable if the end user cannot control every device type they need to use. Similarly, if the software cannot connect to the customer's Laboratory Information Management System (LIMS) database, it is of limited usefulness. The ideal automation software platform must therefore have an open architecture to provide such connectivity.

Two strong reasons to automate a laboratory are increased throughput and improved robustness. It does not make sense to purchase high-speed automation if the controlling software does not maximize throughput of the system. The ideal automation software, therefore, would make use of redundant devices in the system to increase throughput. For example, let us assume that a plate-reading step is the slowest task in a given method. It would make sense that if the system operator connected another identical reader into the system, the controller software should be able to use both readers, cutting the total time of the reading step in half. While resource pooling provides a clear throughput advantage, it can also be used to make the system more robust. For example, if one of the two readers were to experience some sort of error, the controlling software should be smart enough to route all samples to the working reader without taking the entire system offline.

Now that one embodiment of an ideal automation control platform has been described, let us see how the use of C++ helps make achieving this ideal possible.

DISCUSSION

C++: An Object-Oriented Language

Developed in 1983 by Bjarne Stroustrup of Bell Labs, C++ helped propel the concept of object-oriented programming into the mainstream. The term "object-oriented programming language" is a familiar phrase that has been in use for decades. But what does it mean? And why is it relevant for automation software? Essentially, a language that is object-oriented provides three important programming mechanisms: encapsulation, inheritance, and polymorphism.

Encapsulation is the ability of an object to maintain its own methods (or functions) and properties (or variables). For example, an "engine" object might contain methods for starting, stopping, or accelerating, along with properties for "RPM" and "oil pressure". Further, encapsulation allows an object to hide private data from any entity outside the object. The programmer can control access to the object's data by marking methods or properties as public, protected, or private.
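To make the encapsulation idea concrete, here is a minimal C++ sketch of the engine object described above. It is illustrative only and not from the original article; the class name and members are assumptions chosen for the example:

    #include <iostream>

    // Illustrative "engine" object: public methods expose behavior,
    // while the private data members stay hidden from callers.
    class Engine {
    public:
        void Start()      { running_ = true; rpm_ = 800; }
        void Stop()       { running_ = false; rpm_ = 0; }
        void Accelerate() { if (running_) rpm_ += 500; }
        int  RPM() const  { return rpm_; }  // controlled, read-only access
    private:
        bool running_ = false;  // hidden state: external code cannot
        int  rpm_     = 0;      // modify these members directly
    };

    int main() {
        Engine engine;
        engine.Start();
        engine.Accelerate();
        std::cout << "RPM: " << engine.RPM() << "\n";  // prints "RPM: 1300"
        // engine.rpm_ = 9999;  // would not compile: rpm_ is private
        return 0;
    }

Because callers can reach the engine's state only through its public methods, the class is free to change how it stores or computes RPM later without breaking any external code.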
This access control helps abstract away the inner workings of a class while making it obvious to a caller which methods and properties are intended to be used externally.

Inheritance allows one object to be a superset of another object. For example, one can create an object called Automobile that inherits from Vehicle. The Automobile object has access to all non-private methods and properties of Vehicle, plus any additional methods or properties that make it uniquely an automobile.

Polymorphism is an extremely powerful mechanism that allows various inherited objects to exhibit different behaviors when the same named method is invoked upon them. For example, let us say our Vehicle object contains a method called CountWheels. When we invoke this method on our Automobile, we learn that the Automobile has four wheels. However, when we call this method on an object called Bus, we find that the Bus has 10 wheels.

Together, encapsulation, inheritance, and polymorphism help promote code reuse, which is essential to meeting our requirement that the software package be flexible. A vendor can build up a comprehensive library of objects (a serial communications class, a state machine class, a device driver class, etc.) that can be reused across many different code modules. A typical control software vendor might have 100 device drivers. It would be a nightmare if for each of these drivers there were no building blocks for graphical user interface (GUI) or communications to build on. By building and maintaining a library of foundation objects, the vendor will save countless hours of programming and debugging time.

All three tenets of object-oriented programming are leveraged by the use of interfaces. An interface is essentially a specification that is used to facilitate communication between software components, possibly written by different vendors. An interface says, "if your code follows this set of rules then my software component will be able to communicate with it." In the next section we will see how interfaces make writing device drivers a much simpler task.

C++ and Device Drivers

In a flexible automation platform, one optimal use for interfaces is in device drivers. We would like our open-architecture software to provide a generic way for end users to write their own device drivers without having to divulge the secrets of our source code to them. To do this, we define a simplified C++ interface for a generic device, as shown here:

    class IDevice {
    public:
        virtual string GetName() = 0;   // Returns the name of the device
        virtual void Initialize() = 0;  // Called to initialize the device
        virtual void Run() = 0;         // Called to run the device
    };

In the example above, a C++ class (or object) called IDevice has been defined. The prefix I in IDevice stands for "interface". This class defines three public virtual methods: GetName, Initialize, and Run. The virtual keyword is what enables polymorphism, allowing the executing program to run the methods of the inheriting class. When a virtual method declaration is suffixed with = 0, there is no base class implementation. Such a method is referred to as "pure virtual". A class like IDevice that contains only pure virtual functions is known as an "abstract class", or an "interface".
The IDevice definition, along with appropriate documentation, can be published to the user community, allowing developers to generate their own device drivers that implement the IDevice interface. Suppose a thermal plate sealer manufacturer wants to write a driver that can be controlled by our software package. They would use inheritance to implement our IDevice interface and then override the methods to produce the desired behavior:

    class CSealer : public IDevice {
    public:
        virtual string GetName() { return "Sealer"; }
        virtual void Initialize() { InitializeSealer(); }
        virtual void Run() { RunSealCycle(); }
    private:
        void InitializeSealer();
        void RunSealCycle();
    };

Here the user has created a new class called CSealer that inherits from the IDevice interface. The public methods, those that are accessible from outside of the class, are the interface methods defined in IDevice. One, GetName, simply returns the name of the device type that this driver controls. The other methods, Initialize() and Run(), call private methods that actually perform the work. Notice how the private keyword is used to prevent external objects from calling InitializeSealer() and RunSealCycle() directly.

When the controlling software executes, polymorphism will be used at runtime to call the GetName, Initialize, and Run methods in the CSealer object, allowing the device defined therein to be controlled.

    void DoSomeWork()
    {
        // Get a reference to the device driver we want to use
        IDevice& device = GetDeviceDriver();
        // Tell the world what we're about to do.
        cout << "Initializing " << device.GetName();
        // Initialize the device
        device.Initialize();
        // Tell the world what we're about to do.
        cout << "Running a cycle on " << device.GetName();
        // Away we go!
        device.Run();
    }

The code snippet above shows how the IDevice interface can be used to generically control a device. If GetDeviceDriver returns a reference to a CSealer object, then DoSomeWork will control sealers. If GetDeviceDriver returns a reference to a pipettor, then DoSomeWork will control pipettors. Although this is a simplified example, it is straightforward to imagine how the use of interfaces and polymorphism can lead to great economies of scale in controller software development. Additional interfaces can be generated along the same lines as IDevice. For example, an interface perhaps called ILMS could be used to facilitate communication to and from a LIMS.

The astute reader will notice that the claim that any third party can develop drivers simply by implementing the IDevice interface is slightly flawed. The problem is that any driver that the user writes, like CSealer, would have to be linked directly to the controlling software's executable to be used. This problem is solved by a number of existing technologies, including Microsoft's COM or .NET, or by CORBA. All of these technologies allow end users to implement abstract interfaces in standalone components that can be linked at runtime rather than at design time. The details are beyond the scope of this article.
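The article leaves GetDeviceDriver undefined, since runtime component loading is delegated to COM, .NET, or CORBA. Purely as a hedged sketch of the idea, and not the article's actual implementation, a minimal in-process registry could play the same role; every name below (Registry, GetDeviceDriver's signature, the stubbed CSealer bodies) is an assumption made for illustration:

    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>
    using namespace std;

    class IDevice {
    public:
        virtual ~IDevice() {}
        virtual string GetName() = 0;
        virtual void Initialize() = 0;
        virtual void Run() = 0;
    };

    // Hypothetical sealer driver with stubbed-out work methods.
    class CSealer : public IDevice {
    public:
        string GetName() { return "Sealer"; }
        void Initialize() { cout << "sealer initialized\n"; }
        void Run() { cout << "seal cycle complete\n"; }
    };

    // Hypothetical registry mapping device names to factory functions.
    // A technology like COM/.NET/CORBA fills the same role across binary
    // boundaries, letting drivers be registered when they are loaded.
    map<string, unique_ptr<IDevice> (*)()>& Registry() {
        static map<string, unique_ptr<IDevice> (*)()> registry;
        return registry;
    }

    unique_ptr<IDevice> GetDeviceDriver(const string& name) {
        return Registry()[name]();  // assumes the driver was registered
    }

    int main() {
        // A driver component would perform this registration when loaded.
        Registry()["Sealer"] = [] { return unique_ptr<IDevice>(new CSealer()); };

        unique_ptr<IDevice> device = GetDeviceDriver("Sealer");
        cout << "Initializing " << device->GetName() << "\n";
        device->Initialize();
        device->Run();
        return 0;
    }

Because the controller only ever sees IDevice, supporting a second registered reader (the resource-pooling scenario above) becomes a matter of consulting the registry for all matching drivers rather than recompiling the controller.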
Computer Science Foreign Literature and Translation: Microsoft Visual Studio
1. Microsoft Visual Studio

Visual Studio is a development environment released by Microsoft. It can be used to create Windows applications and network applications for the Windows platform, and it can also be used to create network services, smart device applications, and Office plug-ins.

Visual Studio is an integrated development environment (IDE) from Microsoft. It can be used to develop console and graphical user interface applications, along with Windows Forms applications, web sites, web applications, and web services, in both native code and managed code, on all platforms supported by Microsoft Windows, Windows Mobile, Windows CE, the .NET Framework, the .NET Compact Framework, and Microsoft Silverlight.

Visual Studio includes a code editor supporting IntelliSense and code refactoring. The integrated debugger works both as a source-level debugger and as a machine-level debugger. Other built-in tools include a forms designer for building GUI applications, a web designer, a class designer, and a database schema designer. It accepts plug-ins that enhance functionality at almost every level, including adding support for source control systems (such as Subversion and Visual SourceSafe) and adding new toolsets such as editors and visual designers for domain-specific languages, or toolsets for other aspects of the software development lifecycle (such as the Team Foundation Server client: Team Explorer).

Visual Studio supports different programming languages by means of language services, which allow the code editor and debugger to support (to varying degrees) nearly any programming language, provided a language-specific service exists. Built-in languages include C/C++ (via Visual C++), VB.NET (via Visual Basic .NET), C# (via Visual C#), and F# (as of Visual Studio 2010). Support for other languages, such as M, Python, and Ruby, can be added by installing separate language services.
Computer Science Java Foreign Literature Translation (English Source)
English original: Title: Business Applications of Java. Author: Erbschloe, Michael. Business Applications of Java -- Research Starters Business, 2008. Database: Research Starters - Business

Business Applications of Java

This article examines the growing use of Java technology in business applications. The history of Java is briefly reviewed along with the impact of open standards on the growth of the World Wide Web. Key components and concepts of the Java programming language are explained, including the Java Virtual Machine. Examples of how Java is being used by e-commerce leaders are provided, along with an explanation of how Java is used to develop data warehousing, data mining, and industrial automation applications. The concept of metadata modeling and the use of Extendable Markup Language (XML) are also explained.

Keywords: Application Programming Interfaces (API's); Enterprise JavaBeans (EJB); Extendable Markup Language (XML); HyperText Markup Language (HTML); HyperText Transfer Protocol (HTTP); Java Authentication and Authorization Service (JAAS); Java Cryptography Architecture (JCA); Java Cryptography Extension (JCE); Java Programming Language; Java Virtual Machine (JVM); Java2 Platform, Enterprise Edition (J2EE); Metadata

Business Information Systems > Business Applications of Java

Overview

Open standards have driven the e-business revolution. Networking protocol standards, such as Transmission Control Protocol/Internet Protocol (TCP/IP), HyperText Transfer Protocol (HTTP), and the HyperText Markup Language (HTML) Web standards, have enabled universal communication via the Internet and the World Wide Web. As e-business continues to develop, various computing technologies help to drive its evolution.

The Java programming language and platform have emerged as major technologies for performing e-business functions. Java programming standards have enabled portability of applications and the reuse of application components across computing platforms. Sun Microsystems' Java Community Process continues to be a strong base for the growth of the Java infrastructure and language standards. This growth of open standards creates new opportunities for designers and developers of applications and services (Smith, 2001).

Creation of Java Technology

Java technology was created as a computer programming tool in a small, secret effort called "the Green Project" at Sun Microsystems in 1991. The Green Team, fully staffed at 13 people and led by James Gosling, locked themselves away in an anonymous office on Sand Hill Road in Menlo Park, cut off from all regular communications with Sun, and worked around the clock for 18 months. Their initial conclusion was that at least one significant trend would be the convergence of digitally controlled consumer devices and computers. A device-independent programming language code-named "Oak" was the result.

To demonstrate how this new language could power the future of digital devices, the Green Team developed an interactive, handheld home-entertainment device controller targeted at the digital cable television industry. But the idea was too far ahead of its time, and the digital cable television industry wasn't ready for the leap forward that Java technology offered them.
As it turns out, the Internet was ready for Java technology, and just in time for its initial public introduction in 1995, the team was able to announce that the Netscape Navigator Internet browser would incorporate Java technology ("Learn about Java," 2007).

Applications of Java

Java uses many familiar programming concepts and constructs and allows portability by providing a common interface through an external Java Virtual Machine (JVM). A virtual machine is a self-contained operating environment, created by a software layer, that behaves as if it were a separate computer. Benefits of creating virtual machines include better exploitation of powerful computing resources and isolation of applications to prevent cross-corruption and improve security (Matlis, 2006).

The JVM allows computing devices with limited processors or memory to handle more advanced applications by calling up software instructions inside the JVM to perform most of the work. This also reduces the size and complexity of Java applications because many of the core functions and processing instructions are built into the JVM. As a result, software developers no longer need to re-create the same application for every operating system. Java also provides security by instructing the application to interact with the virtual machine, which serves as a barrier between applications and the core system, effectively protecting systems from malicious code.

Among other things, Java is tailor-made for the growing Internet because it makes it easy to develop new, dynamic applications that can make the most of the Internet's power and capabilities. Java is now an open standard, meaning that no single entity controls its development, and the tools for writing programs in the language are available to everyone. The power of open standards like Java is the ability to break down barriers and speed up progress.

Today, you can find Java technology in networks and devices that range from the Internet and scientific supercomputers to laptops and cell phones, from Wall Street market simulators to home game players and credit cards. There are over 3 million Java developers, and there are now several versions of the code. Most large corporations have in-house Java developers. In addition, the majority of key software vendors use Java in their commercial applications (Lazaridis, 2003).

Applications

Java on the World Wide Web

Java has found a place on some of the most popular websites in the world, and the uses of Java continue to grow. Java applications not only provide unique user interfaces, they also help to power the backend of websites. Two e-commerce giants that everybody is probably familiar with (eBay and Amazon) have been Java pioneers on the World Wide Web.

eBay

Founded in 1995, eBay enables e-commerce on a local, national, and international basis with an array of Web sites - including the eBay marketplaces, PayPal, and Skype - that bring together millions of buyers and sellers every day. You can find it on eBay, even if you didn't know it existed. On a typical day, more than 100 million items are listed on eBay in tens of thousands of categories. Recent listings have included a tunnel boring machine from the Chunnel project, a cup of water that once belonged to Elvis, and the Volkswagen that Pope Benedict XVI owned before he moved up to the Popemobile.
More than one hundred million items are available at any given time, from the massive to the miniature, the magical to the mundane, on eBay, the world's largest online marketplace.

eBay uses Java almost everywhere. To address some security issues, eBay chose Sun Microsystems' Java System Identity Manager as the platform for revamping its identity management system. The task at hand was to provide identity management for more than 12,000 eBay employees and contractors. Now more than a thousand eBay software developers work daily with Java applications. Java's inherent portability allows eBay to move to new hardware to take advantage of new technology, packaging, or pricing, without having to rewrite Java code ("eBay drives explosive growth," 2007).

Amazon

Amazon (a large seller of books, CDs, and other products) has created a Web Service application that enables users to browse their product catalog and place orders. The service uses a Java application that searches the Amazon catalog for books whose subject matches a user-selected topic. The application displays ten books that match the chosen topic, and shows the author name, book title, list price, Amazon discount price, and the cover icon. The user may optionally view one review per displayed title and make a buying decision (Stearns & Garishakurthi, 2003).

Java in Data Warehousing & Mining

Although many companies currently benefit from data warehousing to support corporate decision making, new business intelligence approaches continue to emerge that can be powered by Java technology. Applications such as data warehousing, data mining, Enterprise Information Portals (EIP's), and Knowledge Management Systems (which can all comprise a business intelligence application) are able to provide insight into customer retention, purchasing patterns, and even future buying behavior.

These applications can not only tell what has happened but why, and what may happen given certain business conditions, allowing "what if" scenarios to be explored. As a result of this information growth, people at all levels inside the enterprise, as well as suppliers, customers, and others in the value chain, are clamoring for subsets of the vast stores of information, such as billing, shipping, and inventory information, to help them make business decisions. While collecting and storing vast amounts of data is one thing, utilizing and deploying that data throughout the organization is another.

The technical challenges inherent in integrating disparate data formats, platforms, and applications are significant. However, emerging standards such as the Application Programming Interfaces (API's) that comprise the Java platform, as well as Extendable Markup Language (XML) technologies, can facilitate the interchange of data and the development of next generation data warehousing and business intelligence applications. While Java technology has been used extensively for client side access and for presentation layer challenges, it is rapidly emerging as a significant tool for developing scalable server side programs. The Java2 Platform, Enterprise Edition (J2EE) provides the object, transaction, and security support for building such systems.

Metadata Issues

One of the key issues that business intelligence developers must solve is that of incompatible metadata formats. Metadata can be defined as information about data or simply "data about data."
In practice, metadata is what most tools, databases, applications, and other information processes use to define, relate, and manipulate data objects within their own environments. It defines the structure and meaning of data objects managed by an application so that the application knows how to process requests or jobs involving those data objects. Developers can use this schema to create views for users. Also, users can browse the schema to better understand the structure and function of the database tables before launching a query.

To address the metadata issue, a group of companies (including Unisys, Oracle, IBM, SAS Institute, Hyperion, Inline Software, and Sun) have joined to develop the Java Metadata Interface (JMI) API. The JMI API permits the access and manipulation of metadata in Java with standard metadata services. JMI is based on the Meta Object Facility (MOF) specification from the Object Management Group (OMG). The MOF provides a model and a set of interfaces for the creation, storage, access, and interchange of metadata and metamodels (higher-level abstractions of metadata). Metamodel and metadata interchange is done via XML and uses the XML Metadata Interchange (XMI) specification, also from the OMG. JMI leverages Java technology to create an end-to-end data warehousing and business intelligence solutions framework.

Enterprise JavaBeans

A key tool provided by J2EE is Enterprise JavaBeans (EJB), an architecture for the development of component-based distributed business applications. Applications written using the EJB architecture are scalable, transactional, secure, and multi-user aware. These applications may be written once and then deployed on any server platform that supports J2EE. The EJB architecture makes it easy for developers to write components, since they do not need to understand or deal with complex, system-level details such as thread management, resource pooling, and transaction and security management. This allows for role-based development where component assemblers, platform providers, and application assemblers can focus on their area of responsibility, further simplifying application development.

EJB's in the Travel Industry

A case study from the travel industry helps to illustrate how such applications could function. A travel company amasses a great deal of information about its operations in various applications distributed throughout multiple departments. Flight, hotel, and automobile reservation information is located in a database being accessed by travel agents worldwide. Another application contains information that must be updated with credit and billing history from a financial services company. Data is periodically extracted from the travel reservation system databases to spreadsheets for use in future sales and marketing analysis.

Utilizing J2EE, the company could consolidate application development within an EJB container, which can run on a variety of hardware and software platforms, allowing existing databases and applications to coexist with newly developed ones. EJBs can be developed to model various data sets important to the travel reservation business, including information about customer, hotel, car rental agency, and other attributes.

Data Storage & Access

Data stored in existing applications can be accessed with specialized connectors.
Integration and interoperability of these data sources is further enabled by the metadata repository that contains metamodels of the data contained in the sources, which can then be accessed and interchanged uniformly via the JMI API. These metamodels capture the essential structure and semantics of business components, allowing them to be accessed and queried via the JMI API or to be interchanged via XML. Through all of these processes, the J2EE infrastructure ensures the security and integrity of the data through transaction management and propagation and the underlying security architecture.

To consolidate historical information for analysis of sales and marketing trends, a data warehouse is often the best solution. In this example, data can be extracted from the operational systems with a variety of Extract, Transform and Load (ETL) tools. The metamodels allow EJBs designed for filtering, transformation, and consolidation of data to operate uniformly on data from diverse data sources, as the bean is able to query the metamodel to identify and extract the pertinent fields. Queries and reports can be run against the data warehouse that contains information from numerous sources in a consistent, enterprise-wide fashion through the use of the JMI API (Mosher & Oh, 2007).

Java in Industrial Settings

Many people know Java only as a tool on the World Wide Web that enables sites to perform some of their fancier functions such as interactivity and animation. However, the actual uses for Java are much more widespread. Since Java is an object-oriented language like C++, the time needed for application development is minimal. Java also encourages good software engineering practices with clear separation of interfaces and implementations as well as easy exception handling.

In addition, Java's automatic memory management and lack of pointers remove some leading causes of programming errors. Most importantly, application developers do not need to create different versions of the software for different platforms. The advantages available through Java have even found their way into hardware. The emerging new Java devices are streamlined systems that exploit network servers for much of their processing power, storage, content, and administration.

Benefits of Java

The benefits of Java translate across many industries, and some are specific to the control and automation environment. For example, many plant-floor applications use relatively simple equipment; upgrading to PCs would be expensive and undesirable. Java's ability to run on any platform enables the organization to make use of the existing equipment while enhancing the application.

Integration

With few exceptions, applications running on the factory floor were never intended to exchange information with systems in the executive office, but managers have recently discovered the need for that type of information. Before Java, that often meant bringing together data from systems written on different platforms in different languages at different times. Integration was usually done on a piecemeal basis, resulting in a system that, once it worked, was unique to the two applications it was tying together. Additional integration required developing a brand new system from scratch, raising the cost of integration.

Java makes system integration relatively easy. Foxboro Controls Inc., for example, used Java to make its dynamic-performance-monitor software package Internet-ready.
This software provides senior executives with strategic information about a plant's operation. The dynamic performance monitor takes data from instruments throughout the plant and performs various mathematical and statistical calculations on them, resulting in information (usually financial) that a manager can more readily absorb and use.

Scalability

Another benefit of Java in the industrial environment is its scalability. In a plant, embedded applications such as automated data collection and machine diagnostics provide critical data regarding production-line readiness or operation efficiency. These data form a critical ingredient for applications that examine the health of a production line or run. Users of these devices can take advantage of the benefits of Java without changing or upgrading hardware. For example, operations and maintenance personnel could carry a handheld, wireless, embedded-Java device anywhere in the plant to monitor production status or problems.

Even when internal compatibility is not an issue, companies often face difficulties when suppliers with whom they share information have incompatible systems. This becomes more of a problem as supply-chain management takes on a more critical role, which requires manufacturers to interact more with offshore suppliers and clients. The greatest efficiency comes when all systems can communicate with each other and share information seamlessly. Since Java is so ubiquitous, it often solves these problems (Paula, 1997).

Dynamic Web Page Development

Java has been used by both large and small organizations for a wide variety of applications beyond consumer oriented websites. Sandia, a multiprogram laboratory of the U.S. Department of Energy's National Nuclear Security Administration, has developed a unique Java application. The lab was tasked with developing an enterprise-wide inventory tracking and equipment maintenance system that provides dynamic Web pages. The developers selected Java Studio Enterprise 7 for the project because of its Application Framework technology and Web Graphical User Interface (GUI) components, which allow the system to be indexed by an expandable catalog. The flexibility, scalability, and portability of Java helped to reduce development time and costs (Garcia, 2004).

Issue

Java Security for E-Business Applications

To support the expansion of their computing boundaries, businesses have deployed Web application servers (WAS). A WAS differs from a traditional Web server because it provides a more flexible foundation for dynamic transactions and objects, partly through the exploitation of Java technology. Traditional Web servers remain constrained to servicing standard HTTP requests, returning the contents of static HTML pages and images or the output from executed Common Gateway Interface (CGI) scripts.

An administrator can configure a WAS with policies based on security specifications for Java servlets and manage authentication and authorization with Java Authentication and Authorization Service (JAAS) modules. An authentication and authorization service can be written in Java code or interface to an existing authentication or authorization infrastructure. For a cryptography-based security infrastructure, the security server may exploit the Java Cryptography Architecture (JCA) and Java Cryptography Extension (JCE). To present the user with a usable interaction with the WAS environment, the Web server can readily employ a form of "single sign-on" to avoid redundant authentication requests.
A single sign-on preserves user authentication across multiple HTTP requests so that the user is not prompted many times for authentication data (i.e., user ID and password).

Based on the security policies, JAAS can be employed to handle the authentication process with the identity of the Java client. After successful authentication, the WAS security collaborator consults with the security server. The WAS environment authentication requirements can be fairly complex. In a given deployment environment, all applications or solutions may not originate from the same vendor. In addition, these applications may be running on different operating systems. Although Java is often the language of choice for portability between platforms, it needs to marry its security features with those of the containing environment.

Authentication & Authorization

Authentication and authorization are key elements in any secure information handling system. Since the inception of Java technology, much of the authentication and authorization issues have been with respect to downloadable code running in Web browsers. In many ways, this had been the correct set of issues to address, since the client's system needs to be protected from mobile code obtained from arbitrary sites on the Internet. As Java technology moved from a client-centric Web technology to a server-side scripting and integration technology, it required additional authentication and authorization technologies.

The kind of proof required for authentication may depend on the security requirements of a particular computing resource or specific enterprise security policies. To provide such flexibility, the JAAS authentication framework is based on the concept of configurable authenticators. This architecture allows system administrators to configure, or plug in, the appropriate authenticators to meet the security requirements of the deployed application. The JAAS architecture also allows applications to remain independent from underlying authentication mechanisms. So, as new authenticators become available or as current authentication services are updated, system administrators can easily replace authenticators without having to modify or recompile existing applications.

At the end of a successful authentication, a request is associated with a user in the WAS user registry. After a successful authentication, the WAS consults security policies to determine if the user has the required permissions to complete the requested action on the servlet. This policy can be enforced using the WAS configuration (declarative security) or by the servlet itself (programmatic security), or a combination of both.

The WAS environment pulls together many different technologies to service the enterprise. Because of the heterogeneous nature of the client and server entities, Java technology is a good choice for both administrators and developers. However, to service the diverse security needs of these entities and their tasks, many Java security technologies must be used, not only at a primary level between client and server entities, but also at a secondary level, from served objects. By using a synergistic mix of the various Java security technologies, administrators and developers can make not only their Web application servers secure, but their WAS environments secure as well (Koved, 2001).

Conclusion

Open standards have driven the e-business revolution. As e-business continues to develop, various computing technologies help to drive its evolution.
The Java programming language and platform have emerged as major technologies for performing e-business functions. Java programming standards have enabled portability of applications and the reuse of application components. Java uses many familiar concepts and constructs and allows portability by providing a common interface through an external Java Virtual Machine (JVM). Today, you can find Java technology in networks and devices that range from the Internet and scientific supercomputers to laptops and cell phones, from Wall Street market simulators to home game players and credit cards.

Java has found a place on some of the most popular websites in the world. Java applications not only provide unique user interfaces, they also help to power the backend of websites. While Java technology has been used extensively for client side access and in the presentation layer, it is also emerging as a significant tool for developing scalable server side programs.

Since Java is an object-oriented language like C++, the time needed for application development is minimal. Java also encourages good software engineering practices with clear separation of interfaces and implementations as well as easy exception handling. Java's automatic memory management and lack of pointers remove some leading causes of programming errors. The advantages available through Java have also found their way into hardware. The emerging new Java devices are streamlined systems that exploit network servers for much of their processing power, storage, content, and administration.
Computer Science JSP Web Foreign Literature Translation
12.1 Introduction

Effective web application design involves separating business objects, presentation, and object [persistence]. Although one individual may handle both roles on a small-scale project, it is [...].

12.2 JSP Architecture

In this chapter, [we examine how to structure web applications] using JavaServer Pages, servlets, [and we present a number] of different architectures, each building upon the previous one. The diagram below outlines this process, and we will explain each component in detail later in this article. (Note: [...].)

When Java Server Pages were introduced by Sun, some people [...]. While JSP is a key component of the J2EE specification and serves as the preferred request handler and response mechanism, it is [...]. [The fact] that JSP is built on top of the servlet API and uses servlet [semantics raises] interesting questions, such as whether we should [use servlets at all] in our Web-enabled systems, and if there is a way to combine servlets and JSPs. (Bracketed text marks reconstructions of words lost in extraction; "[...]" marks gaps that could not be recovered.)
Information and Computing Science: Chinese-English Foreign Literature Translation
Chinese-English Foreign Literature Translation (the document contains the English original and the Chinese translation)

[Abstract] Under the network environment, the joint construction and sharing of library information resources means that libraries of all types and levels, responding to users' demand for social information, use advanced information technologies such as computers, telecommunications, electronics, and multimedia over networks to carry out comprehensive, cooperative development and use of their collection resources and of network resources. The rapid development of the market economy, the constant renewal of networking, and the arrival of the information age have determined that the future trend of library development will be the joint construction and sharing of information resources, a point on which social consensus has already been reached. This is because the joint construction and sharing of information resources is an important way for libraries to resolve the contradiction between the explosion of knowledge and information and the insufficiency of any single collection.

[Key Words] network; library; information resources; joint construction and sharing

Under the network environment, the joint construction and sharing of library information resources means that libraries of all types and levels, responding to users' demand for social information, use advanced information technologies such as computers, telecommunications, electronics, and multimedia over networks to carry out comprehensive, cooperative development and use of their collection resources and of network resources.

1. The joint construction and sharing of information resources is the path that libraries must take in the future to develop and use information resources.

The rapid development of the market economy, the constant renewal of networking, and the arrival of the information age have determined that the future trend of library development will be the joint construction and sharing of information resources, a point on which social consensus has already been reached. This is because the joint construction and sharing of information resources is an important way for libraries to resolve the contradiction between the explosion of knowledge and information and the insufficiency of any single collection.
Computer Science Foreign Literature Translation: Linux, the Operating System of the Network Era
English reference and translation: Linux - the Operating System of the Network Era

For many people, the fact that Linux, serving as the main operating system of a huge workstation cluster, powered the special-effects production of "Titanic" would already count as a full display of its talent. But for Linux this is only one piece of news among many. Recently, announcements from manufacturers of support for Linux have increased day by day, and users' enthusiasm for Linux runs higher than ever before. So what glamour does this operating system, free for more than seven years, actually possess, that it has won the favor of the mass of users and of so many important software and hardware manufacturers such as Oracle, Informix, HP, Sybase, Corel, Intel, Netscape, and Dell?

1. The background and characteristics of Linux

Linux is "free software": users can obtain the program and its source code freely, and can use them freely, including revising or copying them. It is a product of the network era: numerous technical staff completed its research and development together over the Internet, countless users participate in testing and debugging it, and users can conveniently add extension functions of their own making. As the most outstanding piece of free software, Linux has the following characteristics:

(1) It fully follows the POSIX standard and is an extended network operating system supporting all AT&T and BSD Unix features. Because it inherits Unix's outstanding design philosophy, and its clean, robust, efficient, and stable kernel has all of its key code written by Linus Torvalds and other outstanding programmers, without any Unix code from AT&T or Berkeley, Linux is not Unix, yet Linux and Unix are fully compatible.

(2) It is a true multitasking, multi-user system with built-in network support, and it can link seamlessly with NetWare, Windows NT, OS/2, Unix, and others. Its networking has tested fastest in comparisons and efficiency assessments against various kinds of Unix. It simultaneously supports many kinds of file systems, such as FAT16, FAT32, NTFS, Ext2FS, and ISO9660.

(3) It can run on many kinds of hardware platforms, including processors such as Alpha, Sun SPARC, PowerPC, and MIPS, and support for various kinds of new peripheral hardware can be obtained rapidly from the numerous programmers distributed around the globe.

(4) Its hardware requirements are low, so it can achieve very good performance on lower-grade machines; what deserves particular mention is Linux's outstanding stability, with running times often counted in "years".

2. Main applications of Linux

At present, the applications of Linux mainly include:

(1) Internet/Intranet: this is the area where Linux is currently used most. It can offer all Internet services, including Web servers, FTP servers, Gopher servers, SMTP/POP3 mail servers, Proxy/Cache servers, DNS servers, and more. The Linux kernel supports IP aliasing, PPP, and IP tunneling; these functions can be used for setting up virtual hosts, virtual services, VPNs (virtual private networks), and the like.
The Apache Web server runs mainly on Linux; its market share in 1998 was 49%, far exceeding the combined share of several big companies such as Microsoft and Netscape.

(2) Because Linux has outstanding networking ability, it can be used in large-scale distributed computing, for instance animation rendering, scientific computation, and database and file servers.

(3) As a full Unix-like system that can run on low-end platforms, it is applied extensively in teaching and research work at all levels of universities and colleges; the Mexican government, for example, has announced that primary and middle schools throughout the country will deploy Linux and offer Internet service to students.

(4) Desktop and office applications. The number of users in this area is at present still far below that of Microsoft Windows. The reason lies not merely in the fact that the quantity of desktop application software for Linux falls far short of that for Windows, but also in that the nature of free software leaves it with almost no advertising support (even though, for example, the functionality of StarOffice is not second to MS Office, few people actually know of it).

3. Can Linux become a major operating system?

In the face of day-by-day strengthening pressure from users, more and more commercial companies are porting their applications to the Linux platform. The comparatively important events of 1998 were as follows: (1) Compaq and HP decided, at users' request, to ship Linux on their servers, and IBM and Dell promised to offer customized Linux systems to users too. (2) Lotus announced that the next edition of Notes would include a special edition for Linux. (3) Corel ported its famous WordPerfect to Linux and issued it free of charge; Corel also plans to move its other graphics-processing products completely to the Linux platform. (4) The main database producers - Sybase, Informix, Oracle, CA, and IBM - have already ported their own database products to Linux, or have finished beta editions; among them, Oracle and Informix also offer technical support for their products.

4. The gratifying thing is that some farsighted domestic corporations have already begun trying hard to change this state of affairs. Stone Co. not long ago announced that it would invest a huge sum to develop an Internet/Intranet solution with Linux as the platform, launch Stone's system integration business around it as the core, and at the same time set up a nationwide Linux technical support organization, taking the lead in promoting the application and development of free software in China. In addition, domestic computer companies are also devoting themselves to popularizing Linux-related software and hardware application systems. As understanding of Linux deepens, we believe that more and more domestic enterprises will join the ranks of Linux users, and that more software will be ported to the Linux platform. Meanwhile, domestic universities should take Linux as the original version for upgrading existing Unix course content, start with analyzing the source code and revising the kernel, train a large number of senior Linux talents, and improve our country's own operating system.
Only by really grasping the operating system can the software industry of our country get rid of its present passive state of sedulous imitation and of being led by the nose by others, and create the conditions for fundamentally revitalizing the software industry of our country.
Cloud Computing Foreign Literature Translation References
Cloud Computing Foreign Literature Translation References (the document contains the English original and the Chinese translation)

Original text: Technical Issues of Forensic Investigations in Cloud Computing Environments

Dominik Birk
Ruhr-University Bochum, Horst Goertz Institute for IT Security, Bochum, Germany

Abstract - Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldomly discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore, we discuss possible solutions and possible new methodologies helping customers to perform such investigations.

I. INTRODUCTION

Although the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies' main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. (We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.) For auditors, this situation does not change: questions about who accessed specific data and information cannot be answered by the customers, if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensible information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed, albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40], stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends peoples' private and professional life, thus mirroring more and more of peoples' actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: we discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments, including the cross-disciplinary aspects. We conclude in section V.

II. RELATED WORK

Various works have been published in the field of cloud security and privacy [9], [35], [30] focussing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet lacking practical implementations [24], [37], [23]. Traditional computer forensics has already well researched methods for various fields of application [4], [5], [6], [11], [13]. Also the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20], including the notion of virtual introspection [25].
In addition, the NIST already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al. had already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments - unfortunately, no specific issues and solutions of cloud forensics were proposed, which will be done within this work.

III. TECHNICAL BACKGROUND

A. Traditional Digital Forensics

The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:

1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.

2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.

3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results, and that it is complete and clear to understand. Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.

B. Cloud Computing
According to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used:

In the Infrastructure as a Service (IaaS) model, the customer is using the virtual machine provided by the CSP for installing his own system on it. The system can be used like any other physical computer with a few limitations. However, the additive customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. This service model can be a driving force for the efficiency of the software development process. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence.

Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer over the application itself, the data pushed into the applications and also over the underlying technical infrastructure.

C. Fault Model

Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:

1) Maliciously Intended Faults

Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and state a constant threat to customers and CSP. In this model, a malicious CSP is also included, albeit assumed to be rare in real world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk.
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.

2) Unintentional Faults: Inconsistencies in technical systems or processes in the cloud are not necessarily caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the customer (e.g., loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong interest in discovering the reasons and deploying corresponding fixes.

IV. TECHNICAL ISSUES

Digital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system, but the data is still accessible, since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators, who explore de-allocated disk space on hard disks. In the case of data in motion, data is transferred from one entity to another; a typical file transfer over a network, for example, is a data-in-motion scenario. Several encapsulated protocols carry the data, each leaving specific traces on systems and network devices, which can in turn be used by investigators. Data can also be loaded into memory and executed as a process. In this case, the data is neither at rest nor in motion but in execution. On the executing system, process information, machine instructions and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources of evidential data in cloud environments, discuss the technical issues of digital investigations in XaaS environments and suggest several solutions to these problems.

A. Sources and Nature of Evidence

Concerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator diverges strongly between the different cloud service and deployment models. The virtual machine (VM), hosting in most cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between the different parties involved. The browser on the client, often acting as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently of the model used, the following three components can act as sources of potential evidential data.

1) Virtual Cloud Instance: The VM within the cloud, where, for instance, data is stored or processes are handled, contains potential evidence [2], [3]. In most cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor, and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM.
Therefore, virtual instances can still be running during analysis, which leads to the case of live investigations [41], or they can be turned off, leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.

2) Network Layer: Traditional network forensics is known as the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide various information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. In practice, however, ordinary CSPs currently do not provide any log data from the network components used by the customer's instances or applications. For instance, in the case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information or network log data in general, which is crucial for further investigative steps. This situation gets even more complicated in the case of PaaS or SaaS. So again, the chances of gathering forensic evidence are strongly affected by the support the investigator receives from the customer and the CSP.

3) Client System: On the system layer of the client, it depends entirely on the model used (IaaS, PaaS, SaaS) whether and where potential evidence could be extracted. In most scenarios, the user agent (e.g., the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications, which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.

a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: in ordinary scenarios, the main goal of an investigation of the web browser is to determine whether a user has been the victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers make heavy use of third-party extensions [17], which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies to gather potential evidence data. Within this context, it is essential to investigate the appearance of malicious JavaScript [18] leading to, e.g., unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to answer these questions, even if private mode is switched on [19].

B. Investigations in XaaS Environments

Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They no longer have the option of seizing physical data storage.
Data and processes of the customer are dispersed over an undisclosed number of virtual instances, applications and network elements. Hence, it is questionable whether the preliminary findings of the computer forensic community in the field of digital forensics have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly will be taken into consideration. We also suggest potential solutions to the problems mentioned.

1) SaaS Environments: Especially in the SaaS model, the customer does not obtain any control over the underlying operating infrastructure such as the network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited user-specific application configuration settings can be controlled, contributing to the evidence which can be extracted from the client (see section IV-A3). In many cases this forces the investigator to rely on high-level logs which may be provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offerings do not have any chance to analyze potential incidents.

a) Data Provenance: The notion of Digital Provenance refers to meta-data that describes the ancestry or history of digital objects. Secure provenance, which records ownership and process history of data objects, is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Although data provenance is of high significance for IaaS and PaaS as well, it poses a huge problem specifically for SaaS-based applications: current globally acting public SaaS CSPs offer Single Sign-On (SSO) access control to their set of services. Unfortunately, in the case of an account compromise, most of these CSPs do not offer the customer any possibility of figuring out which data and information have been accessed by the adversary. For the victim, this situation can have a tremendous impact: if sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP, e.g., for storage reasons. The customer has no ability to prove otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSPs [10].

Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSPs should offer additional interfaces to their customers for the purposes of compliance, forensics, operations and security. Through an API, customers should have the ability to receive specific information such as access, error and event logs that could improve their situation in the case of an investigation.
Furthermore, due to the limited ability to receive forensic information from the server and to prove the integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.

2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer, and except for some CSPs, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer theoretically obtains the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSPs normally claim that this transfer is encrypted, but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries who can compromise the application during runtime should not be able to alter these log files afterwards.

Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from viewing and altering log data on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be detected via push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSPs offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].

3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial, and not only for recovering from an incident. Forensic investigations also gain leverage from such information, which contributes to resilience against future attacks on the systems. From the forensic point of view, IaaS instances provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This stems from the ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before being transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP.
The CSP controls the hypervisor, which is, e.g., responsible for enforcing hardware boundaries and routing hardware requests among different VMs. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how customer VMs communicate with the hardware and can theoretically intervene in processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and thereby lead to the leakage of the secret key. Although this risk can be disregarded in most cases, the impact on the security of high-security environments is tremendous.

a) Snapshot Analysis: Traditional forensics expects target machines to be powered down in order to collect an image (dead virtual instance). This situation changed completely with the advent of snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned with one click, including the running system's memory. Thanks to snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This is especially important for scenarios in which downtime of a system is not feasible or practical due to existing SLAs. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances are becoming more common, providing evidence data that …
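The sign-then-encrypt logging suggested above for PaaS and IaaS environments can be sketched in a few lines of Java. The following fragment is a minimal illustration, not a mechanism from the paper: the per-run key generation and the sample log line are placeholders, and a real deployment would provision long-lived keys in advance and ship the output to the customer-controlled logging server.

```java
import java.nio.charset.StandardCharsets;
import java.security.*;
import javax.crypto.*;

/** Sketch: sign a log record, then encrypt it before shipping it off-host. */
public class SecureLogRecord {
    public static void main(String[] args) throws GeneralSecurityException {
        // Assumption: in practice the signing key would be provisioned into the
        // instance image ahead of time, not generated per record.
        KeyPair signer = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey logKey = kg.generateKey();

        byte[] record = "2023-01-01T00:00:00Z sshd: accepted key for admin"
                .getBytes(StandardCharsets.UTF_8);

        // 1) Sign the plaintext record so later tampering is detectable.
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(signer.getPrivate());
        sig.update(record);
        byte[] signature = sig.sign();

        // 2) Encrypt the record so eavesdroppers on the path to the central
        //    logging server cannot read or silently alter it in transit.
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, logKey);
        byte[] iv = c.getIV();
        c.updateAAD(signature);           // bind the signature to the ciphertext
        byte[] ciphertext = c.doFinal(record);

        System.out.printf("iv=%d bytes, sig=%d bytes, ciphertext=%d bytes%n",
                iv.length, signature.length, ciphertext.length);
    }
}
```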
Computer Science and Technology: Foreign Literature Translation (English Literature, Chinese-English Comparison)
Appendix 1: Translated Foreign Material. Mass Storage

Because of the volatility and limited capacity of a computer's main memory, most computers have additional storage devices called mass storage systems, including magnetic disks, CDs and magnetic tape. Compared with main memory, the advantages of mass storage systems are lower volatility, large capacity and low cost, and in many cases the storage medium can be removed from the computer for archival purposes. The terms online and offline are commonly used to describe devices that are connected to and disconnected from a computer, respectively. Online means that the device or information is attached to the computer and accessible without human intervention; offline means that human intervention is required before the device or information can be used, perhaps because the device must be powered on, or because the medium holding the information must be inserted into some mechanism. The major disadvantage of mass storage systems is that they typically require mechanical motion and therefore need much more time, since main memory does all of its work electronically.

1. Magnetic Disks

The most widely used form of mass storage today is the magnetic disk, in which thin spinning platters coated with magnetic material are used to store data. Read/write heads are mounted above and/or below the platters; as a platter spins, each head traverses a circle, called a track, around the upper or lower surface of the platter. By repositioning the read/write heads, different concentric tracks can be accessed. Often a disk storage system consists of several platters mounted on a common spindle, with enough space between the platters for the heads to slide between them. All the heads in such a disk move in unison. Thus, each time the heads are repositioned, a new set of tracks becomes accessible. Each such set of tracks is called a cylinder. Since a track can contain more information than we would normally want to manipulate at any one time, each track is divided into arcs called sectors, on which information is recorded as a continuous string of bits. On a traditional disk, every track is divided into the same number of sectors, and every sector contains the same number of bits. (Hence the bits are stored more densely on tracks near the center of the platter than on those near the edge.) Thus, a disk storage system consists of many individual sectors, each of which can be accessed as an independent string of bits; the number of tracks per surface and the number of sectors per track vary widely from one disk system to another. Sector sizes are generally no larger than a few KB; 512 bytes or 1024 bytes is typical.
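As a worked illustration of the geometry just described (the numbers below are invented, not taken from the text), the raw capacity of a disk is the product of surfaces, tracks per surface, sectors per track and bytes per sector:

```java
/** Raw capacity of a hypothetical disk computed from its geometry. */
public class DiskCapacity {
    public static void main(String[] args) {
        int platters = 4;
        int surfaces = platters * 2;     // heads on both sides of each platter
        int tracksPerSurface = 16_384;   // equivalently, 16,384 cylinders
        int sectorsPerTrack = 512;       // traditional fixed sectors-per-track layout
        int bytesPerSector = 512;

        long bytes = (long) surfaces * tracksPerSurface * sectorsPerTrack * bytesPerSector;
        System.out.printf("raw capacity: %d bytes (~%.1f GB)%n", bytes, bytes / 1e9);
        // 8 * 16384 * 512 * 512 = 34,359,738,368 bytes, roughly 34.4 GB
    }
}
```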
Computer Foreign Literature Translation (Chinese-English Edition): Warehouse Management Systems (WMS)
Warehouse Management Systems (WMS)

The evolution of warehouse management systems (WMS) is very similar to that of many other software solutions. Initially a system to control movement and storage of materials within a warehouse, the role of WMS is expanding to include light manufacturing, transportation management, order management, and complete accounting systems. To use the grandfather of operations-related software, MRP, as a comparison: material requirements planning (MRP) started as a system for planning raw material requirements in a manufacturing environment. Soon MRP evolved into manufacturing resource planning (MRPII), which took the basic MRP system and added scheduling and capacity planning logic. Eventually MRPII evolved into enterprise resource planning (ERP), incorporating all the MRPII functionality with full financials and customer and vendor management functionality. Now, whether WMS evolving into a warehouse-focused ERP system is a good thing or not is up for debate. What is clear is that the expansion of the overlap in functionality between Warehouse Management Systems, Enterprise Resource Planning, Distribution Requirements Planning, Transportation Management Systems, Supply Chain Planning, Advanced Planning and Scheduling, and Manufacturing Execution Systems will only increase the level of confusion among companies looking for software solutions for their operations.

Even though WMS continues to gain added functionality, the initial core functionality of a WMS has not really changed. The primary purpose of a WMS is to control the movement and storage of materials within an operation and process the associated transactions. Directed picking, directed replenishment, and directed putaway are the key to WMS. The detailed setup and processing within a WMS can vary significantly from one software vendor to another; however, the basic logic will use a combination of item, location, quantity, unit of measure, and order information to determine where to stock, where to pick, and in what sequence to perform these operations.

Do You Really Need WMS?

Not every warehouse needs a WMS. Certainly any warehouse could benefit from some of the functionality, but is the benefit great enough to justify the initial and ongoing costs associated with WMS? Warehouse Management Systems are big, complex, data-intensive applications. They tend to require a lot of initial setup, a lot of system resources to run, and a lot of ongoing data management to continue to run. That's right: you need to "manage" your warehouse "management" system. Often, large operations will end up creating a new IS department with the sole responsibility of managing the WMS.

The Claims: WMS will reduce inventory! WMS will reduce labor costs! WMS will increase storage capacity! WMS will increase customer service! WMS will increase accuracy!

The Reality: The implementation of a WMS along with automated data collection will likely give you increases in accuracy, reduction in labor costs (provided the labor required to maintain the system is less than the labor saved on the warehouse floor), and a greater ability to service the customer by reducing cycle times. Expectations of inventory reduction and increased storage capacity are less likely. While increased accuracy and efficiencies in the receiving process may reduce the level of safety stock required, the impact of this reduction will likely be negligible in comparison to overall inventory levels. The predominant factors that control inventory levels are lot sizing, lead times, and demand variability.
It is unlikely that a WMS will have a significant impact on any of these factors. And while a WMS certainly provides the tools for more organized storage, which may result in increased storage capacity, this improvement will be relative to just how sloppy your pre-WMS processes were. Beyond labor efficiencies, the determining factors in deciding to implement a WMS tend to be more often associated with the need to do something to service your customers that your current system does not support (or does not support well), such as first-in-first-out, cross-docking, automated pick replenishment, wave picking, lot tracking, yard management, automated data collection, automated material handling equipment, etc.

Setup

The setup requirements of WMS can be extensive. The characteristics of each item and location must be maintained either at the detail level or by grouping similar items and locations into categories. An example of item characteristics at the detail level would include exact dimensions and weight of each item in each unit of measure the item is stocked in (each, cases, pallets, etc.) as well as information such as whether it can be mixed with other items in a location, whether it is rackable, max stack height, max quantity per location, hazard classifications, finished goods or raw material, fast versus slow mover, etc. Although some operations will need to set up each item this way, most operations will benefit by creating groups of similar products. For example, if you are a distributor of music CDs you would create groups for single CDs and double CDs, maintaining the detailed dimension and weight information at the group level and only needing to attach the group code to each item. You would likely need to maintain detailed information on special items such as boxed sets or CDs in special packaging. You would also create groups for the different types of locations within your warehouse. An example would be to create three different groups (P1, P2, P3) for the three different sized forward picking locations you use for your CD picking. You then set up the quantity of single CDs that will fit in a P1, P2, and P3 location, the quantity of double CDs that fit in a P1, P2, P3 location, etc. You would likely also be setting up case quantities and pallet quantities of each CD group, and quantities of cases and pallets per each reserve storage location group.

If this sounds simple, it is … well … sort of. In reality most operations have a much more diverse product mix and will require much more system setup. And setting up the physical characteristics of the product and locations is only part of the picture. You have set up enough so that the system knows where a product can fit and how many will fit in that location. You now need to set up the information needed to let the system decide exactly which location to pick from, replenish from/to, and put away to, and in what sequence these events should occur (remember, WMS is all about "directed" movement). You do this by assigning specific logic to the various combinations of item/order/quantity/location information that will occur. Below I have listed some of the logic used in determining actual locations and sequences.

Location Sequence. This is the simplest logic; you simply define a flow through your warehouse and assign a sequence number to each location. In order picking this is used to sequence your picks to flow through the warehouse; in putaway the logic would look for the first location in the sequence in which the product would fit.

Zone Logic.
By breaking down your storage locations into zones, you can direct picking, putaway, or replenishment to or from specific areas of your warehouse. Since zone logic only designates an area, you will need to combine this with some other type of logic to determine the exact location within the zone.

Fixed Location. This logic uses predetermined fixed locations per item in picking, putaway, and replenishment. Fixed locations are most often used as the primary picking location in piece-pick and case-pick operations; however, they can also be used for secondary storage.

Random Location. Since computers cannot be truly random (nor would you want them to be), the term random location is a little misleading. Random locations generally refer to areas where products are not stored in designated fixed locations. Like zone logic, you will need some additional logic to determine exact locations.

First-in-first-out (FIFO). Directs picking from the oldest inventory first.

Last-in-first-out (LIFO). Opposite of FIFO. I didn't think there were any real applications for this logic until a visitor to my site sent an email describing their operation that distributes perishable goods domestically and overseas. They use LIFO for their overseas customers (because of longer in-transit times) and FIFO for their domestic customers.

Pick-to-clear. This logic directs picking to the locations with the smallest quantities on hand. It is great for space utilization.

Reserved Locations. This is used when you want to predetermine specific locations to put away to or pick from. An application for reserved locations would be cross-docking, where you may specify that certain quantities of an inbound shipment be moved to specific outbound staging locations or directly to an awaiting outbound trailer.

Maximize Cube. Cube logic is found in most WMS systems; however, it is seldom used. Cube logic basically uses unit dimensions to calculate cube (cubic inches per unit) and then compares this to the cube capacity of the location to determine how much will fit. Now, if the units are capable of being stacked into the location in a manner that fills every cubic inch of space in the location, cube logic will work. Since this rarely happens in the real world, cube logic tends to be impractical.

Consolidate. Looks to see if there is already a location with the same product stored in it with available capacity. May also create additional moves to consolidate like product stored in multiple locations.

Lot Sequence. Used for picking or replenishment, this will use the lot number or lot date to determine locations to pick from or replenish from.

It's very common to combine multiple logic methods to determine the best location. For example, you may choose to use pick-to-clear logic within first-in-first-out logic when there are multiple locations with the same receipt date (a sketch of this combination appears at the end of this article). You also may change the logic based upon current workload. During busy periods you may choose logic that optimizes productivity, while during slower periods you switch to logic that optimizes space utilization.

Other Functionality/Considerations

Wave Picking/Batch Picking/Zone Picking. Support for various picking methods varies from one system to another. In high-volume fulfillment operations, picking logic can be a critical factor in WMS selection. See my article for more info on these methods.

Task Interleaving. Task interleaving describes functionality that mixes dissimilar tasks such as picking and putaway to obtain maximum productivity.
Used primarily in full-pallet-load operations, task interleaving will direct a lift truck operator to put away a pallet on his/her way to the next pick. In large warehouses this can greatly reduce travel time, not only increasing productivity, but also reducing wear on the lift trucks and saving on energy costs by reducing lift truck fuel consumption. Task interleaving is also used with cycle counting programs to coordinate a cycle count with a picking or putaway task.

Integration with Automated Material Handling Equipment. If you are planning on using automated material handling equipment such as carousels, ASRS units, AGVS, pick-to-light systems, or sortation systems, you'll want to consider this during the software selection process. Since these types of automation are very expensive and are usually a core component of your warehouse, you may find that the equipment will drive the selection of the WMS. As with automated data collection, you should be working closely with the equipment manufacturers during the software selection process.

Advanced Shipment Notifications (ASN). If your vendors are capable of sending advanced shipment notifications (preferably electronically) and attaching compliance labels to the shipments, you will want to make sure that the WMS can use this to automate your receiving process. In addition, if you have requirements to provide ASNs for customers, you will also want to verify this functionality.

Yard Management. Yard management describes the function of managing the contents (inventory) of trailers parked outside the warehouse, or the empty trailers themselves. Yard management is generally associated with cross-docking operations and may include the management of both inbound and outbound trailers.

Labor Tracking/Capacity Planning. Some WMS systems provide functionality related to labor reporting and capacity planning. Anyone who has worked in manufacturing should be familiar with this type of logic. Basically, you set up standard labor hours and machine (usually lift truck) hours per task and set the available labor and machine hours per shift. The WMS system will use this info to determine capacity and load. Manufacturing has been using capacity planning for decades with mixed results. The need to factor in efficiency and utilization to determine rated capacity is an example of the shortcomings of this process. Not that I'm necessarily against capacity planning in warehousing; I just think most operations don't really need it and can avoid the disappointment of trying to make it work. I am, however, a big advocate of labor tracking for individual productivity measurement. Most WMS maintain enough data to create productivity reporting. Since productivity is measured differently from one operation to another, you can assume you will have to do some minor modifications here (usually in the form of custom reporting).

Integration with existing accounting/ERP systems. Unless the WMS vendor has already created a specific interface with your accounting/ERP system (such as those provided by an approved business partner), you can expect to spend some significant programming dollars here. While we are all hoping that integration issues will be magically resolved someday by a standardized interface, we aren't there yet. Ideally you'll want an integrator that has already integrated the WMS you chose with the business software you are using. Since this is not always possible, you at least want an integrator that is very familiar with one of the systems.

WMS + everything else = ?
As I mentioned at the beginning of this article, a lot of other modules are being added to WMS packages. These would include full financials, light manufacturing, transportation management, purchasing, and sales order management. I don't see this as a unilateral move of WMS from an add-on module to a core system, but rather an optional approach that has applications in specific industries such as 3PLs. Using ERP systems as a point of reference, it is unlikely that this add-on functionality will match the functionality of best-of-breed applications available separately. If warehousing/distribution is your core business function and you don't want to have to deal with the integration issues of incorporating separate financials, order processing, etc., you may find these WMS-based business systems are a good fit.

Implementation Tips

Outside of the standard "don't underestimate", "thoroughly test", "train, train, train" implementation tips that apply to any business software installation, it's important to emphasize that WMS are very data dependent and restrictive by design. That is, you need to have all of the various data elements in place for the system to function properly. And, when they are in place, you must operate within the set parameters. When implementing a WMS, you are adding an additional layer of technology onto your system. And with each layer of technology there is additional overhead and additional sources of potential problems. Now don't take this as a condemnation of Warehouse Management Systems. Coming from a warehousing background, I definitely appreciate the functionality WMS have to offer, and, in many warehouses, this functionality is essential to their ability to serve their customers and remain competitive. It's just important to note that every solution has its downsides, and having a good understanding of the potential implications will allow managers to make better decisions related to the levels of technology that best suit their unique environment.
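As promised above, here is a minimal sketch of pick-to-clear logic nested within first-in-first-out logic. It is illustrative only: the Location record and its fields are invented for this example rather than taken from any WMS product, and a real system would also handle units of measure, allocations, and partial picks.

```java
import java.time.LocalDate;
import java.util.Comparator;
import java.util.List;

/** Illustrative directed-picking sort: FIFO first, pick-to-clear as tiebreaker. */
public class DirectedPicking {
    // Hypothetical location record: receipt date drives FIFO,
    // on-hand quantity drives pick-to-clear.
    record Location(String id, LocalDate receiptDate, int quantityOnHand) {}

    static List<Location> pickOrder(List<Location> candidates) {
        return candidates.stream()
                .sorted(Comparator
                        .comparing(Location::receiptDate)            // oldest stock first (FIFO)
                        .thenComparingInt(Location::quantityOnHand)) // then smallest qty (pick-to-clear)
                .toList();
    }

    public static void main(String[] args) {
        List<Location> order = pickOrder(List.of(
                new Location("A-01", LocalDate.of(2023, 3, 1), 40),
                new Location("A-02", LocalDate.of(2023, 3, 1), 12),  // same date, fewer on hand
                new Location("B-07", LocalDate.of(2023, 2, 15), 100)));
        order.forEach(loc -> System.out.println(loc.id())); // prints B-07, A-02, A-01
    }
}
```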
Computer / Automation: Foreign Literature Translation, Original English Text
The Application of Visualization Technology in Electric Power Automation Systems

Wang Chuanqi, Zou Quanxi
Electric Power Automation System Department of Yantai Dongfang Electronics Information Industry Co., Ltd.

Abstract: The isoline chart is a widely used type of chart. The authors have improved the existing isoline formation method, proposed a simple and practical isoline formation method, studied how to fill the isoline chart, brought about a feasible method of filling the isoline chart, and discussed the application of isoline charts in electric power automation systems.

Key words: Visualization; Isoline; Electric power automation system

In the electric power industry, the dispatching of the electric network becomes increasingly important along with the expansion of the electric power system and people's increasing demands for electric power. At present, electric network dispatching automation systems are relatively advanced and relieve operation staff of boring and heavy work. However, there is a large amount, even oceans, of information. Especially when there is a fault, a large amount of alarm information and fault information floods into the dispatching center. Faced with massive data, operation staff must rely on some simple and effective tool to quickly locate the parts of interest in order to grasp the operating state of the system as soon as possible and to predict, identify and remove faults. Meanwhile, the operation of the electric power system needs engineers and analysts to analyze a lot of data. The main challenge that a system with thousands of buses poses for an electric power automation system is that it needs to supply a lot of data to users in a proper way and let users grasp and estimate the state of the system intuitively and quickly. This is the case especially in electric network analysis software. For example, the way data is displayed is all the more important in analyzing the relations between the actual power flow and planned power flow of the electric network and the transmission capacity of the system. The application of new computer technology and visualization technology in electric power automation systems can greatly satisfy the new developments and new demands of such systems.

The word "Visualization" originates from the English "Visual", whose original meaning is visual and vivid. In fact, the transformation of any abstract things and processes into graphs and images can be regarded as visualization. But as a subject term, the word "Visualization" officially appeared at a seminar held by the National Science Foundation (NSF) of the USA in February 1987. The official report published after the seminar defined visualization, the fields it covers and its near- and long-term research directions, which symbolized that "Visualization" had matured as a subject at the international level. The basic implication of visualization is to apply the principles and methods of computer graphics and general graphics to transforming the large amounts of data produced by scientific and engineering computation into graphs and images and displaying them in a visual way. It draws on many research fields such as computer graphics, image processing, computer vision, computer-aided design (CAD) and graphical user interfaces (GUI), and has become an important direction in the current research of computer graphics. There are many methods to realize visualization, and each method has its unique features and applies to different occasions.
Isolines and isosurfaces are an important method in visualization and can be applied on many occasions. The realization of isolines (isosurfaces) and their application in electric power automation systems is explained below in detail.

1. Isoline (Isosurface)

An isoline is defined by all points (xi, yi) for which F(xi, yi) = Fi (a set value); these points, connected in a certain order, form the isoline of F(x, y) whose value is Fi. Common isolines, such as contour lines and isotherms, are based on measurements of height, temperature and so on.

Regular isoline drawing usually adopts the grid method, whose steps are as follows: gridding the discrete data; assigning values to the grid points; calculating isoline points; tracing the isoline; smoothing and marking the isoline; displaying the isoline or filling the isoline chart. Recently, some authors have proposed introducing a triangular grid to solve the problems of the quadrilateral grid. What the two methods have in common is the use of a grid and of isoline points on the grid for traveling tracing, which results in the following defects in the drawing process:

(1) The two methods use the grid structure, first find the isoline points on each side of a given quadrilateral or triangular grid cell, and then continue to find isoline points in all the cells, which involves a lot of case analysis and increases the difficulty of program realization. When grid nodes themselves are isoline points, they must be treated as singular nodes, which not only reduces the accuracy of the graph but also increases the complexity of drawing.

(2) The two methods produce drawn graphs with inadequate accuracy, and intersections may appear during traveling tracing. The above methods deal with off-grid points using some curve-fitting method; that is, they make two approximations and so produce a larger tolerance.

(3) The methods are not universal: they can only deal with data in a grid structure. If other data is transformed into the grid structure, interpolation is needed in the process, which will inevitably reduce the accuracy of the graphs.

To solve these problems, we adopt the method of the raster graph in drawing isolines when realizing the system function; it is referred to here as the non-grid method. This method needs no grid structure and has the following advantages compared with regular methods:

(1) Simple programming, easily realized, with no singular nodes involved and no traveling tracing of the isoline. All these advantages greatly reduce the complexity of the program design.

(2) Higher accuracy. It needs one approximation, while regular methods need two or more.

(3) More universal, with no grid limits.

1.1 Isoline Formation Method for the Raster Graph

The drawing of a raster graph has the following features: the area in which the isoline is drawn is limited and is composed of non-continuous points. In fact, a raster graph is limited by the computer screen, and what people see is just a chart formed by thousands or tens of thousands of discrete picture elements. For example, a straight line has limited length on a computer and is displayed with many discrete points. Due to the limitations of human eyes, it seems continuous. Based on the above features, this paper proposes an isoline formation method for the raster graph.
The basic idea of this method is: as computer graphs are composed of discrete points, one just needs to find all the picture element points on the same isoline, and these points will necessarily form the isoline.

Take the isoline of a rectangular mountain area as an example to discuss the detailed calculation method. The data required in the calculation are the coordinates and altitude of each measuring point, i.e., (xi, yi, zi), where zi represents the altitude of the i-th measuring point and there are M measuring points in total. Meanwhile, the heights of the isolines to be drawn are given: starting from h0, an isoline is drawn at every height difference ∆h0, and m isolines are drawn in total. Besides, the size of the screen area to be displayed is known; here (StartX, StartY) represents the top left corner of this area while (EndX, EndY) represents the lower right corner. The calculation method for drawing the isolines is as follows:

(1) Find the extreme values of xi and yi over the drawing area, denoted Xmax, Xmin, Ymax, Ymin;

(2) Transform each coordinate (xi, yi) into a screen coordinate (sxi, syi) using the transformation formulas:

sxi = (xi − Xmin) / (Xmax − Xmin) × (EndX − StartX)
syi = (yi − Ymin) / (Ymax − Ymin) × (EndY − StartY)

[Fig. 1: Height computation sketch]

(3) Suppose i = StartX, j = StartY;

(4) Use a height-estimation method (such as distance weighting or least squares) to calculate the heights h1, h2, h3 of the points (i, j), (i+1, j) and (i, j+1), i.e., the heights of the three points P1, P2 and P3 in Fig. 1;

(5) Check the values of h1, h2, h3 and determine whether an isoline crosses, as follows:

① Let k = 1, h = h0;
② Judge whether (P1 − h) * (P2 − h) ≤ 0 holds. If it holds, continue to the next step; otherwise, perform ⑤;
③ Judge whether |P1 − h| = |P2 − h| holds. If it holds, an isoline crosses P1 and P2; dot the two points and jump to (6); otherwise, continue to the next step;
④ Judge whether |P1 − h| < |P2 − h| holds. If it holds, the isoline crosses P1; dot this point; otherwise, dot P2;
⑤ Judge whether (P1 − h) * (P3 − h) ≤ 0 holds. If it holds, continue to the next step; otherwise, perform ⑧;
⑥ Judge whether |P1 − h| = |P3 − h| holds. If it holds, dot the two points P1, P3 and jump to (6); otherwise, continue to the next step;
⑦ Judge whether |P1 − h| < |P3 − h| holds. If it holds, dot P1; otherwise, dot P3;
⑧ Let k := k + 1 and judge whether k < m + 1 holds. If it does not hold, continue to the next step; otherwise, let h := h + ∆h0 and return to ②.

(6) Let j := j + 1 and judge whether j < EndY holds. If it does not hold, continue to the next step; otherwise, return to (4);

(7) Let i := i + 1 and judge whether i < EndX holds. If it does not hold, continue to the next step; otherwise, return to (4);

(8) The end.

In the actual program design, in order to avoid repeated calculation, an array can be used to keep all the values of P2 in column i+1, and another variable can be used to keep the value of P3. From the above calculation method, it can be seen that this method does not involve the traveling tracing of isolines, the judgment of grid singular nodes, the connection of isolines, etc., which greatly simplifies the programming and is easily realized, producing no intersecting lines in the drawn chart.

1.2 Gridding and Determining Nodes

The time consumption of a calculation method is of great concern.
When calculating the height of (i, j), all the points contributing to the height of this point need to be found. Searching through the whole array is very time-consuming, so the following regularized grid method is introduced to accelerate the search.

First, two concepts, influence domain and influence point set, are defined as follows:

Definition 1: The influence domain O(P) of a node P is the largest area in which this node has some influence on other nodes. In this paper, it refers to either the closed disc of radius r (predetermined) or the square of side length a (predetermined).

Definition 2: The influence point set S(P) of a node P is the collection of all the nodes which can influence node P. In this paper, it refers to the point set whose number of elements is n (predetermined); i.e., the number of known nodes contributing to the height of node (i, j) is limited to n, and these are generally the n nodes closest to node P.

According to the above definitions, in order to calculate the height of any node (i, j), one just needs to find all the nodes influencing the height of this node and then apply an interpolation method based on two-dimensional surface fitting. Here we explain in detail how to calculate the height of node (i, j) with Definition 1, i.e., the influence-domain method; the calculation with Definition 2 is similar.

A grid structure is used to determine the other nodes in the influence domain of node (i, j). The irregular area is covered with a regular grid, in which the cells have the same size and the sides of the cells are parallel with the X axis and Y axis. The grid is described as follows:

(xmin, xmax, NCX)
(ymin, ymax, NCY)

In these expressions, xmin, ymin and xmax, ymax are respectively the minimum and maximum coordinates in the x and y directions of the area; NCX is the number of cells in the X direction; NCY is the number of cells in the Y direction.

Determining which cell a node belongs to is performed in two steps. Suppose the coordinates of this node are (x, y). Its cell numbers in the x direction and y direction are calculated with the formulas:

IX = NCX * (x − xmin) / (xmax − xmin) + 1
IY = NCY * (y − ymin) / (ymax − ymin) + 1
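A compact sketch of the non-grid (raster) method described above is given below. It is an illustration only, not the authors' implementation: the inverse-distance height estimate, the screen size, the isoline levels and the sample data are all assumptions, and a real program would also buffer column i+1 as the paper suggests.

```java
/**
 * Raster ("non-grid") isoline sketch: scan every screen pixel, estimate the
 * height at the pixel and its right/lower neighbors, and mark the pixel
 * closer to each isoline level whenever the level crosses between them.
 */
public class RasterIsoline {
    // Sample measuring points (xi, yi, zi) in screen coordinates; invented data.
    static final double[][] POINTS = { {10, 10, 100}, {60, 20, 180}, {30, 70, 140} };

    // Inverse-distance-weighted height estimate at pixel (x, y).
    static double height(int x, int y) {
        double num = 0, den = 0;
        for (double[] p : POINTS) {
            double d2 = (x - p[0]) * (x - p[0]) + (y - p[1]) * (y - p[1]);
            if (d2 == 0) return p[2];          // pixel coincides with a sample point
            num += p[2] / d2;
            den += 1.0 / d2;
        }
        return num / den;
    }

    public static void main(String[] args) {
        int startX = 0, startY = 0, endX = 80, endY = 80;
        double h0 = 110, dh = 20;              // m isoline levels: h0, h0+dh, ...
        int m = 4;
        boolean[][] dotted = new boolean[endX][endY];

        for (int i = startX; i < endX - 1; i++) {
            for (int j = startY; j < endY - 1; j++) {
                double h1 = height(i, j), h2 = height(i + 1, j), h3 = height(i, j + 1);
                for (int k = 0; k < m; k++) {
                    double h = h0 + k * dh;
                    if ((h1 - h) * (h2 - h) <= 0)      // level crosses P1-P2
                        dotted[Math.abs(h1 - h) <= Math.abs(h2 - h) ? i : i + 1][j] = true;
                    else if ((h1 - h) * (h3 - h) <= 0) // level crosses P1-P3
                        dotted[i][Math.abs(h1 - h) <= Math.abs(h3 - h) ? j : j + 1] = true;
                }
            }
        }
        // Count the dotted pixels as a stand-in for actually rendering them.
        int count = 0;
        for (boolean[] col : dotted) for (boolean b : col) if (b) count++;
        System.out.println(count + " isoline pixels dotted");
    }
}
```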
Computer Science Graduation Thesis: Foreign Literature Translation (Chinese-English), Java Objects
1. Introduction to Objects

1.1 The progress of abstraction

All programming languages provide abstractions. It can be argued that the complexity of the problems you're able to solve is directly related to the kind and quality of abstraction. By "kind" I mean, "What is it that you are abstracting?" Assembly language is a small abstraction of the underlying machine. Many so-called "imperative" languages that followed (such as FORTRAN, BASIC, and C) were abstractions of assembly language. These languages are big improvements over assembly language, but their primary abstraction still requires you to think in terms of the structure of the computer rather than the structure of the problem you are trying to solve. The programmer must establish the association between the machine model (in the "solution space," which is the place where you're modeling that problem, such as a computer) and the model of the problem that is actually being solved (in the "problem space," which is the place where the problem exists). The effort required to perform this mapping, and the fact that it is extrinsic to the programming language, produces programs that are difficult to write and expensive to maintain, and as a side effect created the entire "programming methods" industry. The alternative to modeling the machine is to model the problem you're trying to solve.
Computer Network Technology: Foreign Literature Translation (Chinese-English Comparison)
Chinese-English Material, Foreign Translation: Website Construction Technology

1. Introduction

The development of network technology has provided more channels and possibilities for today's global information exchange, resource sharing and communication. Without leaving home, one can learn about events all over the world; with a few keystrokes or mouse clicks, one can communicate with friends thousands of miles away. Online communication, online browsing, online interaction and online e-commerce have become part of modern life. The Internet era has created new ways of working and living; its interconnectivity, openness and information-sharing model have broken down the many barriers of traditional information dissemination and brought people new opportunities. With the arrival of the computer and information age, the pace of human society's progress is steadily accelerating. In recent years, web design has developed almost faster than the eye can follow. As web design technology has advanced, rich and colorful web pages have become a striking feature of the online landscape. To design attractive and practical web pages, one must master website construction technology in depth. In building our website, we analyzed its purpose, content, functions and structure, and applied a wide range of web design techniques.

2. Definition of a Website

2.1 How to Define a Website

Determining the website's mission and goals is the most important problem faced in building a website. Why would people come to your website? Do you offer a unique service? What brings people to your website the first time? Will they come back? These are all questions that must be considered when defining a website. To define a website, one must first have a clear understanding of the entire site: what exactly is to be designed, what its main purpose and tasks are, and how those tasks are to be organized and planned. Second, the site's high quality must be maintained. Amid the fierce competition among countless websites, a high-quality product is the greatest long-term competitive advantage. An excellent website should: (1) load quickly for visitors; (2) pay attention to feedback and updates, updating site content promptly and responding to user requests in time; (3) have a well-designed home page. The first impression the home page leaves on visitors is very important, so its design must be polished to produce a good visual effect.

2.2 Website Content and Functions

Regarding content, a website should strive to be new, fast and comprehensive. Content types include static, dynamic, functional and transactional. A website's content is determined by its nature: government websites, commercial websites, popular-science websites, corporate-profile websites, educational-exchange websites and so on each differ in content and style. The website we built differs in nature from all of these types.
Research on Information Technology Development Trends: Foreign Literature Translation (Chinese-English)
This paper introduces, through translation, several foreign-language publications on information technology development trends, to help readers gain a more comprehensive and in-depth understanding of research progress in the field. Brief introductions to several relevant publications follow:

1. Title: "Emerging Trends in Information Technology". Author: John Smith. Year of publication: 2019. This paper surveys emerging trends in the information technology field, including artificial intelligence, big data, cloud computing and the Internet of Things. Through the analysis of relevant cases, the researchers draw conclusions about these trends and discuss their potential impact on businesses and society.

2. Title: "Cybersecurity Challenges in the Digital Age". Author: Anna Johnson. Year of publication: 2020. This publication examines the cybersecurity challenges facing the information technology field in the digital age. By analyzing increasingly sophisticated network threats and attack methods, the researchers propose countermeasures and discuss how organizations and individuals can strengthen their cybersecurity defenses.

3. Title: "The Impact of Artificial Intelligence on Job Market". Author: Sarah Thompson. Year of publication: 2018. This publication studies the impact of artificial intelligence on the job market. By analyzing industry data and related research, the author discusses the potential effects of automation and intelligent technologies on various industries and positions, and offers suggestions for adapting to future changes in the job market.

The above briefly introduces several foreign publications covering different aspects of information technology development trends. Readers may consult them further as needed for deeper understanding and research.
Computer Foreign Literature Translation: Core Java
Undergraduate Thesis Foreign Literature and Translation. Title: Core Java™ Volume II – Advanced Features. Source: monograph. Publication date: 2008-12-01. School: College of Computer Science and Technology. Major: Network Engineering.

Foreign text: Core Java™ Volume II – Advanced Features

When Java technology first appeared on the scene, the excitement was not about a well-crafted programming language but about the possibility of safely executing applets that are delivered over the Internet (see Volume I, Chapter 10 for more information about applets). Obviously, delivering executable applets is practical only when the recipients are sure that the code can't wreak havoc on their machines. For this reason, security was and is a major concern of both the designers and the users of Java technology. This means that unlike other languages and systems, where security was implemented as an afterthought or a reaction to break-ins, security mechanisms are an integral part of Java technology.

Three mechanisms help ensure safety:

• Language design features (bounds checking on arrays, no unchecked type conversions, no pointer arithmetic, and so on).
• An access control mechanism that controls what the code can do (such as file access, network access, and so on).
• Code signing, whereby code authors can use standard cryptographic algorithms to authenticate Java code. Then, the users of the code can determine exactly who created the code and whether the code has been altered after it was signed.

Below, you'll see the cryptographic algorithms supplied in the java.security package, which allow for code signing and user authentication.

As we said earlier, applets were what started the craze over the Java platform. In practice, people discovered that although they could write animated applets like the famous "nervous text" applet, applets could not do a whole lot of useful stuff in the JDK 1.0 security model. For example, because applets under JDK 1.0 were so closely supervised, they couldn't do much good on a corporate intranet, even though relatively little risk attaches to executing an applet from your company's secure intranet. It quickly became clear to Sun that for applets to become truly useful, it was important for users to be able to assign different levels of security, depending on where the applet originated. If an applet comes from a trusted supplier and it has not been tampered with, the user of that applet can then decide whether to give the applet more privileges. To give more trust to an applet, we need to know two things:

• Where did the applet come from?
• Was the code corrupted in transit?

In the past 50 years, mathematicians and computer scientists have developed sophisticated algorithms for ensuring the integrity of data and for electronic signatures. The java.security package contains implementations of many of these algorithms. Fortunately, you don't need to understand the underlying mathematics to use the algorithms in the java.security package. In the next sections, we show you how message digests can detect changes in data files and how digital signatures can prove the identity of the signer.

A message digest is a digital fingerprint of a block of data. For example, the so-called SHA1 (secure hash algorithm #1) condenses any data block, no matter how long, into a sequence of 160 bits (20 bytes). As with real fingerprints, one hopes that no two messages have the same SHA1 fingerprint. Of course, that cannot be true: there are only 2^160 SHA1 fingerprints, so there must be some messages with the same fingerprint. But 2^160 is so large that the probability of duplication occurring is negligible. How negligible?
According to James Walsh in True Odds: How Risks Affect Your Everyday Life (Merritt Publishing 1996), the chance that you will die from being struck by lightning is about one in 30,000. Now, think of nine other people, for example, your nine least favorite managers or professors. The chance that you and all of them will die from lightning strikes is higher than that of a forged message having the same SHA1 fingerprint as the original. (Of course, more than ten people, none of whom you are likely to know, will die from lightning strikes. However, we are talking about the far slimmer chance that your particular choice of people will be wiped out.)

A message digest has two essential properties:

• If one bit or several bits of the data are changed, then the message digest also changes.
• A forger who is in possession of a given message cannot construct a fake message that has the same message digest as the original.

The second property is again a matter of probabilities, of course. Consider the following message by the billionaire father:

"Upon my death, my property shall be divided equally among my children; however, my son George shall receive nothing."

That message has an SHA1 fingerprint of

2D 8B 35 F3 BF 49 CD B1 94 04 E0 66 21 2B 5E 57 70 49 E1 7E

The distrustful father has deposited the message with one attorney and the fingerprint with another. Now, suppose George can bribe the lawyer holding the message. He wants to change the message so that Bill gets nothing. Of course, that changes the fingerprint to a completely different bit pattern:

2A 33 0B 4B B3 FE CC 1C 9D 5C 01 A7 09 51 0B 49 AC 8F 98 92

Can George find some other wording that matches the fingerprint? If he had been the proud owner of a billion computers from the time the Earth was formed, each computing a million messages a second, he would not yet have found a message he could substitute.

A number of algorithms have been designed to compute these message digests. The two best-known are SHA1, the secure hash algorithm developed by the National Institute of Standards and Technology, and MD5, an algorithm invented by Ronald Rivest of MIT. Both algorithms scramble the bits of a message in ingenious ways. For details about these algorithms, see, for example, Cryptography and Network Security, 4th ed., by William Stallings (Prentice Hall 2005). Note that recently, subtle regularities have been discovered in both algorithms. At this point, most cryptographers recommend avoiding MD5 and using SHA1 until a stronger alternative becomes available. (See /rsalabs/node.asp?id=2834 for more information.)

The Java programming language implements both SHA1 and MD5. The MessageDigest class is a factory for creating objects that encapsulate the fingerprinting algorithms. It has a static method, called getInstance, that returns an object of a class that extends the MessageDigest class. This means the MessageDigest class serves double duty:

• As a factory class
• As the superclass for all message digest algorithms

For example, here is how you obtain an object that can compute SHA fingerprints:

MessageDigest alg = MessageDigest.getInstance("SHA-1");

(To get an object that can compute MD5, use the string "MD5" as the argument to getInstance.) After you have obtained a MessageDigest object, you feed it all the bytes in the message by repeatedly calling the update method. For example, the following code passes all bytes in a file to the alg object just created to do the fingerprinting:

InputStream in = . . .
int ch;
while ((ch = in.read()) != -1)
    alg.update((byte) ch);

Alternatively, if you have the bytes in an array, you can update the entire array at once:

byte[] bytes = . . .;
alg.update(bytes);

When you are done, call the digest method. This method pads the input, as required by the fingerprinting algorithm, does the computation, and returns the digest as an array of bytes.

byte[] hash = alg.digest();

The program in Listing 9-15 computes a message digest, using either SHA or MD5. You can load the data to be digested from a file, or you can type a message in the text area.

Message Signing

In the last section, you saw how to compute a message digest, a fingerprint for the original message. If the message is altered, then the fingerprint of the altered message will not match the fingerprint of the original. If the message and its fingerprint are delivered separately, then the recipient can check whether the message has been tampered with. However, if both the message and the fingerprint were intercepted, it is an easy matter to modify the message and then recompute the fingerprint. After all, the message digest algorithms are publicly known, and they don't require secret keys. In that case, the recipient of the forged message and the recomputed fingerprint would never know that the message has been altered. Digital signatures solve this problem.

To help you understand how digital signatures work, we explain a few concepts from the field called public key cryptography. Public key cryptography is based on the notion of a public key and private key. The idea is that you tell everyone in the world your public key. However, only you hold the private key, and it is important that you safeguard it and don't release it to anyone else. The keys are matched by mathematical relationships, but the exact nature of these relationships is not important for us. (If you are interested, you can look it up in The Handbook of Applied Cryptography at http://www.cacr.math.uwaterloo.ca/hac/.)

The keys are quite long and complex. For example, here is a matching pair of public and private Digital Signature Algorithm (DSA) keys.

Public key:

p: fca682ce8e12caba26efccf7110e526db078b05edecbcd1eb4a208f3ae1617ae01f35b91a47e6df63413c5e12ed0899bcd132acd50d99151bdc43ee737592e17
q: 962eddcc369cba8ebb260ee6b6a126d9346e38c5
g: 678471b27a9cf44ee91a49c5147db1a9aaf244f05a434d6486931d2d14271b9e35030b71fd73da179069b32e2935630e1c2062354d0da20a6c416e50be794ca4
y: c0b6e67b4ac098eb1a32c5f8c4c1f0e7e6fb9d832532e27d0bdab9ca2d2a8123ce5a8018b8161a760480fadd040b927281ddb22cb9bc4df596d7de4d1b977d50

Private key:

p: fca682ce8e12caba26efccf7110e526db078b05edecbcd1eb4a208f3ae1617ae01f35b91a47e6df63413c5e12ed0899bcd132acd50d99151bdc43ee737592e17
q: 962eddcc369cba8ebb260ee6b6a126d9346e38c5
g: 678471b27a9cf44ee91a49c5147db1a9aaf244f05a434d6486931d2d14271b9e35030b71fd73da179069b32e2935630e1c2062354d0da20a6c416e50be794ca4
x: 146c09f881656cc6c51f27ea6c3a91b85ed1d70a

It is believed to be practically impossible to compute one key from the other. That is, even though everyone knows your public key, they can't compute your private key in your lifetime, no matter how many computing resources they have available. It might seem difficult to believe that nobody can compute the private key from the public key, but nobody has ever found an algorithm to do this for the encryption algorithms that are in common use today.
If the keys are sufficiently long, brute force—simply trying all possible keys—would require more computers than can be built from all the atoms in the solar system, crunching away for thousands of years. Of course, it is possible that someone could come up with algorithms for computing keys that are much more clever than brute force. For example, the RSA algorithm (the encryption algorithm invented by Rivest, Shamir, and Adleman) depends on the difficulty of factoring large numbers. For the last 20 years, many of the best mathematicians have tried to come up with good factoring algorithms, but so far with no success. For that reason, most cryptographers believe that keys with a "modulus" of 2,000 bits or more are currently completely safe from any attack. DSA is believed to be similarly secure.

Figure 9-12 illustrates how the process works in practice. Suppose Alice wants to send Bob a message, and Bob wants to know this message came from Alice and not an impostor. Alice writes the message and then signs the message digest with her private key. Bob gets a copy of her public key. Bob then applies the public key to verify the signature. If the verification passes, then Bob can be assured of two facts:

• The original message has not been altered.
• The message was signed by Alice, the holder of the private key that matches the public key that Bob used for verification.

You can see why security for private keys is all-important. If someone steals Alice's private key or if a government can require her to turn it over, then she is in trouble. The thief or a government agent can impersonate her by sending messages, money transfer instructions, and so on, that others will believe came from Alice.

The X.509 Certificate Format

To take advantage of public key cryptography, the public keys must be distributed. One of the most common distribution formats is called X.509. Certificates in the X.509 format are widely used by VeriSign, Microsoft, Netscape, and many other companies, for signing e-mail messages, authenticating program code, and certifying many other kinds of data. The X.509 standard is part of the X.500 series of recommendations for a directory service by the international telephone standards body, the CCITT.

The precise structure of X.509 certificates is described in a formal notation, called "abstract syntax notation #1" or ASN.1. Figure 9-13 shows the ASN.1 definition of version 3 of the X.509 format. The exact syntax is not important for us, but, as you can see, ASN.1 gives a precise definition of the structure of a certificate file. The basic encoding rules, or BER, and a variation, called distinguished encoding rules (DER), describe precisely how to save this structure in a binary file. That is, BER and DER describe how to encode integers, character strings, bit strings, and constructs such as SEQUENCE, CHOICE, and OPTIONAL.

Chinese translation (excerpt): Core Java Volume II: Advanced Features. When Java technology first appeared, the excitement was not that it was a perfectly designed programming language, but that it could safely run the various applets distributed over the Internet.
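Before moving on, here is a self-contained sketch that ties together the digest and signature APIs quoted in this excerpt. It is a hedged illustration, not the book's Listing 9-15: the message is retyped here (so its fingerprint will match the book's value only if the bytes are identical), and the freshly generated 1024-bit DSA key pair is an arbitrary choice standing in for Alice's long-lived keys.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.Signature;

public class DigestAndSignDemo {
    public static void main(String[] args) throws Exception {
        byte[] message = ("Upon my death, my property shall be divided "
                + "equally among my children; however, my son George "
                + "shall receive nothing.").getBytes(StandardCharsets.UTF_8);

        // Fingerprint the message with SHA-1, as described above.
        MessageDigest alg = MessageDigest.getInstance("SHA-1");
        alg.update(message);
        byte[] hash = alg.digest();
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) hex.append(String.format("%02X ", b));
        System.out.println("SHA1 fingerprint: " + hex.toString().trim());

        // Alice signs with her private key; Bob verifies with her public key.
        // A fresh key pair is generated here only to keep the demo self-contained.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("DSA");
        gen.initialize(1024);
        KeyPair alice = gen.generateKeyPair();

        Signature signer = Signature.getInstance("SHA1withDSA");
        signer.initSign(alice.getPrivate());
        signer.update(message);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA1withDSA");
        verifier.initVerify(alice.getPublic());
        verifier.update(message);
        System.out.println("Signature verifies: " + verifier.verify(sig));
    }
}

SHA-1 and DSA match the book-era algorithms discussed here; given the caveats above about discovered regularities, modern code would choose stronger parameters (for example, a SHA-256-based signature algorithm).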
The Linux Enterprise Cluster — Computer Science Foreign Literature Translation (Chinese-English)
Excerpted from The Linux Enterprise Cluster. English original:

The Linux Enterprise Cluster

Overview

This chapter will introduce the cluster load-balancing software called IP Virtual Server (IPVS). The IPVS software is a collection of kernel patches that were merged into the stock version of the Linux kernel starting with version 2.4.23. When combined with the kernel's routing and packet-filtering capabilities (discussed in Chapter 2), the IPVS-enabled kernel lets you turn any computer running Linux into a cluster load balancer. Together, the IPVS-enabled cluster load balancer and the cluster nodes are called a Linux Virtual Server (LVS).

The LVS cluster load balancer accepts all incoming client computer requests for services and decides which cluster node should reply to each request. The load balancer is sometimes called an LVS Director or simply a Director. In this book the terms LVS Director, Director, and load balancer all refer to the same thing.

The nodes inside an LVS cluster are called real servers, and the computers that connect to the cluster to request its services are called client computers. The client computers, the Director, and the real servers communicate with each other using IP addresses the same way computers have always exchanged packets over a network; however, to make it easier to discuss this network communication, the LVS community has developed a naming convention to describe each type of IP address based on its role in the network conversation. So before we consider the different types of LVS clusters and the choices you have for distributing your workload across the cluster nodes (called scheduling methods), let's look at this naming convention and see how it helps describe the LVS cluster.

LVS IP Address Name Conventions

In an LVS cluster, we cannot refer to network addresses as simply "IP addresses." Instead, we must distinguish between different types of IP addresses based on the roles of the nodes inside the cluster. Here are the four basic types of IP addresses used in a cluster:

Virtual IP (VIP) address: The IP address the Director uses to offer services to client computers
Real IP (RIP) address: The IP address used on the cluster nodes
Director's IP (DIP) address: The IP address the Director uses to connect to the D/RIP network
Client computer's IP (CIP) address: The IP address assigned to a client computer, which it uses as the source IP address for requests sent to the cluster

The Virtual IP (VIP)

The IP addresses that client computers use to connect to the services offered by the cluster are called virtual IP addresses (VIPs). VIPs are IP aliases or secondary IP addresses on the NIC that connects the Director to the normal, public network.[1] The LVS VIP is important because it is the address that client computers will use when they connect to the cluster. Client computers send packets from their IP address to the VIP address to access cluster services. You tell the client computers the VIP address using a naming service (such as DNS, DDNS, WINS, LDAP, or NIS), and this is the only name or address that client computers ever need to know in order to use the services inside the cluster. (The remaining IP addresses inside the cluster are not known to the client computer.)

A single Director can have multiple VIPs offering different services to client computers, and the VIPs can be public IP addresses that can be routed on the Internet, though this is not required. What is required, however, is that the client computers be able to access the VIP or VIPs of the cluster.
(As we'll see later, an LVS-NAT cluster can use a private intranet IP address for the nodes inside the cluster, even though the VIP on the Director is a public Internet IP address.)

The Real IP (RIP)

In LVS terms, a node offering services to the outside world is called a real server. (We will use the terms cluster node and real server interchangeably throughout this book.) The IP address used on the real server is therefore called a real IP address (RIP).

The RIP address is the IP address that is permanently assigned to the NIC that connects the real server to the same network as the Director. We'll call this network the cluster network or the Director/real-server network (D/RIP network). The Director uses the RIP address for normal network communication with the real servers on the D/RIP network, but only the Director needs to know how to talk to this IP address.

The Director's IP (DIP)

The Director's IP (DIP) address is used on the NIC that connects the Director to the D/RIP network. As requests for cluster services are received on the Director's VIP, they are forwarded out the DIP to reach a cluster node. As is discussed in Chapter 15, the DIP and the VIP can be on the same NIC.

The Client Computer's IP (CIP)

The client computer's IP (CIP) address may be a local, private IP address on the same network as the VIP, or it may be a public IP address on the Internet.

Types of LVS Clusters

Now that we've looked at some of the IP address name conventions used to describe LVS clusters, let's examine the LVS packet-forwarding methods. LVS clusters are usually described by the type of forwarding method the LVS Director uses to relay incoming requests to the nodes inside the cluster. Three methods are currently available:

Network address translation (LVS-NAT)
Direct routing (LVS-DR)
IP tunneling (LVS-TUN)

Although more than one forwarding method can be used on a single Director (the forwarding method can be chosen on a per-node basis), I'll simplify this discussion and describe LVS clusters as if the Director is only capable of using one forwarding method at a time.

The best forwarding method to use with a Linux Enterprise Cluster is LVS-DR (and the reasons for this will be explained shortly), but an LVS-NAT cluster is the easiest to build. If you have never built an LVS cluster and want to use one to run your enterprise, you may want to start by building a small LVS-NAT cluster in a lab environment using the instructions in Chapter 12, and then learn how to convert this cluster into an LVS-DR cluster as described in Chapter 13. The LVS-TUN cluster is not generally used for mission-critical applications and is mentioned in this chapter only for the sake of completeness. It will not be described in detail.

Network Address Translation (LVS-NAT)

In an LVS-NAT configuration, the Director uses the Linux kernel's ability (from the kernel's Netfilter code) to translate network IP addresses and ports as packets pass through the kernel. (This is called Network Address Translation (NAT), and it was introduced in Chapter 2.)

Note: We'll examine the LVS-NAT network communication in more detail in Chapter 12.

A request for a cluster service is received by the Director on its VIP, and the Director forwards this request to a cluster node on its RIP. The cluster node then replies to the request by sending the packet back through the Director so the Director can perform the translation that is necessary to convert the cluster node's RIP address into the VIP address that is owned by the Director.
This makes it appear to client computers outside the cluster as if all packets are sent and received from a single IP address (the VIP).

Basic Properties of LVS-NAT

The LVS-NAT forwarding method has several basic properties:

• The cluster nodes need to be on the same network (VLAN or subnet) as the Director.
• The RIP addresses of the cluster nodes normally conform to RFC 1918[2] (that is, they are private, non-routable IP addresses used only for intracluster communication).
• The Director intercepts all communication (network packets going in either direction) between the client computers and the real servers.
• The cluster nodes use the Director's DIP as their default gateway for reply packets to the client computers.
• The Director can remap network port numbers. That is, a request received on the Director's VIP on one port can be sent to a RIP inside the cluster on a different port.
• Any type of operating system can be used on the nodes inside the cluster.
• A single Director can become the bottleneck for the cluster.

At some point, the Director will become a bottleneck for network traffic as the number of nodes in the cluster increases, because all of the reply packets from the cluster nodes must pass through the Director. However, a 400 MHz processor can saturate a 100 Mbps connection, so the network is more likely to become the bottleneck than the LVS Director under normal circumstances.

The LVS-NAT cluster is more difficult to administer than an LVS-DR cluster because the cluster administrator sitting at a computer outside the cluster is blocked from direct access to the cluster nodes, just like all other clients. When attempting to administer the cluster from outside, the administrator must first log on to the Director before being able to telnet or ssh to a specific cluster node. If the cluster is connected to the Internet, and client computers use a web browser to connect to the cluster, having the administrator log on to the Director may be a desirable security feature of the cluster, because an administrative network can be used to allow only internal IP addresses shell access to the cluster nodes. However, in a Linux Enterprise Cluster that is protected behind a firewall, you can more easily administer cluster nodes when you can connect directly to them from outside the cluster. (As we'll see in Part IV of this book, the cluster node manager in an LVS-DR cluster can sit outside the cluster and use the Mon and Ganglia packages to gain diagnostic information about the cluster remotely.)

Direct Routing (LVS-DR)

In an LVS-DR configuration, the Director forwards all incoming requests to the nodes inside the cluster, but the nodes inside the cluster send their replies directly back to the client computers (the replies do not go back through the Director).[3] As shown in Figure 11-3, the request from the client computer or CIP is sent to the Director's VIP. The Director then forwards the request to a cluster node or real server using the same VIP destination IP address (we'll see how the Director does this in Chapter 13). The cluster node then sends a reply packet directly to the client computer, and this reply packet uses the VIP as its source IP address.
The client computer is thus fooled into thinking it is talking to a single computer, when in reality it is sending request packets to one computer and receiving reply packets from another.

[Figure 11-3: LVS-DR network communication]

Basic Properties of LVS-DR

These are the basic properties of a cluster with a Director that uses the LVS-DR forwarding method:

• The cluster nodes must be on the same network segment as the Director.[4]
• The RIP addresses of the cluster nodes do not need to be private IP addresses (which means they do not need to conform to RFC 1918).
• The Director intercepts inbound (but not outbound) communication between the client and the real servers.
• The cluster nodes (normally) do not use the Director as their default gateway for reply packets to the client computers.
• The Director cannot remap network port numbers.
• Most operating systems can be used on the real servers inside the cluster.[5]
• An LVS-DR Director can handle more real servers than an LVS-NAT Director.

Although the LVS-DR Director can't remap network port numbers the way an LVS-NAT Director can, and only certain operating systems can be used on the real servers when LVS-DR is used as the forwarding method,[6] LVS-DR is the best forwarding method to use in a Linux Enterprise Cluster because it allows you to build cluster nodes that can be directly accessed from outside the cluster. Although this may represent a security concern in some environments (a concern that can be addressed with a proper VLAN configuration), it provides additional benefits that can improve the reliability of the cluster and that may not be obvious at first:

• If the Director fails, the cluster nodes become distributed servers, each with its own IP address. (Client computers on the internal network, in other words, can connect directly to the LVS-DR cluster nodes using their RIP addresses.) You would then tell users which cluster-node RIP address to use, or you could employ a simple round-robin DNS configuration to hand out the RIP addresses for each cluster node until the Director is operational again.[7] You are protected, in other words, from a catastrophic failure of the Director and even of the LVS technology itself.[8]
• To test the health and measure the performance of each cluster node, monitoring tools can be used on a cluster node manager that sits outside the cluster (we'll discuss how to do this using the Mon and Ganglia packages in Part IV of this book).
• To quickly diagnose the health of a node, irrespective of the health of the LVS technology or the Director, you can telnet, ping, and ssh directly to any cluster node when a problem occurs.
• When troubleshooting what appear to be software application problems, you can tell end-users[9] how to connect to two different cluster nodes directly by IP (RIP) address. You can then have the end-user perform the same task on each node, and you'll know very quickly whether the problem is with the application program or one of the cluster nodes.

Note: In an LVS-DR cluster, packet filtering or firewall rules can be installed on each cluster node for added security. See the LVS-HOWTO for a discussion of security issues and LVS. In this book we assume that the Linux Enterprise Cluster is protected by a firewall and that only client computers on the trusted network can access the Director and the real servers.

IP Tunneling (LVS-TUN)

IP tunneling can be used to forward packets from one subnet or virtual LAN (VLAN) to another subnet or VLAN even when the packets must pass through another network or the Internet.
Building on the IP tunneling capability that is part of the Linux kernel, the LVS-TUN forwarding method allows you to place cluster nodes on a cluster network that is not on the same network segment as the Director.

Note: We will not use the LVS-TUN forwarding method in any recipes in this book, and it is only included here for the sake of completeness.

The LVS-TUN configuration enhances the capability of the LVS-DR method of packet forwarding by encapsulating inbound requests for cluster services from client computers so that they can be forwarded to cluster nodes that are not on the same physical network segment as the Director. For example, a packet is placed inside another packet so that it can be sent across the Internet (the inner packet becomes the data payload of the outer packet). Any server that knows how to separate these packets, no matter where it is on your intranet or the Internet, can be a node in the cluster, as shown in Figure 11-4.[10]

[Figure 11-4: LVS-TUN network communication]

The arrow connecting the Director and the cluster node in Figure 11-4 shows an encapsulated packet (one stored within another packet) as it passes from the Director to the cluster node. This packet can pass through any network, including the Internet, as it travels from the Director to the cluster node.

Basic Properties of LVS-TUN

An LVS-TUN cluster has the following properties:

• The cluster nodes do not need to be on the same physical network segment as the Director.
• The RIP addresses must not be private IP addresses.
• The Director can normally only intercept inbound communication between the client and the cluster nodes.
• The return packets from the real server to the client must not go through the Director. (The default gateway can't be the DIP; it must be a router or another machine separate from the Director.)
• The Director cannot remap network port numbers.
• Only operating systems that support the IP tunneling protocol[11] can be servers inside the cluster. (See the comments in the configure-lvs script included with the LVS distribution to find out which operating systems are known to support this protocol.)

We won't use the LVS-TUN forwarding method in this book because we want to build a cluster that is reliable enough to run mission-critical applications, and separating the Director from the cluster nodes only increases the potential for a catastrophic failure of the cluster. Although using geographically dispersed cluster nodes might seem like a shortcut to building a disaster recovery data center, such a configuration doesn't improve the reliability of the cluster, because anything that breaks the connection between the Director and the cluster nodes will drop all client connections to the remote cluster nodes. A Linux Enterprise Cluster must be able to share data with all applications running on all cluster nodes (this is the subject of Chapter 16). Geographically dispersed cluster nodes only decrease the speed and reliability of data sharing.

[2] RFC 1918 reserves the following IP address blocks for private intranets:
10.0.0.0 through 10.255.255.255
172.16.0.0 through 172.31.255.255
192.168.0.0 through 192.168.255.255
[3] Without the special LVS "martian" modification kernel patch applied to the Director, the normal LVS-DR Director will simply drop reply packets if they try to go back out through the Director.
[4] The LVS-DR forwarding method requires this for normal operation.
See Chapter 13 for more info on LVS-DR clusters.
[5] The operating system must be capable of configuring the network interface to avoid replying to ARP broadcasts. For more information, see "ARP Broadcasts and the LVS-DR Cluster" in Chapter 13.
[6] The real servers inside an LVS-DR cluster must be able to accept packets destined for the VIP without replying to ARP broadcasts for the VIP (see Chapter 13).
[7] See the "Load Sharing with Heartbeat—Round-Robin DNS" section in Chapter 8 for a discussion of round-robin DNS.
[8] This is unlikely to be a problem in a properly built and properly tested cluster configuration. We'll discuss how to build a highly available Director in Chapter 15.
[9] Assuming the client computer's IP address, the VIP, and the RIP are all private (RFC 1918) IP addresses.
[10] If your cluster needs to communicate over the Internet, you will likely need to encrypt packets before sending them. This can be accomplished with the IPSec protocol (see the FreeS/WAN project for details). Building a cluster that uses IPSec is outside the scope of this book.
[11] Search the Internet for the "Linux 2.4 Advanced Routing HOWTO" for more information about the IP tunneling protocol.

LVS Scheduling Methods

Having discussed three ways to forward packets to the nodes inside the cluster, let's look at how to distribute the workload across the cluster nodes. When the Director receives an incoming request from a client computer to access a cluster service on its VIP, it has to decide which cluster node should get the request. The scheduling methods the Director can use to make this decision fall into two basic categories: fixed scheduling and dynamic scheduling.

Note: When a node needs to be taken out of the cluster for maintenance, you can set its weight to 0 using either a fixed or a dynamic scheduling method. When a cluster node's weight is 0, no new connections will be assigned to it. Maintenance can be performed after all of the users log out normally at the end of the day. We'll discuss cluster maintenance in detail in Chapter 19.

Fixed (or Non-Dynamic) Scheduling Methods

In the case of fixed, or non-dynamic, scheduling methods, the Director selects the cluster node to use for the inbound request without checking to see how many of the previously assigned connections are active. Here is the current list of fixed scheduling methods:

Round-robin (RR): When a new request is received, the Director picks the next server on its list of servers, rotating through them in an endless loop.

Weighted round-robin (WRR): You assign each cluster node a weight or ranking, based on how much processing load it can handle. This weight is then used, along with the round-robin technique, to select the next cluster node to be used when a new request is received, regardless of the number of connections that are still active. A server with a weight of 2 will receive twice the number of new connections as a server with a weight of 1. If you change the weight of a server to 0, no new connections will be allowed to the server (but currently active connections will not be dropped). We'll look at how LVS uses this weight to balance the incoming workload in the "Weighted Least-Connection (WLC)" section of this chapter. (A toy sketch of the WRR selection logic appears after this list.)

Destination hashing: This method always sends requests for the same IP address to the same server in the cluster.
Like the locality-based least-connection (LBLC) scheduling method (which will be discussed shortly), this method is useful when the servers inside the cluster are really cache or proxy servers.

Source hashing: This method can be used when the Director needs to be sure the reply packets are sent back to the same router or firewall that the requests came from. This scheduling method is normally only used when the Director has more than one physical network connection, so that the Director knows which firewall or router to send the reply packet back through to reach the proper client computer.
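The weighted round-robin method described in the list above can be illustrated with a short sketch. This is a toy user-space version of the interleaved selection idea, written in Java purely for illustration; the real IPVS scheduler is kernel C code, and the RIP addresses and weights below are invented:

import java.util.List;

public class WeightedRoundRobinDemo {
    static class Node {
        final String rip; final int weight;
        Node(String rip, int weight) { this.rip = rip; this.weight = weight; }
    }

    private final List<Node> nodes;
    private int i = -1;  // index of the last node handed out
    private int cw = 0;  // current weight threshold

    WeightedRoundRobinDemo(List<Node> nodes) { this.nodes = nodes; }

    private static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

    private int weightGcd() {
        int g = 0;
        for (Node n : nodes) g = gcd(g, n.weight);
        return Math.max(g, 1);
    }

    private int maxWeight() {
        int m = 0;
        for (Node n : nodes) m = Math.max(m, n.weight);
        return m;
    }

    // Picks the next real server; a node whose weight is 0 never gets new connections.
    Node next() {
        while (true) {
            i = (i + 1) % nodes.size();
            if (i == 0) {
                cw -= weightGcd();
                if (cw <= 0) {
                    cw = maxWeight();
                    if (cw == 0) return null;  // every node drained for maintenance
                }
            }
            if (nodes.get(i).weight >= cw) return nodes.get(i);
        }
    }

    public static void main(String[] args) {
        WeightedRoundRobinDemo wrr = new WeightedRoundRobinDemo(List.of(
                new Node("192.168.1.2", 2),   // hypothetical RIPs and weights
                new Node("192.168.1.3", 1)));
        for (int k = 0; k < 6; k++) System.out.println(wrr.next().rip);
        // Prints 192.168.1.2 twice for every 192.168.1.3
    }
}

In this toy run, the weight-2 node is selected twice for every selection of the weight-1 node, and setting every weight to 0 makes next() return null, mirroring how a drained node receives no new connections.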
Information Systems and Information Technology — Chinese-English Foreign Literature Translation
Chinese-English Foreign Literature Translation

Information Systems Outsourcing Life Cycle and Risks Analysis

1. Introduction

Information systems outsourcing has attracted tremendous attention in the information technology industry. Although there are a number of reasons for companies to pursue information systems (IS) outsourcing, the most prominent motivation for IS outsourcing revealed in the literature is "cost saving". Cost has been a major decision factor for IS outsourcing. Beyond cost, there are other reasons for the outsourcing decision. The Outsourcing Institute surveyed outsourcing end-users from its membership in 1998 and found that the top 10 reasons companies outsource were: reduce and control operating costs, improve company focus, gain access to world-class capabilities, free internal resources for other purposes, resources not available internally, accelerate reengineering benefits, function difficult to manage or out of control, make capital funds available, share risks, and cash infusion. Within these top ten outsourcing reasons, three items relate to financial concerns: operating costs, capital funds available, and cash infusion. Since wage differences exist in the outsourced countries, it is obvious that outsourcing companies can save a remarkable amount of labor cost. According to Gartner, Inc.'s report, worldwide business outsourcing services would grow from $110 billion in 2002 to $173 billion in 2007, approximately a 9.5% annual growth rate.

In addition to the cost-saving concern, there are other factors that influence the outsourcing decision, including the awareness of success and risk factors, the identification and management of outsourcing risks, and project quality management. Outsourcing activities are substantially complicated, and an outsourcing project usually carries a huge array of risks. Unmanaged outsourcing risks will increase total project cost, degrade software quality, delay project completion time, and finally lower the success rate of the outsourcing project. Outsourcing risks have been discovered in areas such as unexpected transition and management costs, switching costs, costly contractual amendments, disputes and litigation, service debasement, cost escalation, loss of organizational competence, hidden service costs, and so on.

Most published outsourcing studies focus on organizational and managerial issues. We believe that IS outsourcing projects embrace various risks and uncertainties that may inhibit the chance of outsourcing success. In addition to service- and management-related risk issues, we feel that technical issues that restrain the degree of outsourcing success may have been overlooked. These technical issues are project management, software quality, and quality assessment methods that can be used to implement IS outsourcing projects. Unmanaged risks generate loss. We intend to identify the technical risks during the outsourcing period, so these technical risks can be properly managed and the cost of the outsourcing project can be further reduced. The main purpose of this paper is to identify the different phases of the IS outsourcing life cycle, and to discuss the implications of success and risk factors, software quality and project management, and their impacts on the success of IT outsourcing. Most outsourcing initiatives involve strategic planning and management participation; therefore, the decision process is obviously broad and lengthy.
In order to conduct a comprehensive study of outsourcing project risk analysis, we propose an IS outsourcing life cycle framework to serve as a yardstick. Each IS outsourcing phase is named, and all inherent risks are identified in this life cycle framework. Furthermore, we propose to use software quality management tools and methods in order to enhance the success rate of IS outsourcing projects.

ISO 9000 is a series of quality systems standards developed by the International Organization for Standardization (ISO). ISO's quality standards have been adopted by many countries as a major target for quality certification. Other ISO standards such as ISO 9001, ISO 9000-3, ISO 9004-2, and ISO 9004-4 are quality standards that can be applied to the software industry. Currently, ISO is working on ISO 31000, a risk management guidance standard. These ISO quality systems and risk management standards are generic in nature; however, they may not be sufficient for IS outsourcing practice. This paper, therefore, proposes an outsourcing life cycle framework to distinguish related quality and risk management issues during outsourcing practice.

The following sections start with the theoretical foundations of IS outsourcing, including economic theories, outsourcing contracting theories, and risk theories. The IS outsourcing life cycle framework is then introduced. The paper continues by discussing the risk implications in the pre-contract, contract, and post-contract phases. ISO standards on quality systems and risk management are discussed and compared in the next section. A conclusion and directions for future study are provided in the last section.

2. Theoretical foundations

2.1. Economic theories related to outsourcing

Although there are a number of reasons for pursuing IS outsourcing, cost saving is the main attraction that leads companies to search for outsourcing opportunities. In principle, five outsourcing-related economic theories lay the groundwork of outsourcing practice: (1) production cost economics, (2) transaction cost theory, (3) resource-based theory, (4) competitive advantage, and (5) economies of scale.

Production cost economics was proposed by Williamson, who mentioned that "a firm seeks to maximize its profit also subjects to its production function and market opportunities for selling outputs and buying inputs". It is clear that production cost economics identifies the phenomenon that a firm may pursue the goal of a low-cost production process.

Transaction cost theory was proposed by Coase. Transaction cost theory implies that in an economy, many economic activities occur outside the price systems. Transaction costs in business activities are the time and expense of negotiation, and of writing and enforcing contracts between buyers and suppliers. When transaction cost is low because of lower uncertainty, companies are expected to adopt outsourcing.

The focus of resource-based theory is that "the heart of the firm centers on deployment and combination of specific inputs rather than on avoidance of opportunities". Conner suggested viewing "firms as seekers of costly-to-copy inputs for production and distribution". Through resource-based theory, we can infer that the outsourcing decision is to seek external resources or capabilities for meeting a firm's objectives such as cost saving and capability improvement.

Porter, in his competitive forces model, proposed the concept of competitive advantage.
Besanko et al. explicated the term competitive advantage, through an economic concept, as: "When a firm (or business unit within a multi-business firm) earns a higher rate of economic profit than the average rate of economic profit of other firms competing within the same market, the firm has a competitive advantage." The outsourcing decision, therefore, is to seek cost saving that meets the goal of competitive advantage within a firm.

Economies of scale are a theoretical foundation for creating and sustaining the consulting business. Information systems (IS) and information technology (IT) consulting firms, in essence, enjoy the advantage of economies of scale since their average costs decrease as they offer a mass amount of specialized IS/IT services in the marketplace.

2.2. Economic implications on contracting

An outsourcing contract defines the provision of services and charges that need to be completed in a contracting period between two contracting parties. Since most IS/IT projects are large in scale, a valuable contract should list the complete set of tasks and responsibilities that each contracting party needs to perform. The study of contracting becomes essential because a complete contract setting could eliminate possible opportunistic behavior, confusion, and ambiguity between the two contracting parties.

Although contracting parties intend to reach a complete contract, in the real world most contracts are incomplete. Incomplete contracts cause not only implementation difficulties but also litigation. A business relationship may easily be ruined by holding incomplete contracts. In order to reach a complete contract, the contracting parties must pay sufficient attention to remove any ambiguity, confusion, and unidentified or immeasurable conditions and terms from the contract. According to Besanko et al., incomplete contracting stems from the following three factors: bounded rationality, difficulties in specifying or measuring performance, and asymmetric information.

Bounded rationality describes human limitations on information processing, complexity handling, and rational decision-making. An incomplete contract stems from unexpected circumstances that may be ignored during contract negotiation. Most contracts consist of complex product requirements and performance measurements. In reality, it is difficult to specify a set of comprehensive metrics covering each party's rights and responsibilities. Therefore, any vague or open-ended statements in a contract will definitely result in an incomplete contract. Lastly, it is possible that each party may not have equal access to all contract-relevant information sources. This situation of asymmetric information results in an unfair negotiation, and thus an incomplete contract.

2.3. Risk in outsource contracting

Risk can be identified as an undesirable event, a probability function, variance of the distribution of outcomes, or expected loss. Risk can be classified into endogenous and exogenous risks. Exogenous risks are "risks over which we have no control and which are not affected by our actions". For example, natural disasters such as earthquakes and floods are exogenous risks.
Endogenous risks are "risks that are dependent on our actions". We can infer that risks occurring during outsource contracting belong to this category.

Risk exposure (RE) can be calculated as "a function of the probability of a negative outcome and the importance of the loss due to the occurrence of this outcome":

RE = Σi P(UOi) × L(UOi)    (1)

where P(UOi) is the probability of an undesirable outcome i, and L(UOi) is the loss due to the undesirable outcome i. (A small numeric sketch of this formula appears at the end of this excerpt.)

Software risks can also be analyzed through two characteristics: uncertainty and loss. Pressman suggested that the best way to analyze software risks is to quantify the level of uncertainty and the degree of loss associated with each kind of risk. His risk content matches the above-mentioned Eq. (1). Pressman classified software risks into the following categories: project risks, technical risks, and business risks.

Outsourcing risks stem from various sources. Aubert et al. adopted transaction cost theory and agency theory as the foundation for deriving undesirable events and their associated risk factors. Transaction cost theory has been discussed in Section 2.2. Agency theory focuses on the client's problems while choosing an agent (that is, a service provider), and on building and maintaining the working relationship, under the restriction of information asymmetry. Various risk factors would be produced if such an agent–client relationship crumbles.

It is evident that a complete contract could eliminate the risk caused by an incomplete contract and/or possible opportunistic behavior prompted by either contracting party. Opportunistic behavior is one of the main sources of transactional risk. Opportunistic behavior occurs when a transactional partner observes a way of saving cost or removing responsibility during the contracting period and takes action to pursue such an opportunity. This type of opportunistic behavior is encouraged if the contract was not completely specified in the first place. Outsourcing risks can generate additional unexpected cost for an outsourcing project. In order to conduct a better IS outsourcing project, identifying possible risk factors and implementing a mature risk management process can make information systems outsourcing more successful than ever.

3. Information systems outsourcing life cycle

The life cycle concept was originally used to describe the period of one generation of an organism in a biological system. In essence, the term life cycle describes all activities that a subject is involved in, from its birth to its end. The life cycle concept has been applied to the project management area. A project life cycle, according to Schwalbe, is a collection of project phases such as concept, development, implementation, and close-out. Within the above-mentioned four phases, the first two phases center on "planning" activity and the last two phases focus on "delivery of the actual work" of project management.

Similarly, the concept of life cycle can be applied to information systems outsourcing analysis. The information systems outsourcing life cycle describes a sequence of activities to be performed during a company's IS outsourcing practice. Hirschheim and Dibbern once described a client-based IS outsourcing life cycle as: "It starts with the IS outsourcing decision, continues with the outsourcing relationship (life of the contract) and ends with the cancellation or end of the relationship, i.e., the end of the contract.
The end of the relationship forces a new outsourcing decision." It is clear that Hirschheim and Dibbern viewed the "outsourcing relationship" as a determinant in the IS outsourcing life cycle.

The IS outsourcing life cycle starts with an outsourcing need and ends with contract completion. This life cycle restarts with the search for a new outsourcing contract if needed. An outsourcing company may be satisfied with the same outsourcing vendor if the transaction costs remain low, and then a new cycle goes on. Otherwise, a new search for an outsourcing vendor may be started. One of the main goals of seeking an outsourcing contract is cost minimization. Transaction cost theory (discussed in Section 2.1) indicates that pursuing a contract costs money; thus low transaction cost will be the driver of extending the IS outsourcing life cycle.

The span of the IS outsourcing life cycle embraces a major portion of contracting activities. The whole IS outsourcing life cycle can be divided into three phases (see Fig. 1): the pre-contract phase, the contract phase, and the post-contract phase. The pre-contract phase includes activities before a major contract is signed, such as identifying the need for outsourcing, planning and strategic setting, and outsourcing vendor selection. The contract phase starts when an outsourcing contract is signed and lasts until the end of the contracting period. It includes activities such as the contracting process, the transitioning process, and outsourcing project execution. The post-contract phase contains those activities to be done after contract expiration, such as outsourcing project assessment and making the decision for the next outsourcing contract.

[Fig. 1. The IS outsourcing life cycle]

When a company intends to outsource its information systems projects to external entities, several activities are involved in the information systems outsourcing life cycle. Specifically, they are:

1. Identifying the need for outsourcing: A firm may face a strict external environment, such as stern market competition, competitors' cost saving through outsourcing, or an economic downturn, that initiates it to consider outsourcing IS projects. In addition to the external environment, some internal factors may also lead to outsourcing consideration. These organizational predicaments include the need for technical skills, financial constraint, investors' requests, or simply cost-saving concerns. A firm needs to carefully conduct a study of its internal and external positioning before making an outsourcing decision.

2. Planning and strategic setting: If a firm identifies a need for IS outsourcing, it needs to make sure that the decision to outsource meets the company's strategic plan and objectives. Later, this firm needs to integrate the outsourcing plan into its corporate strategy. Many tasks need to be fulfilled during the planning and strategic setting stages, including determining outsourcing goals, objectives, scope, schedule, cost, business model, and processes. Careful outsourcing planning prepares a firm for pursuing a successful outsourcing project.

3. Outsourcing vendor selection: A firm begins the vendor selection process with the creation of request for information (RFI) and request for proposal (RFP) documents. An outsourcing firm should provide sufficient information about the requirements and expectations for an outsourcing project. After receiving the proposals from vendors, the company needs to select a prospective outsourcing vendor, based on its strategic needs and project requirements.
4. Contracting process: A contract negotiation process begins after the company selects a probable outsourcing vendor. The contracting process is critical to the success of an outsourcing project since all aspects of the contract should be specified and covered, including fundamental, managerial, technological, pricing, financial, and legal features. In order to avoid an incomplete contract, the final contract should be reviewed by both parties' legal consultants. Most importantly, the service level agreements (SLAs) must be clearly identified in the contract.

5. Transitioning process: The transitioning process starts after a company signs an outsourcing contract with a vendor. Transition management is defined as "the detailed, desk-level knowledge transfer and documentation of all relevant tasks, technologies, workflows, people, and functions". The transitioning process is a complicated phase in the IS outsourcing life cycle since it involves many essential workloads before an outsourcing project can actually be implemented. Robinson et al. characterized transition management into the following components: "employee management, communication management, knowledge management, and quality management". It is apparent that conducting the transitioning process needs the capabilities of human resources, communication skill, knowledge transfer, and quality control.

6. Outsourcing project execution: After the transitioning process, it is time for the vendor and the client to execute their outsourcing project. There are four components within this "contract governance" stage: project management, relationship management, change management, and risk management. Any items listed in the contract and its service level agreements (SLAs) need to be delivered and implemented as requested. In particular, client and vendor relationships, change requests and records, and risk variables must be carefully managed and administered.

7. Outsourcing project assessment: At the end of an outsourcing project period, the vendor must deliver its final product/service for the client's approval. The outsourcing client must assess the quality of the product/service provided by the vendor and measure his or her satisfaction level with it. A satisfactory assessment and good relationship will guarantee the continuation of the next outsourcing contract.

The results of the previous activity (that is, project assessment) will be the basis for determining the next outsourcing contract. A firm evaluates its satisfaction level based on predetermined outsourcing goals and contracting criteria. An outsourcing company also observes the outsourcing cost and risks involved in the project. If a firm is satisfied with the current outsourcing vendor, it is likely that a renewable contract could start with the same vendor. Otherwise, a new "pre-contract phase" would restart the search for a new outsourcing vendor. This activity will lead to a new outsourcing life cycle. Fig. 1 shows two dotted arrow lines for these two alternatives: the dotted arrow line 3.a. indicates the "renewable contract" path and the dotted arrow line 3.b. indicates the "new contract search" path.

Each phase in the IS outsourcing life cycle is full of needed activities and processes (see Fig. 1). In order to clearly examine the dynamics of risks and outsourcing activities, the following sections provide detailed analyses. The pre-contract phase in the IS outsourcing life cycle focuses on the awareness of outsourcing success factors and related risk factors.
The contract phase in the IS outsourcing life cycle centers on the mechanisms of project management and risk management. The post-contract phase in the IS outsourcing life cycle concentrates on the need to select suitable project quality assessment methods.

4. Actions in pre-contract phase: awareness of success and risk factors

The pre-contract period is the first phase in the information systems outsourcing life cycle (see Fig. 1). In this phase, an outsourcing firm should first identify its need for IS outsourcing. After determining the need for IS outsourcing, the firm needs to carefully create an outsourcing plan, and it must align this outsourcing plan with its corporate strategy. In order to prepare well for corporate IS outsourcing, a firm must understand the current market situation, its competitiveness, and the economic environment. The next important task is to identify outsourcing success factors, which can serve as guidance for strategic outsourcing planning. In addition to knowing the success factors, an outsourcing firm must also recognize the possible risks involved in IS outsourcing, which allows the firm to formulate a better outsourcing strategy.

Conclusion and research directions

This paper presents a three-phased IS outsourcing life cycle and the associated risk factors that affect the success of outsourcing projects. The outsourcing life cycle is complicated and complex in nature. Outsourcing companies usually invest great effort to select suitable service vendors; however, many risks exist in the vendor selection process. Although outsourcing costs are the major reason for outsourcing, firms seek outsourcing success through quality assurance and risk control. This decision path is understandable since the outcome of project risks represents the amount of additional project cost. Therefore, carefully managing the project and its risk factors will save outsourcing companies a tremendous amount of money.

This paper discusses various issues related to outsourcing success, risk factors, quality assessment methods, and project management techniques. Future research may explore alternative risk estimation methodologies. For example, risk uncertainty can be used to identify the accuracy of the outsourcing risk estimation. Another possible method to estimate outsourcing risk is the Total Cost of Ownership (TCO) method. The TCO method has been used in IT management for financial portfolio analysis and investment decision-making. Since the concept of risk is in essence the cost (of loss) to outsourcing clients, it thus becomes a possible research method for the outsourcing decision.
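As a closing illustration of Eq. (1) quoted earlier, the following minimal sketch computes a risk exposure from a handful of undesirable outcomes. The outcomes, probabilities, and loss figures are invented for demonstration; they are not data from the paper:

public class RiskExposureDemo {
    public static void main(String[] args) {
        // Hypothetical undesirable outcomes: { P(UOi), L(UOi) in dollars }
        double[][] outcomes = {
            { 0.10, 250_000 },  // costly contractual amendment
            { 0.05, 400_000 },  // service debasement
            { 0.02, 900_000 },  // disputes and litigation
        };
        double re = 0;
        for (double[] o : outcomes) {
            re += o[0] * o[1];  // accumulate P(UOi) * L(UOi)
        }
        System.out.printf("Risk exposure RE = $%,.0f%n", re);  // prints $63,000
    }
}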
Computer Science and Technology: Java Garbage Collector — Chinese-English Foreign Literature Translation
Chinese-English Foreign Literature Translation

Original: How a Garbage Collector Works in the Java Language

If you come from a programming language where allocating objects on the heap is expensive, you may naturally assume that Java's scheme of allocating everything (except primitives) on the heap is also expensive. However, it turns out that the garbage collector can have a significant impact on increasing the speed of object creation. This might sound a bit odd at first—that storage release affects storage allocation—but it's the way some JVMs work, and it means that allocating storage for heap objects in Java can be nearly as fast as creating storage on the stack in other languages.

For example, you can think of the C++ heap as a yard where each object stakes out its own piece of turf. This real estate can become abandoned sometime later and must be reused. In some JVMs, the Java heap is quite different; it's more like a conveyor belt that moves forward every time you allocate a new object. This means that object storage allocation is remarkably rapid. The "heap pointer" is simply moved forward into virgin territory, so it's effectively the same as C++'s stack allocation. (Of course, there's a little extra overhead for bookkeeping, but it's nothing like searching for storage.)

You might observe that the heap isn't in fact a conveyor belt, and if you treat it that way, you'll start paging memory—moving it on and off disk, so that you can appear to have more memory than you actually do. Paging significantly impacts performance. Eventually, after you create enough objects, you'll run out of memory. The trick is that the garbage collector steps in, and while it collects the garbage it compacts all the objects in the heap so that you've effectively moved the "heap pointer" closer to the beginning of the conveyor belt and farther away from a page fault. The garbage collector rearranges things and makes it possible for the high-speed, infinite-free-heap model to be used while allocating storage.

To understand garbage collection in Java, it's helpful to learn how garbage-collection schemes work in other systems. A simple but slow garbage-collection technique is called reference counting. This means that each object contains a reference counter, and every time a reference is attached to that object, the reference count is increased. Every time a reference goes out of scope or is set to null, the reference count is decreased. Thus, managing reference counts is a small but constant overhead that happens throughout the lifetime of your program. The garbage collector moves through the entire list of objects, and when it finds one with a reference count of zero it releases that storage (however, reference counting schemes often release an object as soon as the count goes to zero). The one drawback is that if objects circularly refer to each other they can have nonzero reference counts while still being garbage. Locating such self-referential groups requires significant extra work for the garbage collector. (A runnable demonstration of this cycle problem appears at the end of this excerpt.) Reference counting is commonly used to explain one kind of garbage collection, but it doesn't seem to be used in any JVM implementations.

In faster schemes, garbage collection is not based on reference counting. Instead, it is based on the idea that any non-dead object must ultimately be traceable back to a reference that lives either on the stack or in static storage. The chain might go through several layers of objects.
Thus, if you start in the stack and in the static storage area and walk through all the references, you'll find all the live objects. For each reference that you find, you must trace into the object that it points to and then follow all the references in that object, tracing into the objects they point to, etc., until you've moved through the entire web that originated with the reference on the stack or in static storage. Each object that you move through must still be alive. Note that there is no problem with detached self-referential groups—these are simply not found, and are therefore automatically garbage.

In the approach described here, the JVM uses an adaptive garbage-collection scheme, and what it does with the live objects that it locates depends on the variant currently being used. One of these variants is stop-and-copy. This means that—for reasons that will become apparent—the program is first stopped (this is not a background collection scheme). Then, each live object is copied from one heap to another, leaving behind all the garbage. In addition, as the objects are copied into the new heap, they are packed end-to-end, thus compacting the new heap (and allowing new storage to simply be reeled off the end as previously described).

Of course, when an object is moved from one place to another, all references that point at the object must be changed. The reference that goes from the heap or the static storage area to the object can be changed right away, but there can be other references pointing to this object that will be encountered later during the "walk." These are fixed up as they are found (you could imagine a table that maps old addresses to new ones).

There are two issues that make these so-called "copy collectors" inefficient. The first is the idea that you have two heaps and you slosh all the memory back and forth between these two separate heaps, maintaining twice as much memory as you actually need. Some JVMs deal with this by allocating the heap in chunks as needed and simply copying from one chunk to another.

The second issue is the copying process itself. Once your program becomes stable, it might be generating little or no garbage. Despite that, a copy collector will still copy all the memory from one place to another, which is wasteful. To prevent this, some JVMs detect that no new garbage is being generated and switch to a different scheme (this is the "adaptive" part). This other scheme is called mark-and-sweep, and it's what earlier versions of Sun's JVM used all the time. For general use, mark-and-sweep is fairly slow, but when you know you're generating little or no garbage, it's fast. Mark-and-sweep follows the same logic of starting from the stack and static storage, and tracing through all the references to find live objects. However, each time it finds a live object, that object is marked by setting a flag in it, but the object isn't collected yet. Only when the marking process is finished does the sweep occur. During the sweep, the dead objects are released. However, no copying happens, so if the collector chooses to compact a fragmented heap, it does so by shuffling objects around. "Stop-and-copy" refers to the idea that this type of garbage collection is not done in the background; instead, the program is stopped while the garbage collection occurs.
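The mark-and-sweep logic just described can be made concrete with a toy object graph. This sketch is illustrative only: the JVM's collector works on real memory rather than Java-level lists, and all names here are invented.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Deque;
import java.util.List;

public class MarkSweepDemo {
    static class Obj {
        final String name;
        final List<Obj> refs = new ArrayList<>();
        boolean marked;
        Obj(String name) { this.name = name; }
    }

    // Mark phase: trace every object reachable from the roots and set its flag.
    static void mark(Collection<Obj> roots) {
        Deque<Obj> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Obj o = stack.pop();
            if (o.marked) continue;   // already visited
            o.marked = true;
            stack.addAll(o.refs);     // follow all references inside this object
        }
    }

    // Sweep phase: release every unmarked object; clear marks for the next cycle.
    static void sweep(List<Obj> heap) {
        heap.removeIf(o -> !o.marked);
        for (Obj o : heap) o.marked = false;
    }

    public static void main(String[] args) {
        Obj a = new Obj("a"), b = new Obj("b"), c = new Obj("c"), d = new Obj("d");
        a.refs.add(b);                // a -> b: both reachable from the root set
        c.refs.add(d);                // c and d form a detached group: they refer
        d.refs.add(c);                // to each other but no root reaches them
        List<Obj> heap = new ArrayList<>(List.of(a, b, c, d));

        mark(List.of(a));             // 'a' is the only root (e.g., a stack variable)
        sweep(heap);
        heap.forEach(o -> System.out.println("live: " + o.name)); // prints a, b
    }
}

The detached pair c and d illustrates the earlier point about self-referential groups: they refer to each other, yet the trace from the root never reaches them, so the sweep releases them.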
In the Sun literature you'll find many references to garbage collection as a low-priority background process, but it turns out that garbage collection was not implemented that way in earlier versions of the Sun JVM. Instead, the Sun garbage collector stopped the program when memory got low. Mark-and-sweep also requires that the program be stopped.

As previously mentioned, in the JVM described here memory is allocated in big blocks. If you allocate a large object, it gets its own block. Strict stop-and-copy requires copying every live object from the source heap to a new heap before you can free the old one, which translates to lots of memory. With blocks, the garbage collection can typically copy objects to dead blocks as it collects. Each block has a generation count to keep track of whether it's alive. In the normal case, only the blocks created since the last garbage collection are compacted; all other blocks get their generation count bumped if they have been referenced from somewhere. This handles the normal case of lots of short-lived temporary objects. Periodically, a full sweep is made—large objects are still not copied (they just get their generation count bumped), and blocks containing small objects are copied and compacted. The JVM monitors the efficiency of garbage collection, and if it becomes a waste of time because all objects are long-lived, then it switches to mark-and-sweep. Similarly, the JVM keeps track of how successful mark-and-sweep is, and if the heap starts to become fragmented, it switches back to stop-and-copy. This is where the "adaptive" part comes in, so you end up with a mouthful: "adaptive generational stop-and-copy mark-and-sweep."

There are a number of additional speedups possible in a JVM. An especially important one involves the operation of the loader and what is called a just-in-time (JIT) compiler. A JIT compiler partially or fully converts a program into native machine code so that it doesn't need to be interpreted by the JVM and thus runs much faster. When a class must be loaded (typically, the first time you want to create an object of that class), the .class file is located, and the byte codes for that class are brought into memory. At this point, one approach is to simply JIT compile all the code, but this has two drawbacks: It takes a little more time, which, compounded throughout the life of the program, can add up; and it increases the size of the executable (byte codes are significantly more compact than expanded JIT code), and this might cause paging, which definitely slows down a program. An alternative approach is lazy evaluation, which means that the code is not JIT compiled until necessary. Thus, code that never gets executed might never be JIT compiled. The Java HotSpot technologies in recent JDKs take a similar approach by increasingly optimizing a piece of code each time it is executed, so the more the code is executed, the faster it gets.
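As promised earlier, here is a small runnable demonstration that Java's tracing collector reclaims a self-referential cycle that naive reference counting could never free. The outcome depends on the JVM: System.gc() is only a hint, so the program may honestly report that the cycle has not been collected yet.

import java.lang.ref.WeakReference;

public class CycleDemo {
    static class Node { Node partner; }

    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.partner = b;  // a and b refer to each other; a reference-counting
        b.partner = a;  // collector would see a nonzero count on each forever
        WeakReference<Node> watch = new WeakReference<>(a);

        a = null;       // drop the only roots; the cycle is now unreachable
        b = null;
        System.gc();    // a hint only; the JVM may ignore it

        System.out.println(watch.get() == null
                ? "cycle was collected (tracing found no path from the roots)"
                : "cycle not collected yet (System.gc() is only a hint)");
    }
}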
Foreign Literature Translation for Computer Science Majors
Graduation Design (Thesis) Foreign Literature Translation (for undergraduate students). Title: Plc based control system for the music fountain. Student name: _ ___ Student ID: 060108011117. Department: School of Information. Major and class: Automation Class 1, 2006. Advisor: ___ Title or degree: Teaching Assistant. Date: 20__. Foreign literature translation (translated into about 1,000 Chinese characters): [At least 5 main reading references; bibliographic information should be appended after the translation, including: author, book title (or paper title), publisher (or journal name), publication date (or issue number), and page numbers.
Attach the translated foreign source material (for printed sources, include copies of the cover, back cover, table of contents, and the translated portion; for web sources, include the URL and the original text).] Excerpt from the English original:

Central Processing Unit (CPU) is the brain of a PLC controller. The CPU itself is usually a microcontroller. Formerly these were 8-bit microcontrollers such as the 8051; now they are 16- and 32-bit microcontrollers. An unspoken rule is that you'll find mostly Hitachi and Fujitsu microcontrollers in PLC controllers by Japanese makers, Siemens in European controllers, and Motorola microcontrollers in American ones. The CPU also takes care of communication, interconnectedness among other parts of the PLC controller, program execution, memory operation, and overseeing inputs and setting up outputs. PLC controllers have complex routines for memory checkup in order to ensure that PLC memory was not damaged (memory checkup is done for safety reasons). Generally speaking, the CPU unit performs a great number of check-ups of the PLC controller itself so that eventual errors are discovered early. You can simply look at any PLC controller and see that there are several indicators in the form of light diodes for error signalization.

System memory (today mostly implemented in FLASH technology) is used by a PLC for the process control system. Aside from the operating system it also contains the user program, translated from a ladder diagram into binary form. FLASH memory contents can be changed only when the user program is being changed. Earlier PLC controllers had EPROM memory instead of FLASH memory, which had to be erased with a UV lamp and programmed on programmers. With the use of FLASH technology this process was greatly shortened. Reprogramming the program memory is done through a serial cable in a program for application development.

User memory is divided into blocks having special functions. Some parts of the memory are used for storing input and output status. The real status of an input is stored either as "1" or as "0" in a specific memory bit; each input or output has one corresponding bit in memory. Other parts of the memory are used to store the contents of variables used in the user program. For example, a timer value or a counter value would be stored in this part of the memory.

A PLC controller can be reprogrammed through a computer (the usual way), but also through manual programmers (consoles). This practically means that each PLC controller can be programmed through a computer if you have the software needed for programming. Today's portable computers are ideal for reprogramming a PLC controller in the factory itself. This is of great importance to industry. Once the system is corrected, it is also important to read the right program into the PLC again. It is also good to check from time to time whether the program in a PLC has changed. This helps to avoid hazardous situations in factory rooms (some automakers have established communication networks which regularly check programs in PLC controllers to ensure execution only of good programs). Almost every program for programming a PLC controller possesses various useful options such as: forced switching on and off of the system inputs/outputs (I/O lines), program follow-up in real time, as well as documenting a diagram. This documenting is necessary to understand and define failures and malfunctions. The programmer can add remarks, names of input or output devices, and comments that can be useful when finding errors or during system maintenance.
Adding comments and remarks enables any technician (and not just the person who developed the system) to understand a ladder diagram right away. Comments and remarks can even quote precise part numbers in case replacements are needed. This speeds up the repair of any problems that come up due to bad parts. The old way was that the person who developed the system kept protection on the program, so nobody aside from this person could understand how it was done. A correctly documented ladder diagram allows any technician to understand thoroughly how the system functions.

Electrical supply is used to bring electrical energy to the central processing unit. Most PLC controllers work at either 24 VDC or 220 VAC. On some PLC controllers you'll find the electrical supply as a separate module. Those are usually bigger PLC controllers, while small and medium series already contain the supply module. The user has to determine how much current to draw from the I/O module to ensure that the electrical supply provides the appropriate amount of current. Different types of modules use different amounts of electrical current. This electrical supply is usually not used to power external inputs or outputs. The user has to provide separate supplies for powering the PLC controller's inputs, because then you can ensure a so-called "pure" supply for the PLC controller. By a pure supply we mean one that the industrial environment cannot affect damagingly. Some of the smaller PLC controllers supply their inputs with voltage from a small supply source already incorporated into the PLC.

Translation: Structurally, PLCs fall into two types: fixed and modular (module-based).
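The input/output image memory described in the excerpt above (one memory bit per physical input or output, latched each scan cycle) can be sketched as a toy model in Java. This is purely illustrative: the class and method names are invented, and a real PLC implements this in firmware with fixed memory maps, not in Java.

import java.util.BitSet;

// A toy model of a PLC's I/O image memory: one bit per input point and per output point.
class IoImage {
    private final BitSet inputs = new BitSet(64);   // assumed 64 input points
    private final BitSet outputs = new BitSet(64);  // assumed 64 output points

    void latchInput(int point, boolean on) { inputs.set(point, on); }  // scan phase 1: read inputs
    boolean input(int point) { return inputs.get(point); }
    void setOutput(int point, boolean on) { outputs.set(point, on); }  // scan phase 2: run logic
    boolean output(int point) { return outputs.get(point); }

    public static void main(String[] args) {
        IoImage image = new IoImage();
        image.latchInput(3, true);                  // e.g., a start button wired to input 3
        // one rung of "ladder logic": output 7 follows input 3
        image.setOutput(7, image.input(3));
        System.out.println("Output 7 = " + image.output(7));  // scan phase 3: write outputs
    }
}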
1. Foreign literature: Core Java™ Volume II–Advanced Features

When Java technology first appeared on the scene, the excitement was not about a well-crafted programming language but about the possibility of safely executing applets that are delivered over the Internet (see Volume I, Chapter 10 for more information about applets). Obviously, delivering executable applets is practical only when the recipients are sure that the code can't wreak havoc on their machines. For this reason, security was and is a major concern of both the designers and the users of Java technology. This means that unlike other languages and systems, where security was implemented as an afterthought or a reaction to break-ins, security mechanisms are an integral part of Java technology.

Three mechanisms help ensure safety:
•Language design features (bounds checking on arrays, no unchecked type conversions, no pointer arithmetic, and so on).
•An access control mechanism that controls what the code can do (such as file access, network access, and so on).
•Code signing, whereby code authors can use standard cryptographic algorithms to authenticate Java code. Then, the users of the code can determine exactly who created the code and whether the code has been altered after it was signed.

Below, you'll see the cryptographic algorithms supplied in the java.security package, which allow for code signing and user authentication.

As we said earlier, applets were what started the craze over the Java platform. In practice, people discovered that although they could write animated applets like the famous "nervous text" applet, applets could not do a whole lot of useful stuff in the JDK 1.0 security model. For example, because applets under JDK 1.0 were so closely supervised, they couldn't do much good on a corporate intranet, even though relatively little risk attaches to executing an applet from your company's secure intranet. It quickly became clear to Sun that for applets to become truly useful, it was important for users to be able to assign different levels of security, depending on where the applet originated. If an applet comes from a trusted supplier and it has not been tampered with, the user of that applet can then decide whether to give the applet more privileges.

To give more trust to an applet, we need to know two things:
•Where did the applet come from?
•Was the code corrupted in transit?

In the past 50 years, mathematicians and computer scientists have developed sophisticated algorithms for ensuring the integrity of data and for electronic signatures. The java.security package contains implementations of many of these algorithms. Fortunately, you don't need to understand the underlying mathematics to use the algorithms in the java.security package. In the next sections, we show you how message digests can detect changes in data files and how digital signatures can prove the identity of the signer.

A message digest is a digital fingerprint of a block of data. For example, the so-called SHA1 (secure hash algorithm #1) condenses any data block, no matter how long, into a sequence of 160 bits (20 bytes). As with real fingerprints, one hopes that no two messages have the same SHA1 fingerprint. Of course, that cannot be true—there are only 2^160 SHA1 fingerprints, so there must be some messages with the same fingerprint. But 2^160 is so large that the probability of duplication occurring is negligible. How negligible?
According to James Walsh in True Odds: How Risks Affect Your Everyday Life (Merritt Publishing, 1996), the chance that you will die from being struck by lightning is about one in 30,000. Now, think of nine other people, for example, your nine least favorite managers or professors. The chance that you and all of them will die from lightning strikes is higher than that of a forged message having the same SHA1 fingerprint as the original. (Of course, more than ten people, none of whom you are likely to know, will die from lightning strikes. However, we are talking about the far slimmer chance that your particular choice of people will be wiped out.)

A message digest has two essential properties:
•If one bit or several bits of the data are changed, then the message digest also changes.
•A forger who is in possession of a given message cannot construct a fake message that has the same message digest as the original.

The second property is again a matter of probabilities, of course. Consider the following message by the billionaire father:

"Upon my death, my property shall be divided equally among my children; however, my son George shall receive nothing."

That message has an SHA1 fingerprint of

2D 8B 35 F3 BF 49 CD B1 94 04 E0 66 21 2B 5E 57 70 49 E1 7E

The distrustful father has deposited the message with one attorney and the fingerprint with another. Now, suppose George can bribe the lawyer holding the message. He wants to change the message so that Bill gets nothing. Of course, that changes the fingerprint to a completely different bit pattern:

2A 33 0B 4B B3 FE CC 1C 9D 5C 01 A7 09 51 0B 49 AC 8F 98 92

Can George find some other wording that matches the fingerprint? If he had been the proud owner of a billion computers from the time the Earth was formed, each computing a million messages a second, he would not yet have found a message he could substitute.

A number of algorithms have been designed to compute these message digests. The two best-known are SHA1, the secure hash algorithm developed by the National Institute of Standards and Technology, and MD5, an algorithm invented by Ronald Rivest of MIT. Both algorithms scramble the bits of a message in ingenious ways. For details about these algorithms, see, for example, Cryptography and Network Security, 4th ed., by William Stallings (Prentice Hall, 2005). Note that recently, subtle regularities have been discovered in both algorithms. At this point, most cryptographers recommend avoiding MD5 and using SHA1 until a stronger alternative becomes available. (See /rsalabs/node.asp?id=2834 for more information.)

The Java programming language implements both SHA1 and MD5. The MessageDigest class is a factory for creating objects that encapsulate the fingerprinting algorithms. It has a static method, called getInstance, that returns an object of a class that extends the MessageDigest class. This means the MessageDigest class serves double duty:
•As a factory class
•As the superclass for all message digest algorithms

For example, here is how you obtain an object that can compute SHA fingerprints:

MessageDigest alg = MessageDigest.getInstance("SHA-1");

(To get an object that can compute MD5, use the string "MD5" as the argument to getInstance.)

After you have obtained a MessageDigest object, you feed it all the bytes in the message by repeatedly calling the update method. For example, the following code passes all bytes in a file to the alg object just created to do the fingerprinting:

InputStream in = . . .
int ch;
while ((ch = in.read()) != -1)
    alg.update((byte) ch);

Alternatively, if you have the bytes in an array, you can update the entire array at once:

byte[] bytes = . . .;
alg.update(bytes);

When you are done, call the digest method. This method pads the input—as required by the fingerprinting algorithm—does the computation, and returns the digest as an array of bytes.

byte[] hash = alg.digest();

The program in Listing 9-15 computes a message digest, using either SHA or MD5. You can load the data to be digested from a file, or you can type a message in the text area.

Message Signing

In the last section, you saw how to compute a message digest, a fingerprint for the original message. If the message is altered, then the fingerprint of the altered message will not match the fingerprint of the original. If the message and its fingerprint are delivered separately, then the recipient can check whether the message has been tampered with. However, if both the message and the fingerprint were intercepted, it is an easy matter to modify the message and then recompute the fingerprint. After all, the message digest algorithms are publicly known, and they don't require secret keys. In that case, the recipient of the forged message and the recomputed fingerprint would never know that the message has been altered. Digital signatures solve this problem.

To help you understand how digital signatures work, we explain a few concepts from the field called public key cryptography. Public key cryptography is based on the notion of a public key and a private key. The idea is that you tell everyone in the world your public key. However, only you hold the private key, and it is important that you safeguard it and don't release it to anyone else. The keys are matched by mathematical relationships, but the exact nature of these relationships is not important for us. (If you are interested, you can look it up in The Handbook of Applied Cryptography at http://www.cacr.math.uwaterloo.ca/hac/.)

The keys are quite long and complex. For example, here is a matching pair of public and private Digital Signature Algorithm (DSA) keys.

Public key:
p: fca682ce8e12caba26efccf7110e526db078b05edecbcd1eb4a208f3ae1617ae01f35b91a47e6df63413c5e12ed0899bcd132acd50d99151bdc43ee737592e17
q: 962eddcc369cba8ebb260ee6b6a126d9346e38c5
g: 678471b27a9cf44ee91a49c5147db1a9aaf244f05a434d6486931d2d14271b9e35030b71fd73da179069b32e2935630e1c2062354d0da20a6c416e50be794ca4
y: c0b6e67b4ac098eb1a32c5f8c4c1f0e7e6fb9d832532e27d0bdab9ca2d2a8123ce5a8018b8161a760480fadd040b927281ddb22cb9bc4df596d7de4d1b977d50

Private key:
p: fca682ce8e12caba26efccf7110e526db078b05edecbcd1eb4a208f3ae1617ae01f35b91a47e6df63413c5e12ed0899bcd132acd50d99151bdc43ee737592e17
q: 962eddcc369cba8ebb260ee6b6a126d9346e38c5
g: 678471b27a9cf44ee91a49c5147db1a9aaf244f05a434d6486931d2d14271b9e35030b71fd73da179069b32e2935630e1c2062354d0da20a6c416e50be794ca4
x: 146c09f881656cc6c51f27ea6c3a91b85ed1d70a

It is believed to be practically impossible to compute one key from the other. That is, even though everyone knows your public key, they can't compute your private key in your lifetime, no matter how many computing resources they have available. It might seem difficult to believe that nobody can compute the private key from the public key, but nobody has ever found an algorithm to do this for the encryption algorithms that are in common use today.
If the keys are sufficiently long, brute force—simply trying all possible keys—would require more computers than can be built from all the atoms in the solar system, crunching away for thousands of years. Of course, it is possible that someone could come up with algorithms for computing keys that are much more clever than brute force. For example, the RSA algorithm (the encryption algorithm invented by Rivest, Shamir, and Adleman) depends on the difficulty of factoring large numbers. For the last 20 years, many of the best mathematicians have tried to come up with good factoring algorithms, but so far with no success. For that reason, most cryptographers believe that keys with a "modulus" of 2,000 bits or more are currently completely safe from any attack. DSA is believed to be similarly secure.

Figure 9-12 illustrates how the process works in practice. Suppose Alice wants to send Bob a message, and Bob wants to know this message came from Alice and not an impostor. Alice writes the message and then signs the message digest with her private key. Bob gets a copy of her public key. Bob then applies the public key to verify the signature. If the verification passes, then Bob can be assured of two facts:
•The original message has not been altered.
•The message was signed by Alice, the holder of the private key that matches the public key that Bob used for verification.

You can see why security for private keys is all-important. If someone steals Alice's private key, or if a government can require her to turn it over, then she is in trouble. The thief or a government agent can impersonate her by sending messages, money transfer instructions, and so on, that others will believe came from Alice.

The X.509 Certificate Format

To take advantage of public key cryptography, the public keys must be distributed. One of the most common distribution formats is called X.509. Certificates in the X.509 format are widely used by VeriSign, Microsoft, Netscape, and many other companies, for signing e-mail messages, authenticating program code, and certifying many other kinds of data. The X.509 standard is part of the X.500 series of recommendations for a directory service by the international telephone standards body, the CCITT.

The precise structure of X.509 certificates is described in a formal notation, called "abstract syntax notation #1" or ASN.1. Figure 9-13 shows the ASN.1 definition of version 3 of the X.509 format. The exact syntax is not important for us, but, as you can see, ASN.1 gives a precise definition of the structure of a certificate file. The basic encoding rules, or BER, and a variation called distinguished encoding rules (DER) describe precisely how to save this structure in a binary file. That is, BER and DER describe how to encode integers, character strings, bit strings, and constructs such as SEQUENCE, CHOICE, and OPTIONAL.

Translation: Core Java Volume II–Advanced Features. When Java technology first appeared, the excitement was not about a well-designed programming language, but about its ability to safely run applets delivered over the Internet.
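The sign-and-verify exchange between Alice and Bob described above can be exercised directly with the java.security API that the excerpt introduces. The sketch below is not the book's Listing: it generates a fresh DSA key pair rather than reusing the printed keys, and the message text is invented for illustration.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// A minimal DSA sign/verify round trip, assuming a standard JDK security provider.
public class SignDemo {
    public static void main(String[] args) throws Exception {
        // Alice generates a DSA key pair (in practice she would reuse a stored key)
        KeyPairGenerator gen = KeyPairGenerator.getInstance("DSA");
        gen.initialize(1024);
        KeyPair keys = gen.generateKeyPair();

        byte[] message = "Upon my death, my property shall be divided equally.".getBytes();

        // Alice signs the message digest with her private key
        Signature signer = Signature.getInstance("SHA1withDSA");
        signer.initSign(keys.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Bob verifies the signature with Alice's public key
        Signature verifier = Signature.getInstance("SHA1withDSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(message);
        System.out.println("Verified: " + verifier.verify(signature));  // expected: true

        // Any tampering with the message breaks verification
        message[0] ^= 1;
        verifier.initVerify(keys.getPublic());
        verifier.update(message);
        System.out.println("Tampered: " + verifier.verify(signature));  // expected: false
    }
}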