Computer Science Chinese-English Translations (Foreign-Language Literature Translations)


CPU (Central Processing Unit): Chinese-English Foreign Literature Translation


Chinese-English Translation: Central Processing Unit Design. Abstract: The CPU (central processing unit) is a key component of a digital computer. Its purpose is to decode instructions received from memory and to perform transfer, arithmetic, logic, and control operations on data stored in internal registers, in memory, or in input/output interface units.

Externally, the CPU provides one or more buses for transferring instructions, data, and control information to and from the components connected to it.

In the generic computer introduced at the beginning of Chapter 1, the CPU is hidden inside the processor.

But CPUs may appear in many computers besides the generic one: small, relatively simple computers called microcontrollers are used in computers and in other digital systems to perform limited or specialized tasks.

For example, a microcontroller is present in the keyboard and in the monitor of the generic computer, but these components are likewise hidden from view.

The CPU in such a microcontroller may be very different from the CPUs discussed in this chapter.

The word length may be shorter (say, 4 or 8 bits), the number of registers small, and the instruction set limited.

Performance, relatively speaking, is poor, but it is adequate for the task at hand.

Most important, the cost of such a microcontroller is very low, making its use cost-effective.

In the coming pages, we consider the CPUs of two computers: one a complex instruction set computer (CISC) and the other a reduced instruction set computer (RISC).

After examining the designs in detail, we compare the performance of the two CPUs and give a brief overview of some methods used to improve it.

Finally, we discuss design ideas that apply to digital system design in general.

1. The Design of Two CPUs. As we noted in the previous chapter, a typical CPU is usually divided into two parts: the datapath and the control unit.

The datapath consists of functional units, registers, and internal buses that provide pathways for transferring information between the functional units, memory, and other computer components.

The datapath may or may not be pipelined.

The control unit consists of a program counter, an instruction register, and control logic, which may be hardwired or microprogrammed.

If the datapath is pipelined, the control unit is likely to be pipelined as well.

The CPU is part of a computer that is either a complex instruction set computer (CISC) or a reduced instruction set computer (RISC), each with its own instruction set architecture.

The purpose of this chapter is to present two CPU designs that illustrate how an instruction set, a datapath, and a control unit are combined into a complete design.

The designs proceed top-down, reusing previously designed components, to illustrate the influence of the instruction set architecture on the datapath and the control unit, and the influence of the datapath on the control unit.
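To make the datapath/control-unit split concrete before the detailed designs, here is a minimal, illustrative C++ sketch of a single-cycle, accumulator-style machine. The instruction encoding, opcode names, and memory layout are invented for this example only and do not correspond to the CISC or RISC designs discussed later; the point is simply that the Datapath struct holds the registers, memory, and micro-operations, while the run loop plays the role of a (non-pipelined) control unit that fetches, decodes, and sequences them.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>

// Datapath: registers, memory, and the operations the control unit can invoke.
struct Datapath {
    uint16_t pc = 0;                      // program counter
    uint16_t ir = 0;                      // instruction register
    int16_t  acc = 0;                     // accumulator
    std::array<int16_t, 256> mem{};       // unified data/instruction memory

    void fetch()             { ir = static_cast<uint16_t>(mem[pc]); ++pc; }
    void load(uint8_t addr)  { acc = mem[addr]; }
    void add(uint8_t addr)   { acc = static_cast<int16_t>(acc + mem[addr]); }
    void store(uint8_t addr) { mem[addr] = acc; }
};

// Invented encoding for this sketch: high 8 bits = opcode, low 8 bits = address.
enum Opcode : uint8_t { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

// Control unit: decodes the instruction register and sequences datapath operations.
void run(Datapath& dp) {
    for (;;) {
        dp.fetch();
        uint8_t opcode = static_cast<uint8_t>(dp.ir >> 8);
        uint8_t addr   = static_cast<uint8_t>(dp.ir & 0xFF);
        switch (opcode) {
            case LOAD:  dp.load(addr);  break;
            case ADD:   dp.add(addr);   break;
            case STORE: dp.store(addr); break;
            case HALT:  return;
        }
    }
}

int main() {
    Datapath dp;
    dp.mem[10] = 3; dp.mem[11] = 4;        // data
    dp.mem[0] = (LOAD  << 8) | 10;         // acc = mem[10]
    dp.mem[1] = (ADD   << 8) | 11;         // acc += mem[11]
    dp.mem[2] = (STORE << 8) | 12;         // mem[12] = acc
    dp.mem[3] = (HALT  << 8);
    run(dp);
    std::printf("result = %d\n", dp.mem[12]);  // prints 7
    return 0;
}
```

Pipelining, in this picture, would amount to overlapping the fetch, decode, and execute steps of successive instructions rather than completing each instruction before starting the next.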

Computer Science Foreign Literature Translation 6


外文文献翻译(译成中文2000字左右):As research laboratories become more automated,new problems are arising for laboratory managers.Rarely does a laboratory purchase all of its automation from a single equipment vendor. As a result,managers are forced to spend money training their users on numerous different software packages while purchasing support contracts for each. This suggests a problem of scalability. In the ideal world,managers could use the same software package to control systems of any size; from single instruments such as pipettors or readers to large robotic systems with up to hundreds of instruments. If such a software package existed, managers would only have to train users on one platform and would be able to source software support from a single vendor.If automation software is written to be scalable, it must also be flexible. Having a platform that can control systems of any size is far less valuable if the end user cannot control every device type they need to use. Similarly, if the software cannot connect to the customer’s Laboratory Information Management System (LIMS) database,it is of limited usefulness. The ideal automation software platform must therefore have an open architecture to provide such connectivity.Two strong reasons to automate a laboratory are increased throughput and improved robustness. It does not make sense to purchase high-speed automation if the controlling software does not maximize throughput of the system. The ideal automation software, therefore, would make use of redundant devices in the system to increase throughput. For example, let us assume that a plate-reading step is the slowest task in a given method. It would make that if the system operator connected another identical reader into the system, the controller software should be able to use both readers, cutting the total throughput time of the reading step in half. While resource pooling provides a clear throughput advantage, it can also be used to make the system more robust. For example, if one of the two readers were to experience some sort of error, the controlling software should be smart enough to route all samples to the working reader without taking the entire system offline.Now that one embodiment of an ideal automation control platform has been described let us see how the use of C++ helps achieving this ideal possible.DISCUSSIONC++: An Object-Oriented LanguageDeveloped in 1983 by BjarneStroustrup of Bell Labs,C++ helped propel the concept of object-oriented programming into the mainstream.The term ‘‘object-oriented programming language’’ is a familiar phrase that has been in use for decades. But what does it mean? And why is it relevant for automation software? Essentially, a language that is object-oriented provides three important programming mechanisms:encapsulation, inheritance, and polymorphism.Encapsulation is the ability of an object to maintain its own methods (or functions) and properties (or variables).For example, an ‘‘engine’’ object might contain methods for starting, stopping, or accelerating, along with properties for ‘‘RPM’’ and ‘‘Oil pressure’’. Further, encapsulation allows an object to hide private data from a ny entity outside the object. The programmer can control access to the object’s data by marking methods or properties as public, protected,or private. 
This access control helps abstract away the inner workings of a class while making it obvious to a caller which methods and properties are intended to be used externally.Inheritance allows one object to be a superset of another object. For example, one can create an object called Automobile that inherits from Vehicle. The Automobile object has access to all non-private methods and properties of Vehicle plus any additional methods or properties that makes it uniquely an automobile.Polymorphism is an extremely powerful mechanism that allows various inherited objects to exhibit different behaviors when the same named method is invoked upon them. For example, let us say our Vehicle object contains a method called CountWheels. When we invoke this method on our Automobile, we learn that the Automobile has four wheels.However, when we call this method on an object called Bus,we find that the Bus has 10 wheels.Together, encapsulation, inheritance, and polymorphism help promote code reuse, which is essential to meeting our requirement that the software package be flexible. A vendor can build up a comprehensive library of objects (a serial communications class, a state machine class, a device driver class,etc.) that can be reused across many different code modules.A typical control software vendor might have 100 device drivers. It would be a nightmare if for each of these drivers there were no building blocks for graphical user interface (GUI) or communications to build on. By building and maintaining a library of foundation objects, the vendor will save countless hours of programming and debugging time.All three tenets of object-oriented programming are leveraged by the use of interfaces. An interface is essentially a specification that is used to facilitate communication between software components, possibly written by different vendors. An interface says, ‘‘if your cod e follows this set of rules then my software component will be able to communicate with it.’’ In the next section we will see how interfaces make writing device drivers a much simpler task.C++ and Device DriversIn a flexible automation platform, one optimal use for interfaces is in device drivers. We would like our open-architecture software to provide a generic way for end users to write their own device drivers without having to divulge the secrets of our source code to them. To do this, we define a simplifiedC++ interface for a generic device, as shown here:class IDevice{public:virtual string GetName() ? 0; //Returns the name//of the devicevirtual void Initialize() ? 0; //Called to//initialize the devicevirtual void Run() ? 0; // Called to run the device};In the example above, a Ctt class (or object) called IDevice has been defined. The prefix I in IDevice stands for ‘‘interface’’. This class defines three public virtual methods: GetName, Initialize, and Run. The virtual keyword is what enables polymorphism, allowing the executing program to run the methods of the inheriting class. When a virtual method declaration is suffixed with ?0, there is no base class implementation. Such a method is referred to as ‘‘pure virtual’’. A class like IDevice that contains only pure virtual functions is known as an ‘‘abstract class’’, or an‘‘interface’’. 
The IDevice definition, along with appropriate documentation, can be published to the user community,allowing developers to generate their own device drivers that implement the IDevice interface.Suppose a thermal plate sealer manufacturer wants to write a driver that can be controlled by our software package. They would use inheritance to implement our IDevice interface and then override the methods to produce the desired behavior: class CSealer : public IDevice{public:virtual string GetName() {return ‘‘Sealer’’;}virtual void Initialize() {InitializeSealer();}virtual void Run() {RunSealCycle();}private:void InitializeSealer();void RunSealCycle();};Here the user has created a new class called CSealer that inherits from the IDevice interface. The public methods,those that are accessible from outside of the class, are the interface methods defined in IDevice. One, GetName, simply returns the name of the device type that this driver controls.The other methods,Initialize() and Run(), call private methods that actually perform the work. Notice how the privatekeyword is used to prevent external objects from calling InitializeSealer() and RunSealCycle() directly.When the controlling software executes, polymorphism will be used at runtime to call the GetName, Initialize, and Run methods in the CSealer object, allowing the device defined therein to be controlled.DoSomeWork(){//Get a reference to the device driver we want to useIDevice&device ? GetDeviceDriver();//Tell the world what we’re about to do.cout !! ‘‘Initializing ’’!! device.GetName();//Initialize the devicedevice.Initialize();//Tell the world what we’re about to do.cout !! ‘‘Running a cycle on ’’ !!device.GetName();//Away we go!device.Run();}The code snippet above shows how the IDevice interface can be used to generically control a device. If GetDevice-Driver returns a reference to a CSealer object, then DoSomeWork will control sealers. If GetDeviceDriver returns a reference to a pipettor, then DoSomeWork will control pipettors. Although this is a simplified example, it is straightforward to imagine how the use of interfaces and polymorphism can lead to great economies of scale in controller software development.Additional interfaces can be generated along the same lines as IDevice. For example, an interface perhaps called ILMS could be used to facilitate communication to and from a LIMS.The astute reader will notice that the claim that anythird party can develop drivers simply by implementing the IDevice interface is slightly flawed. The problem is that any driver that the user writes, like CSealer, would have to be linked directly to the controlling software’s exec utable to be used. This problem is solved by a number of existing technologies, including Microsoft’s COMor .NET, or by CORBA. All of these technologies allow end users to implement abstract interfaces in standalone components that can be linked at runtime rather than at design time. The details are beyond the scope of this article.中文翻译:随着研究实验室更加自动化,实验室管理人员出现的新问题。
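The C++ listings embedded in the passage above were garbled during text extraction: the pure-virtual marker `= 0` appears as `?0`, the stream-insertion operator `<<` appears as `!!`, and "C++ class" appears as "Ctt class". The following is a cleaned-up, self-contained reconstruction of the simplified IDevice interface, the CSealer driver, and the DoSomeWork caller described in the text; the GetDeviceDriver stub and the main function are added here only so the sketch compiles and runs.

```cpp
#include <iostream>
#include <string>

// Abstract interface ("pure virtual" class) that third-party drivers implement.
class IDevice {
public:
    virtual ~IDevice() = default;
    virtual std::string GetName() = 0;   // Returns the name of the device
    virtual void Initialize() = 0;       // Called to initialize the device
    virtual void Run() = 0;              // Called to run the device
};

// A vendor-written driver for a thermal plate sealer.
class CSealer : public IDevice {
public:
    std::string GetName() override { return "Sealer"; }
    void Initialize() override { InitializeSealer(); }
    void Run() override { RunSealCycle(); }
private:
    void InitializeSealer() { /* device-specific setup */ }
    void RunSealCycle()     { /* device-specific seal cycle */ }
};

// Stub so the example is self-contained; a real controller would look up the
// appropriate driver at runtime (e.g., from a dynamically loaded component).
IDevice& GetDeviceDriver() {
    static CSealer sealer;
    return sealer;
}

// Generic control code: works with any IDevice implementation via polymorphism.
void DoSomeWork() {
    IDevice& device = GetDeviceDriver();
    std::cout << "Initializing " << device.GetName() << '\n';
    device.Initialize();
    std::cout << "Running a cycle on " << device.GetName() << '\n';
    device.Run();
}

int main() {
    DoSomeWork();
    return 0;
}
```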

Software Engineering Graduation Thesis Literature Translation (Chinese-English)


Student Graduation Project (Thesis) Foreign-Language Translation. Student name:  Student ID:  Major: Software Engineering. Translation title (Chinese and English): Qt Creator Whitepaper. Source: Qt network. Advisor's review signature: . Translated text: Qt Creator Whitepaper. Qt Creator is a complete integrated development environment (IDE) for creating applications based on the Qt application framework.

Qt is designed for applications and user interfaces that are developed once and deployed across multiple desktop and mobile operating systems.

This paper provides an introduction to Qt Creator and an overview of the features it offers Qt developers throughout the application development lifecycle.

Introduction to Qt Creator. One of the main advantages of Qt Creator is that it allows a development team to share a project across different development platforms (Microsoft Windows, Mac OS X, and Linux) with common tools for development and debugging.

The main goal of Qt Creator is to meet the needs of Qt developers who are looking for simplicity, usability, productivity, extensibility, and openness, while lowering the barrier to entry for newcomers to Qt.

Qt Creator's key features let developers accomplish the following tasks:

- Get started with Qt application development quickly and easily, using project wizards and fast access to recent projects and sessions.

- Design user interfaces for Qt widget-based applications with the integrated UI editor, Qt Designer.

- Develop applications with the advanced C++ code editor, which provides powerful features such as code completion, code snippets, refactoring, and viewing a file's outline (i.e., a symbol hierarchy of the file).

- Build, run, and deploy Qt projects that target multiple desktop and mobile platforms, such as Microsoft Windows, Mac OS X, Linux, and Nokia's MeeGo and Maemo.

- Debug with the GNU and CDB debuggers through a graphical debugger front end that is aware of the structure of Qt classes.

- Use code analysis tools to check for memory-management issues in your application.

- Deploy applications to MeeGo, Symbian, and Maemo mobile devices, and create application installation packages that can be published in the Ovi Store and through other channels.

- Easily access information with the integrated, context-sensitive Qt help system.
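For readers new to Qt, a minimal Qt Widgets application of the sort Qt Creator's project wizard produces looks roughly like the following sketch (the exact wizard-generated code varies by Qt and Qt Creator version, so treat this only as an illustration of the kind of project the features above target):

```cpp
#include <QApplication>
#include <QPushButton>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);            // manages the GUI event loop
    QPushButton button("Hello from Qt Creator");
    button.resize(240, 60);
    button.show();
    return app.exec();                       // enter the event loop
}
```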

C# Programming Language: Foreign Literature Translation (Chinese-English)


C# 编程语言概述外文文献翻译(含:英文原文及中文译文)文献出处:Barnett M. C# Programming Language Overview [J]Lecture Notes in Computer Science, 2016, 3(4):49-59.英文原文C# Programming Language OverviewBarnett M1. History of C, C++, C#The C# programming language is based on the spirit of the C and C++ programming languages. This account has powerful features and an easy-to-learn curve. It cannot be said that C# is the same as C and C++, but because C# is built on both, Microsoft has removed some features that have become more burdensome, such as pointers. This section looks at C and C++ and tracks their development in C#.The C programming language was originally defined in the UNIX operating system. In the past, we often wrote some UNIX applications, including a C compiler, and finally used to write UNIX itself. It is generally accepted that this academic competition extends to the world that contains this business. The original Windows API was defined to work with C using Windows code, and until now at least the core Windows operating system APIS maintains the C compiler.From a defined point of view, C lacks a single detail, like thelanguage Smalltalk does, and the concept of an object. Y ou will learn more about the contents of the object. In Chapter 8, "Write Object-Oriented Code," an object is collected as a data set and some operations are set. The code can be completed by C, but the concept of the object cannot be Forced to appear in this language. If you want to construct your code to make it like an object, that's fine. If you don't want to do this, C will really not mind. The object is not an intrinsic part. Many people in this language did not spend a lot of time in this program example. When the development of object-oriented perspectives began to gain acceptance, think about the code approach. C++ was developed to include this improvement. It is defined to be compatible with C (just as all C programs are also C++ programs and can be compiled by a C++ compiler) The main addition to the C++ language is to provide this new concept. C++ additionally provides a derivative of the class (object template) behavior.The C++ language is a modified version of the C language. Unfamiliar, infrequent languages such as VB, C, and C++ are very low-level and require a lot of coding to make your application run well. Reason and error checking. And C++ can be handled in some very powerful applications, the code works very smoothly. The goal is set to maintain compatibility with C. C++ cannot break the low-level features of C.Microsoft defined C# retains a lot of C and C++ statements. The code can also want to identify the code quickly. A big advantage for C# is that its designers did not make it compatible with C and C++. When this may seem like a wrong treatment, it is actually good news. C# eliminates something that makes C and C++ difficult to work with. Beginning with quirks and defects found in C. C# is starting a clean slate and does not have any compatibility requirements. So it can maintain the strengths of its predecessors and discard weaknesses that make C and C++ programs difficult to survive.2. Introduce C#C#, the new language introduced in the .NET system, is derived from C++. However, C# is a popular, object-oriented (from beginning to end) type-safe language.Language featuresThe following section provides a quick perspective on some of the features of the C# language. If some of them are unfamiliar to you, don't worry, everything will be explained in detail in the following sections. 
In C#, all code and data must be attached to a class. Y ou cannot define a variable outside the class, nor can you write any code that is not in the class. When a class object is created and run, the class is constructed. When the object of the class is released, the class is destroyed. The class provides single inheritance, and all the classes eventually get from thebase class is the object. Over time, C# provides versioned techniques to help with the formation of your classes to maintain code compatibility when you use code from your earlier classes.Let's look at an example of a class called Family. This class contains two static fields to hold the first and last names of family members. In the same way, there is a way to return the full name of a family member.Class Class1{Public string FirstName;Public string LastName;Public string FullName(){}Return FirstName + LastName;}Note: Single inheritance means that a C# class can only inherit from a base class.C# is a collection that you can package your class into a namespace called the namespace class. And you can help arrange collection of classes on logical aggregations. When you started learning C#, it was clear that all namespaces were related to .NET type systems. Microsoft also chose to include channels that assist in the compatibility of previouscode and APIs. These classes are also included in Microsoft's namespace.Type of dataC# lets you work with two types of data: value types and reference types. The value type holds the actual value. The reference type saves the actual value stored elsewhere in the memory. Raw data types, such as character, integer, float, enumeration, and structure types, are all value types. Objects and array types are treated as reference types. C# predefines reference types (objects and strings) New, Byte, Unsigned Short, Unsigned Integer, Unsigned Long, Float, Double-Float, Boolean, Character, and The value type and reference type of the decimal type will eventually be executed by a primitive type object. C# also allows you to convert a value or a type to another value or a type. Y ou can use an implicit conversion strategy or an explicit conversion strategy. Implicit conversion strategies are always successful and do not lose any information (for example, you can convert an integer to a long integer without losing any information because long integers are longer than integers) Some data is lost because long integers can hold more value than integers. Conversion occurs.Before and after referenceRefer to Chapter 3 "Working with V ariables" to find out more about explicit and implicit conversion strategies.Y ou can use single-dimensional and multidimensional arrays in C#at the same time. Multidimensional arrays can become a matrix. When this matrix has the same area size as a multidimensional array. Or jagged, when some arrays have different sizes.Classes and structures can have data members called attributes and fields. Y ou can define a structure called Employee. For example, there is a field called Name. If you define an Employee type variable called CurrenrEmployee, you can retrieve the employee's name by writing . What should happen after the code assignment? If the employee's name must be read by a database, for example, you can write a code "When some people ask for the value of the name attribute, read the name from the database and return the name with the string type".FunctionA function is a code that can be used at any time, code. 
An example of a function will appear earlier than the FullName function, in this chapter, in the Family class. A function is usually combined with some code that returns information, and a method usually does not return information. However, for us, we generally attribute them to functions.The function can have four parameters:•The input parameters have values passed into the function, but the function cannot change their values.•The output parameters have no value when they are passed to thefunction, but the function can give them a value and pass the value back to its caller. ,•The reference parameter passes another value by reference. They have a value into the function, and this value can be changed in the function.•The parameter parameter defines an array variable in the list.C# and CLR work together to provide automatic storage management. Or "Leave enough space for this object to use" code like this. The CLR monitors your memory usage and automatically retrieves it when you need it.C# provides a large number of operators that allow you to write a large number of mathematical and bitwise expressions. Many (but not all) of them can be redefined, and you can change the job of these operators.C# provides a long list of reports that you can define through a variety of processing paths through your code. Through the report's operations, using keywords like switch, while, for, break, and continue enables your code to be split into different paths depending on the value of the variable.Classes can contain code and data. Visibility of each member to other objects. C# provides such accessible ranges as public, protected, internal, protected internal, and private.V ariableV ariables can be defined as constants. The constant has a fixed value and cannot be changed during the execution of your code. The value of PI, for example, is a good example of a constant because her value will not be changed while your code is running. The enumeration type defines a specific name for the constant. For example, you can define an enumerated type of planet using Mercury V in your code. If you use a variable to represent the planet, using the names of this enum type can make your code easier to read.C# provides an embedded mechanism to define and handle some events. If you write a class that performs a long operation, you may want to call an event. When the event ends, the client can sign this time and grab the event in their own code, he can let them be notified When you have completed this long budget, this event handling mechanism uses delegates in C#, a variable that references a function.Note: Event processing is a program in your code that determines what action will take place when a time occurs.For example, the user clicks on a button. If your class holds a value, write some code called a protractor that your class can be accessed as if it were an array. Suppose you write a class called Rainbow. For example, it contains a set of colors in this rainbow. Visitors may want some MYRainbow to retrieve the first color in the rainbow. Y ou can write an indexer in your Rainbow class to define what will be returned when thevisitor accesses your class as if it were an array of values.InterfaceC# provides an interface that aggregates properties, methods, and events that describe a set of functions. The class of C# can execute the interface. It tells the user through the interface a set of function files provided by this class. What existing code can have as few compatibility issues as possible. 
Once there was an interface exposed, it could not be changed, but it could evolve through inheritance. C# classes can perform many interfaces, even if the class can only inherit from a base class.Let's look at an example of a very clear rule in the real world of C# that helps illustrate the interface. Many applications use the additions provided today. There is the ability to read additional items when executed. To do this, this added item must follow some rules. DLL add items must display a function called CEEntry. And you must use CEd as the beginning of the DLL file name. When we run our code, it scans the directories of all the DLLs that are starting with CEd. When it finds one, it is read. Then it uses GetProcAddress to find the CEEntry function in the DLL. This proves that it is necessary for you to obey all the rules to establish an addition. This kind of creating a read addition is necessary because it carries more unnecessary code responsibility. If we use an interface in this example, your DLL additions can be applied to an interface. This ensures that all necessary methods, properties, and eventsappear in the DLL and are specified as files.AttributesThe attribute declares additional information about your class for the CLR. In the past, if you wanted to describe your classes yourself, you would have to use a few decentralized ways to store them in external files, such as IDL or event HTML files. Through your efforts, the property solves this problem. The developer has constrained some information in the class and any kind of information, for example, in the class, defines how it acts when it is used. The possibilities are endless, which is why Microsoft will contain a lot of predefined attributes in the .NET framework.Compile C#Running your C# code generates two important types of information through the C# compiler: code and metadata. The next section describes these two topics and completes a binary review built on .NET code, which is assembly.Microsoft Intermediate Language (MSIL)The code output by the C# compiler is written in an intermediate language called Microsoft. MSIL is your code that is used to construct a detailed set of instructions to guide you on how to perform. It contains instructions for operations, such as initialization of variables, methods for evoking objects, error handling, and declaring something new. C# is notjust a language from the MSIL source code that changes during the writing process. All .NET-compatible languages, including and C++ management, generate MSIL when their source code is compiled. All .NET languages use the same runtime, so code from different languages and different compilers can easily work together.For physical CPUs, MISL is not a set of explicit instructions. It doesn't know anything about your machine's CPU, and your machine doesn't know anything about MSIL. Then, when your CPU can't read MSIL, explain the code. This sinking is called just enough to write, or JIT. The job of the JIT compiler is to translate your universal MSIL code to the machine so that the CPU can execute your code.Y ou may want to know what an extra step is in the process. When a compiler can immediately generate CPU-interpreted code for why MSIL was generated, the compiler does this later. There are many reasons for this. First, MSIL makes it easier for you to write code as it moves to a different piece of hardware. Suppose you have written some C# code and you want it to run on your desktop and handheld devices at the same time. 
It is very likely that these two devices have different CPUs. If you only have one C# compiler whose goal is a clear CPU, then you need two C# compilers: one with the desktop CPU and the other with the handheld device CPU. Y ou have to compile your code twice to ensure that your correct code is used on the right device. With MSIL, you only write once.The .NET Framework is installed on your desktop and it contains a JIT compiler that translates your MSIL-specific CPU code to your machine. The .NET Framework is installed on your handheld device and it contains a JIT compiler that translates the same MSIL-specific CPU-specific code to your handheld device. To run MSIL code base on any device that has a .NET JIT compiler. Y ou now have only one MSIL basic code that can run on any device that has a .NET JIT compiler. The JIT compiler on these devices can take care of your code and make them run smoothly.Another reason why the compiler uses MSIL is that the settings of the instruction can be easily read by an authenticated proximity. Part of the compiler's job is to verify your code to make it as clear as possible. When properly accessed, these checks ensure that your code does not execute any instructions that can cause your code to crash. The definition of MSIL directives makes this check process easier to understand. CPU-specific instruction settings are optimized for fast code execution. However, they make the code difficult to read and therefore difficult to check. Having a C# compiler that can output CPU-specific code at once can make code inspection difficult or even impossible. Allow the .NET Framework's JIT compiler to verify your code to ensure that your code accesses memory through a buggy path and that the variable types are used correctly.MetadataThe assembly process outputs the same amount of metadata. This is a very important part of the .NET code sharing story. Whether you use C# to build a client application or use C# to build a library that some people use for your application, you will want to take advantage of some compiled .NET code. That code may have been provided by Microsoft as part of the .NET framework, or it may be provided by some online users. The key to using a foreign code is to let the C# compiler know that the class and that variable are in another base code so that it can be found in the precompilation of your work and match the code you write with the source code.Look at the metadata for the directory for your compiled code. The number of bits of source code compiled by C# exists in the compiled code along with the generation of MSIL. The types of methods and variables are completely described in the metadata and are ready to be read by other applications. For example, can read metadata from a .NET library to provide intelligent sensing of all the methods that can be used effectively for a particular class.If you have already worked with COM, you may be familiar with type libraries. The goal of the type library is to provide the same directory functionality to COM objects. However, the type library is provided from a few limitations, and in fact not all data about the target can be put into the type library. Metadata in .NET does not have this disadvantage. Allthe code used to describe the class's information is placed in the metadata.memberSometimes you need to use C# to build a terminal application. These applications are packaged into an executable file and use .EXE as an extension. C# completely supports the creation of .EXE files. 
However, there are also times when you do not want to be used in other programs. Y ou may want to create some useful C# classes, such as a developer who wants to use your class in a application. In this case, you will not create an application, instead you will build a component. A component is a metadata package. As a unit to configure, these classes will share the same level of version control, security information, and dynamic requirements. Think of a component as a logical DLL. If you are familiar with Microsoft's translation services or COM+, then you can think of components as equivalent to .NET packages.There are two kinds of components: private components and global components. When you build your own component, you don't need to specify whether you want to create a global component or a private component. Y ou can only make your code accessible by a separate application. Y our component is a package similar to a DLL and is installed into the same directory when your application runs it. The application is only executable when it is in the same directory as yourcomponent.If you want to share your code, more global components in more applications. Global components can be used by any system's .NET application regardless of the directory in which it is installed. Microsoft installs components as part of the .NET structure, and each Microsoft component is installed as a global component. The Microsoft Architecture SDK contains the public functionality to install and remove artifacts from global widget storage.C# can be viewed to some extent as a programming language for the .NET Windows-oriented environment. In the past ten years, although VB and C++ have finally become very powerful languages, some of the content has come. For Visual Basic, its main advantage is that it is easy to understand. Many programming tasks are easy to accomplish and basically hide the connotations of the Windows API and the COM component structure. The downside is that Visual Basic has never implemented an early version of object-oriented, real-world (BASIC is primarily intended to make beginners easier to understand than to write large commercial applications), so it cannot really be structured or object-oriented. Programming language.On the other hand, C++ has its own root in the ANSI C++ language definition. It is not fully compatible with ANSI because Microsoft wrote the C++ compiler before the ANSI definition was standardized, but it isalready quite close. Unfortunately, this leads to two problems. First, ANSI C++ was developed under technical conditions more than a decade ago, so it does not support current concepts (such as Unicode strings and generating XML documents), and some of the older grammatical structures were designed for previous compilers ( For example, the declaration and definition of member functions are separate.) Second, Microsoft also tried to evolve C++ into a language for performing high-performance tasks on Windows - avoiding the addition of large numbers of Microsoft-specific keywords and libraries in the language. The result is that in Windows, the language becomes a very messy language. Let a C++ developer talk about how many strings are defined in this way: char*, LPTSTR, (MFC version), CString (WTL version), wchar_t*, OLECHAR*, and so on.Now entering the .NET era - a new environment, it has made new extensions to both languages. Microsoft added many Microsoft-specific keywords to C++ and evolved VB to , retaining some basic VB syntax, but it is completely different in design. 
From a practical application perspective, is a New language. Here, Visua l C# .NET. Microsoft describes C# as a simple, modern, object-oriented, type-safe, and C and C++-derived programming language. Most in dependent commentators are “derived from C, C++, and Java” from their claims. C# is very similar to C++ and Java. It uses parentheses ({})to mark blocks of code, and semicolons separate lines of statements. The first impression of C# code is that it is very similar to C++ or Java code. But after these seeming similarities, C# is much easier to learn than C++ but harder than Java. Its design and modern development tools are more adaptable than other languages. It also has Visua Basic's ease of use, high performance, and low memory accessibility of C++. C# includes the following features:●Full support for class and object-oriented programming, including interface and inheritance, virtual functions, and operator overloading.●Define a complete, consistent set of basic types.●Built-in support for automatically generating XML document descriptions.●Automatically clean dynamically allocated memory.●Classes or methods can be marked with user-defined properties. This can be used for documentation purposes and has a certain impact on compilation (for example, marking a method to compile only when debugging).●Full access to the .NET base class library and easy access to the Windows API.●Y ou can use pointers and direct memory access, but the C# language can access memory without them.●Supports attributes and events in VB style.●Changing compiler options, ActiveX controls (COM components) are called by other code in the same way. ●C# can be used to write dynamic Web pages and XML Web services.It should be noted that for most of these features, and Managed C++ are also available. But since C# used .NET from the beginning, support for .NET features was not only complete, but also provided a more suitable syntax than other languages. The C# language itself is very similar to Java, but there are some improvements because Java is not designed for use in a .NET environment. Before ending this topic, we must also point out two limitations of C#. One is that the language is not suitable for writing time-critical or very high-performance codes, such as a loop that runs 1000 or 1050 times, and immediately clears the resources they occupy when they are not needed. In this regard, C++ may still be the best of all low-level languages. The second is that C# lacks the key functions needed for very high-performance applications. The parcels guarantee inlining and destructor functions in specific areas of the code. However, such applications are very few.中文译文C# 编程语言概述作者:Barnett M1. C,C++,C#的历史C#程序语言是建立在C 和C++程序语言的精神上的。

[Computer Science Literature Translation] Engineering Workstations


Appendix 1: Translation of Foreign Materials. Engineering Workstations. In terms of raw performance, engineering workstations fall roughly between PCs and large minicomputers, although as both PCs and workstations grow more powerful, the distinctions among the three are becoming increasingly difficult to draw.

Nevertheless, engineering workstations do have several advantages over both PCs and traditional time-sharing (i.e., minicomputer) technology.

Compared with PCs, workstations generally have more powerful CPUs and can support more main memory, although PCs overlap in capability with low-end workstations.

Unlike PCs, workstations provide a multi-user, multitasking operating system as a standard feature.

OS/2 and UNIX are available for PCs, particularly those based on the Intel 80386.

However, the operating system most widely used on PCs is still MS-DOS.

Multitasking systems have several advantages over single-tasking systems.

First, the user can run multiple programs at the same time, in a way that is transparent to the application programs.

Desktop accessories and RAM-resident programs on PCs can give users a crude form of multitasking, enough to run background print spoolers and similar programs.

However, they may not be transparent to application programs, and they cannot provide important features such as interprocess communication and support for multiple concurrent users.
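As a concrete illustration of the interprocess communication that a multitasking, UNIX-style workstation operating system provides (and that MS-DOS-era resident programs could not), the following is a minimal POSIX C++ sketch in which a parent process receives a message from a child process through a pipe; the message text and buffer size are arbitrary choices for the example.

```cpp
#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                        // child: producer
        close(fds[0]);                     // close unused read end
        const char msg[] = "result from child process";
        write(fds[1], msg, sizeof(msg));
        close(fds[1]);
        return 0;
    }

    close(fds[1]);                         // parent: consumer, close write end
    char buf[64] = {0};
    read(fds[0], buf, sizeof(buf) - 1);
    std::printf("parent received: %s\n", buf);
    close(fds[0]);
    waitpid(pid, nullptr, 0);              // reap the child
    return 0;
}
```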

Perhaps more important for today's engineering applications is the PC's lack of large physical memory and of virtual memory.

Virtual memory is important for large applications whose data sets are simply too large for the whole application to run in physical memory.

Without virtual memory, even simple tasks such as editing large files can be painfully slow, or outright impossible.

In addition, many applications become more complex because they must buffer their data, or use overlays to page different parts of the application in and out of physical memory themselves.
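The following is a minimal POSIX C++ sketch of the difference virtual memory makes in practice: instead of buffering or overlaying a large file by hand, the program maps the whole file into its virtual address space and lets the operating system page it in and out on demand. Counting newline characters here is just a stand-in for any whole-file operation.

```cpp
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char* argv[]) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    // Map the whole file into the process's virtual address space. The OS
    // pages data in and out on demand; the program never buffers it itself.
    const char* data = static_cast<const char*>(
        mmap(nullptr, static_cast<size_t>(st.st_size), PROT_READ, MAP_PRIVATE, fd, 0));
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    long long newlines = 0;
    for (off_t i = 0; i < st.st_size; ++i)
        if (data[i] == '\n') ++newlines;
    std::printf("lines: %lld\n", newlines);

    munmap(const_cast<char*>(data), static_cast<size_t>(st.st_size));
    close(fd);
    return 0;
}
```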

Finally, the user interfaces of most workstations are more advanced than those of most PCs; a notable exception is the user interface of the Apple Macintosh.


A computer's user interface, together with the programming interface to it, determines how sophisticated application interfaces can be.

Powerful development tools let programmers create intuitive user interfaces.

Although workstations are more powerful than PCs, the same is generally not true when they are compared with modern minicomputers such as Digital Equipment Corporation's (DEC) VAX 8000 series.

Computer Science Foreign Literature and Translation: Microsoft Visual Studio


1. Microsoft Visual Studio. Visual Studio is a development environment from Microsoft. It can be used to create Windows applications and web applications for the Windows platform, as well as web services, smart-device applications, and Office add-ins.

Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop console and graphical user interface applications, Windows Forms applications, web sites, web applications, and web services, in both native code and managed code, for all platforms supported by Microsoft Windows, Windows Mobile, Windows CE, the .NET Framework, the .NET Compact Framework, and Microsoft Silverlight.

Visual Studio includes a code editor that supports IntelliSense and code refactoring.

The integrated debugger works both as a source-level debugger and as a machine-level debugger.

Other built-in tools include a forms designer for building GUI applications, a web designer, a class designer, and a database schema designer.

It accepts plug-ins that extend functionality at almost every level, including adding support for source-control systems (such as Subversion and Visual SourceSafe) and adding new toolsets such as editors and visual designers for domain-specific languages, or tools for other aspects of the software development lifecycle (for example, the Team Foundation Server client, Team Explorer).

Visual Studio supports different programming languages by means of language services, which allow the code editor and debugger to support (to varying degrees) nearly any programming language, provided a language-specific service exists.

Built-in languages include C/C++ (via Visual C++), VB.NET (via Visual Basic .NET), C# (via Visual C#), and F# (as of Visual Studio 2010). Support for other languages, such as M, Python, and Ruby, can be added by installing separate language services.

Computer Science and Technology Foreign Literature Translation -- Databases


外文原文:Database1.1Database conceptThe database concept has evolved since the 1960s to ease increasing difficulties in designing, building, and maintaining complex information systems (typically with many concurrent end-users, and with a large amount of diverse data). It has evolved together with database management systems which enable the effective handling of databases. Though the terms database and DBMS define different entities, they are inseparable: a database's properties are determined by its supporting DBMS and vice-versa. The Oxford English dictionary cites[citation needed] a 1962 technical report as the first to use the term "data-base." With the progress in technology in the areas of processors, computer memory, computer storage and computer networks, the sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitudes. For decades it has been unlikely that a complex information system can be built effectively without a proper database supported by a DBMS. The utilization of databases is now spread to such a wide degree that virtually every technology and product relies on databases and DBMSs for its development and commercialization, or even may have such embedded in it. Also, organizations and companies, from small to large, heavily depend on databases for their operations.No widely accepted exact definition exists for DBMS. However, a system needs to provide considerable functionality to qualify as a DBMS. Accordingly its supported data collection needs to meet respective usability requirements (broadly defined by the requirements below) to qualify as a database. Thus, a database and its supporting DBMS are defined here by a set of general requirements listed below. Virtually all existing mature DBMS products meet these requirements to a great extent, while less mature either meet them or converge to meet them.1.2Evolution of database and DBMS technologyThe introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing.In the earliest database systems, efficiency was perhaps the primary concern, but it was already recognized that there were other important objectives. One of the key aims was to make the data independent of the logic of application programs, so that the same data could be made available to different applications.The first generation of database systems were navigational,[2] applications typically accessed data by following pointers from one record to another. The two main data models at this time were the hierarchical model, epitomized by IBM's IMS system, and the Codasyl model (Network model), implemented in a number ofproducts such as IDMS.The Relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. This was considered necessary to allow the content of the database to evolve without constant rewriting of applications. Relational systems placed heavy demands on processing resources, and it was not until the mid 1980s that computing hardware became powerful enough to allow them to be widely deployed. By the early 1990s, however, relational systems were dominant for all large-scale data processing applications, and they remain dominant today (2012) except in niche areas. 
The dominant database language is the standard SQL for the Relational model, which has influenced database languages also for other data models.Because the relational model emphasizes search rather than navigation, it does not make relationships between different entities explicit in the form of pointers, but represents them rather using primary keys and foreign keys. While this is a good basis for a query language, it is less well suited as a modeling language. For this reason a different model, the Entity-relationship model which emerged shortly later (1976), gained popularity for database design.In the period since the 1970s database technology has kept pace with the increasing resources becoming available from the computing platform: notably the rapid increase in the capacity and speed (and reduction in price) of disk storage, and the increasing capacity of main memory. This has enabled ever larger databases and higher throughputs to be achieved.The rigidity of the relational model, in which all data is held in tables with a fixed structure of rows and columns, has increasingly been seen as a limitation when handling information that is richer or more varied in structure than the traditional 'ledger-book' data of corporate information systems: for example, document databases, engineering databases, multimedia databases, or databases used in the molecular sciences. Various attempts have been made to address this problem, many of them gathering under banners such as post-relational or NoSQL. Two developments of note are the Object database and the XML database. The vendors of relational databases have fought off competition from these newer models by extending the capabilities of their own products to support a wider variety of data types.1.3General-purpose DBMSA DBMS has evolved into a complex software system and its development typically requires thousands of person-years of development effort.[citation needed] Some general-purpose DBMSs, like Oracle, Microsoft SQL Server, and IBM DB2, have been undergoing upgrades for thirty years or more. General-purpose DBMSs aim to satisfy as many applications as possible, which typically makes them even more complex than special-purpose databases. However, the fact that they can be used "off the shelf", as well as their amortized cost over many applications and instances, makes them an attractive alternative (Vsone-time development) whenever they meet an application's requirements.Though attractive in many cases, a general-purpose DBMS is not always the optimal solution: When certain applications are pervasive with many operating instances, each with many users, a general-purpose DBMS may introduce unnecessary overhead and too large "footprint" (too large amount of unnecessary, unutilized software code). Such applications usually justify dedicated development.Typical examples are email systems, though they need to possess certain DBMS properties: email systems are built in a way that optimizes email messages handling and managing, and do not need significant portions of a general-purpose DBMS functionality.1.4Database machines and appliancesIn the 1970s and 1980s attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine. 
Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).1.5Database researchDatabase research has been an active and diverse area, with many specializations, carried out since the early days of dealing with the database concept in the 1960s. It has strong ties with database technology and DBMS products. Database research has taken place at research and development groups of companies (e.g., notably at IBM Research, who contributed technologies and ideas virtually to any DBMS existing today), research institutes, and Academia. Research has been done both through Theory and Prototypes. The interaction between research and database related product development has been very productive to the database area, and many related key concepts and technologies emerged from it. Notable are the Relational and the Entity-relationship models, the Atomic transaction concept and related Concurrency control techniques, Query languages and Query optimization methods, RAID, and more. Research has provided deep insight to virtually all aspects of databases, though not always has been pragmatic, effective (and cannot and should not always be: research is exploratory in nature, and not always leads to accepted or useful ideas). Ultimately market forces and real needs determine the selection of problem solutions and related technologies, also among those proposed by research. However, occasionally, not the best and most elegant solution wins (e.g., SQL). Along their history DBMSs and respective databases, to a great extent, have been the outcome of such research, while real product requirements and challenges triggered database research directions and sub-areas.The database research area has several notable dedicated academic journals (e.g., ACM Transactions on Database Systems-TODS, Data and Knowledge Engineering-DKE, and more) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE, and more), as well as an active and quite heterogeneous (subject-wise) research community all over the world.1.6Database architectureDatabase architecture (to be distinguished from DBMS architecture; see below) may be viewed, to some extent, as an extension of Data modeling. It is used to conveniently answer requirements of different end-users from a same database, as well as for other benefits. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but not other many details about employees, that are the interest of the human resources department. Thus different departments need different views of the company's database, that both include the employees' payments, possibly in a different level of detail (and presented in different visual forms). To meet such requirement effectively database architecture consists of three levels: external, conceptual and internal. 
Clearly separating the three levels was a major feature of the relational database model implementations that dominate 21st century databases.[13]The external level defines how each end-user type understands the organization of its respective relevant data in the database, i.e., the different needed end-user views.A single database can have any number of views at the external level.The conceptual level unifies the various external views into a coherent whole, global view.[13] It provides the common-denominator of all the external views. It comprises all the end-user needed generic data, i.e., all the data from which any view may be derived/computed. It is provided in the simplest possible way of such generic data, and comprises the back-bone of the database. It is out of the scope of the various database end-users, and serves database application developers and defined by database administrators that build the database.The Internal level (or Physical level) is as a matter of fact part of the database implementation inside a DBMS (see Implementation section below). It is concerned with cost, performance, scalability and other operational matters. It deals with storage layout of the conceptual level, provides supporting storage-structures like indexes, to enhance performance, and occasionally stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in attempt to optimize the overall database usage by all its end-uses according to the database goals and priorities.All the three levels are maintained and updated according to changing needs by database administrators who often also participate in the database design.The above three-level database architecture also relates to and being motivated by the concept of data independence which has been described for long time as a desired database property and was one of the major initial driving forces of the Relational model. In the context of the above architecture it means that changes made at a certain level do not affect definitions and software developed with higher level interfaces, and are being incorporated at the higher level automatically. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which saves substantial change work that would be needed otherwise.In summary, the conceptual is a level of indirection between internal and external. On one hand it provides a common view of the database, independent of different external view structures, and on the other hand it is uncomplicated by details of how the data is stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). 
The internal level, which is hidden inside the DBMS and depends on its implementation (see Implementation section below), requires a different levelof detail and uses its own data structure types, typically different in nature from the structures of the external and conceptual levels which are exposed to DBMS users (e.g., the data models above): While the external and conceptual levels are focused on and serve DBMS users, the concern of the internal level is effective implementation details.中文译文:数据库1.1 数据库的概念数据库的概念已经演变自1960年以来,以缓解日益困难,在设计,建设,维护复杂的信息系统(通常与许多并发的最终用户,并用大量不同的数据)。

Cloud Computing Foreign Literature Translation References


云计算外文翻译参考文献(文档含中英文对照即英文原文和中文翻译)原文:Technical Issues of Forensic Investigations in Cloud Computing EnvironmentsDominik BirkRuhr-University BochumHorst Goertz Institute for IT SecurityBochum, GermanyRuhr-University BochumHorst Goertz Institute for IT SecurityBochum, GermanyAbstract—Cloud Computing is arguably one of the most discussedinformation technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers(CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldomly discussed, but in this regard highly relevant open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore we discuss possible solutions and possible new methodologies helping customers to perform such investigations.I. INTRODUCTIONAlthough the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the companie’s assets. Normally,if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: The CSP obtains all the power over the environmentand thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics. proper manner which is hardly be possible due to the lackof circumstantial information. For auditors, this situation does not change: Questions who accessed specific data and information cannot be answered by the customers, if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensible information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations. 
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environment have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed albeit mentioned by the authors of [1] in 2009: ”[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts,and on acquisition logistics and legal issues related to cloud computing env ironments.” This statement is also confirmed by other authors [34], [36], [40] stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcendents peoples’ private and professional life, thus mirroring more and more of peoples’actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: We discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in section II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments including thecross-disciplinary aspects. We conclude in section V.II. RELATED WORKVarious works have been published in the field of cloud security and privacy [9], [35], [30] focussing on aspects for protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensis. Within this context, cryptographic proofs for verifying data integrity mainly in cloud storage offers have been proposed,yet lacking of practical implementations [24], [37], [23]. Traditional computer forensics has already well researched methods for various fields of application [4], [5], [6], [11], [13]. Also the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20] including the notionof virtual introspection [25]. 
In addition, the NIST already addressed Web Service Forensics [22] which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al [36] published a paper discussing the issues of incident response in cloud environments - unfortunately no specific issues and solutions of cloud forensics have been proposed which will be done within this work.III. TECHNICAL BACKGROUNDA. Traditional Digital ForensicsThe notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be proper isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps each having its own specificpurpose:1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing where you never know exactly where your data is and additionallydo not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results, it is complete and clear to understand. Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.B. 
Cloud ComputingAccording to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure asa Service (IaaS) model, the customer is using the virtual machine provided by the CSP for installing his own system on it. The system can be used like any other physical computer with a few limitations. However, the additive customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. For the efficiency of software development process this service model can be propellent. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model leading to further issues concerning security, privacy and the gathering of suitable evidences. Furthermore, two main deployment models, private and public cloud have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer about the application itself, the data pushed into the applications and also about the underlying technical infrastructure.C. Fault ModelBe it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:1) Maliciously Intended FaultsInternal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and state a constant threat to customers and CSP. In this model, also a malicious CSP is included albeit he isassumed to be rare in real world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threads and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk. 
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.2) Unintentional FaultsInconsistencies in technical systems or processes in the cloud do not have implicitly to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the costumer(i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.IV. TECHNICAL ISSUESDigital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators which explore these de-allocated disk space on harddisks. In case the data is in motion, data is transferred from one entity to another e.g. a typical file transfer over a network can be seen as a data in motion scenario. Several encapsulated protocols contain the data each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest or in motion but in execution. On the executing system, process information, machine instruction and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources for evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environmentsas well as suggest several solutions to these problems.A. Sources and Nature of EvidenceConcerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between thedifferent cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.1) Virtual Cloud Instance: The VM within the cloud, where i.e. data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both, the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM. 
Therefore, virtual instances can be still running during analysis which leads to the case of live investigations [41] or can be turned off leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.2) Network Layer: Traditional network forensics is knownas the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However in practice, ordinary CSP currently do not provide any log data from the network components used by the customer’s instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log datain general which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could beextracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: In ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers strongly make use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].B. Investigations in XaaS EnvironmentsTraditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They have no longer the option of seizing physical data storage. 
Data and processes of the customer are dispensed over an undisclosed amount of virtual instances, applications and network elements. Hence, it is in question whether preliminary findings of the computer forensic community in the field of digital forensics apparently have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly, will be taken into consideration. We also suggest potential solutions to the mentioned problems.1) SaaS Environments: Especially in the SaaS model, the customer does notobtain any control of the underlying operating infrastructure such as network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited userspecific application configuration settings can be controlled contributing to the evidences which can be extracted fromthe client (see section IV-A3). In a lot of cases this urges the investigator to rely on high-level logs which are eventually provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidences.a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it states a huge problem specifically for SaaS-based applications: Current global acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have tremendous impact: If sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP e.g. due to storage reasons. The customer has no ability to proof otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10]. Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information suchas access, error and event logs that could improve their situation in case of aninvestigation. 
Furthermore, due to the limited ability of receiving forensic information from the server and proofing integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR) in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer obtains theoretically the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim this transfer is encrypted but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However potential adversaries, which can compromise the application during runtime, should not be able to alter these log files afterwards. Suggested Solution:Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of an PaaS application by adversaries could be monitored by push-only mechanisms for log data presupposing that the needed information to detect such an attack are logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems havebeen compromised is crucial not only for recovering from an incident. Also forensic investigations gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances do provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is caused throughthe ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before itis transferred to third-party hosts mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although, IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP. 
He controls the hypervisor, which is e.g. responsible for enforcing hardware boundaries and routing hardware requests among different VMs. Hence, besides the security responsibilities of the hypervisor, he exerts tremendous control over how customer VMs communicate with the hardware and theoretically can intervene in processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM, thereby leading to the leakage of the secret key. Although this risk can be disregarded in most of the cases, the impact on the security of high-security environments is tremendous.
a) Snapshot Analysis: Traditional forensics expects target machines to be powered down to collect an image (dead virtual instance). This situation completely changed with the advent of snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned by one click, including the running system's memory. Due to the invention of the snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This behavior is especially important for scenarios in which downtime of a system is not feasible or practical due to existing SLAs. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances are becoming more common, providing evidence data, such as the contents of the running system's memory, that would be lost in a static image analysis.
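The paper suggests, for PaaS environments, signing and encrypting log information before it is transferred to a central logging server under the customer's control. The fragment below is a minimal sketch of the signing half of that idea; it is not code from the paper, key management and transport are omitted, and the key and log entry shown are illustrative assumptions.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class LogSigner {
    private final Mac mac;

    public LogSigner(byte[] secretKey) throws Exception {
        // The key would normally come from a key store controlled by the customer.
        mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
    }

    // Returns "entry|signature"; the signature lets the central logging server
    // (and later an investigator) detect any modification of the entry.
    public String sign(String logEntry) {
        byte[] tag = mac.doFinal(logEntry.getBytes(StandardCharsets.UTF_8));
        return logEntry + "|" + Base64.getEncoder().encodeToString(tag);
    }

    public static void main(String[] args) throws Exception {
        LogSigner signer = new LogSigner("demo-key-not-for-production".getBytes(StandardCharsets.UTF_8));
        System.out.println(signer.sign("2011-05-01T12:00:00Z user=alice action=login"));
    }
}
```

An investigator holding the same key can recompute the HMAC later and detect whether an individual entry was modified in transit or in storage.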

Computer Networks: Foreign Literature Translation with English-Chinese Comparison

Translated text: Computer Networks. A computer network, often simply called a network, is a collection of computers and devices linked by communication channels that allow users to communicate with one another and to share resources. Networks can be classified according to a variety of characteristics. A computer network allows resources and information to be shared among interconnected devices.

1. History. Early computer network communication began in the late 1950s and included military radar systems, the semi-automatic ground air-defense system with its related commercial airline reservation system, and semi-automatic business research environments. In 1957 the Soviet Union launched an artificial satellite into space; eighteen months later the United States established the Advanced Research Projects Agency (ARPA) and launched its own first satellite. Information of this kind was then shared with other computers over the ARPANET, and a key figure in this work was the American researcher Dr. Licklider. The ARPANET went online in 1969 and later evolved into what is now called the Internet. In the 1960s, ARPA began to fund and design the ARPA network (ARPANET) for the U.S. Department of Defense. The development of the Internet began in 1969, building on design work started in the 1960s, and in this way the ARPANET evolved into the modern Internet.

2. Purpose. Computer networks can be used for a variety of purposes. Facilitating communication: with a network, people can easily communicate through e-mail, instant messaging, chat rooms, telephone, video telephony and video conferencing. Sharing hardware: in a networked environment, each computer can access and use hardware resources on the network, for example printing a document on a shared network printer. Sharing files, data and information: in a network environment, authorized users may access data and information stored on other computers, and the ability to reach shared storage devices is an important feature of many networks. Sharing software: users can run application programs on remote computers. Networks also support information preservation and security assurance.

3. Network classification. The following list shows the criteria used to classify networks. 3.1 Connection method. Computer networks can be classified according to the hardware and software technology used to connect the individual devices, such as optical fiber, LAN, wireless LAN, home networking devices, cable communication and G.hn (the wired home networking standard). Ethernet, as defined by the IEEE 802 standards, uses a variety of media to enable communication between devices; commonly deployed devices include network hubs, switches, bridges and routers. Wireless LAN technology connects devices without cables. A minimal client-server sketch illustrating this kind of data sharing over a network follows below.
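To make the client/server idea above concrete, here is a minimal sketch, not taken from the translated source, of a server and a client exchanging one line of data over TCP using Java's standard sockets; the port number and messages are illustrative assumptions.

```java
import java.io.*;
import java.net.*;

public class EchoDemo {
    // Illustrative port; any free port above 1024 would do.
    static final int PORT = 5000;

    // Server side: accepts one client and echoes a single line back.
    static void runServer() throws IOException {
        try (ServerSocket server = new ServerSocket(PORT);
             Socket client = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line = in.readLine();      // receive data from the client
            out.println("echo: " + line);     // share it back over the network
        }
    }

    // Client side: connects to the server and sends one line.
    static void runClient() throws IOException {
        try (Socket socket = new Socket("localhost", PORT);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("hello from the client");
            System.out.println(in.readLine()); // prints the server's reply
        }
    }

    public static void main(String[] args) throws Exception {
        Thread server = new Thread(() -> {
            try { runServer(); } catch (IOException e) { e.printStackTrace(); }
        });
        server.start();
        Thread.sleep(300);                     // crude wait for the server to be ready
        runClient();
        server.join();
    }
}
```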

Computer Science and Technology: Foreign Literature Translation, English Source with Chinese-English Comparison

Appendix 1: Translation of the foreign source, Mass Storage. Because of the volatility and limited capacity of a computer's main memory, most computers have additional storage devices called mass storage systems, including magnetic disks, CDs and magnetic tape. Compared with main memory, the advantages of mass storage systems are lower volatility, large capacity and low cost, and in many cases the storage medium can be removed from the computer for archival purposes. The terms online and offline are customarily used to describe devices that are, respectively, connected to and not connected to the computer. Online means the device or information is already attached to the computer and can be used without human intervention; offline means human intervention is required before the device or information can be used, perhaps to switch the device on or to insert the medium containing the information into some mechanism. The major disadvantage of mass storage systems is that they typically require mechanical motion and therefore need much more time than main memory, in which all activity is carried out electronically.

1. Magnetic disks. The most widely used mass storage device today is the magnetic disk, in which thin spinning platters coated with magnetic material hold the data. Read/write heads are mounted above and/or below the platters; as a platter spins, each head traces out a circle, called a track, on the upper or lower surface of the platter. By repositioning the read/write heads, different concentric tracks can be accessed. Usually a disk storage system consists of several platters mounted on a common spindle, with enough space between them for the heads to slide between the platters. All of the heads in a disk move together, so when the heads move to a new position a new set of tracks becomes accessible; each such set of tracks is called a cylinder. Because a track can hold more information than we normally want to manipulate at one time, each track is divided into arcs called sectors, and the information recorded in each sector is a continuous string of bits. On a traditional disk every track contains the same number of sectors, and every sector holds the same number of bits (so the bits are stored more densely on the tracks near the center of the platter than near its edge). Thus a disk storage system contains many individual sectors, each of which can be accessed as an independent string of bits; the number of tracks per surface and the number of sectors per track vary from one disk system to another. Sector sizes are generally no larger than a few KB, typically 512 bytes or 1024 bytes. A small worked capacity calculation based on these quantities is sketched below.
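To make the geometry terms concrete, the small program below works out the capacity of a hypothetical disk from the quantities defined above; all of the geometry numbers are invented for the example, and only the 512-byte sector size comes from the text.

```java
public class DiskCapacity {
    public static void main(String[] args) {
        // Illustrative geometry, not taken from the translated text:
        long surfaces         = 4;      // e.g. 2 platters, both sides used
        long tracksPerSurface = 10_000; // concentric tracks per surface
        long sectorsPerTrack  = 400;    // same number on every track (traditional layout)
        long bytesPerSector   = 512;    // typical sector size mentioned in the text

        long totalSectors = surfaces * tracksPerSurface * sectorsPerTrack;
        long totalBytes   = totalSectors * bytesPerSector;

        System.out.printf("sectors : %,d%n", totalSectors);
        System.out.printf("capacity: %,d bytes (about %.1f GB)%n",
                totalBytes, totalBytes / 1e9);
    }
}
```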

Computer Science Graduation Thesis Foreign Literature Translation: Java Objects

1. Introduction to Objects
1.1 The progress of abstraction
All programming languages provide abstractions. It can be argued that the complexity of the problems you're able to solve is directly related to the kind and quality of abstraction.

By “kind” I mean, “What is it that you are abstracting?” Assembly language is a small abstraction of the underlying machine. Many so-called “imperative” languages that followed (such as FORTRAN, BASIC, and C) were abstractions of assembly language.

These languages are big improvements over assembly language, but their primary abstraction still requires you to think in terms of the structure of the computer rather than the structure of the problem you are trying to solve.

The programmer must establish the association between the machine model (in the “solution space,” which is the place where you're modeling that problem, such as a computer) and the model of the problem that is actually being solved (in the “problem space,” which is the place where the problem exists). The effort required to perform this mapping, and the fact that it is extrinsic to the programming language, produces programs that are difficult to write and expensive to maintain, and as a side effect created the entire “programming methods” industry. The alternative to modeling the machine is to model the problem you're trying to solve.
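As a rough illustration of modeling the problem space rather than the machine, the sketch below describes a library loan purely in terms of problem-domain objects; it is not from the original text, and every class and field name is invented for the example.

```java
import java.time.LocalDate;

// Problem-space types: the code talks about books and loans,
// not about registers, addresses or other machine concepts.
class Book {
    final String title;
    Book(String title) { this.title = title; }
}

class Loan {
    final Book book;
    final LocalDate due;
    Loan(Book book, LocalDate due) { this.book = book; this.due = due; }

    boolean isOverdue(LocalDate today) {
        return today.isAfter(due);
    }
}

public class ProblemSpaceDemo {
    public static void main(String[] args) {
        Loan loan = new Loan(new Book("Thinking in Java"), LocalDate.of(2009, 1, 15));
        System.out.println(loan.isOverdue(LocalDate.of(2009, 2, 1)));  // true
    }
}
```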

Computer Foreign Literature Translation (Complete)

计算机外⽂翻译(完整)毕业设计(论⽂)外⽂资料翻译专业:计算机科学与技术姓名:王成明学号:06120186外⽂出处:The History of the Internet附件: 1.外⽂原⽂ 2.外⽂资料翻译译⽂;附件1:外⽂原⽂The History of the InternetThe Beginning - ARPAnetThe Internet started as a project by the US government. The object of the project was to create a means of communications between long distance points, in the event of a nation wide emergency or, more specifically, nuclear war. The project was called ARPAnet, and it is what the Internet started as. Funded specifically for military communication, the engineers responsible for ARPANet had no idea of the possibilities of an "Internet."By definition, an 'Internet' is four or more computers connected by a network.ARPAnet achieved its network by using a protocol called TCP/IP. The basics around this protocol was that if information sent over a network failed to get through on one route, it would find another route to work with, as well as establishing a means for one computer to "talk" to another computer, regardless of whether it was a PC or a Macintosh.By the 80's ARPAnet, just years away from becoming the more well known Internet, had 200 computers. The Defense Department, satisfied with ARPAnets results, decided to fully adopt it into service, and connected many military computers and resources into the network. ARPAnet then had 562 computers on its network. By the year 1984, it had over 1000 computers on its network.In 1986 ARPAnet (supposedly) shut down, but only the organization shut down, and the existing networks still existed between the more than 1000 computers. It shut down due to a failied link up with NSF, who wanted to connect its 5 countywide super computers into ARPAnet.With the funding of NSF, new high speed lines were successfully installed at line speeds of 56k (a normal modem nowadays) through telephone lines in 1988. By that time, there were 28,174 computers on the (by then decided) Internet. In 1989 there were 80,000 computers on it. By 1989, there were290,000.Another network was built to support the incredible number of people joining. It was constructed in 1992.Today - The InternetToday, the Internet has become one of the most important technological advancements in the history of humanity. Everyone wants to get 'on line' to experience the wealth of information of the Internet. Millions of people now use the Internet, and it's predicted that by the year 2003 every single person on the planet will have Internet access. The Internet has truly become a way of life in our time and era, and is evolving so quickly its hard to determine where it will go next, as computer and network technology improve every day.HOW IT WORKS:It's a standard thing. People using the Internet. Shopping, playing games,conversing in virtual Internet environments.The Internet is not a 'thing' itself. The Internet cannot just "crash." It functions the same way as the telephone system, only there is no Internet company that runs the Internet.The Internet is a collection of millioins of computers that are all connected to each other, or have the means to connect to each other. The Internet is just like an office network, only it has millions of computers connected to it.The main thing about how the Internet works is communication. How does a computer in Houston know how to access data on a computer in Tokyo to view a webpage?Internet communication, communication among computers connected to the Internet, is based on a language. This language is called TCP/IP. 
TCP/IP establishes a language for a computer to access and transmit data over the Internet system.But TCP/IP assumes that there is a physical connecetion between onecomputer and another. This is not usually the case. There would have to be a network wire that went to every computer connected to the Internet, but that would make the Internet impossible to access.The physical connection that is requireed is established by way of modems,phonelines, and other modem cable connections (like cable modems or DSL). Modems on computers read and transmit data over established lines,which could be phonelines or data lines. The actual hard core connections are established among computers called routers.A router is a computer that serves as a traffic controller for information.To explain this better, let's look at how a standard computer might viewa webpage.1. The user's computer dials into an Internet Service Provider (ISP). The ISP might in turn be connected to another ISP, or a straight connection into the Internet backbone.2. The user launches a web browser like Netscape or Internet Explorer and types in an internet location to go to.3. Here's where the tricky part comes in. First, the computer sends data about it's data request to a router. A router is a very high speed powerful computer running special software. The collection of routers in the world make what is called a "backbone," on which all the data on the Internet is transferred. The backbone presently operates at a speed of several gigabytes per-second. Such a speed compared to a normal modem is like comparing the heat of the sun to the heat of an ice-cube.Routers handle data that is going back and forth. A router puts small chunks of data into packages called packets, which function similarly to envelopes. So, when the request for the webpage goes through, it uses TCP/IP protocols to tell the router what to do with the data, where it's going, and overall where the user wants to go.4. The router sends these packets to other routers, eventually leadingto the target computer. It's like whisper down the lane (only the information remains intact).5. When the information reaches the target web server, the webserver then begins to send the web page back. A webserver is the computer where the webpage is stored that is running a program that handles requests for the webpage and sends the webpage to whoever wants to see it.6. The webpage is put in packets, sent through routers, and arrive at the users computer where the user can view the webpage once it is assembled.The packets which contain the data also contain special information that lets routers and other computers know how to reassemble the data in the right order.With millions of web pages, and millions of users, using the Internet is not always easy for a beginning user, especially for someone who is not entirely comfortale with using computers. Below you can find tips tricks and help on how to use main services of the Internet.Before you access webpages, you must have a web browser to actually be able to view the webpages. Most Internet Access Providers provide you with a web browser in the software they usually give to customers; you. The fact that you are viewing this page means that you have a web browser. The top two use browsers are Netscape Communicator and Microsoft Internet Explorer. 
Netscape can be found at /doc/bedc387343323968011c9268.html and MSIE can be found at /doc/bedc387343323968011c9268.html /ie.The fact that you're reading this right now means that you have a web browser.Next you must be familiar with actually using webpages. A webpage is a collection of hyperlinks, images, text, forms, menus, and multimedia. To "navigate" a webpage, simply click the links it provides or follow it's own instructions (like if it has a form you need to use, it will probably instruct you how to use it). Basically, everything about a webpage is made to be self-explanetory. That is the nature of a webpage, to be easily navigatable."Oh no! a 404 error! 'Cannot find web page?'" is a common remark made by new web-users.Sometimes websites have errors. But an error on a website is not the user's fault, of course.A 404 error means that the page you tried to go to does not exist. This could be because the site is still being constructed and the page hasn't been created yet, or because the site author made a typo in the page. There's nothing much to do about a 404 error except for e-mailing the site administrator (of the page you wanted to go to) an telling him/her about the error.A Javascript error is the result of a programming error in the Javascript code of a website. Not all websites utilize Javascript, but many do. Javascript is different from Java, and most browsers now support Javascript. If you are using an old version of a web browser (Netscape 3.0 for example), you might get Javascript errors because sites utilize Javascript versions that your browser does not support. So, you can try getting a newer version of your web browser.E-mail stands for Electronic Mail, and that's what it is. E-mail enables people to send letters, and even files and pictures to each other.To use e-mail, you must have an e-mail client, which is just like a personal post office, since it retrieves and stores e-mail. Secondly, you must have an e-mail account. Most Internet Service Providers provide free e-mail account(s) for free. Some services offer free e-mail, like Hotmail, and Geocities.After configuring your e-mail client with your POP3 and SMTP server address (your e-mail provider will give you that information), you are ready to receive mail.An attachment is a file sent in a letter. If someone sends you an attachment and you don't know who it is, don't run the file, ever. It could be a virus or some other kind of nasty programs. You can't get a virus justby reading e-mail, you'll have to physically execute some form of program for a virus to strike.A signature is a feature of many e-mail programs. A signature is added to the end of every e-mail you send out. You can put a text graphic, your business information, anything you want.Imagine that a computer on the Internet is an island in the sea. The sea is filled with millions of islands. This is the Internet. Imagine an island communicates with other island by sending ships to other islands and receiving ships. The island has ports to accept and send out ships.A computer on the Internet has access nodes called ports. A port is just a symbolic object that allows the computer to operate on a network (or the Internet). This method is similar to the island/ocean symbolism above.Telnet refers to accessing ports on a server directly with a text connection. Almost every kind of Internet function, like accessing web pages,"chatting," and e-mailing is done over a Telnet connection.Telnetting requires a Telnet client. 
A telnet program comes with the Windows system, so Windows users can access telnet by typing in "telnet" (without the "'s) in the run dialog. Linux has it built into the command line; telnet. A popular telnet program for Macintosh is NCSA telnet.Any server software (web page daemon, chat daemon) can be accessed via telnet, although they are not usually meant to be accessed in such a manner. For instance, it is possible to connect directly to a mail server and check your mail by interfacing with the e-mail server software, but it's easier to use an e-mail client (of course).There are millions of WebPages that come from all over the world, yet how will you know what the address of a page you want is?Search engines save the day. A search engine is a very large website that allows you to search it's own database of websites. For instance, if you wanted to find a website on dogs, you'd search for "dog" or "dogs" or "dog information." Here are a few search-engines.1. Altavista (/doc/bedc387343323968011c9268.html ) - Web spider & Indexed2. Yahoo (/doc/bedc387343323968011c9268.html ) - Web spider & Indexed Collection3. Excite (/doc/bedc387343323968011c9268.html ) - Web spider & Indexed4. Lycos (/doc/bedc387343323968011c9268.html ) - Web spider & Indexed5. Metasearch (/doc/bedc387343323968011c9268.html ) - Multiple searchA web spider is a program used by search engines that goes from page to page, following any link it can possibly find. This means that a search engine can literally map out as much of the Internet as it's own time and speed allows for.An indexed collection uses hand-added links. For instance, on Yahoo's site. You can click on Computers & the Internet. Then you can click on Hardware. Then you can click on Modems, etc., and along the way through sections, there are sites available which relate to what section you're in.Metasearch searches many search engines at the same time, finding the top choices from about 10 search engines, making searching a lot more effective.Once you are able to use search engines, you can effectively find the pages you want.With the arrival of networking and multi user systems, security has always been on the mind of system developers and system operators. Since the dawn of AT&T and its phone network, hackers have been known by many, hackers who find ways all the time of breaking into systems. It used to not be that big of a problem, since networking was limited to big corporate companies or government computers who could afford the necessary computer security.The biggest problem now-a-days is personal information. Why should you be careful while making purchases via a website? Let's look at how the internet works, quickly.The user is transferring credit card information to a webpage. Looks safe, right? Not necessarily. As the user submits the information, it is being streamed through a series of computers that make up the Internet backbone.The information is in little chunks, in packages called packets. Here's the problem: While the information is being transferred through this big backbone, what is preventing a "hacker" from intercepting this data stream at one of the backbone points?Big-brother is not watching you if you access a web site, but users should be aware of potential threats while transmitting private information. There are methods of enforcing security, like password protection, an most importantly, encryption.Encryption means scrambling data into a code that can only be unscrambled on the "other end." 
Browser's like Netscape Communicator and Internet Explorer feature encryption support for making on-line transfers. Some encryptions work better than others. The most advanced encryption system is called DES (Data Encryption Standard), and it was adopted by the US Defense Department because it was deemed so difficult to 'crack' that they considered it a security risk if it would fall into another countries hands.A DES uses a single key of information to unlock an entire document. The problem is, there are 75 trillion possible keys to use, so it is a highly difficult system to break. One document was cracked and decoded, but it was a combined effort of14,000 computers networked over the Internet that took a while to do it, so most hackers don't have that many resources available.附件2:外⽂资料翻译译⽂Internet的历史起源——ARPAnetInternet是被美国政府作为⼀项⼯程进⾏开发的。
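Relating to the encryption discussion above, the sketch below shows the single-key scramble/unscramble idea using the DES algorithm mentioned in the text, via Java's standard javax.crypto API; DES is obsolete today, and the plaintext shown is an illustrative assumption.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DesDemo {
    public static void main(String[] args) throws Exception {
        // Generate a single DES key, as described in the text. (AES would be
        // preferred in practice, but the call pattern is the same.)
        SecretKey key = KeyGenerator.getInstance("DES").generateKey();

        Cipher cipher = Cipher.getInstance("DES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] scrambled = cipher.doFinal("credit card: 1234".getBytes(StandardCharsets.UTF_8));
        System.out.println("ciphertext: " + Base64.getEncoder().encodeToString(scrambled));

        // Only the holder of the same key can unscramble it on the "other end".
        cipher.init(Cipher.DECRYPT_MODE, key);
        System.out.println("plaintext : " + new String(cipher.doFinal(scrambled), StandardCharsets.UTF_8));
    }
}
```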

Computer Network Technology: Foreign Literature Translation with English-Chinese Comparison

Translated text: Website Construction Technology. 1. Introduction. The development of network technology provides more channels and possibilities for today's global information exchange, resource sharing and interaction. Without leaving home one can learn what is happening in the world, and with a few keystrokes or mouse clicks one can communicate with friends thousands of miles away; online communication, browsing, interaction and e-commerce have become part of modern life. The Internet era has created new ways of working and living; its interconnection, openness and model of shared information break through the barriers of traditional ways of spreading information and bring people new opportunities. With the arrival of the computer and information age, the pace of human society is gradually accelerating. Web page design has developed so quickly in recent years that it is hard to keep up, and as the technology matures, rich and colorful web pages have become an attractive part of the online landscape. To design attractive and practical pages, one should have a thorough command of website construction technology. When building our site, we analyzed its purpose, content, functions and structure, and applied a wider range of web design techniques.

2. Defining the website. 2.1 How to define a website. Determining the tasks and goals of a website is the most important problem faced in building it. Why will people come to your site? Do you offer a unique service? What brings people to your site the first time, and will they come back? These are questions that must be considered when defining the site. To define a website, one must first have a clear understanding of the whole site: what exactly is to be designed, its main purpose and tasks, and how the tasks are to be organized and planned. Second, the quality of the site must be kept high; in the fierce competition among the many existing websites, a high-quality product is the greatest long-term competitive advantage. A good website should ensure that (1) users can access the site quickly; (2) feedback and updates are handled well, with site content updated and user requests answered promptly; and (3) the home page is well designed, since the first impression it leaves on visitors matters and it must be carefully crafted to produce a good visual effect. 2.2 Content and functions of the website. Regarding content, the site should be new, fast and complete. The types of website content include static, dynamic, functional and transactional. The content of a site is determined by its nature: government sites, commercial sites, popular-science sites, company-introduction sites and teaching-exchange sites all differ in content and style. The site we built differs in nature from all of these types.

Computer Science Major: Foreign Literature Translation

Translated text: 1. What is Flash. Flash is an authoring tool that designers and developers use to create presentations, applications and other content that allows user interaction. Flash projects can include simple animations, video content, complex presentations and applications, and anything in between. In general, the individual pieces of content made with Flash are called applications, even if they are only simple animations. You can build media-rich Flash applications by adding pictures, sound, video and special effects. Flash is especially well suited to creating content delivered over the Internet because its files are very small. Flash achieves this through its extensive use of vector graphics. Compared with bitmap graphics, vector graphics require far less memory and storage space because they are represented by mathematical formulas rather than by large data sets; bitmap graphics are larger because every pixel in the image needs its own piece of data to represent it. (A small calculation contrasting the two representations is sketched after this section.) To build an application in Flash, you create graphics with the Flash drawing tools and import other media elements into the Flash document. You then define how and when each element is used to create the application you have in mind. When you author content in Flash, you work in a Flash document file; Flash documents have the file extension .fla (FLA). A Flash document has four main parts. The Stage is where graphics, video, buttons and other content appear during playback. The Timeline tells Flash when graphics and other elements of the project should appear, and it is also used to specify the layering order of graphics on the Stage: graphics in higher layers appear above graphics in lower layers. The Library panel is where Flash lists the media elements in the document. ActionScript code can be used to add interactivity to the media elements in the document; for example, you can add code so that a new image is displayed when the user clicks a button, and you can also use ActionScript to add logic to the application. Logic lets the application behave in different ways depending on the user's actions and other conditions. Flash includes two versions of ActionScript to meet the different needs of authors; for details on writing ActionScript, see "Learning ActionScript 2.0 in Flash" in the Help panel.
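The following small calculation, which is not part of the translated text, illustrates the vector-versus-bitmap storage argument made above; the image size, color depth and the fields assumed for the vector description are illustrative assumptions.

```java
public class VectorVsBitmap {
    public static void main(String[] args) {
        // Illustrative image: an 800 x 600 filled circle, 32-bit (4-byte) color.
        int width = 800, height = 600, bytesPerPixel = 4;

        // Bitmap: every pixel needs its own data.
        long bitmapBytes = (long) width * height * bytesPerPixel;

        // Vector: roughly a shape type, a center point, a radius and a fill color.
        // A handful of numbers is enough to redraw the circle at any size.
        long vectorBytes = 4 /*type*/ + 2 * 4 /*center x,y*/ + 4 /*radius*/ + 4 /*color*/;

        System.out.println("bitmap : " + bitmapBytes + " bytes"); // 1,920,000 bytes
        System.out.println("vector : " + vectorBytes + " bytes"); // 20 bytes
    }
}
```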

Computer Science Foreign Literature Translation 9

Translated text: Apache Struts 2. "Apache Struts 2 is an elegant, extensible framework for creating enterprise-ready Java web applications. The framework is designed to streamline the full development cycle, from building, to deploying, to maintaining applications over time" - The Apache Software Foundation. 4.1 Introduction. Struts is an open-source framework from Apache for Java web programming. The creator and initiator of the Struts framework was McClanahan; later, in 2002, the framework was taken over by the Apache Software Foundation. Struts gives programmers a framework that makes it easy to organize HTML, Java code and pages based on JSP and Servlets, and Struts 1 works with virtually all standard Java technologies and Jakarta packages. However, as requirements kept growing, Struts 1 exposed many problems in web applications, which led to the release of Struts 2; Struts 2 serves developers better in areas such as Ajax, efficient development and extensibility. 4.1.1 The origins of Struts 2. Since its launch in 2000, Apache Struts has been very successful, has been widely adopted as a standard and has developed greatly; without it, Java web applications would not be where they are today. Its history shows how Struts organizes JSPs and Servlets by providing a fixed framework. Struts blends server-generated HTML with JavaScript and client-side validation, which also makes development and maintenance easier. As time passed and customers' demands on the web expanded, web applications flourished, Struts 1 grew old, and it began to fade from the view of more and more front-end web developers. A minimal Struts 2 action sketch follows below.
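As a minimal illustration of what a Struts 2 action looks like, here is a sketch assuming the standard ActionSupport base class; the class, field and result names are invented for the example, and the corresponding struts.xml mapping is only indicated in the comments.

```java
import com.opensymphony.xwork2.ActionSupport;

// A minimal Struts 2 action: the framework instantiates the class,
// calls the setter for each matching request parameter, then invokes execute().
public class HelloAction extends ActionSupport {

    private String name;      // populated from the "name" request parameter
    private String greeting;  // exposed to the result page (e.g. a JSP) via its getter

    @Override
    public String execute() {
        greeting = "Hello, " + (name == null ? "world" : name) + "!";
        return SUCCESS;       // mapped to a view in struts.xml
    }

    public void setName(String name) { this.name = name; }
    public String getName() { return name; }
    public String getGreeting() { return greeting; }
}
```

In struts.xml this class would typically be mapped to an action name, with the SUCCESS result pointing at a JSP page that reads the greeting property.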

[Computer Science Literature Translation] Managing Information Systems

Undergraduate Graduation Design (Thesis) Foreign Literature Translation (Class of 2009). Thesis title: A JavaMail-based Mail Sending and Receiving System. Student name: ___ Student number: ___ Major: Computer Science and Technology. Class: ___ Advisor: ___ Advisor title: Lecturer / Associate Professor. Date completed: December 10, 2008. Prepared for the Academic Affairs Office, School of Information Science and Engineering. Foreign literature translation (the translation must contain no fewer than 2000 Chinese characters).

1. Translated source: Managing Information Systems

Data sharing is one of the most important applications of a network. A network can share transaction data, search and query data, documents, bulletin boards, calendars, team and personal information, backups and so on. At the time of a transaction, a central database connected to the company's computers holds the current inventory and sales data; if data is stored in a central database, query results can be obtained from it. Sending e-mail has become one of the most common ways for colleagues to share information.

Essentially every computer can be connected to a network, and a computer acts either as a client or as a server. Servers are more powerful and more specialized, because they store the data that other machines on the network need to use; personal-computer clients can access the server whenever they need data. A network in which computers act as both servers and clients is called a peer-to-peer network.

Transmission media must be chosen carefully, balancing the advantages and disadvantages of each medium; this choice determines the speed of the network, and changing the medium of an already installed network is usually very expensive. The most practical transmission media are cable, optical fiber, radio, light and infrared. Since radio, light and infrared signals travel through the air, these media require no cables. Transmission capacity, the amount of data a medium can carry at one time, differs greatly between media because of differences in material and in the labor required for installation. Media are sometimes combined: instead of the high-speed media used between distant locations, a slower but cheaper medium may be used to distribute information within a single building.

Connecting devices include network interface cards (NICs), or LAN cards, which transmit data and signals between computers and the network. Other common devices connect different networks to one another, especially when the networks use different transmission media. It is important to use systems that are open to many users, such as Windows NT, Office 2000, Novell and UNIX. A minimal JavaMail sending sketch follows below.
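Since the thesis topic above is a JavaMail-based mail system, here is a minimal sending sketch assuming the standard JavaMail (javax.mail) API; the SMTP host, port and addresses are placeholder assumptions, and authentication is omitted.

```java
import java.util.Properties;
import javax.mail.*;
import javax.mail.internet.*;

public class SendMailDemo {
    public static void main(String[] args) throws MessagingException {
        // Illustrative SMTP settings; host, addresses and port are placeholders.
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com");
        props.put("mail.smtp.port", "25");

        Session session = Session.getInstance(props);

        Message msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("student@example.com"));
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress("advisor@example.com"));
        msg.setSubject("JavaMail test");
        msg.setText("Hello from the JavaMail-based mail system.");

        Transport.send(msg);   // hands the message to the SMTP server
    }
}
```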

Computer Security Vulnerabilities: Foreign Literature Translation with English-Chinese Comparison

计算机安全漏洞中英文对照外文翻译文献(文档含英文原文和中文翻译)Talking about security loopholesreference to the core network security business objective is to protect the sustainability of the system and data security, This two of the main threats come from the worm outbreaks, hacking attacks, denial of service attacks, Trojan horse. Worms, hacker attacks problems and loopholes closely linked to, if there is major security loopholes have emerged, the entire Internet will be faced with a major challenge. While traditional Trojan and little security loopholes, but recently many Trojan are clever use of the IE loophole let you browse the website at unknowingly were on the move.Security loopholes in the definition of a lot, I have here is a popular saying: can be used to stem the "thought" can not do, and are safety-related deficiencies. Thisshortcoming can be a matter of design, code realization of the problem.Different perspective of security loo phole sIn the classification of a specific procedure is safe from the many loopholes in classification.1. Classification from the user groups:● Public loopholes in the software category. If the loopholes in Windows, IEloophole, and so on.● specialized software loophole. If Oracl e loopholes, Apache, etc. loopholes.2. Data from the perspective include :● could not reasonably be read and read data, including the memory of thedata, documents the data, Users input data, the data in the database, network,data transmission and so on.● designated can be written into the designated places (including the localpaper, memory, databases, etc.)● Input data can be implemented (including native implementation,according to Shell code execution, by SQL code execution, etc.)3. From the point of view of the scope of the role are :● Remote loopholes, an attacker could use the network and directly throughthe loopholes in the attack. Such loopholes great harm, an attacker can createa loophole through other people's computers operate. Such loopholes and caneasily lead to worm attacks on Windows.● Local loopholes, the attacker must have the machine premise accesspermissions can be launched to attack the loopholes. Typical of the localauthority to upgrade loopholes, loopholes in the Unix system are widespread,allow ordinary users to access the highest administrator privileges.4. Trigger conditions from the point of view can be divided into:● Initiative trigger loopholes, an attacker can take the initiative to use theloopholes in the attack, If direct access to computers.● Passive trigger loopholes must be computer operators can be carried outattacks with the use of the loophole. For example, the attacker made to a mailadministrator, with a special jpg image files, if the administrator to open image files will lead to a picture of the software loophole was triggered, thereby system attacks, but if managers do not look at the pictures will not be affected by attacks.5. On an operational perspective can be divided into:● File operation type, mainly for the operation of the target file path can be controlled (e.g., parameters, configuration files, environment variables, the symbolic link HEC), this may lead to the following two questions: ◇Content can be written into control, the contents of the documents can be forged. Upgrading or authority to directly alter the important data (such as revising the deposit and lending data), this has many loopholes. 
If history Oracle TNS LOG document can be designated loopholes, could lead to any person may control the operation of the Oracle computer services;◇information content can be output Print content has been contained to a screen to record readable log files can be generated by the core users reading papers, Such loopholes in the history of the Unix system crontab subsystem seen many times, ordinary users can read the shadow of protected documents;● Memory coverage, mainly for memory modules can be specified, write content may designate such persons will be able to attack to enforce the code (buffer overflow, format string loopholes, PTrace loopholes, Windows 2000 history of the hardware debugging registers users can write loopholes), or directly alter the memory of secrets data.● logic errors, such wide gaps exist, but very few changes, so it is difficult to discern, can be broken down as follows : ◇loopholes competitive conditions (usually for the design, typical of Ptrace loopholes, The existence of widespread document timing of competition) ◇wrong tactic, usually in design. If the history of the FreeBSD Smart IO loopholes. ◇Algorithm (usually code or design to achieve), If the history of Microsoft Windows 95/98 sharing passwordcan easily access loopholes. ◇Imperfections of the design, such as TCP / IP protocol of the three-step handshake SYN FLOOD led to a denial of service attack. ◇realize the mistakes (usually no problem for the design, but the presence of coding logic wrong, If history betting system pseudo-random algorithm)● External orders, Typical of external commands can be controlled (via thePATH variable, SHELL importation of special characters, etc.) and SQL injection issues.6. From time series can be divided into:● has long found loopholes: manufacturers already issued a patch or repairmethods many people know already. Such loopholes are usually a lot of people have had to repair macro perspective harm rather small.● recently discovered loophole: manufacturers just made patch or repairmethods, the people still do not know more. Compared to greater danger loopholes, if the worm appeared fool or the use of procedures, so will result in a large number of systems have been attacked.● 0day: not open the loophole in the private transactions. Usually such loopholesto the public will not have any impact, but it will allow an attacker to the target by aiming precision attacks, harm is very great.Different perspective on the use of the loopholesIf a defect should not be used to stem the "original" can not do what the (safety-related), one would not be called security vulnerability, security loopholes and gaps inevitably closely linked to use.Perspective use of the loopholes is:● Data Perspective: visit had not visited the data, including reading and writing.This is usually an attacker's core purpose, but can cause very serious disaster (such as banking data can be written).● Competence Perspective: Major Powers to bypass or permissions. Permissionsare usually in order to obtain the desired data manipulation capabilities.● Usability p erspective: access to certain services on the system of controlauthority, this may lead to some important services to stop attacks and lead to a denial of service attack.● Authentication bypass: usually use certification system and the loopholes willnot authorize to access. 
Authentication is usually bypassed for permissions or direct data access services.● Code execution perspective: mainly procedures for the importation of thecontents as to implement the code, obtain remote system access permissions or local system of higher authority. This angle is SQL injection, memory type games pointer loopholes (buffer overflow, format string, Plastic overflow etc.), the main driving. This angle is usually bypassing the authentication system, permissions, and data preparation for the reading.Loopholes explore methods mustFirst remove security vulnerabilities in software BUG in a subset, all software testing tools have security loopholes to explore practical. Now that the "hackers" used to explore the various loopholes that there are means available to the model are:● fuzz testing (black box testing), by constructing procedures may lead toproblems of structural input data for automatic testing.● FOSS audit (White Box), now have a series of tools that can assist in thedetection of the safety procedures BUG. The most simple is your hands the latest version of the C language compiler.● IDA anti-compilation of the audit (gray box testing), and above the sourceaudit are very similar. The only difference is that many times you can obtain software, but you can not get to the source code audit, But IDA is a very powerful anti-Series platform, let you based on the code (the source code is in fact equivalent) conducted a safety audit.● dynamic tracking, is the record of proceedings under different conditions andthe implementation of all security issues related to the operation (such as file operations), then sequence analysis of these operations if there are problems, it is competitive category loopholes found one of the major ways. Other tracking tainted spread also belongs to this category.● patch, the software manufacturers out of the question usually addressed in thepatch. By comparing the patch before and after the source document (or the anti-coding) to be aware of the specific details of loopholes.More tools with which both relate to a crucial point: Artificial need to find a comprehensive analysis of the flow path coverage. Analysis methods varied analysis and design documents, source code analysis, analysis of the anti-code compilation, dynamic debugging procedures.Grading loopholesloopholes in the inspection harm should close the loopholes and the use of the hazards related Often people are not aware of all the Buffer Overflow Vulnerability loopholes are high-risk. 
A long-distance loophole example and better delineation:●Remote access can be an OS, application procedures, version information.●open unnecessary or dangerous in the service, remote access to sensitiveinformation systems.● Remote can be restricted for the documents, data reading.●remotely important or restricted documents, data reading.● may be limited for long-range document, data revisions.● Remote can be restricted for important documents, data changes.● Remote c an be conducted without limitation in the important documents, datachanges, or for general service denial of service attacks.● Remotely as a normal user or executing orders for system and network-leveldenial of service attacks.● may be remote managem ent of user identities to the enforcement of the order(limited, it is not easy to use).● can be remote management of user identities to the enforcement of the order(not restricted, accessible).Almost all local loopholes lead to code execution, classified above the 10 points system for:●initiative remote trigger code execution (such as IE loophole).● passive trigger remote code execution (such as Word gaps / charting softwareloopholes).DEMOa firewall segregation (peacekeeping operation only allows the Department of visits) networks were operating a Unix server; operating systems only root users and users may oracle landing operating system running Apache (nobody authority), Oracle (oracle user rights) services.An attacker's purpose is to amend the Oracle database table billing data. Its possible attacks steps:● 1. Access peacekeeping operation of the network. Access to a peacekeepingoperation of the IP address in order to visit through the firewall to protect the UNIX server.● 2. Apache s ervices using a Remote Buffer Overflow Vulnerability direct accessto a nobody's competence hell visit.● 3. Using a certain operating system suid procedure of the loophole to upgradetheir competence to root privileges.● 4. Oracle sysdba landing into t he database (local landing without a password).● 5. Revised target table data.Over five down for process analysis:●Step 1: Authentication bypass●Step 2: Remote loopholes code execution (native), Authentication bypassing● Step 3: permissions, auth entication bypass● Step 4: Authentication bypass● Step 5: write data安全漏洞杂谈网络安全的核心目标是保障业务系统的可持续性和数据的安全性,而这两点的主要威胁来自于蠕虫的暴发、黑客的攻击、拒绝服务攻击、木马。
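The taxonomy above lists SQL injection among the loopholes triggered by externally controlled input. The JDBC fragment below, which is not part of the translated article and uses invented table and column names, contrasts an injectable query with a parameterized one.

```java
import java.sql.*;

public class LoginDao {
    // Vulnerable: attacker-controlled input is concatenated into the SQL text,
    // so an input such as  ' OR '1'='1  changes the meaning of the query.
    static boolean unsafeLogin(Connection conn, String user, String pass) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE name = '" + user + "' AND pwd = '" + pass + "'";
        try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            return rs.next();
        }
    }

    // Safer: placeholders keep user input as data, never as SQL code.
    static boolean safeLogin(Connection conn, String user, String pass) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE name = ? AND pwd = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, user);
            ps.setString(2, pass);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```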

Computer Science and Technology, the Java Garbage Collector: Foreign Literature Translation with English-Chinese Comparison

中英文资料中英文资料外文翻译文献原文:How a garbage collector works of Java LanguageIf you come from a programming language where allocating objects on the heap is expensive, you may naturally assume that Java’s scheme of allocating everything (except primitives) on the heap is also expensive. However, it turns out that the garbage collector can have a significant impact on increasing the speed of object creation. This might sound a bit odd at first—that storage release affects storage allocation—but it’s the way some JVMs work, and it means that allocating storage for heap objects in Java can be nearly as fast as creating storage on the stack in other languages.For example, you can think of the C++ heap as a yard where each stakes out its own piece of turf object. This real estate can become abandoned sometime later and must be reused. In some JVMs, the Java heap is quite different; it’s more like a conveyor belt that moves forwardevery time you allocate a new object. This means that object storage allocation is remarkab ly rapid. The “heap pointer” is simply moved forward into virgin territory, so it’s effectively the same as C++’s stack allocation. (Of course, there’s a little extra overhead for bookkeeping, but it’s nothing like searching for storage.)You might observ e that the heap isn’t in fact a conveyor belt, and if you treat it that way, you’ll start paging memory—moving it on and off disk, so that you can appear to have more memory than you actually do. Paging significantly impacts performance. Eventually, after you create enough objects, you’ll run out of memory. The trick is that the garbage collector steps in, and while it collects the garbage it compacts all the objects in the heap so that you’ve effectively moved the “heap pointer” closer to the beginning of the conveyor belt and farther away from a page fault. The garbage collector rearranges things and makes it possible for the high-speed, infinite-free-heap model to be used while allocating storage.To understand garbage collection in Java, it’s helpful le arn how garbage-collection schemes work in other systems. A simple but slow garbage-collection technique is called reference counting. This means that each object contains a reference counter, and every time a reference is attached to that object, the reference count is increased. Every time a reference goes out of scope or is set to null, the reference count isdecreased. Thus, managing reference counts is a small but constant overhead that happens throughout the lifetime of your program. The garbage collector moves through the entire list of objects, and when it finds one with a reference count of zero it releases that storage (however, reference counting schemes often release an object as soon as the count goes to zero). The one drawback is that if objects circularly refer to each other they can have nonzero reference counts while still being garbage. Locating such self-referential groups requires significant extra work for the garbage collector. Reference counting is commonly used to explain one kind of g arbage collection, but it doesn’t seem to be used in any JVM implementations.In faster schemes, garbage collection is not based on reference counting. Instead, it is based on the idea that any non-dead object must ultimately be traceable back to a reference that lives either on the stack or in static storage. The chain might go through several layers of objects. 
In faster schemes, garbage collection is not based on reference counting. Instead, it is based on the idea that any non-dead object must ultimately be traceable back to a reference that lives either on the stack or in static storage. The chain might go through several layers of objects. Thus, if you start in the stack and in the static storage area and walk through all the references, you'll find all the live objects. For each reference that you find, you must trace into the object that it points to and then follow all the references in that object, tracing into the objects they point to, etc., until you've moved through the entire web that originated with the reference on the stack or in static storage. Each object that you move through must still be alive. Note that there is no problem with detached self-referential groups—these are simply not found, and are therefore automatically garbage.

In the approach described here, the JVM uses an adaptive garbage-collection scheme, and what it does with the live objects that it locates depends on the variant currently being used. One of these variants is stop-and-copy. This means that—for reasons that will become apparent—the program is first stopped (this is not a background collection scheme). Then, each live object is copied from one heap to another, leaving behind all the garbage. In addition, as the objects are copied into the new heap, they are packed end-to-end, thus compacting the new heap (and allowing new storage to simply be reeled off the end as previously described).

Of course, when an object is moved from one place to another, all references that point at the object must be changed. The reference that goes from the heap or the static storage area to the object can be changed right away, but there can be other references pointing to this object that will be encountered later during the "walk." These are fixed up as they are found (you could imagine a table that maps old addresses to new ones).

There are two issues that make these so-called "copy collectors" inefficient. The first is the idea that you have two heaps and you slosh all the memory back and forth between these two separate heaps, maintaining twice as much memory as you actually need. Some JVMs deal with this by allocating the heap in chunks as needed and simply copying from one chunk to another.

The second issue is the copying process itself. Once your program becomes stable, it might be generating little or no garbage. Despite that, a copy collector will still copy all the memory from one place to another, which is wasteful. To prevent this, some JVMs detect that no new garbage is being generated and switch to a different scheme (this is the "adaptive" part). This other scheme is called mark-and-sweep, and it's what earlier versions of Sun's JVM used all the time. For general use, mark-and-sweep is fairly slow, but when you know you're generating little or no garbage, it's fast. Mark-and-sweep follows the same logic of starting from the stack and static storage, and tracing through all the references to find live objects. However, each time it finds a live object, that object is marked by setting a flag in it, but the object isn't collected yet. Only when the marking process is finished does the sweep occur. During the sweep, the dead objects are released. However, no copying happens, so if the collector chooses to compact a fragmented heap, it does so by shuffling objects around. "Stop-and-copy" refers to the idea that this type of garbage collection is not done in the background; instead, the program is stopped while the garbage collection occurs.
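To make the mark and sweep phases concrete, here is a deliberately tiny simulation written for this translation; it is not how any real JVM is implemented, and all the names are invented. Objects reachable from a root are marked, and everything left unmarked is swept away, including the self-referential pair.

import java.util.ArrayList;
import java.util.List;

public class MarkAndSweepToy {

    static class Obj {
        boolean marked;
        List<Obj> refs = new ArrayList<>();
    }

    // Mark phase: walk every reference reachable from the root and set its flag.
    static void mark(Obj o) {
        if (o == null || o.marked) return;
        o.marked = true;
        for (Obj child : o.refs) mark(child);
    }

    // Sweep phase: anything left unmarked is garbage; survivors are unmarked again
    // so that the next collection can start fresh.
    static int sweep(List<Obj> heap) {
        int freed = 0;
        for (int i = heap.size() - 1; i >= 0; i--) {
            if (!heap.get(i).marked) { heap.remove(i); freed++; }
            else heap.get(i).marked = false;
        }
        return freed;
    }

    public static void main(String[] args) {
        List<Obj> heap = new ArrayList<>();
        Obj root = new Obj(); heap.add(root);
        Obj live = new Obj(); heap.add(live);
        root.refs.add(live);             // reachable from the root, so it survives
        Obj a = new Obj(), b = new Obj();
        a.refs.add(b); b.refs.add(a);    // self-referential group with no path from the root
        heap.add(a); heap.add(b);

        mark(root);
        System.out.println("Swept " + sweep(heap) + " objects; " + heap.size() + " remain");
        // Expected output: Swept 2 objects; 2 remain
    }
}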
In the Sun literature you’llfind many references to garbage collection as a low-priority background process, but it turns out that the garbage collection was not implemented that way in earlier versions of the Sun JVM. Instead, the Sun garbage collector stopped the program when memory got low. Mark-and-sweep also requires that the program be stopped.As previously mentioned, in the JVM described here memory is allocated in big blocks. If you allocate a large object, it gets its own block. Strict stop-and-copy requires copying every live object from the source heap to a new heap before you can free the old one, which translates to lots of memory. With blocks, the garbage collection can typically copy objects to dead blocks as it collects. Each block has a generation count to keep track of whether it’s alive. In the normal case, only the blocks created since the last garbage collection are compacted; all other blocks get their generation count bumped if they have been referenced from somewhere. This handles the normal case of lots of short-lived temporary objects. Periodically, a full sweep is made—large objects are still not copied (they just get their generation count bumped), and blocks containing small objects are copied and compacted.The JVM monitors the efficiency of garbage collection and if it becomes a waste of time because all objects are long-lived, then it switches to mark-and sweep. Similarly, the JVM keeps track of how successful mark-and-sweep is, and if the heap starts to becomefragmented, it switches back to stop-and-copy. This is where the “adaptive” part comes in, so you end up with a mouthful: “Adaptive generational stop-and-copy mark-and sweep.”There are a number of additional speedups possible in a JVM. An especially important one involves the operation of the loader and what is called a just-in-time (JIT) compiler. A JIT compiler partially or fully converts a program into native machine code so that it doesn’t need to be interpreted by the JVM and thus runs much faster. When a class must be loaded (typically, the first time you want to create an object of that class), the .class file is located, and the byte codes for that class are brought into memory. At this point, one approach is to simply JIT compile all the code, but this has two drawbacks: It takes a little more time, which, compounded throughout the life of the program, can add up; and it increases the size of the executable (byte codes are significantly more compact than expanded JIT code), and this might cause paging, which definitely slows down a program. An alternative approach is lazy evaluation, which means that the code is not JIT compiled until necessary. Thus, code that never gets executed might never be JIT compiled. The Java Hotspot technologies in recent JDKs take a similar approach by increasingly optimizing a piece of code each time it is executed, so the more the code is executed, the faster it gets.译文:Java垃圾收集器的工作方式如果你学下过一种因为在堆里分配对象所以开销过大的编程语言,很自然你可能会假定Java 在堆里为每一样东西(除了primitives)分配内存资源的机制开销也会很大。
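As a small aside to the JIT discussion in the English text above: the lazy-compilation effect can usually be observed by timing the same method over repeated batches, since later batches tend to run faster once HotSpot has compiled the hot method. This is only an illustrative sketch written for this translation; the exact numbers depend entirely on the JVM, its version and its flags.

public class JitWarmup {

    // A small hot method; after enough invocations HotSpot typically compiles it.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += (long) i * i % 7;
        return sum;
    }

    public static void main(String[] args) {
        for (int batch = 1; batch <= 5; batch++) {
            long start = System.nanoTime();
            long sink = 0;
            for (int i = 0; i < 20_000; i++) sink += work(1_000);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Early batches include interpretation and compilation time;
            // later ones usually run only the compiled code.
            System.out.println("batch " + batch + ": " + elapsedMs + " ms (sink=" + sink + ")");
        }
    }
}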

计算机专业外文文献翻译


毕业设计(论文)外文文献翻译(本科学生用)

题目:PLC based control system for the music fountain

学生姓名:_____  学号:060108011117
学部(系):信息学部  专业年级:06自动化(1)班
指导教师:_____  职称或学位:助教
20  年  月  日

外文文献翻译(译成中文1000字左右):
【主要阅读文献不少于5篇,译文后附注文献信息,包括:作者、书名(或论文题目)、出版社(或刊物名称)、出版时间(或刊号)、页码。

提供所译外文资料附件(印刷类含封面、封底、目录、翻译部分的复印件等,网站类的请附网址及原文)】

英文节选原文:

Central Processing Unit (CPU) is the brain of a PLC controller. The CPU itself is usually one of the microcontrollers. Earlier these were 8-bit microcontrollers such as the 8051, and now they are 16- and 32-bit microcontrollers. An unspoken rule is that you'll mostly find Hitachi and Fujitsu microcontrollers in PLC controllers by Japanese makers, Siemens microcontrollers in European controllers, and Motorola microcontrollers in American ones. The CPU also takes care of communication, the interconnection among the other parts of the PLC controller, program execution, memory operation, and overseeing inputs and setting up outputs. PLC controllers have complex routines for memory checkup in order to ensure that the PLC memory is not damaged (memory checkup is done for safety reasons). Generally speaking, the CPU unit makes a great number of check-ups of the PLC controller itself so that eventual errors are discovered early. You can simply look at any PLC controller and see that there are several indicators in the form of light diodes for error signalization.

System memory (today mostly implemented in FLASH technology) is used by a PLC for the process control system. Aside from the operating system it also contains the user program translated from a ladder diagram into binary form. FLASH memory contents can be changed only when the user program is being changed. Earlier PLC controllers had EPROM memory instead of FLASH memory, which had to be erased with a UV lamp and programmed on programmers; with the use of FLASH technology this process was greatly shortened. Reprogramming the program memory is done through a serial cable in a program for application development.

User memory is divided into blocks having special functions. Some parts of the memory are used for storing input and output status. The real status of an input is stored either as "1" or as "0" in a specific memory bit; each input or output has one corresponding bit in memory. Other parts of the memory are used to store the contents of variables used in the user program. For example, a timer value or a counter value would be stored in this part of the memory.
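The input/output image just described (one memory bit per physical input or output) is normally serviced by a cyclic scan: read all inputs into the image, solve the user logic against the image, then copy the output image to the terminals. The Java sketch below is only a rough model written for this translation—real PLC firmware is vendor-specific and is not written like this—and it shows one ladder-style rung (a start/stop seal-in circuit) evaluated against the two images; all names are invented for the illustration.

import java.util.BitSet;

public class PlcScanCycle {

    // One bit per physical input/output, mirroring the "I/O image" in user memory.
    private final BitSet inputImage = new BitSet(16);
    private final BitSet outputImage = new BitSet(16);

    // Placeholder for reading the physical input terminals (hypothetical values here).
    private void readInputs() {
        inputImage.set(0, true);   // e.g. start button pressed
        inputImage.set(1, false);  // e.g. stop button released
    }

    // Equivalent of one ladder rung: run = (start OR run) AND NOT stop.
    private void solveLogic() {
        boolean start = inputImage.get(0);
        boolean stop = inputImage.get(1);
        boolean run = outputImage.get(0);
        outputImage.set(0, (start || run) && !stop);
    }

    // Placeholder for driving the physical output terminals.
    private void writeOutputs() {
        System.out.println("Pump output = " + outputImage.get(0));
    }

    public void scanOnce() {
        readInputs();   // 1. input image updated
        solveLogic();   // 2. user program executed against the image
        writeOutputs(); // 3. output image copied to the terminals
    }

    public static void main(String[] args) {
        new PlcScanCycle().scanOnce();
    }
}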
PLC controllers can be reprogrammed through a computer (the usual way), but also through manual programmers (consoles). This practically means that each PLC controller can be programmed through a computer if you have the software needed for programming. Today's portable computers are ideal for reprogramming a PLC controller in the factory itself, which is of great importance to industry. Once the system is corrected, it is also important to read the right program into the PLC again. It is also good to check from time to time whether the program in a PLC has not changed; this helps to avoid hazardous situations in factory rooms (some automakers have established communication networks which regularly check programs in PLC controllers to ensure execution only of good programs). Almost every program for programming a PLC controller possesses various useful options such as: forced switching on and off of the system inputs/outputs (I/O lines), program follow-up in real time, as well as documenting a diagram. This documenting is necessary to understand and define failures and malfunctions. The programmer can add remarks, names of input or output devices, and comments that can be useful when finding errors, or with system maintenance.

Adding comments and remarks enables any technician (and not just the person who developed the system) to understand a ladder diagram right away. Comments and remarks can even quote precise part numbers if replacements are needed, which speeds up the repair of any problems that come up due to bad parts. The old way was that the person who developed the system kept the program protected, so nobody aside from that person could understand how it was done. A correctly documented ladder diagram allows any technician to understand thoroughly how the system functions.

The electrical supply is used to bring electrical energy to the central processing unit. Most PLC controllers work either at 24 VDC or 220 VAC. On some PLC controllers you'll find the electrical supply as a separate module; those are usually bigger PLC controllers, while small and medium series already contain the supply module. The user has to determine how much current to draw from the I/O modules to ensure that the electrical supply provides the appropriate amount of current, since different types of modules use different amounts of electrical current. This electrical supply is usually not used to drive external inputs or outputs. The user has to provide separate supplies for the PLC controller inputs, because then you can ensure a so-called "pure" supply for the PLC controller—a supply that the industrial environment cannot affect damagingly. Some of the smaller PLC controllers supply their inputs with voltage from a small supply source already incorporated into the PLC.

中文翻译:从结构上分,PLC分为固定式和组合式(模块式)两种。


英文参考文献及翻译

Linux — Operating System of the Cyber Times

Although, for many people, the fact that Linux served as the main operating system of the huge workstation cluster that produced the special effects for "Titanic" already counts as a full display of its talent, for Linux this is only one piece of news among many. Recently, more and more manufacturers have announced their support for Linux, and users' enthusiasm for Linux is running higher than ever. So what is the charm of this operating system, free for more than seven years, that has won the favor of so many users and of such important software and hardware manufacturers as Oracle, Informix, HP, Sybase, Corel, Intel, Netscape and Dell?

1. The background and characteristics of Linux

Linux is a kind of "free (Free) software": users can obtain the program and its source code freely, and can use them freely, including modifying or copying them. It is a product of the Internet age: numerous technical people completed its research and development together over the Internet, and countless users take part by testing it, reporting faults, and conveniently adding extension functions of their own. As the most outstanding piece of free software, Linux has the following characteristics:

(1) It fully follows the POSIX standard and is a network operating system supporting all AT&T and BSD Unix features. Because it inherits Unix's outstanding design philosophy and has a clean, robust, efficient and stable kernel, with all key code written by Linus Torvalds and other outstanding programmers and without any Unix code from AT&T or Berkeley, Linux is not Unix, but Linux and Unix are fully compatible.

(2) It is a true multitasking, multi-user system with built-in network support, and can link seamlessly with NetWare, Windows NT, OS/2, Unix and others. Its networking has tested fastest in comparisons and evaluations among the various kinds of Unix. It supports many file systems at the same time, such as FAT16, FAT32, NTFS, Ext2FS and ISO9660.

(3) It can run on many kinds of hardware platforms, including processors such as Alpha, Sun SPARC, PowerPC and MIPS, and support for various kinds of new peripheral hardware arrives rapidly from the numerous programmers distributed around the globe.

(4) Its hardware requirements are low, and it can obtain very good performance on fairly low-grade machines. What deserves particular mention is Linux's outstanding stability: its running time is often counted in "years".

2. Main applications of Linux at present

Now, the application of Linux mainly includes:

(1) Internet/Intranet: this is where Linux is used most at present. It can offer all Internet services, including Web server, FTP server, Gopher server, SMTP/POP3 mail server, Proxy/Cache server, DNS server and so on. The Linux kernel supports IP alias, PPP and IP tunneling; these functions can be used for setting up virtual hosts, virtual services and VPNs (virtual private networks).
Apache, the Web server mainly operated on Linux, had a market share of 49% in 1998, far exceeding the combined share of such big companies as Microsoft and Netscape.

(2) Because Linux has outstanding networking ability, it can be used in large-scale distributed computing, for instance animation rendering, scientific computation, and database and file servers.

(3) As a full implementation of Unix that can run on low-end platforms, it is applied extensively in teaching and research work at universities and colleges of all levels; the Mexican government, for example, has already announced that primary and middle schools throughout the country will deploy Linux and offer Internet service to students.

(4) Desktop and office applications. The number of users in this respect is still far below that of Microsoft Windows. The reason lies not only in the fact that there are far fewer desktop applications for Linux than for Windows, but also in the characteristic of free software that leaves it with almost no advertising support (even though the functionality of StarOffice is not second to MS Office, there are actually few people who know of it).

3. Can Linux become a major operating system?

In the face of pressure from users that strengthens day by day, more and more commercial companies are porting their applications to the Linux platform. The comparatively important events in 1998 were as follows: ① Compaq and HP decided, at the request of users, to ship Linux pre-installed on their servers, and IBM and Dell promised to offer customized Linux systems to users too. ② Lotus announced that the next edition of Notes would include a special edition for Linux. ③ Corel ported its famous WordPerfect to Linux and released it for free; Corel also plans to move its other graphics-processing products to the Linux platform completely. ④ The main database producers—Sybase, Informix, Oracle, CA and IBM—have already ported their own database products to Linux, or have finished beta editions; among them, Oracle and Informix also offer technical support for their products.
4. The gratifying thing is that some farsighted domestic corporations have already begun to try hard to change this situation. Stone Co. not long ago announced that it would invest a huge sum to develop an Internet/Intranet solution with Linux as the platform, make it the core of Stone's system-integration business, and at the same time set up a nationwide Linux technical support organization, taking the lead in promoting the application and development of free software in China. In addition, other domestic computer companies are also devoting themselves to popularizing Linux-related software and hardware applications and systems. It can be believed that, as the understanding of Linux deepens, more and more domestic enterprises will join the ranks of Linux users, and more software will be ported to the Linux platform. Meanwhile, domestic universities should take Linux as the reference implementation and upgrade the existing Unix content of their courses, starting with analysing the source code and revising the kernel, so as to train a large number of senior Linux talents and improve our country's own operating system. Only by really grasping the operating system can the software industry of our country get rid of its present passive state of sedulously imitating others and being led by the nose, and create the conditions for fundamentally revitalizing our country's software industry.

中文翻译:Linux—网络时代的操作系统

虽然对许多人来说,以Linux作为主要的操作系统组成庞大的工作站群,完成了《泰坦尼克号》的特技制作,已经算是出尽了风头。
