Website Design and Implementation — Chinese-English Foreign Literature Translation
Chinese-English Foreign Literature Translation (document contains the English original and the Chinese translation)

HOLISTIC WEB BROWSING: TRENDS OF THE FUTURE

The future of the Web is everywhere. The future of the Web is not at your desk. It's not necessarily in your pocket, either. It's everywhere. With each new technological innovation, we continue to become more and more immersed in the Web, connecting the ever-growing layer of information in the virtual world to the real one around us. But rather than get starry-eyed with utopian wonder about this bright future ahead, we should soberly anticipate the massive amount of planning and design work it will require of designers, developers and others.

The gap between technological innovation and its integration in our daily lives is shrinking at a rate much faster than we can keep pace with—consider the number of unique Web applications you signed up for in the past year alone. This has resulted in a very fragmented experience of the Web. While running several different browsers, with all sorts of plug-ins, you might also be running multiple standalone applications to manage feeds, social media accounts and music playlists.

Even though we may be adept at switching from one tab or window to another, we should be working towards a more holistic Web experience, one that seamlessly integrates all of the functionality we need in the simplest and most contextual way. With this in mind, let's review four trends that designers and developers would be wise to observe and integrate into their work so as to pave the way for a more holistic Web browsing experience:

1. The browser as operating system,
2. Functionally-limited mobile applications,
3. Web-enhanced devices,
4. Personalization.

1. The Browser As Operating System

Thanks to the massive growth of Web productivity applications, creative tools and entertainment options, we are spending more time in the browser than ever before. The more time we spend there, the less we make use of the many tools in the larger operating system that actually runs the browser. As a result, we're beginning to expect the same high level of reliability and sophistication in our Web experience that we get from our operating system.

For the most part, our expectations have been met by such innovations as Google's Gmail, Talk, Calendar and Docs applications, which all offer varying degrees of integration with one another, and online image editing tools like Picnik and Adobe's online version of Photoshop. And those expectations will continue to be met by upcoming releases, such as the Chrome operating system—we're already thinking of our browsers as operating systems. Doing everything on the Web was once a pipe dream, but now it's a reality.

UBIQUITY

The one limitation of Web browsers that becomes more and more obvious as we make greater use of applications in the cloud is the lack of usable connections between open tabs. Most users have grown accustomed to keeping many tabs open, switching back and forth rapidly between Gmail, Google Calendar, Google Docs and various social media tools. But this switching from tab to tab is indicative of broken connections between applications that really ought to be integrated.

Mozilla is attempting to functionally connect tools that we use in the browser in a more intuitive and rich way with Ubiquity. While it's definitely a step in the right direction, the command-line approach may be a barrier to entry for those unable to let go of the mouse.
In the screenshot below, you can see how Ubiquity allows you to quickly map a location shown on a Web page without having to open Google Maps in another tab. This is one example of integrated functionality without which you would be required to copy and paste text from one tab to another. Ubiquity's core capability, which is creating a holistic browsing experience by understanding basic commands and executing them using appropriate Web applications, is certainly the direction in which the browser is heading. This approach, wedded to voice-recognition software, may be how we all navigate the Web in the next decade, or sooner: hands-free.

TRACEMONKEY AND OGG

Meanwhile, smaller, quieter releases have been paving the way to holistic browsing. This past summer, Firefox released an update to its software that includes a brand new JavaScript engine called TraceMonkey. This engine delivers a significant boost in speed and image-editing functionality, as well as the ability to play videos without third-party software or codecs.

Aside from the speed advances, which are always welcome, the image and video capabilities are perfect examples of how the browser is encroaching on the operating system's territory. Being able to edit images in the browser could replace the need for local image-editing software on your machine, and potentially for separate applications such as Picnik. At this point, it's not certain how sophisticated this functionality can be, and so designers and ordinary users will probably continue to run local copies of Photoshop for some time to come.

The new video functionality, which relies on an open-source codec called Ogg, opens up many possibilities, the first one being for developers who do not want to license codecs. Currently, developers are required to license a codec if they want their videos to be playable in proprietary software such as Adobe Flash. Ogg allows video to be played back in Firefox itself.

What excites many, though, is that the new version of Firefox enables interactivity between multiple applications on the same page. One potential application of this technology, as illustrated in the image above, is allowing users to click objects in a video to get additional information about them while the video is playing.

2. Functionally-Limited Mobile Applications

So far, our look at a holistic Web experience has been limited to the traditional browser. But we're also interacting with the Web more and more on mobile devices. Right now, casual surfing on a mobile device is not a very sophisticated experience and is therefore probably not the main draw for users. The combination of small screens, inconsistent input options, slow connections and lack of content optimized for mobile browsers makes this a pretty clumsy, unpredictable and frustrating experience, especially if you're not on an iPhone.

However, applications written specifically for mobile environments and that deal with particular, limited sets of data—such as Google's mobile apps, device-specific applications for Twitter and Facebook and the millions of applications in the iPhone App Store—look more like the future of mobile Web use. Because the mobile browsing experience is in its infancy, here is some advice on designing mobile experiences: rather than squeezing full-sized Web applications (i.e. ones optimized for desktops and laptops) into the pocket, designers and developers should become proficient at identifying and executing limited functionality sets for mobile applications.
AMAZON MOBILE

A great example of a functionally-limited mobile application is Amazon's interface for the iPhone (screenshot above). Amazon has reduced the massive scale of its website to the most essential functions: search, shopping cart and lists. And it has optimized the layout specifically for the iPhone's smaller screen.

FACEBOOK FOR IPHONE

Facebook continues to improve its mobile version. The latest version includes a simplified landing screen, with an icon for every major function of the website in order of priority of use. While information has been reduced and segmented, the scope of the website has not been significantly altered. Each new update brings the app closer to replicating the full experience in a way that feels quite natural.

GMAIL FOR IPHONE

Finally, Gmail's iPhone application is also impressive. Google has introduced a floating bar to the interface that allows users to batch process emails, so that they don't have to open each email in order to deal with it.

3. Web-Enhanced Devices

Mobile devices will proliferate faster than anything the computer industry has seen before, thereby exploding entry points to the Web. But the Web will vastly expand not solely through personal mobile devices but through completely new Web-enhanced interfaces in transportation vehicles, homes, clothing and other products. In some cases, Web enhancement may lend itself to marketing initiatives and advertising; in other cases, connecting certain devices to the Web will make them more useful and efficient. Here are three examples of Web-enhanced products or services that we may all be using in the coming years:

WEB-ENHANCED GROCERY SHOPPING

Web-connected grocery store "VIP" cards may track customer spending as they do today: every time you scan your customer card, your purchases are added to a massive database that grocery stores use to guide their stocking choices. In exchange for your data, the stores offer you discounts on selected products. Soon, with Web-enhanced shopping, stores will be able to offer you specific promotions based on your particular purchasing history, and in real time (as illustrated above). This will give shoppers more incentive to sign up for VIP programs and give retailers more flexibility and variety with discounts, sales and other promotions.

WEB-ENHANCED UTILITIES

One example of a Web-enhanced device we may all see in our homes soon enough is a smart thermostat (illustrated above), which will allow users not only to monitor their power usage using Google PowerMeter but to see their current charges when it matters to them (e.g. when they're turning up the heater, not sitting in front of a computer).

WEB-ENHANCED PERSONAL BANKING

Another useful Web enhancement would be a display of your current bank account balance directly on your debit or credit card (as shown above). This data would, of course, be protected and displayed only after you clear a biometric security system that reads your fingerprint directly on the card. Admittedly, this idea is rife with privacy and security implications, but something like this will nevertheless likely exist in the not-too-distant future.
4. Personalization

Thanks to the rapid adoption of social networking websites, people have become comfortable with more personalized experiences online. Being greeted by name and offered content or search results based on their browsing history not only is common now but makes the Web more appealing to many. The next step is to increase the user's control of their personal information and to offer more tools that deliver new information tailored to them.

CENTRALIZED PROFILES

If you're like most people, you probably maintain somewhere between two and six active profiles on various social networks. Each profile contains a set of information about you, and the overlap varies. You probably have unique usernames and passwords for each one, too, though using a single sign-on service to gain access to multiple accounts is becoming more common. But why shouldn't the information you submit to these accounts follow the same approach? In the coming years, what you tell people about yourself online will be more and more under your control. This process starts with centralizing your data in one profile, which will then share bits of it with other profiles. This way, if your information changes, you'll have to update your profile only once.

DATA OWNERSHIP

The question of who owns the data that you share online is fuzzy. In many cases, it even remains unaddressed. However, as privacy settings on social networks become more and more complex, users are becoming increasingly concerned about data ownership. In particular, the question of who owns the images, video and messages created by users becomes significant when a user wants to remove their profile. To put it in perspective, Royal Pingdom, in its Internet 2009 in Numbers report, found that 2.5 billion photos were uploaded to Facebook each month in 2009! The more this number grows, the more users will be concerned about what happens to the content they transfer from their machines to servers in the cloud.

While it may seem like a step backward, a movement to restore user data storage to personal machines, which would then intelligently share that data with various social networks and other websites, will likely spring up in response to growing privacy concerns. A system like this would allow individuals to assign metadata to files on their computers, such as video clips and photos; this metadata would specify the files' availability to social network profiles and other websites. Rather than uploading a copy of an image from your computer to Flickr, you would give Flickr access to certain files that remain on your machine. Organizations such as the Data Portability Project are introducing this kind of thinking across the Web today.

RECOMMENDATION ENGINES

Search engines—and the whole concept of search itself—will remain in flux as personalization becomes more commonplace. Currently, the major search engines are adapting to this by offering different takes on personalized search results, based on user-specific browsing history. If you are signed in to your Google account and search for a pizza parlor, you are more likely to see local results. With its social search experiment, Google also hopes to leverage your social network connections to deliver results from people you already know. Rounding those out with real-time search results gives users a more personal search experience that is a much more realistic representation of the rapid proliferation of new information on the Web.
And because the results are filtered based on your behavior and preferences, the search engine will continue to "learn" more about you in order to provide the most useful information.

Another new search engine is attempting to get to the heart of personalized results. Hunch provides customized recommendations of information based on users' answers to a set of questions for each query. The more you use it, the better the engine gets at recommending information. As long as you maintain a profile with Hunch, you will get increasingly satisfactory answers to general questions like, "Where should I go on vacation?"

The trend of personalization will have a significant impact on the way individual websites and applications are designed. Today, consumer websites routinely alter their landing pages based on the location of the user. Tomorrow, websites might do similar interface customizations for individual users. Designers and developers will need to plan for such visual and structural versatility to stay on the cutting edge.

Chinese Translation
Holistic Web Browsing: Trends of the Future
Christopher Butler
The future of the Web is everywhere.
C# Programming Language — Foreign Literature Translation (Chinese and English)
C# Programming Language Overview — Foreign Literature Translation (includes the English original and the Chinese translation)
Source: Barnett M. C# Programming Language Overview [J]. Lecture Notes in Computer Science, 2016, 3(4): 49-59.

English Original

C# Programming Language Overview
Barnett M

1. History of C, C++, C#

The C# programming language is based on the spirit of the C and C++ programming languages. It has powerful features and a relatively gentle learning curve. It cannot be said that C# is the same as C and C++, but because C# is built on both, Microsoft was able to remove some features that had become burdensome, such as pointers. This section looks at C and C++ and traces their development into C#.

The C programming language was originally defined in the UNIX operating system. In the past, many UNIX applications were written in C, including a C compiler, and eventually C was used to write UNIX itself. It is generally accepted that this academic beginning extended into the commercial world. The original Windows API was defined to work with Windows code written in C, and to this day at least the core Windows operating system APIs remain compatible with the C compiler.

From the point of view of its definition, C lacks one detail that languages such as Smalltalk have: the concept of an object. You will learn more about objects in Chapter 8, "Writing Object-Oriented Code." An object is a collection of data together with a set of operations on that data. Object-like code can be written in C, but the concept of the object cannot be enforced by the language. If you want to structure your code so that it looks like an object, that's fine. If you don't want to do this, C really does not mind. The object is not an intrinsic part of the language, so many people working in C did not spend much time on this way of programming. As object-oriented thinking began to gain acceptance, C++ was developed to include this improvement. It was defined to be compatible with C (so that all C programs are also C++ programs and can be compiled by a C++ compiler). The main addition to the C++ language was support for this new object concept. C++ adds classes (object templates) and the derivation of their behavior.

The C++ language is a modified version of the C language. Unlike friendlier, higher-level languages such as VB, C and C++ are very low-level and require a lot of coding, including extensive error checking, to make your application run well. In return, C++ can drive some very powerful applications, and the resulting code runs very smoothly. Because the goal was to maintain compatibility with C, C++ could not break away from C's low-level features.

Microsoft defined C# to retain much of the flavor of C and C++ statements, so developers familiar with those languages can read and pick up the code quickly. A big advantage of C# is that its designers did not make it compatible with C and C++. While this may seem like a mistake, it is actually good news. C# eliminates the things that make C and C++ difficult to work with, beginning with the quirks and defects found in C. C# started with a clean slate and had no compatibility requirements, so it can keep the strengths of its predecessors and discard the weaknesses that made C and C++ programs difficult to maintain.

2. Introducing C#

C#, the new language introduced with the .NET system, is derived from C++. However, C# is a modern, popular, object-oriented (from beginning to end), type-safe language.

Language features

The following sections provide a quick overview of some of the features of the C# language. If some of them are unfamiliar to you, don't worry; everything will be explained in detail in the following sections.
In C#, all code and data must be attached to a class. You cannot define a variable outside a class, nor can you write any code that is not in a class. When an object of a class is created, its constructor runs; when the object is released, it is destroyed. Classes provide single inheritance, and all classes ultimately derive from the base class object. Over time, C# versioning techniques help your classes evolve while maintaining compatibility with code that uses earlier versions of them.

Let's look at an example of a class called Family. This class contains two string fields to hold the first and last names of a family member, as well as a method that returns the full name of the family member.

class Family
{
    public string FirstName;
    public string LastName;

    public string FullName()
    {
        return FirstName + LastName;
    }
}

Note: Single inheritance means that a C# class can inherit from only one base class.

C# lets you package your classes into collections called namespaces, and namespaces help you arrange classes into logical groupings. As you start learning C#, you will see that namespaces are used throughout the .NET type system. Microsoft also chose to include classes that assist compatibility with previous code and APIs; these classes are contained in the Microsoft namespace.

Data types

C# lets you work with two kinds of data: value types and reference types. A value type holds the actual value. A reference type holds a reference to a value stored elsewhere in memory. Primitive data types, such as character, integer, float, enumeration and structure types, are all value types. Object and array types are treated as reference types. C# predefines the reference types object and string, as well as value types such as byte, unsigned short, unsigned integer, unsigned long, float, double, boolean, character and decimal; each value type and reference type is ultimately backed by a primitive type object. C# also allows you to convert a value of one type to another type. You can use either an implicit conversion or an explicit conversion. Implicit conversions always succeed and do not lose any information (for example, you can convert an integer to a long integer without losing any information, because a long integer is larger than an integer). Explicit conversions, by contrast, can lose data; converting a long integer to an integer must be explicit, because a long integer can hold larger values than an integer.

Cross-reference: See Chapter 3, "Working with Variables," to find out more about explicit and implicit conversions.

You can use both single-dimensional and multidimensional arrays in C#. Multidimensional arrays can be rectangular, when each constituent array has the same size, or jagged, when the constituent arrays have different sizes.

Classes and structures can have data members called properties and fields. You might define a structure called Employee, for example, that has a field called Name. If you define an Employee variable called CurrentEmployee, you can retrieve the employee's name by writing CurrentEmployee.Name. Properties are similar to fields, but they let you run code when the value is assigned or retrieved. If the employee's name must be read from a database, for example, you can write code that says "when someone asks for the value of the Name property, read the name from the database and return it as a string."
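As a rough sketch of that property idea (this example is not part of the original text; the Employee class shown here and its LoadNameFromDatabase helper are hypothetical, and the database lookup is only simulated):

class Employee
{
    private string name;

    // When someone asks for the value of Name, the get accessor runs.
    // Here the database read is simulated by a hypothetical helper.
    public string Name
    {
        get
        {
            if (name == null)
                name = LoadNameFromDatabase();
            return name;
        }
        set { name = value; }
    }

    private string LoadNameFromDatabase()
    {
        // Placeholder for a real data-access call.
        return "Jane Doe";
    }
}

// Usage:
// Employee CurrentEmployee = new Employee();
// string n = CurrentEmployee.Name;   // triggers the lookup on first access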
Functions

A function is a piece of code that can be called at any time by other code. An example of a function appeared earlier in this chapter: the FullName function in the Family class. Code that returns information is usually called a function, while a method usually does not return information; for our purposes, however, we will generally refer to both as functions.

Functions can have four kinds of parameters:
• Input parameters have values passed into the function, but the function cannot change their values.
• Output parameters have no value when they are passed to the function, but the function can give them a value and pass that value back to the caller.
• Reference parameters pass a value by reference. They have a value coming into the function, and that value can be changed inside the function.
• Params parameters define a variable-length list of arguments as an array.

C# and the CLR work together to provide automatic memory management. You do not need to write code that says "set aside enough space for this object to use"; the CLR monitors your memory usage and automatically reclaims memory when it is needed.

C# provides a large number of operators that allow you to write many kinds of mathematical and bitwise expressions. Many (but not all) of them can be redefined, so you can change how these operators work.

C# provides a long list of statements that let you define various processing paths through your code. Statements using keywords such as switch, while, for, break and continue enable your code to branch into different paths depending on the value of a variable.

Classes can contain code and data. Each member of a class has a visibility that determines which other objects can see it. C# provides the accessibility levels public, protected, internal, protected internal and private.

Variables

Variables can be defined as constants. A constant has a fixed value that cannot be changed during the execution of your code. The value of pi, for example, is a good example of a constant, because its value will not change while your code is running. Enumeration types define specific names for constants. For example, you could define an enumerated type of planets that uses the name Mercury in your code; if you use a variable to represent a planet, using the names of this enum type makes your code easier to read.

C# provides a built-in mechanism for defining and handling events. If you write a class that performs a long operation, you may want to raise an event when the operation ends. Clients can subscribe to this event and handle it in their own code, so that they are notified when your long calculation has completed. The event handling mechanism in C# uses delegates, which are variables that reference a function.

Note: An event handler is a procedure in your code that determines what actions are performed when an event occurs, for example, when the user clicks a button.

If your class holds a set of values, you can write an indexer so that your class can be accessed as if it were an array. Suppose you write a class called Rainbow, for example, that contains the set of colors in a rainbow. Callers might want to write MyRainbow[0] to retrieve the first color in the rainbow. You can write an indexer in your Rainbow class to define what will be returned when a caller accesses your class as if it were an array of values.
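A minimal sketch of such an indexer (not from the original text; the concrete color list and the Length property are illustrative choices):

class Rainbow
{
    private readonly string[] colors =
    {
        "Red", "Orange", "Yellow", "Green", "Blue", "Indigo", "Violet"
    };

    // The indexer lets callers treat a Rainbow object like an array:
    // myRainbow[0] returns the first color.
    public string this[int index]
    {
        get { return colors[index]; }
    }

    public int Length
    {
        get { return colors.Length; }
    }
}

// Usage:
// Rainbow myRainbow = new Rainbow();
// string first = myRainbow[0];   // "Red"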
Interfaces

C# provides interfaces, which aggregate properties, methods and events that describe a set of functionality. A C# class can implement an interface; it thereby tells users that the class provides the set of functionality documented by the interface, so existing code can use the class with as few compatibility issues as possible. Once an interface has been published, it cannot be changed, but it can evolve through inheritance. C# classes can implement many interfaces, even though a class can inherit from only one base class.

Let's look at an example from the real world of C#, one with very clear rules, that helps illustrate interfaces. Many applications today can use add-ins: when the application runs, it has the ability to load additional items. To make this work, an add-in must follow some rules. The add-in DLL must expose a function called CEEntry, and the DLL file name must begin with CEd. When our code runs, it scans its directories for all DLLs whose names start with CEd. When it finds one, it loads it and then uses GetProcAddress to find the CEEntry function in the DLL. This shows that you must obey all of the rules to build an add-in, and this way of loading add-ins is fragile because it places extra responsibility on the code. If we used an interface in this example, your add-in DLL could simply implement that interface. This ensures that all necessary methods, properties and events are present in the DLL and behave as specified.

Attributes

Attributes declare additional information about your class to the CLR. In the past, if you wanted to describe your classes in some way, you had to use a few scattered approaches and store the information in external files, such as IDL or even HTML files. Attributes solve this problem: the developer can bind information to a class, any kind of information, for example information that defines how the class acts when it is used. The possibilities are endless, which is why Microsoft includes a lot of predefined attributes in the .NET Framework.

Compiling C#

Running your C# code through the C# compiler produces two important kinds of information: code and metadata. The next sections describe these two topics and then look at the kind of binary built from .NET code, the assembly.

Microsoft Intermediate Language (MSIL)

The code output by the C# compiler is written in an intermediate language called Microsoft Intermediate Language (MSIL). MSIL is made up of a detailed set of instructions that specify how your code should execute. It contains instructions for operations such as initializing variables, invoking methods on objects, handling errors and creating new objects. C# is not the only language whose source code is turned into MSIL during compilation; all .NET-compatible languages, including VB.NET and Managed C++, generate MSIL when their source code is compiled. Since all .NET languages use the same runtime, code from different languages and different compilers can easily work together.

MSIL is not a set of instructions for a physical CPU. It knows nothing about your machine's CPU, and your machine's CPU knows nothing about MSIL. How, then, does your code run if the CPU cannot read MSIL? The MSIL is compiled once more, into machine code; this step is called just-in-time (JIT) compilation. The job of the JIT compiler is to translate your general-purpose MSIL code into machine code that the CPU can execute.

You may wonder why this extra step is in the process. Why generate MSIL when a compiler could generate CPU-specific code right away, given that the compiler does exactly that later anyway? There are many reasons for this. First, MSIL makes it easier for your compiled code to move to different hardware. Suppose you have written some C# code and you want it to run on both your desktop and a handheld device.
It is very likely that these two devices have different CPUs. If you had only a C# compiler that targeted one specific CPU, you would need two C# compilers: one targeting the desktop CPU and the other targeting the handheld device's CPU. You would have to compile your code twice to ensure that the right code runs on the right device. With MSIL, you write once. The .NET Framework installed on your desktop contains a JIT compiler that translates the MSIL into CPU-specific code for your machine. The .NET Framework installed on your handheld device contains a JIT compiler that translates the same MSIL into CPU-specific code for the handheld device. You now have a single MSIL code base that can run on any device that has a .NET JIT compiler, and the JIT compiler on each device takes care of making your code run smoothly.

Another reason the compiler produces MSIL is that the instruction set can be easily verified. Part of the compiler's job is to verify your code to make it as safe as possible. When properly verified, the code is guaranteed not to execute any instruction that could cause it to crash. The definition of the MSIL instructions makes this checking process easier to understand. CPU-specific instruction sets are optimized for fast code execution, but they make the code difficult to read and therefore difficult to check. Having a C# compiler that emitted CPU-specific code directly would make code verification difficult or even impossible. Allowing the .NET Framework's JIT compiler to verify your code ensures that your code accesses memory only in safe ways and that variable types are used correctly.

Metadata

The compilation process also outputs metadata, which is an important part of the .NET code-sharing story. Whether you use C# to build a client application or a library that other people will use in their applications, you will want to take advantage of already-compiled .NET code. That code may have been provided by Microsoft as part of the .NET Framework, or it may be provided by another user online. The key to using external code is letting the C# compiler know which classes and variables are in the other code base, so that it can resolve them when compiling your work and match the code you write against that source.

Think of metadata as a directory of your compiled code. The C# compiler places it in the compiled code along with the generated MSIL. The types of methods and variables are completely described in the metadata and are ready to be read by other applications. For example, Visual Studio .NET can read the metadata from a .NET library to provide IntelliSense listings of all the methods available for a particular class.

If you have already worked with COM, you may be familiar with type libraries. The goal of a type library is to provide the same cataloguing functionality for COM objects. However, type libraries suffer from a few limitations; in fact, not all of the data about an object can be put into a type library. Metadata in .NET does not have this disadvantage: all of the information needed to describe a class is placed in the metadata.
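As a rough illustration of this point (not part of the original text), the .NET reflection API can read that metadata back at run time; the sketch below lists the public instance methods recorded in the metadata of the string type.

using System;
using System.Reflection;

class MetadataDemo
{
    static void Main()
    {
        // Load the type information (metadata) for a framework class
        // and list the public instance methods it exposes.
        Type t = typeof(string);
        Console.WriteLine("Methods described in the metadata of " + t.FullName + ":");
        foreach (MethodInfo m in t.GetMethods(BindingFlags.Public | BindingFlags.Instance))
        {
            Console.WriteLine("  " + m.Name);
        }
    }
}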
Assemblies

Sometimes you will use C# to build an end application. These applications are packaged into an executable file with an .EXE extension, and C# fully supports the creation of .EXE files. However, there are also times when you do not build a standalone program. You may want to create some useful C# classes that, for example, another developer wants to use in an application. In this case, you will not create an application; instead, you will build a component. A component is a package of compiled code and metadata. The classes it contains are deployed as a unit and share the same version control, security information and activation requirements. Think of a component as a logical DLL. If you are familiar with Microsoft Transaction Server or COM+, you can think of a component as the .NET equivalent of a package.

There are two kinds of components: private components and global components. When you build your own component, you do not need to specify whether you are creating a global component or a private component; the difference lies in how it is deployed. A private component makes your code accessible only to a single application: the component is a package similar to a DLL and is installed into the same directory as the application that uses it. The application can run only when the component is in the same directory.

If you want to share your code among more applications, a global component is more appropriate. Global components can be used by any .NET application on the system, regardless of the directory in which they are installed. Microsoft ships components as part of the .NET Framework, and each of these Microsoft components is installed as a global component. The .NET Framework SDK contains utilities to install components into, and remove them from, the global component store.

C# can be viewed, to some extent, as the programming language for the .NET Windows-oriented environment. In the past ten years, VB and C++ have finally become very powerful languages, but both carry some baggage. For Visual Basic, the main advantage is that it is easy to understand: many programming tasks are easy to accomplish, and it largely hides the details of the Windows API and the COM component structure. The downside is that Visual Basic never really became object-oriented in its earlier versions (BASIC was primarily intended to be easy for beginners to understand rather than for writing large commercial applications), so it cannot truly be called a structured or object-oriented programming language.

On the other hand, C++ has its roots in the ANSI C++ language definition. It is not fully ANSI-compliant, because Microsoft wrote its C++ compiler before the ANSI definition was standardized, but it is quite close. Unfortunately, this leads to two problems. First, ANSI C++ was developed under the technical conditions of more than a decade ago, so it does not support current concepts (such as Unicode strings and generating XML documentation), and some of its older syntactic structures were designed for earlier compilers (for example, the declaration and definition of member functions are separate). Second, Microsoft has also tried to evolve C++ into a language for performing high-performance tasks on Windows, and in doing so it has been unable to avoid adding a large number of Microsoft-specific keywords and libraries to the language. The result is that on Windows the language has become a very messy one. Just ask a C++ developer how many ways there are to define a string: char*, LPTSTR, CString (MFC or WTL versions), wchar_t*, OLECHAR*, and so on.

Now we enter the .NET era: a new environment that has brought new extensions to both languages. Microsoft added many Microsoft-specific keywords to C++ and evolved VB into VB.NET, which retains some basic VB syntax but is completely different in design.
From a practical application perspective, VB.NET is a new language. Here, we focus on Visual C# .NET. Microsoft describes C# as a simple, modern, object-oriented, type-safe programming language derived from C and C++. Most independent commentators would instead say "derived from C, C++, and Java." C# is very similar to C++ and Java. It uses braces ({}) to mark blocks of code, and semicolons to separate statements. The first impression of C# code is that it looks very similar to C++ or Java code. Behind these apparent similarities, however, C# is much easier to learn than C++, though harder than Java. Its design is better adapted to modern development tools than that of other languages, and it combines Visual Basic's ease of use with the high performance and low-level memory access of C++. C# includes the following features:

● Full support for classes and object-oriented programming, including interfaces and inheritance, virtual functions, and operator overloading.
● A complete, consistent set of basic types.
● Built-in support for automatically generating XML documentation.
● Automatic cleanup of dynamically allocated memory.
● Classes or methods can be marked with user-defined attributes. This can be used for documentation purposes and can also affect compilation (for example, marking methods to be compiled only in debug builds).
● Full access to the .NET base class library and easy access to the Windows API.
● Pointers and direct memory access are available if needed, but the C# language can access memory without them.
● Support for properties and events in the VB style.
● By changing compiler options, C# code can be compiled into components that other code calls in the same way as ActiveX controls (COM components).
● C# can be used to write dynamic Web pages and XML Web services.

It should be noted that most of these features are also available in VB.NET and Managed C++. But since C# used .NET from the beginning, its support for .NET features is not only complete but also offered with a more natural syntax than in other languages. The C# language itself is very similar to Java, but there are some improvements, because Java was not designed for use in a .NET environment.

Before ending this topic, we must also point out two limitations of C#. The first is that the language is not suitable for writing time-critical or extremely high-performance code, for example where it matters whether a loop runs 1,000 or 1,050 times, or where resources must be released the instant they are no longer needed. In this regard, C++ may still be the best of the low-level languages. The second is that C# lacks key features needed by very high-performance applications, including guaranteed inlining and destructors that are guaranteed to run at specific points in the code. However, such applications are very rare.

Chinese Translation
C# Programming Language Overview
Author: Barnett M
1. History of C, C++, C#
The C# programming language is built on the spirit of the C and C++ programming languages.
Cloud Computing — Foreign Literature Translation References
Cloud Computing — Foreign Literature Translation References (document contains the English original and the Chinese translation)

Original:

Technical Issues of Forensic Investigations in Cloud Computing Environments

Dominik Birk
Ruhr-University Bochum
Horst Goertz Institute for IT Security
Bochum, Germany

Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldom discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore, we discuss possible solutions and possible new methodologies helping customers to perform such investigations.

Acknowledgment: We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.

I. INTRODUCTION

Although the cloud might appear attractive to small as well as to large companies, it does not come without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of the data. Security policies, a company's main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform its own investigation without dependency on third parties. In the cloud, this is no longer possible: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and the circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. For auditors, this situation does not change: questions about who accessed specific data and information cannot be answered by the customers if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensitive information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning metadata that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed, albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40], stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional lives, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future.

Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: we discuss the related work and the fundamental technical background information on digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments, including the cross-disciplinary aspects. We conclude in section V.

II. RELATED WORK

Various works have been published in the field of cloud security and privacy [9], [35], [30], focussing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10], contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet they lack practical implementations [24], [37], [23]. Traditional computer forensics already has well-researched methods for various fields of application [4], [5], [6], [11], [13]. The aspects of forensics in virtual systems have also been addressed by several works [2], [3], [20], including the notion of virtual introspection [25].
In addition, the NIST has already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al. had already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherently strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments; unfortunately, no specific issues and solutions of cloud forensics were proposed, which will be done within this work.

III. TECHNICAL BACKGROUND

A. Traditional Digital Forensics

The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:

1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing, where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.

2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.

3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report created in this phase is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results and is complete and clear to understand. Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real-world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court, it is crucial that the chain of custody is preserved.
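As a simple illustration of how integrity can be preserved in the Securing Phase (this sketch is not part of the original paper, and the default file name is hypothetical), an investigator can record a SHA-256 digest of the bitwise copy so that it can later be matched against the original media and documented in the chain of custody:

using System;
using System.IO;
using System.Security.Cryptography;

class EvidenceHash
{
    static void Main(string[] args)
    {
        // Path to the bitwise copy of the seized media (hypothetical default).
        string imagePath = args.Length > 0 ? args[0] : "evidence.dd";

        using (SHA256 sha = SHA256.Create())
        using (FileStream stream = File.OpenRead(imagePath))
        {
            byte[] digest = sha.ComputeHash(stream);
            // Record the digest in the chain-of-custody documentation.
            Console.WriteLine(BitConverter.ToString(digest).Replace("-", ""));
        }
    }
}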
B. Cloud Computing

According to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. This definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure as a Service (IaaS) model, the customer uses the virtual machine provided by the CSP to install his own system on it. The system can be used like any other physical computer, with a few limitations. However, the additional customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. This service model can be a driving force for the efficiency of the software development process. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API or a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence.

Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for one organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently of the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer over the application itself, over the data pushed into the applications, and also over the underlying technical infrastructure.

C. Fault Model

Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:

1) Maliciously Intended Faults

Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and constitute a constant threat to customers and CSP. In this model, a malicious CSP is also included, albeit it is assumed to be rare in real-world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk.
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.

2) Unintentional Faults

Inconsistencies in technical systems or processes in the cloud do not implicitly have to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the customer (i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong interest in discovering the reasons and deploying corresponding fixes.

IV. TECHNICAL ISSUES

Digital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible, since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators, who explore this de-allocated disk space on hard disks. In case the data is in motion, data is transferred from one entity to another; e.g. a typical file transfer over a network can be seen as a data-in-motion scenario. Several encapsulated protocols contain the data, each leaving specific traces on systems and network devices which can in turn be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest nor in motion but in execution. On the executing system, process information, machine instructions and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources of evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environments, as well as suggest several solutions to these problems.

A. Sources and Nature of Evidence

Concerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between the different cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between the different parties involved. The browser on the client, often acting as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently of the model used, the following three components could act as sources of potential evidential data.

1) Virtual Cloud Instance: The VM within the cloud, where e.g. data is stored or processes are handled, contains potential evidence [2], [3]. In most cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor, and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM.
Therefore, virtual instances can still be running during analysis, which leads to the case of live investigations [41], or they can be turned off, leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.

2) Network Layer: Traditional network forensics is known as the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several kinds of information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However, in practice, ordinary CSP currently do not provide any log data from the network components used by the customer's instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log data in general, which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.

3) Client System: On the system layer of the client, it completely depends on the model used (IaaS, PaaS, SaaS) whether and where potential evidence could be extracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications, which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.

a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: in ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been the victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers make strong use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions, even if the private mode is switched on [19].

B. Investigations in XaaS Environments

Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They no longer have the option of seizing physical data storage.
Data and processes of the customer are dispersed over an undisclosed number of virtual instances, applications and network elements. Hence, it is questionable whether previous findings of the computer forensic community in the field of digital forensics have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly will be taken into consideration. We also suggest potential solutions to the problems mentioned.

1) SaaS Environments: Especially in the SaaS model, the customer does not obtain any control of the underlying operating infrastructure such as the network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited user-specific application configuration settings can be controlled, contributing to the evidence which can be extracted from the client (see Section IV-A3). In many cases this forces the investigator to rely on high-level logs which may be provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of a toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offerings have hardly any chance to analyze potential incidents.

a) Data Provenance: The notion of digital provenance refers to meta-data that describes the ancestry or history of digital objects. Secure provenance, which records ownership and process history of data objects, is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Although data provenance is also highly significant for IaaS and PaaS, it poses a particular problem for SaaS-based applications: current globally acting public SaaS CSPs offer Single Sign-On (SSO) access control to the set of their services. Unfortunately, in case of an account compromise, most CSPs do not offer the customer any possibility to figure out which data and information have been accessed by the adversary. For the victim, this situation can have a tremendous impact: if sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary, or even by the CSP, e.g. for storage reasons, and the customer has no ability to prove otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSPs [10].

Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSPs should offer additional interfaces for compliance, forensics, operations and security matters to their customers. Through an API, customers should be able to receive specific information such as access, error and event logs that could improve their situation in case of an investigation.
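To make the suggested logging and provenance interfaces more concrete, here is a minimal sketch of a tamper-evident, hash-chained event log such as a customer-side collector might keep. It only shows the core idea; a real secure-provenance mechanism would add signatures, trusted timestamps and external anchoring, none of which are modeled here.

```python
# Minimal sketch of tamper-evident logging in the spirit of the provenance and
# log interfaces suggested above. Each record is chained to its predecessor by
# a SHA-256 hash, so any later modification or deletion breaks verification.
import hashlib
import json
import time

def _digest(prev_hash, record):
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(prev_hash.encode("utf-8") + payload).hexdigest()

def append_event(chain, actor, action, obj):
    """Append an access/event record to the hash chain and return it."""
    record = {"ts": time.time(), "actor": actor, "action": action, "object": obj}
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": _digest(prev_hash, record)})
    return chain[-1]

def verify_chain(chain):
    """Recompute every link; True only if no entry was altered or removed."""
    prev_hash = "0" * 64
    for entry in chain:
        if _digest(prev_hash, entry["record"]) != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

if __name__ == "__main__":
    log = []
    append_event(log, "alice@example.com", "read", "customer-db/record/42")
    append_event(log, "sso-session-77", "export", "crm/contacts.csv")
    print("chain valid:", verify_chain(log))
    log[0]["record"]["action"] = "noop"      # simulate tampering
    print("chain valid after tampering:", verify_chain(log))
```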
Furthermore, due to the limited ability to receive forensic information from the server and to prove the integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.

2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and, except for some CSPs, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer theoretically obtains the power to dictate how the application interacts with other dependencies such as databases, storage entities, etc. CSPs normally claim that this transfer is encrypted, but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries who can compromise the application during runtime should not be able to alter these log files afterwards.

Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be monitored by push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSPs offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].

3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how the defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial, and not only for recovering from an incident: forensic investigations also gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances provide much more evidence data usable for potential forensics than the PaaS and SaaS models do. This is due to the ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before it is transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is, in the end, still under the control of the CSP.
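The "sign and encrypt before transfer" suggestion can be sketched as follows, assuming the third-party Python cryptography package is available. Key distribution, the push-only transport and the central logging server are deliberately left out, so this is an illustration of the idea rather than a deployable design.

```python
# Minimal sketch of signing and encrypting PaaS/IaaS log records before they
# are pushed to a customer-controlled logging server. Assumes the third-party
# `cryptography` package; key management and transport are out of scope here.
import json
import time
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()     # stays on the monitored instance
verify_key = signing_key.public_key()          # held by the customer's collector
transport_key = Fernet.generate_key()          # shared with the logging server
fernet = Fernet(transport_key)

def protect_log_record(message: str) -> bytes:
    """Serialize, sign and encrypt a single log record for push-only transfer."""
    record = {"ts": time.time(), "msg": message}
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    envelope = {
        "payload": payload.hex(),
        "signature": signing_key.sign(payload).hex(),
    }
    return fernet.encrypt(json.dumps(envelope).encode("utf-8"))

def unprotect_log_record(token: bytes) -> dict:
    """Decrypt and verify a record on the collector side; raises if tampered."""
    envelope = json.loads(fernet.decrypt(token))
    payload = bytes.fromhex(envelope["payload"])
    verify_key.verify(bytes.fromhex(envelope["signature"]), payload)
    return json.loads(payload)

if __name__ == "__main__":
    token = protect_log_record("sshd: failed login for root from 198.51.100.7")
    print(unprotect_log_record(token))
```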
The CSP controls the hypervisor, which is, for example, responsible for enforcing hardware boundaries and routing hardware requests among the different VMs. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how the customer's VMs communicate with the hardware, and it can theoretically intervene in processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore lead to the leakage of the secret key. Although this risk can be disregarded in most cases, the impact on high-security environments is tremendous.

a) Snapshot Analysis: Traditional forensics expects target machines to be powered down in order to collect an image (dead virtual instance). This situation has changed completely with the advent of snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned with one click, including the running system's memory. Thanks to snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This is especially important for scenarios in which downtime of a system is not feasible or practical due to existing SLAs. However, the information on whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances are becoming more common, providing evidence data that would otherwise be lost once the instance is powered down.
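As an illustration of the snapshot step described above, the sketch below uses the libvirt Python bindings to snapshot a running KVM/QEMU guest without powering it down. The connection URI and domain name are hypothetical; Xen, VMware ESX and Hyper-V expose comparable snapshot facilities through their own management interfaces.

```python
# Illustrative sketch: creating a snapshot of a running KVM/QEMU guest through
# the libvirt Python bindings so the instance can be analyzed without being
# powered down. The URI and domain name are invented for the example.
import libvirt

DOMAIN_NAME = "customer-vm-42"      # hypothetical guest name

SNAPSHOT_XML = """
<domainsnapshot>
  <name>forensic-snapshot</name>
  <description>State frozen for live investigation</description>
</domainsnapshot>
"""

def take_forensic_snapshot(uri="qemu:///system"):
    conn = libvirt.open(uri)
    try:
        dom = conn.lookupByName(DOMAIN_NAME)
        # For a running guest an internal snapshot also captures memory state,
        # which is exactly the volatile evidence discussed above.
        snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)
        print("created snapshot:", snap.getName())
    finally:
        conn.close()

if __name__ == "__main__":
    take_forensic_snapshot()
```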
Seven Free Websites for Searching Chinese and English Literature, Worth Bookmarking

Writing an academic paper is inseparable from finding and using literature. Apart from searching domestic databases such as CNKI, Wanfang and VIP, or resources such as Baidu Wenku, are there other good literature search engines, in particular websites for searching foreign-language literature? The answer is yes. Today the editors at Yiqi Lunwen recommend seven academic literature search tools.

1. CiteSeerX (official homepage). CiteSeerX is the successor to CiteSeer. Like CiteSeer, CiteSeerX provides a completely free public service on the web and is updated in real time around the clock. The CiteSeer citation search engine was developed by the NEC Research Institute in Princeton, USA, and was the first digital library of academic papers built on an automatic citation indexing (ACI) system. CiteSeerX uses automatic recognition technology to collect academic papers available on the web in PostScript and PDF formats, and then indexes and links every article according to citation-indexing methods. The aim of CiteSeerX is to organize the literature on the web effectively and to promote the dissemination of, and feedback on, academic literature from multiple angles.

The CiteSeerX search interface is simple and clear. The default search is for documents, and author and table searches are also supported. If "Include Citations" is selected, the scope of the search is broadened: in addition to the full-text database of academic literature, the references of every paper in the database are also listed. Clicking "Advanced Search" opens the advanced search interface, where CiteSeerX supports AND searches over the following fields: title, author, author affiliation, journal or proceedings name, year of publication, abstract, keywords, full text, and user-defined tags. Of course, a combined query can also be built directly in the single search box on the homepage, e.g. author:(j kleinberg) AND venue:(journal of the acm). Advanced search increases the precision of retrieval: besides basic fields such as author, author affiliation and title, it also supports more detailed searches over the full text and over user-defined tags.
Accounting Graduation Thesis: Chinese-English Translation of Foreign Literature
会计学中英文资料外文翻译外文原文Title:Future of SME finance(Background – the environment for SME finance has changedFuture economic recovery will depend on the possibility of Crafts, Trades and SMEs to exploit their potential for growth and employment creation.SMEs make a major contribution to growth and employment in the EU and are at the heart of the Lisbon Strategy, whose main objective is to turn Europe into the most competitive and dynamic knowledge-based economy in the world. However, the ability of SMEs to grow depends highly on their potential to invest in restructuring, innovation and qualification. All of these investments need capital and therefore access to finance.Against this background the consistently repeated complaint of SMEs about their problems regarding access to finance is a highly relevant constraint that endangers the economic recovery of Europe.Changes in the finance sector influence the behavior of credit institutes towards Crafts, Trades and SMEs. Recent and ongoing developments in the banking sector add to the concerns of SMEs and will further endanger their access to finance. The main changes in the banking sector which influence SME finance are:•Globalization and internationalization have increased the competition and the profit orientation in the sector;•worsening of the economic situations in some institutes (burst of the ITC bubble, insolvencies) strengthen the focus on profitability further;•Mergers and restructuring created larger structures and many local branches, which had direct and personalized contacts with small enterprises, were closed;•up-coming implementation of new capital adequacy rules (Basel II) will also change SME business of the credit sector and will increase its administrative costs;•Stricter interpretation of State-Aide Rules by the European Commission eliminates the support of banks by public guarantees; many of the effected banks arevery active in SME finance.All these changes result in a higher sensitivity for risks and profits in the finance sector.The changes in the finance sector affect the accessibility of SMEs to finance.Higher risk awareness in the credit sector, a stronger focus on profitability and the ongoing restructuring in the finance sector change the framework for SME finance and influence the accessibility of SMEs to finance. 
The most important changes are: •In order to make the higher risk awareness operational, the credit sector introduces new rating systems and instruments for credit scoring;•Risk assessment of SMEs by banks will force the enterprises to present more and better quality information on their businesses;•Banks will try to pass through their additional costs for implementing and running the new capital regulations (Basel II) to their business clients;•due to the increase of competition on interest rates, the bank sector demands more and higher fees for its services (administration of accounts, payments systems, etc.), which are not only additional costs for SMEs but also limit their liquidity;•Small enterprises will lose their personal relationship with decision-makers in local branches –the credit application process will become more formal and anonymous and will probably lose longer;•the credit sector will lose more and more its “public function” to provide access to finance for a wide range of economic actors, which it has in a number of countries, in order to support and facilitate economic growth; the profitability of lending becomes the main focus of private credit institutions.All of these developments will make access to finance for SMEs even more difficult and / or will increase the cost of external finance. Business start-ups and SMEs, which want to enter new markets, may especially suffer from shortages regarding finance. A European Code of Conduct between Banks and SMEs would have allowed at least more transparency in the relations between Banks and SMEs and UEAPME regrets that the bank sector was not able to agree on such a commitment.Towards an encompassing policy approach to improve the access of Crafts, Trades and SMEs to financeAll analyses show that credits and loans will stay the main source of finance forthe SME sector in Europe. Access to finance was always a main concern for SMEs, but the recent developments in the finance sector worsen the situation even more. Shortage of finance is already a relevant factor, which hinders economic recovery in Europe. Many SMEs are not able to finance their needs for investment.Therefore, UEAPME expects the new European Commission and the new European Parliament to strengthen their efforts to improve the framework conditions for SME finance. Europe’s Crafts, Trades and SMEs ask for an encompassing policy approach, which includes not only the conditions for SMEs’ access to lending, but will also strengthen their capacity for internal finance and their access to external risk capital.From UEAPME’s point of view such an encompassing approach should be based on three guiding principles:•Risk-sharing between private investors, financial institutes, SMEs and public sector;•Increase of transparency of SMEs towards their external investors and lenders;•improving the regulatory environment for SME finance.Based on these principles and against the background of the changing environment for SME finance, UEAPME proposes policy measures in the following areas:1. New Capital Requirement Directive: SME friendly implementation of Basel IIDue to intensive lobbying activities, UEAPME, together with other Business Associations in Europe, has achieved some improvements in favour of SMEs regarding the new Basel Agreement on regulatory capital (Basel II). 
The final agreement from the Basel Committee contains a much more realistic approach toward the real risk situation of SME lending for the finance market and will allow the necessary room for adaptations, which respect the different regional traditions and institutional structures.However, the new regulatory system will influence the relations between Banks and SMEs and it will depend very much on the way it will be implemented into European law, whether Basel II becomes burdensome for SMEs and if it will reduce access to finance for them.The new Capital Accord form the Basel Committee gives the financial marketauthorities and herewith the European Institutions, a lot of flexibility. In about 70 areas they have room to adapt the Accord to their specific needs when implementing it into EU law. Some of them will have important effects on the costs and the accessibility of finance for SMEs.UEAPME expects therefore from the new European Commission and the new European Parliament:•The implementation of the new Capital Requirement Directive will be costly for the Finance Sector (up to 30 Billion Euro till 2006) and its clients will have to pay for it. Therefore, the implementation – especially for smaller banks, which are often very active in SME finance –has to be carried out with as little administrative burdensome as possible (reporting obligations, statistics, etc.).•The European Regulators must recognize traditional instruments for collaterals (guarantees, etc.) as far as possible.•The European Commission and later the Member States should take over the recommendations from the European Parliament with regard to granularity, access to retail portfolio, maturity, partial use, adaptation of thresholds, etc., which will ease the burden on SME finance.2. SMEs need transparent rating proceduresDue to higher risk awareness of the finance sector and the needs of Basel II, many SMEs will be confronted for the first time with internal rating procedures or credit scoring systems by their banks. The bank will require more and better quality information from their clients and will assess them in a new way. Both up-coming developments are already causing increasing uncertainty amongst SMEs.In order to reduce this uncertainty and to allow SMEs to understand the principles of the new risk assessment, UEAPME demands transparent rating procedures –rating procedures may not become a “Black Box” for SMEs:•The bank should communicate the relevant criteria affecting the rating of SMEs.•The bank should inform SMEs about its assessment in order to allow SMEs to improve.The negotiations on a European Code of Conduct between Banks and SMEs , which would have included a self-commitment for transparent rating procedures by Banks, failed. Therefore, UEAPME expects from the new European Commission andthe new European Parliament support for:•binding rules in the framework of the new Capital Adequacy Directive, which ensure the transparency of rating procedures and credit scoring systems for SMEs;•Elaboration of national Codes of Conduct in order to improve the relations between Banks and SMEs and to support the adaptation of SMEs to the new financial environment.3. SMEs need an extension of credit guarantee systems with a special focus on Micro-LendingBusiness start-ups, the transfer of businesses and innovative fast growth SMEs also depended in the past very often on public support to get access to finance. 
Increasing risk awareness by banks and the stricter interpretation of State Aid Rules will further increase the need for public support.Already now, there are credit guarantee schemes in many countries on the limit of their capacity and too many investment projects cannot be realized by SMEs.Experiences show that Public money, spent for supporting credit guarantees systems, is a very efficient instrument and has a much higher multiplying effect than other instruments. One Euro form the European Investment Funds can stimulate 30 Euro investments in SMEs (for venture capital funds the relation is only 1:2).Therefore, UEAPME expects the new European Commission and the new European Parliament to support:•The extension of funds for national credit guarantees schemes in the framework of the new Multi-Annual Programmed for Enterprises;•The development of new instruments for securitizations of SME portfolios;•The recognition of existing and well functioning credit guarantees schemes as collateral;•More flexibility within the European Instruments, because of national differences in the situation of SME finance;•The development of credit guarantees schemes in the new Member States;•The development of an SBIC-like scheme in the Member States to close the equity gap (0.2 – 2.5 Mio Euro, according to the expert meeting on PACE on April 27 in Luxemburg).•the development of a financial support scheme to encourage the internalizations of SMEs (currently there is no scheme available at EU level:termination of JOP, fading out of JEV).4. SMEs need company and income taxation systems, which strengthen their capacity for self-financingMany EU Member States have company and income taxation systems with negative incentives to build-up capital within the company by re-investing their profits. This is especially true for companies, which have to pay income taxes. Already in the past tax-regimes was one of the reasons for the higher dependence of Europe’s SMEs on bank lending. In future, the result of rating will also depend on the amount of capital in the company; the high dependence on lending will influence the access to lending. This is a vicious cycle, which has to be broken.Even though company and income taxation falls under the competence of Member States, UEAPME asks the new European Commission and the new European Parliament to publicly support tax-reforms, which will strengthen the capacity of Crafts, Trades and SME for self-financing. Thereby, a special focus on non-corporate companies is needed.5. Risk Capital – equity financingExternal equity financing does not have a real tradition in the SME sector. On the one hand, small enterprises and family business in general have traditionally not been very open towards external equity financing and are not used to informing transparently about their business.On the other hand, many investors of venture capital and similar forms of equity finance are very reluctant regarding investing their funds in smaller companies, which is more costly than investing bigger amounts in larger companies. Furthermore it is much more difficult to set out of such investments in smaller companies.Even though equity financing will never become the main source of financing for SMEs, it is an important instrument for highly innovative start-ups and fast growing companies and it has therefore to be further developed. 
UEAPME sees three pillars for such an approach where policy support is needed:

Availability of venture capital
•The Member States should review their taxation systems in order to create incentives to invest private money in all forms of venture capital.
•Guarantee instruments for equity financing should be further developed.

Improve the conditions for investing venture capital into SMEs
•The development of secondary markets for venture capital investments in SMEs should be supported.
•Accounting standards for SMEs should be revised in order to ease the transparent exchange of information between investor and owner-manager.

Owner-managers must become more aware of the need for transparency towards investors
•SME owners will have to realise that in future, access to external finance (venture capital or lending) will depend much more on a transparent and open exchange of information about the situation and the prospects of their companies.
•In order to fulfil the new needs for transparency, SMEs will have to use new information instruments (business plans, financial reporting, etc.) and new management instruments (risk management, financial management, etc.).

Translation of the foreign document. Title: The Future of SME Finance. Background: the environment for SME finance has changed. Future economic recovery will depend on whether crafts, trades and SMEs can exploit their potential for growth and employment creation.
Computer Networks: Chinese-English Translation of Foreign Literature

A computer network, often simply called a network, is a collection of computers and devices interconnected by communication channels that facilitate communication among users and allow them to share resources. Networks may be classified according to a wide variety of characteristics. A computer network allows resources and information to be shared among interconnected devices.

1. History. Early computer network communication began in the late 1950s and included the military radar system (the Semi-Automatic Ground Environment) and the related commercial airline reservation system (the Semi-Automatic Business Research Environment). In 1957 Russia launched an artificial satellite into space. Eighteen months later, the United States established the Advanced Research Projects Agency (ARPA) and launched its first artificial satellite. This information was then shared using another computer on the ARPAnet. The person responsible for all of this was the American researcher Dr. Licklider. The ARPAnet went online in 1969 and later came to be known as the Internet. In the 1960s, the Advanced Research Projects Agency (ARPA) began funding and designing the ARPAnet for the United States Department of Defense. The development of the Internet began in 1969, building on design and development work started in the 1960s, and the ARPAnet thus evolved into the modern Internet.

2. Purpose. Computer networks can be used for a variety of purposes. Facilitating communication: using a network, people can communicate easily via e-mail, instant messaging, chat rooms, telephone, video telephone calls and video conferencing. Sharing hardware: in a networked environment, each computer can access and use hardware resources on the network, for example printing a document on a shared network printer. Sharing files, data and information: in a network environment, authorized users may access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks. Sharing software: users can run application programs on remote computers. Information preservation. Security.

3. Network classification. The list below shows the categories used for classifying networks. 3.1 Connection method. Computer networks can be classified according to the hardware and software technology used to connect the individual devices, such as optical fiber, LAN, wireless LAN, home networking devices, cable communication and G.hn (the wired home networking standard). Ethernet is defined by the IEEE 802 standards and uses various media to enable communication between devices on a network. Frequently deployed devices include network hubs, switches, bridges and routers. Wireless LAN technology uses wireless devices to make connections.
Biological Sciences Thesis: Chinese-English Translation of Foreign Literature
生物科学论文中英文资料外文翻译文献Carotenoid Biosynthetic Pathway in the Citrus Genus: Number of Copies and Phylogenetic Diversity of Seven GeneThe first objective of this paper was to analyze the potential role of allelic variability of carotenoid biosynthetic genes in the interspecifi diversity in carotenoid composition of Citrus juices. The second objective was to determine the number of copies for each of these genes. Seven carotenoid biosynthetic genes were analyzed using restriction fragment length polymorphism (RFLP) and simple sequence repeats (SSR) markers. RFLP analyses were performed with the genomic DNA obtained from 25 Citrus genotypes using several restriction enzymes. cDNA fragments of Psy, Pds, Zds, Lcyb, Lcy-e, Hy-b, and Zep genes labeled with [R-32P]dCTP were used as probes. For SSR analyses, two primer pairs amplifying two SSR sequences identified from expressed sequence tags (ESTs) of Lcy-b and Hy-b genes were designed. The number of copies of the seven genes ranged from one for Lcy-b to three for Zds. The genetic diversity revealed by RFLP and SSR profiles was in agreement with the genetic diversity obtained from neutral molecμLar markers. Genetic interpretation of RFLP and SSR profiles of four genes (Psy1, Pds1, Lcy-b, and Lcy-e1) enabled us to make inferences on the phylogenetic origin of alleles for the major commercial citrus species. Moreover, the resμLts of our analyses suggest that the allelic diversity observed at the locus of both of lycopene cyclase genes, Lcy-b and Lcy-e1, is associated with interspecific diversity in carotenoid accumμLation in Citrus. The interspecific differences in carotenoid contents previously reported to be associated withother key steps catalyzed by PSY, HY-b, and ZEP were not linked to specific alleles at the corresponding loci.KEYWORDS: Citrus; carotenoids; biosynthetic genes; allelic variability; phylogeny INTRODUCTIONCarotenoids are pigments common to all photosynthetic organisms. In pigment-protein complexes, they act as light sensors for photosynthesis but also prevent photo-oxidat ion induced by too strong light intensities. In horticμLtural crops, they play a major role in fruit, root, or tuber coloration and in nutritional quality. Indeed some of these micronutrients are precursors of vitamin A, an essential component of human and animal diets. Carotenoids may also play a role in chronic disease prevention (such as certain cancers), probably due to their antioxidant properties. The carotenoid biosynthetic pathway is now well established. Carotenoids are synthesized in plastids by nuclear-encoded enzymes. The immediate precursor of carotenoids (and also of gibberellins, plastoquinone, chlorophylls,phylloquinones, and tocopherols) is geranylgeranyl diphosphate (GGPP). In light-grown plants, GGPP is mainly derivedcarotenoid, 15-cis-phytoene. Phytoene undergoes four desaturation reactions catalyzed by two enzymes, phytoene desaturase (PDS) and β-carotene desaturase (ZDS), which convert phytoene into the red-colored poly-cis-lycopene. Recently, Isaacson et al. and Park et al. isolated from tomato and Arabidopsis thaliana, respectively, the genes that encode the carotenoid isomerase (CRTISO) which, in turn, catalyzes the isomerization of poly-cis-carotenoids into all-trans-carotenoids. CRTISO acts on prolycopene to form all-trans lycopene, which undergoes cyclization reactions. Cyclization of lycopene is abranching point: one branch leads to β-carotene (β, β-carotene) and the other toα-carotene (β, ε-carotene). 
Lycopene β-cyclase (LCY-b) then converts lycopene intoβ-carotene in two steps, whereas the formation of α-carotene requires the action of two enzymes, lycopene ε- cyclase (LCY-e) and lycopene β-cyclase (LCY-b). α- carotene is converted into lutein by hydroxylations catalyzed by ε-carotene hydroxylase (HY-e) andβ-carotene hydroxylase (HY-b). Other xanthophylls are produced fromβ-carotene with hydroxylation reactions catalyzed by HY-b and epoxydation catalyzed by zeaxanthin epoxidase (ZEP). Most of the carotenoid biosynthetic genes have been cloned and sequenced in Citrus varieties . However, our knowledge of the complex regμLation of carotenoid biosynthesis in Citrus fruit is still limited. We need further information on the number of copies of these genes and on their allelic diversity in Citrus because these can influence carotenoid composition within the Citrus genus.Citrus fruit are among the richest sources of carotenoids. The fruit generally display a complex carotenoid structure, and 115 different carotenoids have been identified in Citrus fruit. The carotenoid richness of Citrus flesh depends on environmental conditions, particμLarly on growing conditions and on geogr aphical origin . However the main factor influencing variability of caro tenoid quality in juice has been shown to be genetic diversity. Kato et al. showed that mandarin and orange juices accumμLated high levels of β-cryptoxanthin and violaxanthin, respectively, whereas mature lemon accumμLated extremely low levels of carotenoids. Goodner et al. demonstrated that mandarins, oranges, and their hybrids coμLd be clearly distinguished by theirβ-cryptoxanthin contents. Juices of red grapefruit contained two major carotenoids: lycopene and β-carotene. More recently, we conducted a broad study on the organization of the variability of carotenoid contents in different cμLtivated Citrus species in relation with the biosynthetic pathway . Qualitative analysis of presence or absence of the different compounds revealed three main clusters: (1) mandarins, sweet oranges, and sour oranges;(2) citrons, lemons, and limes; (3) pummelos and grapefruit. Our study also enabled identification of key steps in the diversification of the carotenoid profile. Synthesis of phytoene appeared as a limiti ng step for acid Citrus, while formation of β-carotene and R-carotene from lycopene were dramatically limited in cluster 3 (pummelos and grapefruit). Only varieties in cluster 1 were able to produce violaxanthin. In the same study , we concluded that there was a very strong correlation between the classification of Citrus species based on the presence or absence of carotenoids (below,this classification is also referred to as the organization of carotenoid diversity) and genetic diversity evaluated with bi ochemical or molecμLar markers such as isozymes or randomLy amplified polymorphic DNA (RAPD). We also concluded that, at the interspecific level, the organization of the diversity of carotenoid composition was linked to the global evolution process of cμLt ivated Citrus rather than to more recent mutation events or human selection processes. Indeed, at interspecific level, a correlation between phenotypic variability and genetic diversity is common and is generally associated with generalized gametic is common and is generally associated with generalized gametic disequilibrium resμLting from the history of cμLtivated Citrus. 
Thus from numerical taxonomy based on morphologicaltraits or from analysis of molecμLar markers , all authors agreed on the existence o f three basic taxa (C. reticμLata, mandarins; C. medica, citrons; and C. maxima, pummelos) whose differentiation was the resμLt of allopatric evolution. All other cμLtivated Citrus specie s (C. sinensis, sweet oranges; C. aurantium, sour oranges;C. paradi si, grapefruit; and C. limon, lemons) resμLted from hybridization events within this basic pool except for C. aurantifolia, which may be a hybrid between C. medica and C. micrantha .Our p revious resμLts and data on Citrus evolution lead us to propose the hypothesis that the allelic variability supporting the organization of carotenoid diversity at interspecific level preceded events that resμLted in the creation of secondary species. Such molecμLar variability may have two different effects: on the one hand, non-silent substitutions in coding region affect the specific activity of corresponding enzymes of the biosynthetic pathway, and on the other hand, variations in untranslated regions affect transcriptional or post-transcriptional mechanisms.There is no available data on the allelic diversity of Citrus genes of the carotenoid biosynthetic pathway. The objective of this paper was to test the hypothesis that allelic variability of these genes partially determines phenotypic variability at the interspecific level. For this purpose, we analyzed the RFLPs around seven genes of the biosynthetic pathway of carotenoids (Psy, Pds, Zds, Lcy-b, Lcy-e, Hy-b, Zep) and the polymorphism of two SSR sequences found in Lcy-b and Hy-b genes in a representative set of varieties of the Citrus genus already analyzed for carotenoid constitution. Our study aimed to answer the following questions: (a) are those genes mono- or mμLtilocus, (b) is the polymorphism revealed by RFLP and SSR markers inagreement with the general histor y of cμLtivated Citrus thus permitting inferences about the phylogenetic origin of genes of the secondary species, and (c) is this polymorphism associated with phenotypic (carotenoid compound) variations.RESΜLTS AND DISCUSSIONGlobal Diversity of the Genotype Sample Observed by RFLP Analysis. RFLP analyses were performed using probes defined from expressed sequences of seven major genes of the carotenoid biosynthetic pathway . One or two restriction enzymes were used for each gene. None of these enzymes cut the cDNA probe sequence except HindIII for the Lcy-e gene. Intronic sequences and restriction sites on genomic sequences werescreened with PCR amplification using genomic DNA as template and with digestion of PCR products. The resμLts indicated the absence of an intronic sequence for Psy and Lcy-b fragments. The absence of intron in these two fragments was checked by cloning and sequencing corresponding genomic sequences (data not shown). Conversely, we found introns in Pds, Zds, Hy-b, Zep, and Lcy-e genomic sequences corresponding to RFLP probes. EcoRV did not cut the genomic sequences of Pds, Zds, Hy-b, Zep, and Lcy-e. In the same way, no BamHI restriction site was found in the genomic sequences of Pds, Zds, and Hy-b. Data relative to the diversity observed for the different genes are presented in Table 4. A total of 58 fragments were identified, six of them being monomorphic (present in all individuals). In the limited sample of the three basic taxa, only eight bands out of 58 coμLd not be observed. In the basic taxa, the mean number of bands per genotype observed was 24.7, 24.7, and 17 for C. 
reticμLata, C. maxima, and C. medica, respectively. It varies from28 (C. limettioides) to 36 (C. aurantium) for the secondary species. The mean number of RFLP bands per individual was lower for basic taxa than for the group of secondary species. This resμLt indicates that secondary species are much more heterozygous than the basic ones for these genes, which is logical if we assume that the secondary species arise from hybridizations between the three basic taxa. Moreover C. medica appears to be the least heterozygous taxon for RFLP around the genes of the carotenoid biosynthetic pathway, as already shown with isozymes, RAPD, and SSR markers.The two lemons were close to the acid Citrus cluster and the three sour oranges close to the mandarins/sweet oranges cluster. This organization of genetic diversity based on the RFLP profiles obtained with seven genes of the carotenoid pathway is very similar to that previously obtained with neutral molecμLar markers such as genomic SSR as well as the organization obtained with qualitative carotenoid compositions. All these resμLts suggest that the observed RFLP and SSR fragments are good phylogenetic markers. It seems consistent with our basic hypothesis that major differentiation in the genes involved in the carotenoid biosynthetic pathway preceded the creation of the secondary hybrid species and thus that the allelic structure of these hybrid species can be reconstructed from alleles observed in the three basic taxa.Gene by Gene Analysis: The Psy Gene. For the Psy probe combined with EcoRV or BamHI restriction enzymes, five bands were identified for the two enzymes, and two to three bands were observed for each genotype. One of these bands was present in all individuals. There was no restriction site in the probe sequence. These resμLts lead us to believe that Psy is present at two loci,one where no polymorphism was found with the restriction enzymes used, and one that displayed polymorphism. The number of different profiles observed was six and four with EcoRV and BamHI, respectively, for a total of 10 different profiles among the 25 individuals .Two Psy genes have also been found in tomato, tobacco, maize, and rice . Conversely, only one Psy gene has been found in Arabidopsis thaliana and in pepper (Capsicum annuum), which also accumμLates carotenoids in fruit. According to Bartley and Scolnik, Psy1 was expressed in tomato fruit chromoplasts, while Psy2 was specific to leaf tissue. In the same way, in Poaceae (maize, rice), Gallagher et al. found that Psy gene was duplicated and that Psy1 and notPsy2 transcripts in endosperm correlated with endosperm carotenoid accumμLation. These resμLts underline the role of gene duplication and the importance of tissue-specific phytoene synthase in the regμLation of carotenoid accumμLation.All the polymorphic bands were present in the sample of the basic taxon genomes. Assuming the hypothesis that all these bands describe the polymorphism at the same locus for the Psy gene, we can conclude that we found allelic differentiation between the three basic taxa with three alleles for C. reticμLata, four for C. maxima, and one for C. medica.The alleles observed for the basic taxa then enabled us to determine the genotypes of all the other species. The presumed genotypes for the Psy polymorphic locus are given in Table 7. Sweet oranges and grapefruit were heterozygous with one mandarin and one pummelo allele. 
Sour oranges were heterozygous; they shared the same mandarin allele with sweet oranges but had a different pummelo allele. Clementine was heterozygous with two mandarin alleles; one shared with sweetoranges and one with “Willow leaf” mandarin. “Meyer” lemon was heterozygous, with the mandarin allele also found in sweet oranges, and the citron allele. “Eureka”lemon was also heterozygous with the same pummelo allele as sour oranges and the citron allele. The other acid Citrus were homozygous for the citron allele.The Pds Gen. For the Pds probe combined with EcoRV, six different fragments were observed. One was common to all individuals. The number of fragments per individual was two or three. ResμLts for Pds led us to believe that this gene is present at two loci, one where no polymorphism was found with EcoRV restriction, and one displaying polymorphism. Conversely, studies on Arabidopsis, tomato, maize, and rice showed that Pds was a single copy gene. However, a previous study on Citrus suggests that Pds is present as a low-copy gene family in the Citrus genome, which is in agreement with our findings.The Zds Gene. The Zds profiles were complex. Nine and five fragments were observed with EcoRV and BamHI restriction, respectively. For both enzymes, one fragment was common to all individuals. The number of fragments per individual ranged from two to six for EcoRV and three to five for BamHI. There was no restriction site in the probe sequence. It can be assumed that several copies (at least three) of the Zds gene are present in the Citrus genome with polymorphism for at least two of them. In Arabidopsis, maize, and rice, like Pds, Zds was a single-copy gene .In these conditions and in the absence of analysis of controlled progenies, we are unable to conduct genetic analysis of profiles. However it appears that some bands differentiated the basic taxa: one for mandarins, one for pummelos, and one for citrons with EcoRV restriction and one for pummelos and onefor citrons with BamHI restriction. Two bands out of the nine obtained with EcoRV were not observed in the samples of basic taxa. One was rare and only observed in “Rangpur” lime. The other was found in sour oranges, “V olkamer” lemon,and “Palestine sweet” lime suggesting a common ancestor for these three genotypes.This is in agreement with the assumption of Nicolosi et al. that “V olkamer” lemon resμLts from a complex hybrid combination with C. aurantium as one parent. It will be necessary to extend the analysis of the basic taxa to conclude whether these specific bands are present in the diversity of these taxa or resμLt from mutations after the formation of the secondary species.The Lcy-b Gene with RFLP Analysis.After restriction with EcoRV and hybridization with the Lcy-b probe, we obtained simple profiles with a total of four fragments. One to two fragments were observed for each individual, and seven profiles were differentiated among the 25 genotypes. These resμLts provide evidence that Lcy-b is present at a single locus in the haploid Citrus genome. Two lycopene β-cyclases encoded by two genes have been identified in tomato. The B gene encoded a novel type of lycopene β-cyclase whose sequence was similar to capsanthin-capsorubin synthase. The B gene expressed at a high level in βmutants was responsible for strong accumμLation ofβ-carotene in fruit, while in wild-type tomatoes, B was expressed at a low level.The Lcy-b Gene with SSR Analysis. Four bands were detected at locus 1210 (Lcy-b gene). 
One or two bands were detected per variety, confirming that this gene is present at a single locus. Six different profiles were observed among the 25 genotypes. As with the RFLP analysis, no intra-taxon molecular polymorphism was found within C. paradisi, C. sinensis, and C. aurantium. Taken together, the information obtained from the RFLP and SSR analyses enabled us to identify a complete differentiation among the three basic taxon samples. Each of these taxa displayed two alleles for the analyzed sample. An additional allele was identified for "Mexican" lime. The profiles of all secondary species can be reconstructed from these alleles, and the deduced genetic structures are as follows. Sweet oranges and clementine were heterozygous with one mandarin and one pummelo allele. Sour oranges were also heterozygous, sharing the same mandarin allele as sweet oranges but with another pummelo allele. Grapefruit were heterozygous with two pummelo alleles. All the acid secondary species were heterozygous, having one allele from citrons and the other one from mandarins, except for "Mexican" lime, which had a specific allele.

Analysis of the Copy Number and Genetic Diversity of Seven Genes in the Carotenoid Biosynthetic Pathway of the Citrus Genus. Abstract: The first objective of this paper is to analyze the potential role of allelic variability of carotenoid biosynthetic genes in the interspecific differences in carotenoid composition within Citrus; the second objective is to determine the copy number of these genes.
Intelligent Control System Graduation Thesis: Chinese-English Translation of Foreign Literature
智能控制系统中英文资料对照外文翻译文献附录一:外文摘要The development and application of Intelligence controlsystemModern electronic products change rapidly is increasingly profound impact on people's lives, to people's life and working way to bring more convenience to our daily lives, all aspects of electronic products in the shadow, single chip as one of the most important applications, in many ways it has the inestimable role. Intelligent control is a single chip, intelligent control of applications and prospects are very broad, the use of modern technology tools to develop an intelligent, relatively complete functional software to achieve intelligent control system has become an imminent task. Especially in today with MCU based intelligent control technology in the era, to establish their own practical control system has a far-reaching significance so well on the subject later more fully understanding of SCM are of great help to.The so-called intelligent monitoring technology is that:" the automatic analysis and processing of the information of the monitored device". If the monitored object as one's field of vision, and intelligent monitoring equipment can be regarded as the human brain. Intelligent monitoring with the aid of computer data processing capacity of the powerful, to get information in the mass data to carry on the analysis, some filtering of irrelevant information, only provide some key information. Intelligent control to digital, intelligent basis, timely detection system in the abnormal condition, and can be the fastest and best way to sound the alarm and provide usefulinformation, which can more effectively assist the security personnel to deal with the crisis, and minimize the damage and loss, it has great practical significance, some risk homework, or artificial unable to complete the operation, can be used to realize intelligent device, which solves a lot of artificial can not solve the problem, I think, with the development of the society, intelligent load in all aspects of social life play an important reuse.Single chip microcomputer as the core of control and monitoring systems, the system structure, design thought, design method and the traditional control system has essential distinction. 
In the traditional control or monitoring system, control or monitoring parameters of circuit, through the mechanical device directly to the monitored parameters to regulate and control, in the single-chip microcomputer as the core of the control system, the control parameters and controlled parameters are not directly change, but the control parameter is transformed into a digital signal input to the microcontroller, the microcontroller according to its output signal to control the controlled object, as intelligent load monitoring test, is the use of single-chip I / O port output signal of relay control, then the load to control or monitor, thus similar to any one single chip control system structure, often simplified to input part, an output part and an electronic control unit ( ECU )Intelligent monitoring system design principle function as follows: the power supply module is 0~220V AC voltage into a0 ~ 5V DC low voltage, as each module to provide normal working voltage, another set of ADC module work limit voltage of 5V, if the input voltage is greater than 5V, it can not work normally ( but the design is provided for the load voltage in the 0~ 5V, so it will not be considered ), at the same time transformer on load current is sampled on the accused, the load current into a voltage signal, and then through the current - voltage conversion, and passes through the bridge rectification into stable voltage value, will realize the load the current value is converted to a single chip can handle0 ~ 5V voltage value, then the D2diode cutoff, power supply module only plays the role of power supply. Signal to the analog-to-digital conversion module, through quantization, coding, the analog voltage value into8bits of the digital voltage value, repeatedly to the analog voltage16AD conversion, and the16the digital voltage value and, to calculate the average value, the average value through a data bus to send AT89C51P0, accepted AT89C51 read, AT89C51will read the digital signal and software setting load normal working voltage reference range [VMIN, VMAX] compared with the reference voltage range, if not consistent, then the P1.0 output low level, close the relay, cut off the load on the fault source, to stop its sampling, while P1.1 output high level fault light, i.e., P1.3 output low level, namely normal lights. The relay is disconnected after about 2minutes, theAT89C51P1.0outputs high level ( software design), automatic closing relay, then to load the current regular sampling, AD conversion, to accept the AT89C51read, comparison, if consistent, then the P1.1 output low level, namely fault lights out, while P1.3 output high level, i.e. 
normal lamp ( software set ); if you are still inconsistent, then the need to manually switch S1toss to" repair" the slip, disconnect the relay control, load adjusting the resistance value is: the load detection and repair, and then close the S1repeatedly to the load current sampling, until the normal lamp bright, repeated this process, constantly on the load testing to ensure the load problems timely repair, make it work.In the intelligent load monitoring system, using the monolithic integrated circuit to the load ( voltage too high or too small ) intelligent detection and control, is achieved by controlling the relay and transformer sampling to achieve, in fact direct control of single-chip is the working state of the relay and the alarm circuit working state, the system should achieve technical features of this thesis are as follows (1) according to the load current changes to control relays, the control parameter is the load current, is the control parameter is the relay switch on-off and led the state; (2) the set current reference voltage range ( load normal working voltage range ), by AT89C51 chip the design of the software section, provide a basis for comparison; (3) the use of single-chip microcomputer to control the light-emitting diode to display the current state of change ( normal / fault / repair ); specific summary: Transformer on load current is sampled, a current / voltage converter, filter, regulator, through the analog-digital conversion, to accept the AT89C51chip to read, AT89C51 to read data is compared with the reference voltage, if normal, the normal light, the output port P.0high level, the relay is closed, is provided to the load voltage fault light; otherwise, P1.0 output low level, The disconnecting relay to disconnect the load, the voltage on the sampling, stop. Two minutes after closing relay, timing sampling.System through the expansion of improved, can be used for temperature alarm circuit, alarm circuit, traffic monitoring, can also be used to monitor a system works, in the intelligent high-speed development today, the use of modern technology tools, the development of an intelligent, function relatively complete software to realize intelligent control system, has become an imminent task, establish their own practical control system has a far-reaching significance. Micro controller in the industry design and application, no industry like intelligent automation and control field develop so fast. Since China and the Asian region the main manufacturing plant intelligence to improve the degree of automation, new technology to improve efficiency, have important influence on the product cost. Although the centralized control can be improved in any particular manufacturing process of the overall visual, but not for those response and processingdelay caused by fault of some key application.Intelligent control technology as computer technology is an important technology, widely used in industrial control, intelligent control, instrument, household appliances, electronic toys and other fields, it has small, multiple functions, low price, convenient use, the advantages of a flexible system design. 
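The sampling, comparison and relay logic described above can be summarized in a small simulation. This is not the firmware of the described AT89C51 design (which would be written in C or assembly); the sample count of 16, the reference window [VMIN, VMAX] and the pin roles (P1.0 relay, P1.1 fault LED, P1.3 normal LED) are taken from the text, while the actual values and timings below are invented for illustration.

```python
# Small Python simulation of the monitoring loop described above, purely to
# illustrate the control flow: average 16 ADC samples, compare against the
# reference window [VMIN, VMAX], and drive the relay and indicator LEDs.
import random
import time

VMIN, VMAX = 2.0, 4.0        # assumed normal working window in volts

def read_adc_average(samples=16):
    """Stand-in for 16 successive A/D conversions of the sampled load voltage."""
    return sum(random.uniform(0.0, 5.0) for _ in range(samples)) / samples

def control_step(state):
    voltage = read_adc_average()
    if VMIN <= voltage <= VMAX:
        state.update(relay_closed=True, fault_led=False, normal_led=True)
    else:
        # Out of range: open the relay to isolate the load and signal a fault.
        state.update(relay_closed=False, fault_led=True, normal_led=False)
    return voltage

if __name__ == "__main__":
    state = {"relay_closed": True, "fault_led": False, "normal_led": True}
    for _ in range(5):
        v = control_step(state)
        print(f"avg={v:.2f} V  relay={'closed' if state['relay_closed'] else 'open'}"
              f"  fault_led={state['fault_led']}")
        if not state["relay_closed"]:
            # The real device waits about 2 minutes before re-closing the relay
            # and re-sampling; the delay is shortened here for the simulation.
            time.sleep(0.1)
            state["relay_closed"] = True
```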
Because of these advantages, intelligent control technology is favoured by more and more engineering personnel, and this graduation project is therefore of great significance. I have a strong interest in designing such things; the project has taught me a great deal and has taken me from knowing very little to having a clear train of thought. Since I am designing something myself, I have to think about how to design it, which is very important, and I believe this work will give me many valuable things.

Chinese translation: The Development and Application of Intelligent Control Systems. In modern society, electronic products change with each passing day and are influencing people's lives ever more profoundly, bringing ever greater convenience to the way people live and work. Electronic products can be found in every aspect of our daily lives, and the single-chip microcomputer, as one of their most important applications, plays an inestimable role in many areas.
Internet: Chinese-English Translation of Foreign Literature (Review)

The historical origin of the Internet: ARPAnet. The Internet was developed as a project by the United States government. The purpose of the project was to establish long-distance point-to-point communication for handling national military emergencies such as nuclear war. The project was named ARPAnet, and it was the predecessor of the Internet. The project's main intended application was military communication, and the engineers responsible for ARPAnet did not imagine at the time that it would become the "Internet". By definition, an "internet" is a network formed by connecting four or more computers. ARPAnet achieved networking through a protocol called TCP/IP. The most fundamental working principle of this protocol is that if information fails to be sent along one path in the network, it finds another path along which to send it; it is rather like establishing a common language so that one computer can "talk" to other computers, regardless of whether they are PCs or Macintoshes.

By the 1980s, ARPAnet had already begun to turn into the better-known Internet of today, with 200 hosts online. The Department of Defense was very satisfied with ARPAnet's results and decided to develop it fully into a service network that could connect many military hosts and share resources. By 1984 it already had more than 1,000 hosts online. In 1986 ARPAnet was shut down, but only the organization that had built it was closed; the network itself continued to exist among more than 1,000 hosts. ARPAnet was shut down because the connection through NSF had failed. NSF connected five supercomputer centers across the country to ARPAnet. With the establishment of NSF, new high-speed transmission media were successfully used, and in 1988 users could access the network over 56k telephone lines. At that time there were 28,174 hosts connected to the Internet. By 1989 there were 80,000 hosts connected to the Internet, and by the end of 1989 there were 290,000. In addition, other networks were established and brought in users in astonishing numbers. It was formally established in 1992.

The present situation: the Internet. Today, the Internet has become one of the most advanced technologies in human history. Everyone wants to "get online" to experience the wealth of information on the Internet.
DCS Distributed Control System: Chinese-English Translation of Foreign Literature (Review)

Chinese text: DCS. DCS is the English abbreviation of Distributed Control System, which in China's automatic-control industry is also called a "centralized-decentralized" control system. The so-called distributed control system (in some literature also called a collect-and-distribute system) is a new type of computer control system defined in contrast to centralized control systems, and it developed and evolved on the basis of centralized control systems. It is a multi-level computer system consisting of a process control level and a process supervision level linked by a communication network. It integrates the "4C" technologies of computer, communication, display and control, and its basic ideas are decentralized control, centralized operation, hierarchical management, flexible hardware configuration and convenient software configuration.

In terms of system functions, there is little difference between a DCS and a centralized control system, but the ways in which those functions are realized are completely different. First, the backbone of a DCS, the system network, is the foundation and core of the DCS. Because the network plays a decisive role in the real-time performance, reliability and expandability of the whole system, every manufacturer has designed this part with great care. The system network of a DCS must satisfy real-time requirements, that is, information transfer must be completed within a definite time limit. The "definite" time limit here means that, no matter what the circumstances, information transfer can be completed within this limit, and the limit itself is determined by the real-time requirements of the controlled process. Therefore, the measure of system network performance is not the network speed, i.e. the commonly quoted bits per second (bps), but the real-time performance of the system network, i.e. how quickly the transfer of the required information can be guaranteed to complete. The system network must also be highly reliable; network communication must never be interrupted under any circumstances, so most manufacturers' DCS products adopt dual-bus, ring or dual-star network topologies. To satisfy expandability requirements, the maximum number of nodes that can be attached to the system network should be several times the number of nodes actually used. In this way, new nodes can be added at any time, and the network can run under a relatively light communication load, which ensures the real-time performance and reliability of the system. During actual operation, individual nodes may join or leave the network at any time, especially operator stations, so network reconfiguration occurs frequently, and such operations must not affect normal system operation in any way; the system network must therefore have a strong capability for online network reconfiguration.
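The point made above, that the relevant metric is whether every transfer finishes within its deadline rather than the raw bit rate, can be shown with a toy check; the deadline and the timing values below are invented for the example.

```python
# Toy illustration of the real-time criterion described above: what matters for
# a DCS system network is not raw throughput (bps) but whether every transfer
# completes within its deadline under all circumstances.
DEADLINE_MS = 50.0   # assumed worst-case delivery deadline for control traffic

def meets_realtime_requirement(delivery_times_ms):
    """True only if every observed delivery finished within the deadline."""
    return all(t <= DEADLINE_MS for t in delivery_times_ms)

if __name__ == "__main__":
    fast_on_average_but_one_late = [3.1, 2.8, 4.0, 120.5, 3.3]  # one missed deadline
    slower_but_bounded = [30.0, 28.4, 33.2, 31.7, 29.9]         # always on time
    print(meets_realtime_requirement(fast_on_average_but_one_late))  # False
    print(meets_realtime_requirement(slower_but_bounded))            # True
```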
A Selection of Free Sample Foreign-Literature Translations
English original 1: Professional C#, Third Edition. Simon Robinson, Christian Nagel, Jay Glynn, Morgan Skinner, Karli Watson, Bill Evjen. Wiley Publishing, Inc., 2006.

Where C# Fits In

In one sense, C# can be seen as being the same thing to programming languages as .NET is to the Windows environment. Just as Microsoft has been adding more and more features to Windows and the Windows API over the past decade, Visual Basic and C++ have undergone expansion. Although Visual Basic and C++ have ended up as hugely powerful languages as a result of this, both languages also suffer from problems due to the legacies of how they have evolved.

In the case of Visual Basic 6 and earlier, the main strength of the language was the fact that it was simple to understand and made many programming tasks easy, largely hiding the details of the Windows API and the COM component infrastructure from the developer. The downside to this was that Visual Basic was never truly object-oriented, so that large applications quickly become disorganized and hard to maintain. As well as this, because Visual Basic's syntax was inherited from early versions of BASIC (which, in turn, was designed to be intuitively simple for beginning programmers to understand, rather than to write large commercial applications), it didn't really lend itself to well-structured or object-oriented programs.

C++, on the other hand, has its roots in the ANSI C++ language definition. It isn't completely ANSI compliant for the simple reason that Microsoft first wrote its C++ compiler before the ANSI definition had become official, but it comes close. Unfortunately, this has led to two problems. First, ANSI C++ has its roots in a decade-old state of technology, and this shows up in a lack of support for modern concepts (such as Unicode strings and generating XML documentation), and in some archaic syntax structures designed for the compilers of yesteryear (such as the separation of declaration from definition of member functions). Second, Microsoft has been simultaneously trying to evolve C++ into a language that is designed for high-performance tasks on Windows, and in order to achieve that they've been forced to add a huge number of Microsoft-specific keywords as well as various libraries to the language. The result is that on Windows, the language has become a complete mess. Just ask C++ developers how many definitions for a string they can think of: char*, LPTSTR, string, CString (MFC version), CString (WTL version), wchar_t*, OLECHAR*, and so on.

Now .NET is a completely new environment that is going to involve new extensions to both languages. Microsoft has gotten around this by adding yet more Microsoft-specific keywords to C++, and by completely revamping Visual Basic into Visual Basic .NET, a language that retains some of the basic VB syntax but that is so different in design that we can consider it to be, for all practical purposes, a new language. It's in this context that Microsoft has decided to give developers an alternative: a language designed specifically for .NET and designed with a clean slate. Visual C# .NET is the result. Officially, Microsoft describes C# as a "simple, modern, object-oriented, and type-safe programming language derived from C and C++." Most independent observers would probably change that to "derived from C, C++, and Java." Such descriptions are technically accurate but do little to convey the beauty or elegance of the language.
Syntactically, C# is very similar to both C++ and Java, to such an extent that many keywords are the same, and C# also shares the same block structure, with braces ({}) to mark blocks of code and semicolons to separate statements. The first impression of a piece of C# code is that it looks quite like C++ or Java code. Behind that initial similarity, however, C# is a lot easier to learn than C++, and of comparable difficulty to Java. Its design is more in tune with modern developer tools than both of those other languages, and it has been designed to give us, simultaneously, the ease of use of Visual Basic, and the high performance, low-level memory access of C++ if required. Some of the features of C# are:

- Full support for classes and object-oriented programming, including both interface and implementation inheritance, virtual functions, and operator overloading.
- A consistent and well-defined set of basic types.
- Built-in support for automatic generation of XML documentation.
- Automatic cleanup of dynamically allocated memory.
- The facility to mark classes or methods with user-defined attributes. This can be useful for documentation and can have some effects on compilation (for example, marking methods to be compiled only in debug builds).
- Full access to the .NET base class library, as well as easy access to the Windows API (if you really need it, which won't be all that often).
- Pointers and direct memory access are available if required, but the language has been designed in such a way that you can work without them in almost all cases.
- Support for properties and events in the style of Visual Basic.
- Just by changing the compiler options, you can compile either to an executable or to a library of components that can be called up by other code in the same way as ActiveX controls (COM components).
- C# can be used to write dynamic Web pages and XML Web services.

Most of the above statements, it should be pointed out, also apply to Visual Basic .NET and Managed C++. The fact that C# is designed from the start to work with .NET, however, means that its support for the features of .NET is both more complete and offered within the context of a more suitable syntax than for those other languages. While the C# language itself is very similar to Java, there are some improvements; in particular, Java is not designed to work with the .NET environment.

Before we leave the subject, we should point out a couple of limitations of C#. The one area the language is not designed for is time-critical or extremely high-performance code: the kind where you really are worried about whether a loop takes 1,000 or 1,050 machine cycles to run through, and you need to clean up your resources the millisecond they are no longer needed. C++ is likely to continue to reign supreme among low-level languages in this area. C# lacks certain key facilities needed for extremely high-performance applications, including the ability to specify inline functions and destructors that are guaranteed to run at particular points in the code. However, the proportion of applications that fall into this category is very low.

Chinese translation 1: "The Advantages of C#". To some extent, C# can be seen as the programming language that .NET provides for the Windows environment.
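To make the feature list above concrete, here is a short sketch that is not taken from the book; the Sensor and Program names and the values are invented for illustration. It shows XML documentation comments, a Visual Basic-style property and event, garbage-collected memory, and an attribute ([Conditional]) whose effect is the debug-build-only compilation mentioned in the list:

using System;
using System.Diagnostics;

/// <summary>Hypothetical sensor wrapper; XML comments like this feed C#'s automatic documentation generation.</summary>
public class Sensor
{
    /// <summary>Latest reading in degrees Celsius.</summary>
    public double Celsius { get; private set; }        // property, Visual Basic-style ease of use

    /// <summary>Raised whenever a new reading arrives.</summary>
    public event EventHandler<double> ReadingChanged;   // language-level event support

    public void Publish(double celsius)
    {
        Celsius = celsius;                              // no manual memory management; the garbage collector cleans up
        Trace(celsius);
        ReadingChanged?.Invoke(this, celsius);
    }

    [Conditional("DEBUG")]                              // calls to this method are omitted unless DEBUG is defined
    private static void Trace(double value) => Console.WriteLine("trace: " + value);
}

public static class Program
{
    public static void Main()
    {
        var sensor = new Sensor();
        sensor.ReadingChanged += (sender, c) => Console.WriteLine("event: " + c + " C");
        sensor.Publish(21.5);
    }
}

Compiling the same file with the library target (csc /target:library) instead of the default executable target illustrates the compile-to-executable-or-library point from the list.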
Automotive Electronic Systems: Chinese-English Foreign Literature Translation
(Document contains the English original and the Chinese translation.)

The Changing Automotive Environment: High-Temperature Electronics
R. Wayne Johnson, Fellow, IEEE, John L. Evans, Peter Jacobsen, James R. (Rick) Thompson, and Mark Christopher

Abstract: The underhood automotive environment is harsh, and current trends in the automotive electronics industry will be pushing the temperature envelope for electronic components. The desire to place engine control units on the engine and transmission control units either on or in the transmission will push the ambient temperature above 125 ℃. However, extreme cost pressures, increasing reliability demands (10 year/241,350 km), and the cost of field failures (recalls, liability, customer loyalty) will make the shift to higher temperatures occur incrementally. The coolest spots on the engine and in the transmission will be used. These large bodies do provide considerable heat sinking to reduce the temperature rise due to power dissipation in the control unit. The majority of near-term applications will be at 150 ℃ or less, and these will be worst-case temperatures, not nominal. The transition to X-by-wire technology, replacing mechanical and hydraulic systems with electromechanical systems, will require more power electronics. Integration of power transistors and smart power devices into the electromechanical actuator will require power devices to operate at 175 ℃ to 200 ℃. Hybrid electric vehicles and fuel cell vehicles will also drive the demand for higher temperature power electronics. In the case of hybrid electric and fuel cell vehicles, the high temperature will be due to power dissipation. The alternatives to high-temperature devices are thermal management systems, which add weight and cost. Finally, the number of sensors in vehicles is increasing as more electrically controlled systems are added. Many of these sensors must work in high-temperature environments. The harshest applications are exhaust gas sensors and cylinder pressure or combustion sensors. High-temperature electronics use in automotive systems will continue to grow, but it will be gradual as cost and reliability issues are addressed. This paper examines the motivation for higher temperature operation, the packaging limitations even at 125 ℃ with newer package styles, and concludes with a review of challenges at both the semiconductor device and packaging levels as temperatures push beyond 125 ℃.

Index Terms: Automotive, extreme-environment electronics.

I. INTRODUCTION
In 1977, the average automobile contained $110 worth of electronics [1]. By 2003 the electronics content was $1510 per vehicle and is expected to reach $2285 in 2013 [2]. The turning point in automotive electronics was government regulation in the 1970s mandating emissions control and fuel economy. The complex fuel control required could not be accomplished using traditional mechanical systems. These government regulations, coupled with increasing semiconductor computing power at decreasing cost, have led to an ever increasing array of automotive electronics. Automotive electronics can be divided into five major categories as shown in Table I.

TABLE I: Major automotive electronic systems.
TABLE II: Automotive temperature extremes (Delphi Delco Electronic Systems) [3].

The operating temperature of the electronics is a function of location, power dissipation by the electronics, and the thermal design. The automotive electronics industry defines high-temperature electronics as electronics operating above 125 ℃.
However, the actual temperature for various electronics mounting locations varies considerably. Delphi Delco Electronic Systems recently published the typical continuous maximum temperatures, as reproduced in Table II [3]. The corresponding underhood temperatures are shown in Fig. 1. The authors note that typical junction temperatures for integrated circuits are 10 ℃ to 15 ℃ higher than the ambient or baseplate temperature, while power devices can reach 25 ℃ higher. At-engine temperatures of 125 ℃ peak can be maintained by placing the electronics on the intake manifold.

Fig. 1: Engine compartment thermal profile (Delphi Delco Electronic Systems) [3].
TABLE III: The automotive environment (General Motors and Delphi Delco Electronic Systems) [4].
TABLE IV: Required operation temperature for automotive electronic systems (Toyota Motor Corp.) [5].
TABLE V: Mechatronic maximum temperature ranges (DaimlerChrysler, Eaton Corporation, and Auburn University) [6].
Fig. 2: Automotive temperatures and related systems (DaimlerChrysler) [8].

Fig. 3 shows an actual measured transmission temperature profile during normal and excessive driving conditions [8]. Power braking is a commonly used test condition where the brakes are applied and the engine is revved with the transmission in gear. A similar real-world situation would be applying throttle with the emergency brake applied. Note that when the temperature reached 135 ℃, the over-temperature light came on, and at the peak temperature of 145 ℃ the transmission was beginning to smell of burnt transmission fluid.

TABLE VI: 2002 International Technology Roadmap for Semiconductors ambient operating temperatures for harsh environments (automotive) [9].

The 2002 update to the International Technology Roadmap for Semiconductors (ITRS) did not reflect the need for higher operating temperatures for complex integrated circuits, but did recognize increasing temperature requirements for power and linear devices, as shown in Table VI [9]. Higher temperature power devices (diodes and transistors) will be used for the power section of power converters and motor drives for electromechanical actuators. Higher temperature linear devices will be used for analog control of power converters and for amplification and some signal processing of sensor outputs prior to transmission to the control units. It should be noted that at the maximum rated temperature for a power device, the power handling capability is derated to zero. Thus, a 200 ℃ rated power transistor in a 200 ℃ environment would have zero current-carrying capability; the actual operating environment must therefore be lower than the maximum rating.

In the 2003 edition of the ITRS, the maximum junction temperature identified for harsh-environment complex integrated circuits was raised to 150 ℃ through 2018 [9]. The ambient operating temperature extreme for harsh-environment complex integrated circuits was defined as −40 ℃ to 125 ℃ through 2009, increasing to −40 ℃ to 150 ℃ for 2010 and beyond. Power/linear devices were not separately listed in 2003.

The ITRS is consistent with the current automotive high-temperature limitations. Delphi Delco Electronic Systems offers two production engine controllers (one on ceramic and one on thin laminate) for direct mounting on the engine. These controllers are rated for operation over the temperature range of −40 ℃ to 125 ℃. The ECU must be mounted on the coolest spot on the engine.
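The two quantitative statements above (the junction temperature sitting above ambient, and the derating to zero at the maximum rated temperature) can be written out explicitly. The linear relations below are the usual first-order datasheet formulas, given here as an assumption rather than as equations from the paper; T_A and T_C are the ambient and case temperatures, P_D the dissipated power, and θ_JA and θ_JC the junction-to-ambient and junction-to-case thermal resistances:

T_J = T_A + \theta_{JA} \, P_D

P_{\mathrm{allow}}(T_C) = \frac{T_{J,\max} - T_C}{\theta_{JC}}

The first line is why junction temperatures run 10 ℃ to 15 ℃ (25 ℃ for power devices) above ambient for a given dissipation; the second shows the allowable dissipation falling linearly to zero as the case temperature approaches the maximum junction temperature, which is the derating-to-zero behavior described for a 200 ℃ rated transistor in a 200 ℃ environment.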
The packaging technology is consistent with 140 ℃ operation, but the ECU is limited by semiconductor and capacitor technologies to 125 ℃.

The future projections in the ITRS are not consistent with the desire to place controllers on-engine or in-transmission. It will not always be possible to use the coolest location for mounting control units. Delphi Delco Electronic Systems has developed an in-transmission controller for use in an ambient temperature of 140 ℃ [10] using ceramic substrate technology. DaimlerChrysler is also designing an in-transmission controller for use with a maximum ambient temperature of 150 ℃ (Figs. 4 and 5) [11].

II. MECHATRONICS
Mechatronics, or the integration of electrical and mechanical systems, offers a number of advantages in automotive assembly. Integration of the engine controller with the engine allows pretest of the engine as a complete system prior to vehicle assembly. Likewise, with the integration of the transmission controller and the transmission, pretesting and tuning to account for machining variations can be performed at the transmission factory prior to shipment to the automobile assembly site. In addition, most of the wires connecting to a transmission controller run to the solenoid pack inside the transmission. Integration of the controller into the transmission reduces the wiring harness requirements at the automobile assembly level.

Fig. 4: Prototype DaimlerChrysler ceramic transmission controller [11].
Fig. 5: DaimlerChrysler in-transmission module [11].

The trend in automotive design is to distribute control with network communications. As the industry moves to more X-by-wire systems, this trend will continue. Automotive final assembly plants assemble subsystems and components supplied by numerous vendors to build the vehicle. Complete mechatronic subsystems simplify the design, integration, management, inventory control, and assembly of vehicles. As discussed in the previous section, higher temperature electronics will be required to meet future mechatronic designs.

III. PACKAGING CHALLENGES AT 125 ℃
Trends in electronics packaging, driven by computer and portable products, are resulting in packages which will not meet underhood automotive requirements at 125 ℃. Most notable are leadless and area array packages such as small ball grid arrays (BGAs) and quad flat packs no-lead (QFNs). Fig. 6 shows the thermal cycle test (−40 ℃ to 125 ℃) results for two sizes of QFN from two suppliers [12]. A typical requirement is for the product to survive 2000–2500 thermal cycles with <1% failure for underhood applications. Smaller I/O QFNs have been found to meet the requirements.

Fig. 7 presents the thermal cycle results for BGAs of various body sizes [13]. The die size in the BGA remained constant (8.6 × 8.6 mm). As the body size decreases, so does the reliability. Only the 23-mm BGA meets the requirements. The 15-mm BGA with the 0.56-mm-thick BT substrate nearly meets the minimum requirements. However, the industry trend is to use thinner BT substrates (0.38 mm) for BGA packages.

One solution to increasing the thermal cycle performance of smaller BGAs is to use underfill. Capillary underfill was dispensed and cured after reflow assembly of the BGA. Fig. 8 shows a Weibull plot of the thermal cycle data for the 15-mm BGAs with four different underfills. Underfill UF1 had no failures after 5500 cycles and is, therefore, not plotted.
Underfill, therefore, provides a viable approach to meeting underhood automotive requirements with smaller BGAs, but it adds process steps, time, and cost to the electronics assembly process. Since portable and computer products dominate the electronics market, the packages developed for these applications are replacing traditional packages such as QFPs for new devices. The automotive electronics industry will have to continue developing assembly approaches such as underfill just to use these new packages in current underhood applications.

IV. TECHNOLOGY CHALLENGES ABOVE 125 ℃
The technical challenges for high-temperature automotive applications are interrelated, but can be divided into semiconductors, passives, substrates, interconnections, and housings/connectors. Industries such as oil well logging have successfully fielded high-temperature electronics operating at 200 ℃ and above. However, automotive electronics are further constrained by high-volume production, low cost, and long-term reliability requirements. The typical operating life for oil well logging electronics may be only 1000 h, production volumes are in the range of tens or hundreds and, while cost is a concern, it is not a dominant issue. In the following paragraphs, the technical challenges for high-temperature automotive electronics are discussed.

Semiconductors: The maximum rated ambient temperature for most silicon-based integrated circuits is 85 ℃, which is sufficient for consumer, portable, and computing product applications. Devices for military and automotive applications are typically rated to 125 ℃. A few integrated circuits are rated to 150 ℃, particularly for power supply controllers and a few automotive applications. Finally, many power semiconductor devices are derated to zero power handling capability at 200 ℃. Nelms et al. and Johnson et al. have shown that power insulated-gate bipolar transistors (IGBTs) and metal–oxide–semiconductor field-effect transistors (MOSFETs) can be used at 200 ℃ [14], [15]. The primary limitations of these power transistors at the higher temperatures are the packaging (the glass transition temperature of common molding compounds is in the 180 ℃ to 200 ℃ range) and the electrical stress on the transistor during hard switching.

A number of factors limit the use of silicon at high temperatures. First, with a bandgap of 1.12 eV, the silicon p-n junction becomes intrinsic at high temperature (225 ℃ to 400 ℃, depending on doping levels). The intrinsic carrier concentration is given by (1). As the temperature increases, the intrinsic carrier concentration increases. When the intrinsic carrier concentration nears the doping concentration level, p-n junctions behave as resistors, not diodes, and transistors lose their switching characteristics. One approach used in high-temperature integrated circuit design is to increase the doping levels, which increases the temperature at which the device becomes intrinsic. However, increasing the doping levels decreases the depletion widths, resulting in higher electric fields within the device that can lead to breakdown.

A second problem is the increase in leakage current through a reverse-biased p-n junction with increasing temperature. Reverse-biased p-n junctions are commonly used in IC design to provide isolation between devices. The saturation current (the ideal reverse-bias current of the junction) is proportional to the square of the intrinsic carrier concentration, with Ego the bandgap energy at T = 0 K. The leakage current approximately doubles for each 10 ℃ rise in junction temperature.
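The expression that (1) and the saturation-current relation refer to do not appear in this copy; the standard textbook forms are reproduced below as an assumption, in LaTeX notation, with N_C and N_V the effective densities of states, E_g the bandgap, k Boltzmann's constant, and T the absolute temperature:

n_i = \sqrt{N_C N_V}\,\exp\!\left(-\frac{E_g}{2kT}\right) \qquad (1)

I_S \propto n_i^2 \approx C\,T^3 \exp\!\left(-\frac{E_{go}}{kT}\right)

This exponential temperature dependence is consistent with the rule of thumb quoted in the text that the reverse leakage roughly doubles for every 10 ℃ rise in junction temperature.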
Increased junction leakage currents increase power dissipation within the device and can lead to latch-up of the parasitic p-n-p-n structure in complementary metal–oxide–semiconductor (CMOS) devices. Epitaxial CMOS (epi-CMOS) has been developed to improve latch-up resistance as device dimensions are decreased due to scaling, and it provides improved high-temperature performance compared to bulk CMOS.

Silicon-on-insulator (SOI) technology replaces reverse-biased p-n junctions with insulators, typically SiO2, reducing the leakage currents and extending the operating range of silicon above 200 ℃. At present, SOI devices are more expensive than conventional p-n junction isolated devices. This is in part due to the limited use of SOI technology. With the continued scaling of device dimensions, SOI is being used in some high-performance applications, and the increasing volume may help to eventually lower the cost.

Other device performance issues at higher temperatures include gate threshold voltage shifts, decreased noise margin, decreased switching speed, decreased mobility, decreased gain-bandwidth product, and increased amplifier input-offset voltage [16]. Leakage currents also increase for insulators with increasing temperature. This results in increased gate leakage currents and increased leakage of charge stored in memory cells (data loss). For dynamic memory, the increased leakage currents require faster refresh rates. For nonvolatile memory, the leakage limits the life of the stored data, a particular issue for FLASH memory used in microcontrollers and automotive electronics modules.

Beyond the electrical performance of the device, the device reliability must also be considered. Electromigration of the aluminum metallization is a major concern. Electromigration is the movement of the metal atoms due to their bombardment by electrons (current flow). Electromigration results in the formation of hillocks and voids in the conductor traces. The mean time to failure (MTTF) for electromigration is related to the current density (J) and temperature (T) as shown in (3). The exact rate of electromigration and the resulting time to failure is a function of the aluminum microstructure. Addition of copper to the aluminum increases electromigration resistance. The trend in the industry to replace aluminum with copper will improve the electromigration resistance by up to three orders of magnitude [17].

Time-dependent dielectric breakdown (TDDB) is a second reliability concern. Time to failure due to TDDB decreases with increasing temperature. Oxide defects, including pinholes, asperities at the Si–SiO2 interface, and localized changes in chemical structure that reduce the barrier height or increase the charge trapping, are common sources of early failure [18]. Breakdown can also occur due to hole trapping (Fowler–Nordheim tunneling). The holes can collect at weak spots in the Si–SiO2 interface, increasing the electric field locally and leading to breakdown [18]. The temperature dependence of the time-to-breakdown (tBD) can be expressed as in [18]. Values reported for Etbd vary in the literature due to its dependence on the oxide field and the oxide quality. Furthermore, the activation energy increases with breakdown time [18].

With proper high-temperature design, junction-isolated silicon integrated circuits can be used to junction temperatures of 150 ℃ to 165 ℃, epi-CMOS can extend the range to 225 ℃ to 250 ℃, and SOI can be used to 250 ℃ to 280 ℃ [16, pp. 224].
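The expressions referred to as (3) and the tBD relation also did not survive in this copy. The forms below are the ones usually used for these two mechanisms (Black's equation for electromigration and an Arrhenius model for TDDB); they are given as an assumption rather than as the paper's exact equations, with A and t_0 empirical prefactors, n a current-density exponent, and E_a and E_{tbd} activation energies:

\mathrm{MTTF} = A\,J^{-n}\exp\!\left(\frac{E_a}{kT}\right) \qquad (3)

t_{BD} = t_0 \exp\!\left(\frac{E_{tbd}}{kT}\right)

Both quantities fall rapidly as temperature rises, which is why the text flags electromigration and TDDB as the key metallization and oxide reliability concerns above 125 ℃.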
High-temperature, nonvolatile memory remains an issue. For temperatures beyond the limits of silicon, silicon carbide (SiC) based semiconductors are being developed. The bandgap of SiC ranges from 2.75 eV to 3.1 eV depending on the polytype. SiC has lower leakage currents and higher electric field strength than Si. Due to its wider bandgap, SiC can be used as a semiconductor device at temperatures over 600 ℃. The primary focus of SiC device research is currently on power devices. SiC power devices may eventually find application as power devices in braking systems and direct fuel injection. High-temperature sensors have also been fabricated with SiC. Berg et al. have demonstrated a SiC-based sensor for cylinder pressure in combustion engines [19] at up to 350 ℃, and Casady et al. [20] have shown a SiC-based temperature sensor for use to 500 ℃. At present, the wafer size, cost, and device yield have made SiC devices too expensive for general automotive use. Most SiC devices are discrete, as the level of integration achieved in SiC to date is low.

Passives: Thick- and thin-film chip resistors are typically rated to 125 ℃. Naefe et al. [21] and Salmon et al. [22] have shown that thick-film resistors can be used at temperatures above 200 ℃ if the allowable absolute tolerance is 5% or greater. The resistors studied were specifically formulated with a higher softening point glass. The minimum resistance as a function of temperature was shifted from 25 ℃ to 150 ℃ to minimize the temperature coefficient of resistance (TCR) over the temperature range to 300 ℃. TaN and NiCr thin-film resistors have been shown to have less than 1% drift after 1000 h at 200 ℃ [23]. Thus, for tighter tolerance applications, thin-film chip resistors are preferred. Wire-wound resistors provide a high-temperature option for higher power dissipation levels [21].

High-temperature capacitors present more of a challenge. For low-value capacitors, negative-positive-zero (NPO) ceramic and MOS capacitors provide a low temperature coefficient of capacitance (TCC) to 200 ℃. NPO ceramic capacitors have been demonstrated to 500 ℃ [24]. Higher dielectric constant ceramics (X7R, X8R, X9U), used to achieve the high volumetric efficiency necessary for larger capacitor values, exhibit a significant capacitance decrease above the Curie temperature, which is typically between 125 ℃ and 150 ℃. As the temperature increases, the leakage current increases, the dissipation factor increases, and the breakdown strength decreases. Increasing the dielectric tape thickness to increase breakdown strength reduces the capacitance and is a tradeoff. X7R ceramic capacitors have been shown to be stable when stored at 200 ℃ [23]. X9U chip capacitors are commercially available for use to 200 ℃, but there is a significant decrease in capacitance above 150 ℃.

Consideration must also be given to the capacitor electrodes and terminations. Ni is now being substituted for Ag and PdAg to lower capacitor cost. The impact of this change on high-temperature reliability must be evaluated. The surface finish for ceramic capacitor terminations is typically Sn. The melting point of Sn (232 ℃) and its interaction with potential solders/brazes must also be considered. Alternate surface finishes may be required.

For higher value, low-voltage requirements, wet tantalum capacitors show reasonable behavior at 200 ℃ if the hermetic seal does not lose integrity [23]. Aluminum electrolytics are also available for use to 150 ℃.
Mica paper (260 ℃) and Teflon film (200 ℃) capacitors can provide higher voltage capability, but are large and bulky [25]. High-temperature capacitors are relatively expensive. Volumetrically efficient, high-voltage, high-capacitance, high-temperature, and low-cost capacitors are still needed.

Standard transformer and inductor cores with copper wire and Teflon insulation are suitable for operation to 200 ℃. For higher temperature operation, the magnetic core, the conductor metal (Ni instead of Cu), and the insulator must be selected to be compatible with the higher temperatures [16, pp. 651–652]. Specially designed transformers can be used to 450 ℃ to 500 ℃; however, they are limited in operating frequency.

Crystals are required for clock frequency generation for microcontrollers. Crystals with acceptable frequency shift over the temperature range from −55 ℃ to 200 ℃ have been demonstrated [22]. However, the selection of packaging materials and the assembly process for the crystal are key to high-temperature performance and reliability. For example, epoxies used in assembly must be compatible with 200 ℃ operation.

Substrates: Thick-film substrates with gold metallization have been used in circuits to 500 ℃ [21], [23]. Palladium silver, platinum silver, and silver conductors are more commonly used in automotive hybrids for reduced cost. Silver migration has been observed with an unpassivated PdAg thick-film conductor under bias at 300 ℃ [21]. The time-to-failure needs to be examined as a function of temperature and bias voltage, with and without passivation. Low-temperature cofired ceramic (LTCC) and high-temperature cofired ceramic (HTCC) are also suitable for high-temperature automotive applications. Embedded resistors are standard in thick-film hybrids, LTCC, and some HTCC technologies. As previously mentioned, thick-film resistors have been demonstrated at temperatures of 200 ℃. Dielectric tapes for embedded capacitors have also been developed for LTCC and HTCC. However, these embedded capacitors have not been characterized for high-temperature use.

High-Tg laminates are also available for fabrication of high-temperature printed wiring boards. Cyanate esters (Tg = 250 ℃ by differential scanning calorimetry (DSC)), polyimide (260 ℃ by DSC), and liquid crystal polymers (Tm > 280 ℃) provide options for use to 200 ℃. Cyanate ester boards have been used successfully in test vehicles at 175 ℃, but failed when exposed to 250 ℃ [26]. The higher coefficient of thermal expansion (CTE) of the laminate substrates compared to the ceramics must be considered in the selection of component attachment materials. The temperature limits of the laminates with respect to assembly temperatures must also be carefully considered. Work is ongoing to develop and implement embedded resistor and capacitor technology in laminate substrates for conventional temperature ranges. This technology has not been extended to high-temperature applications.

One method many manufacturers are using to address the higher temperatures while maintaining lower cost is the use of laminate substrates attached to metal. The typical design involves the use of higher-Tg (+140 ℃ and above) laminate substrates attached to an aluminum plate (approximately 2.54 mm thick) using a sheet or liquid adhesive. To assist in thermal performance, the laminate substrate is often thinner (0.76 mm) than traditional automotive substrates for under-the-hood applications.
While this design provides improved thermal performance, the attachment of the laminate to aluminum increases the CTE of the overall substrate. The resulting CTE is very dependent on the ability of the attachment material to decouple the CTE between the laminate substrate and the metal backing. However, regardless of the attachment material used, the combination of the laminate and metal will increase the CTE of the overall substrate above that of a stand-alone laminate substrate. This impact can be quite significant for the reliability of components with low CTE values (such as ceramic chip resistors). Fig. 9 illustrates the impact of two laminate-to-metal attachment options compared to standard laminate substrates [27], [28]. The reliability data presented are for 2512 ceramic chip resistors attached to a 0.79-mm-thick laminate substrate attached to aluminum using two attachment materials. Notice that while one material significantly outperforms the other, both are less reliable than the same chip resistor attached to laminate without metal backing.

This decrease in reliability is also exhibited for small ball grid array (BGA) packages. Fig. 10 shows the reliability of a 15-mm BGA package attached to laminate compared to the same package attached to a laminate substrate with metal backing [27], [28]. The attachment material used for the metal-backed substrate was the best material selected from previous testing. Notice again that the metal-backed substrate deteriorates the reliability. This reliability deterioration is of particular concern since many IC packages used for automotive applications are ball grid array packages and the packaging trend is toward reduced package size. These packaging trends make the use of metal-backed substrates difficult for next-generation products.

One potential solution to the above reliability concern is the use of encapsulants and underfills. Fig. 11 illustrates how conformal coating can improve component reliability for surface-mount chip resistors [27], [28]. Notice that the reliability varies greatly depending on material composition. However, for components which meet a marginal level of reliability, conformal coatings may assist the design in meeting the target reliability requirements. The same scenario can be found for BGA underfills. Typical underfill materials may extend the component life by a factor of two or more. For marginal IC packages, this enhancement may provide enough reliability improvement to allow the designs to meet under-the-hood requirements. Unfortunately, the improvements provided by encapsulants and underfills increase the material cost and add one or more manufacturing processes for material dispense and cure.

Interconnections: Methods of mechanical and electrical interconnection of the active and passive components to the board include chip-and-wire, flip-chip, and soldering of packaged parts. In chip-and-wire assembly, epoxy die-attach materials can be used to 165 ℃ [29]. Polyimide and silicone die-attach materials can be used to 200 ℃. For higher temperatures, SnPb (>90Pb), AuGe, AuSi, AuSn, and AuIn have been used. However, with the exception of SnPb, these are hard brazes, and with increasing die size, CTE mismatches between the die and the substrate will lead to cracking with thermal cycling.
E-Commerce Marketing: Chinese-English Foreign Literature Translation
(Document contains the English original and the Chinese translation.)

The Application of E-Commerce in Malaysian Small and Medium Enterprises

Abstract: This research project aims to investigate whether e-commerce is applicable to small and medium-sized enterprises (SMEs) in the Malay states of Malaysia. The main population participating in the research were registered members of the Malay Dewan (chamber) of Malaysia and of the state of Kelantan; a total of 302 respondents were selected to take part in our study. According to the general assumption of the worldwide business community, it is agreed that the application of e-commerce is highly relevant to survival and to meeting the challenges of the global economy. At the same time, acquiring knowledge and understanding the environment, responding to and managing change, and speeding up the business decision-making process can further improve the competitiveness of SMEs. By applying an established model, our survey focuses on five identifiable variables that express the usefulness of e-commerce adoption in advancing SMEs. Our analysis shows that all of the selected variables are significant for strengthening the application of e-commerce and thereby maintaining a competitive advantage in the industry.

Keywords: e-commerce adoption, logistics, marketing, procurement, security, SMEs

1. Introduction to E-Commerce
The emergence of e-commerce is fundamentally changing the way business is conducted. Customers can shop anywhere and at any time, entirely at their leisure, and always enjoy the same level of service at almost no cost. Clearly, with such paperless transactions, customers no longer need to fill in order forms or travel to business premises to place their orders. Everything can be done electronically, at the customer's convenience. According to the EDI report (2000), even though SMEs may find it difficult to build an advanced website because they lack expertise and capital, they still need e-commerce in order to prosper and survive. Many individuals and organizations explain e-commerce in their own typical ways. The term e-commerce appeared when businesses began to recognize the role of the Internet as a powerful medium for conducting business, especially in the service industry, because it can improve relationships between customers and suppliers. E-commerce refers to processes in which the principal business relationships or transactions are carried out over the Internet, including procurement, marketing, sales, and customer support. Laudon and Traver describe e-commerce as involving time cycles, speed, and globalization; it can enhance productivity, win new customers, and share knowledge across institutions, enabling cross-border transactions in products and services through digitization. E-commerce has evolved out of the various relationships in the business world. It can take the form of business-to-consumer (B2C), business-to-business (B2B), business-in-business (BIB), and, finally, consumer-to-consumer (C2C).
Information Systems and Information Technology: Chinese-English Foreign Literature Translation
Information Systems Outsourcing Life Cycle and Risks Analysis

1. Introduction
Information systems outsourcing has obtained tremendous attention in the information technology industry. Although there are a number of reasons for companies to pursue information systems (IS) outsourcing, the most prominent motivation for IS outsourcing revealed in the literature is cost saving. The cost factor has been a major decision factor for IS outsourcing. Other than cost, there are other reasons for the outsourcing decision. The Outsourcing Institute surveyed outsourcing end-users from its membership in 1998 and found that the top 10 reasons companies outsource were: reduce and control operating costs, improve company focus, gain access to world-class capabilities, free internal resources for other purposes, resources are not available internally, accelerate reengineering benefits, function difficult to manage/out of control, make capital funds available, share risks, and cash infusion. Within these top ten outsourcing reasons, there are three items related to financial concerns: operating costs, capital funds available, and cash infusion. Since the phenomenon of wage difference exists in the outsourced countries, it is obvious that outsourcing companies can save a remarkable amount of labor cost. According to Gartner, Inc.'s report, worldwide business outsourcing services would grow from $110 billion in 2002 to $173 billion in 2007, approximately a 9.5% annual growth rate.

In addition to the cost saving concern, there are other factors that influence the outsourcing decision, including the awareness of success and risk factors, outsourcing risk identification and management, and project quality management. Outsourcing activities are substantially complicated, and an outsourcing project usually carries a huge array of risks. Unmanaged outsourcing risks will increase total project cost, devalue software quality, delay project completion time, and finally lower the success rate of the outsourcing project. Outsourcing risks have been discovered in areas such as unexpected transition and management costs, switching costs, costly contractual amendments, disputes and litigation, service debasement, cost escalation, loss of organizational competence, hidden service costs, and so on.

Most published outsourcing studies focus on organizational and managerial issues. We believe that IS outsourcing projects embrace various risks and uncertainties that may inhibit the chance of outsourcing success. In addition to service- and management-related risk issues, we feel that technical issues that restrain the degree of outsourcing success may have been overlooked. These technical issues are project management, software quality, and quality assessment methods that can be used to implement IS outsourcing projects. Unmanaged risks generate loss. We intend to identify the technical risks during the outsourcing period, so that these technical risks can be properly managed and the cost of the outsourcing project can be further reduced. The main purpose of this paper is to identify the different phases of the IS outsourcing life cycle, and to discuss the implications of success and risk factors, software quality, and project management, and their impacts on the success of IT outsourcing. Most outsourcing initiatives involve strategic planning and management participation; therefore, the decision process is obviously broad and lengthy.
In order to conduct a comprehensive study of outsourcing project risk analysis, we propose an IS outsourcing life cycle framework to serve as a yardstick. Each IS outsourcing phase is named and all inherited risks are identified in this life cycle framework. Furthermore, we propose the use of software quality management tools and methods in order to enhance the success rate of IS outsourcing projects. ISO 9000 is a series of quality systems standards developed by the International Organization for Standardization (ISO). ISO's quality standards have been adopted by many countries as a major target for quality certification. Other ISO standards such as ISO 9001, ISO 9000-3, ISO 9004-2, and ISO 9004-4 are quality standards that can be applied to the software industry. Currently, ISO is working on ISO 31000, a risk management guidance standard. These ISO quality systems and risk management standards are generic in nature; however, they may not be sufficient for IS outsourcing practice. This paper, therefore, proposes an outsourcing life cycle framework to distinguish related quality and risk management issues during outsourcing practice.

The following sections start with the needed theoretical foundations of IS outsourcing, including economic theories, outsourcing contracting theories, and risk theories. The IS outsourcing life cycle framework is then introduced. The paper continues by discussing the risk implications in the pre-contract, contract, and post-contract phases. ISO standards on quality systems and risk management are discussed and compared in the next section. A conclusion and directions for future study are provided in the last section.

2. Theoretical foundations
2.1. Economic theories related to outsourcing
Although there are a number of reasons for pursuing IS outsourcing, cost saving is a main attraction that leads companies to search for outsourcing opportunities. In principle, five outsourcing-related economic theories lay the groundwork of outsourcing practice: (1) production cost economics, (2) transaction cost theory, (3) resource-based theory, (4) competitive advantage, and (5) economies of scale.

Production cost economics was proposed by Williamson, who mentioned that "a firm seeks to maximize its profit also subject to its production function and market opportunities for selling outputs and buying inputs". It is clear that production cost economics identifies the phenomenon that a firm may pursue the goal of a low-cost production process.

Transaction cost theory was proposed by Coase. Transaction cost theory implies that in an economy there are many economic activities that occur outside the price system. Transaction costs in business activities are the time and expense of negotiation, and of writing and enforcing contracts between buyers and suppliers. When the transaction cost is low because of lower uncertainty, companies are expected to adopt outsourcing.

The focus of resource-based theory is that "the heart of the firm centers on deployment and combination of specific inputs rather than on avoidance of opportunities". Conner suggested that firms are "seekers of costly-to-copy inputs for production and distribution". Through resource-based theory, we can infer that the outsourcing decision is to seek external resources or capabilities for meeting the firm's objectives, such as cost saving and capability improvement.

Porter, in his competitive forces model, proposed the concept of competitive advantage.
Besanko et al. explicated the term competitive advantage, through an economic concept, as: "When a firm (or business unit within a multi-business firm) earns a higher rate of economic profit than the average rate of economic profit of other firms competing within the same market, the firm has a competitive advantage." The outsourcing decision, therefore, is to seek cost savings that meet the goal of competitive advantage within a firm.

Economies of scale are a theoretical foundation for creating and sustaining the consulting business. Information systems (IS) and information technology (IT) consulting firms, in essence, bear the advantage of economies of scale since their average costs decrease because they offer a mass amount of specialized IS/IT services in the marketplace.

2.2. Economic implications for contracting
An outsourcing contract defines the provision of services and charges that need to be completed in a contracting period between two contracting parties. Since most IS/IT projects are large in scale, a valuable contract should list the complete set of tasks and responsibilities that each contracting party needs to perform. The study of contracting becomes essential because a complete contract setting could eliminate possible opportunistic behavior, confusion, and ambiguity between the two contracting parties. Although contracting parties intend to reach a complete contract, in the real world most contracts are incomplete. Incomplete contracts cause not only implementation difficulties but also litigation. A business relationship may easily be ruined by holding an incomplete contract. In order to reach a complete contract, the contracting parties must pay sufficient attention to removing any ambiguity, confusion, and unidentified or immeasurable conditions and terms from the contract. According to Besanko et al., incomplete contracting stems from the following three factors: bounded rationality, difficulties in specifying or measuring performance, and asymmetric information.

Bounded rationality describes human limitations on information processing, complexity handling, and rational decision-making. An incomplete contract stems from unexpected circumstances that may be ignored during contract negotiation. Most contracts consist of complex product requirements and performance measurements. In reality, it is difficult to specify a set of comprehensive metrics for meeting each party's rights and responsibilities. Therefore, any vague or open-ended statements in a contract will definitely result in an incomplete contract. Lastly, it is possible that each party may not have equal access to all contract-relevant information sources. This situation of asymmetric information results in an unfair negotiation, and thus an incomplete contract.

2.3. Risk in outsource contracting
Risk can be identified as an undesirable event, a probability function, variance of the distribution of outcomes, or expected loss. Risk can be classified into endogenous and exogenous risks. Exogenous risks are "risks over which we have no control and which are not affected by our actions". For example, natural disasters such as earthquakes and floods are exogenous risks.
Endogenous risks are "risks that are dependent on our actions". We can infer that risks occurring during outsource contracting belong to this category. Risk (RE) can be calculated as "a function of the probability of a negative outcome and the importance of the loss due to the occurrence of this outcome":

RE = Σi P(UOi) × L(UOi)   (1)

where P(UOi) is the probability of an undesirable outcome i, and L(UOi) is the loss due to the undesirable outcome i.

Software risks can also be analyzed through two characteristics: uncertainty and loss. Pressman suggested that the best way to analyze software risks is to quantify the level of uncertainty and the degree of loss associated with each kind of risk. His risk content matches the above-mentioned Eq. (1). Pressman classified software risks into the following categories: project risks, technical risks, and business risks. Outsourcing risks stem from various sources. Aubert et al. adopted transaction cost theory and agency theory as the foundation for deriving undesirable events and their associated risk factors. Transaction cost theory has been discussed in Section 2.2. Agency theory focuses on the client's problem of choosing an agent (that is, a service provider), and on building and maintaining the working relationship, under the restriction of information asymmetry. Various risk factors would be produced if such an agent–client relationship crumbles.

It is evident that a complete contract could eliminate the risk caused by an incomplete contract and/or possible opportunistic behavior prompted by either contracting party. Opportunistic behavior is one of the main sources of transactional risk. Opportunistic behavior occurs when a transactional partner observes a way of saving cost or removing responsibility during the contracting period and takes action to pursue such an opportunity. This type of opportunistic behavior could be encouraged if the contract was not completely specified in the first place. Outsourcing risks could generate additional unexpected cost for an outsourcing project. In order to conduct a better IS outsourcing project, identifying possible risk factors and implementing a mature risk management process could make information systems outsourcing more successful than ever.

3. Information systems outsourcing life cycle
The life cycle concept is originally used to describe the period of one generation of an organism in a biological system. In essence, the term life cycle describes all activities that a subject is involved in during the period from its birth to its end. The life cycle concept has been applied to the project management area. A project life cycle, according to Schwalbe, is a collection of project phases such as concept, development, implementation, and close-out. Within the above-mentioned four phases, the first two phases center on "planning" activity and the last two phases focus on "delivery of the actual work" of project management.

Similarly, the concept of the life cycle can be applied to information systems outsourcing analysis. The information systems outsourcing life cycle describes a sequence of activities to be performed during a company's IS outsourcing practice. Hirschheim and Dibbern once described a client-based IS outsourcing life cycle as: "It starts with the IS outsourcing decision, continues with the outsourcing relationship (life of the contract) and ends with the cancellation or end of the relationship, i.e., the end of the contract.
The end of the relationship forces a new outsourcing decision." It is clear that Hirschheim and Dibbern viewed the "outsourcing relationship" as a determinant in the IS outsourcing life cycle.

The IS outsourcing life cycle starts with an outsourcing need and ends with contract completion. The life cycle restarts with the search for a new outsourcing contract if needed. An outsourcing company may be satisfied with the same outsourcing vendor if the transaction costs remain low, and then a new cycle goes on. Otherwise, a new search for an outsourcing vendor may be started. One of the main goals in seeking an outsourcing contract is cost minimization. Transaction cost theory (discussed in Section 2.1) indicates that pursuing a contract costs a company money; thus, low transaction cost will be the driver of extending the IS outsourcing life cycle.

The span of the IS outsourcing life cycle embraces a major portion of contracting activities. The whole IS outsourcing life cycle can be divided into three phases (see Fig. 1): the pre-contract phase, the contract phase, and the post-contract phase. The pre-contract phase includes activities before a major contract is signed, such as identifying the need for outsourcing, planning and strategy setting, and outsourcing vendor selection. The contract phase starts when an outsourcing contract is signed and lasts until the end of the contracting period. It includes activities such as the contracting process, the transitioning process, and outsourcing project execution. The post-contract phase contains those activities to be done after contract expiration, such as outsourcing project assessment and making the decision for the next outsourcing contract.

Fig. 1: The IS outsourcing life cycle.

When a company intends to outsource its information systems projects to external entities, several activities are involved in the information systems outsourcing life cycle. Specifically, they are:

1. Identifying the need for outsourcing: A firm may face a strict external environment, such as stern market competition, competitors' cost saving through outsourcing, or an economic downturn, that initiates it to consider outsourcing IS projects. In addition to the external environment, some internal factors may also lead to outsourcing consideration. These organizational predicaments include the need for technical skills, financial constraints, investors' requests, or simply cost-saving concerns. A firm needs to carefully conduct a study of its internal and external positioning before making an outsourcing decision.

2. Planning and strategic setting: If a firm identifies a need for IS outsourcing, it needs to make sure that the decision to outsource meets the company's strategic plan and objectives. Later, the firm needs to integrate the outsourcing plan into its corporate strategy. Many tasks need to be fulfilled during the planning and strategic setting stages, including determining outsourcing goals, objectives, scope, schedule, cost, business model, and processes. Careful outsourcing planning prepares a firm for pursuing a successful outsourcing project.

3. Outsourcing vendor selection: A firm begins the vendor selection process with the creation of request for information (RFI) and request for proposal (RFP) documents. An outsourcing firm should provide sufficient information about the requirements and expectations for an outsourcing project. After receiving proposals from vendors, the company needs to select a prospective outsourcing vendor, based on its strategic needs and project requirements.
4. Contracting process: A contract negotiation process begins after the company selects a probable outsourcing vendor. The contracting process is critical to the success of an outsourcing project since all aspects of the contract should be specified and covered, including fundamental, managerial, technological, pricing, financial, and legal features. In order to avoid ending up with an incomplete contract, the final contract should be reviewed by both parties' legal consultants. Most importantly, the service level agreements (SLAs) must be clearly identified in the contract.

5. Transitioning process: The transitioning process starts after a company signs an outsourcing contract with a vendor. Transition management is defined as "the detailed, desk-level knowledge transfer and documentation of all relevant tasks, technologies, workflows, people, and functions". The transitioning process is a complicated phase in the IS outsourcing life cycle since it involves many essential workloads before an outsourcing project can actually be implemented. Robinson et al. characterized transition management by the following components: "employee management, communication management, knowledge management, and quality management". It is apparent that conducting the transitioning process requires capabilities in human resources, communication skills, knowledge transfer, and quality control.

6. Outsourcing project execution: After the transitioning process, it is time for the vendor and client to execute their outsourcing project. There are four components within this "contract governance" stage: project management, relationship management, change management, and risk management. Any items listed in the contract and its service level agreements (SLAs) need to be delivered and implemented as requested. In particular, client and vendor relationships, change requests and records, and risk variables must be carefully managed and administered.

7. Outsourcing project assessment: At the end of an outsourcing project period, the vendor must deliver its final product/service for the client's approval. The outsourcing client must assess the quality of the product/service provided by the vendor and measure his or her satisfaction level with it. A satisfactory assessment and a good relationship will guarantee the continuation of the next outsourcing contract.

The results of the previous activity (that is, project assessment) will be the basis for determining the next outsourcing contract. A firm evaluates its satisfaction level based on predetermined outsourcing goals and contracting criteria. An outsourcing company also observes the outsourcing cost and risks involved in the project. If a firm is satisfied with the current outsourcing vendor, it is likely that a renewable contract could start with the same vendor. Otherwise, a new "pre-contract phase" would restart to search for a new outsourcing vendor. This activity will lead to a new outsourcing life cycle. Fig. 1 shows two dotted arrow lines for these two alternatives: the dotted arrow line 3.a indicates the "renewable contract" path and the dotted arrow line 3.b indicates the "new contract search" path.

Each phase in the IS outsourcing life cycle is full of needed activities and processes (see Fig. 1). In order to clearly examine the dynamics of risks and outsourcing activities, the following sections provide detailed analyses. The pre-contract phase in the IS outsourcing life cycle focuses on the awareness of outsourcing success factors and related risk factors.
The contract phase in the IS outsourcing life cycle centers on the mechanisms of project management and risk management. The post-contract phase in the IS outsourcing life cycle concentrates on the need to select suitable project quality assessment methods.

4. Actions in the pre-contract phase: awareness of success and risk factors
The pre-contract period is the first phase in the information systems outsourcing life cycle (see Fig. 1). While in this phase, an outsourcing firm should first identify its need for IS outsourcing. After determining the need for IS outsourcing, the firm needs to carefully create an outsourcing plan and must align corporate strategy with that plan. In order to prepare well for corporate IS outsourcing, a firm must understand the current market situation, its competitiveness, and the economic environment. The next important task is to identify outsourcing success factors, which can serve as guidance for strategic outsourcing planning. In addition to knowing the success factors, an outsourcing firm must also recognize the possible risks involved in IS outsourcing, which allows the firm to formulate a better outsourcing strategy.

Conclusion and research directions
This paper presents a three-phased IS outsourcing life cycle and the associated risk factors that affect the success of outsourcing projects. The outsourcing life cycle is complicated and complex in nature. Outsourcing companies usually invest great effort to select suitable service vendors; however, many risks exist in the vendor selection process. Although outsourcing costs are the major reason for outsourcing, firms seek outsourcing success through quality assurance and risk control. This decision path is understandable since the outcome of project risks represents the amount of additional project cost. Therefore, carefully managing the project and its risk factors can save outsourcing companies a tremendous amount of money.

This paper discusses various issues related to outsourcing success, risk factors, quality assessment methods, and project management techniques. Future research may touch on alternative risk estimation methodologies. For example, risk uncertainty can be used to identify the accuracy of the outsourcing risk estimation. Another possible method to estimate outsourcing risk is the Total Cost of Ownership (TCO) method. The TCO method has been used in IT management for financial portfolio analysis and investment decision making. Since the concept of risk is in essence the cost (of loss) to outsourcing clients, it thus becomes a possible research method for the outsourcing decision.

Chinese translation: Information Systems Outsourcing Life Cycle and Risk Analysis. 1. Introduction. Information systems outsourcing has gained tremendous attention in the information technology industry.
Criminal Law

1. General Introduction
Criminal law is the body of the law that defines criminal offenses, regulates the apprehension, charging, and trial of suspected offenders, and fixes punishment for convicted persons. Substantive criminal law defines particular crimes, and procedural law establishes rules for the prosecution of crime. In a democratic society, it is the function of the legislative bodies to decide what behavior will be made criminal and what penalties will be attached to violations of the law.

Capital punishment may be imposed in some jurisdictions for the most serious crimes. Physical or corporal punishment may still be imposed, such as whipping or caning, although these punishments are prohibited in much of the world. A convict may be incarcerated in prison or jail, and the length of incarceration may vary from a day to life.

Criminal law is a reflection of the society that produces it. In an Islamic theocracy, such as Iran, criminal law will reflect the religious teachings of the Koran; in a Catholic country, it will reflect the tenets of Catholicism. In addition, criminal law will change to reflect changes in society, especially changes in attitude. For instance, use of marijuana was once considered a serious crime with harsh penalties, whereas today the penalties in most states are relatively light, as the severity of the penalties has been reduced. As a society advances, its judgments about crime and punishment change.

2. Elements of a Crime
Obviously, different crimes require different behaviors, but there are common elements necessary for proving all crimes. First, the prohibited behavior designated as a crime must be clearly defined so that a reasonable person can be forewarned that engaging in that behavior is illegal. Second, the accused must be shown to have possessed the requisite intent to commit the crime. Third, the state must prove causation. Finally, the state must prove beyond a reasonable doubt that the defendant committed the crime.

(1) Actus reus
The first element of a crime is the actus reus. Actus is an act or action and reus is a person judicially accused of a crime. Therefore, actus reus is literally the action of a person accused of a crime. A criminal statute must clearly define exactly what act is deemed "guilty", that is, the exact behavior that is being prohibited. That is done so that all persons are put on notice that if they perform the guilty act, they will be liable for criminal punishment. Unless the actus reus is clearly defined, one might not know whether or not one's behavior is illegal.

Actus reus may be accomplished by an action, by a threat of action, or, exceptionally, by an omission to act where there is a legal duty to act. For example, the act of Cain striking Abel might suffice, or a parent's failure to give food to a young child also may provide the actus reus for a crime. Where the actus reus is a failure to act, there must be a duty of care. A duty can arise through contract, a voluntary undertaking, a blood relation, and occasionally through one's official position. A duty also can arise from one's own creation of a dangerous situation.

(2) Mens rea
A second element of a crime is mens rea. Mens rea refers to an individual's state of mind when a crime is committed. While actus reus is proven by physical or eyewitness evidence, mens rea is more difficult to ascertain.
The jury must determine for itself whether the accused had the necessary intent to commit the act.A lower threshold of mens rea is satisfied when a defendant recognizes an act is dangerous but decides to commit it anyway. This is recklessness. For instance, if Cain tears a gas meter from a wall, and knows this will let flammable gas escape into a neighbor’s house, he could be liable for poisoning. Courts often consider whether the actor did recognise the danger, or alternatively ought to have recognized a danger (though he did not) is tantamount to erasing intent as a requirement. In this way, the importance of mens rea hasbeen reduced in some areas of the criminal law.Wrongfulness of intent also may vary the seriousness of an offense. A killing committed with specific intent to kill or with conscious recognition that death or serious bodily harm will result, would be murder, whereas a killing affected by reckless acts lacking such a consciousness could be manslaughter.(3)CausationThe next element is causation. Often the phrase “but for”is used to determine whether causation has occurred. For example, we might say “Cain caused Abel”, by which we really mean “Cain caused Abel’s death. ”In other words, ‘but for Cain’s act, Abel would still be alive.” Causation, then, means “but for” the actions of A, B would not have been harmed. In criminal law, causation is an element that must be proven beyond a reasonable doubt.(4) Proof beyond a Reasonable DoubtIn view of the fact that in criminal cases we are dealing with the life and liberty of the accused person, as well as the stigma accompanying conviction, the legal system places strong limits on the power of the state to convict a person of a crime. Criminal defendants are presumed innocent. The state must overcome this presumption of innocence by proving every element of the offense charged against the defendant beyond a reasonable doubt to thesatisfaction of all the jurors. This requirement is the primary way our system minimizes the risk of convicting an innocent person.The state must prove its case within a framework of procedural safeguards that are designed to protect the accused. The state’s failure to prove any material element of its case results in the accused being acquitted or found not guilty, even though he or she may actually have committed the crime charged.3. Strict LiabilityIn modern society, some crimes require no more mens rea, and they are known as strict liability offenses. For in stance, under the Road Traffic Act 1988 it is a strict liability offence to drive a vehicle with an alcohol concentration above the prescribed limit.Strict liability can be described as criminal or civil liability notwithstanding the lack mens rea or intent by the defendant. Not all crimes require specific intent, and the threshold of culpability required may be reduced. For example, it might be sufficient to show that a defendant acted negligently, rather than intentionally or recklessly.1. 概述刑法是规定什么试犯罪,有关犯罪嫌疑人之逮捕、起诉及审判,及对已决犯处以何种刑罚的部门法。
AT89C52 Single-Chip Microcontroller: Chinese-English Translated Literature Review
AT89C52 Single-Chip Microcontroller Introduction

Selection of the single-chip microcontroller

1. Development of the single-chip microcontroller

A single-chip microcomputer integrates the main components of a computer on one chip: the central processing unit (CPU), random-access memory (RAM), read-only memory (ROM), the interrupt system, timers/counters, and the I/O interface circuits are all placed on a single piece of silicon. Although it is only one chip, in terms of its composition and function it has the properties of a complete computer system; for this reason it is called a single-chip microcomputer (SCM), or microcontroller for short.
Chinese-English Bilingual References
1. Introduction

References are an indispensable resource in research; they provide academic work with the state of the art and related findings in the relevant field. Chinese-English bilingual references are references that include both Chinese and English, and they play a vital role in cross-language research and international exchange.

2. Advantages of bilingual references

• Cross-language communication: Chinese-English bilingual references allow researchers from different language backgrounds to share knowledge and promote academic exchange and collaboration.
• Better readability: bilingual references make it easier for readers to understand the cited literature and avoid the reading barriers caused by language differences.
• Cross-cultural understanding: bilingual references help readers better understand the research background and methods, promoting mutual understanding and respect between cultures.

3. How to create bilingual references

(1) Maintain two versions. One approach is to maintain two versions of the reference list, one in Chinese and one in English. This ensures the accuracy and consistency of the reference content.
(2) Translate and proofread. When creating bilingual references, the original entries must be translated and then proofread to ensure the accuracy of the translation and the consistency of its meaning. Professional translators can carry out the translation, with domain experts performing the proofreading.
(3) Use a consistent format. The reference format should be unified, following the international standards of the relevant field, such as the APA or MLA style.

4. Applicable fields

Chinese-English bilingual references are applicable to every discipline, including but not limited to science, engineering, and the humanities and social sciences. They are especially significant in international exchange and collaboration within each field.

5. Points to note

• When translating references, pay attention to the accurate translation of technical terms to avoid ambiguity.
• During proofreading, carefully check the original text against the translation to ensure accuracy and consistency.
• When citing bilingual references, indicate the author and source of the original work as well as the author and source of the translated version.

6. Conclusion

Chinese-English bilingual references play an important role in cross-language research and in academic exchange and collaboration. Creating them requires attention to the accuracy and consistency of the translation and a unified reference format. Their range of application is broad, and they actively promote academic exchange and cross-cultural understanding.
C++ Multi-Platform Chinese-English Translated Literature

Original foreign text: From one code base to many platforms using Visual C++

Multiple-platform development is a hot issue today. Developers want to be able to support diverse platforms such as the Microsoft® Windows® version 3.x, Microsoft Windows NT®, and Microsoft Windows 95 operating systems, and Apple®, Macintosh®, UNIX, and RISC (reduced instruction set computer) machines. Until recently, developers wanting to build versions of their application for more than one platform had few choices:

∙ Maintain separate code bases for each platform, written to the platform's native application programming interface (API).
∙ Write to a "virtual API" such as those provided by cross-platform toolsets.
∙ Build their own multiple-platform layer and support it.

Today, however, a new choice exists. Developers can use their existing code written to the Windows API and, using tools available from Microsoft and third parties, recompile for all of the platforms listed above. This paper looks at the methods and some of the issues involved in doing so.

Currently the most lucrative market for graphical user interface (GUI) applications, after Microsoft Windows, is the Apple Macintosh. However, vast differences separate these wholly different operating systems, requiring developers to learn new APIs, programming paradigms, and tools. Generally, Macintosh development requires a separate code base from the Windows sources, increasing the complexity of maintenance and enhancement.

Because porting code from Windows to the Macintosh can be the most difficult porting case, this paper concentrates on this area. In general, if your code base is sufficiently portable to enable straightforward recompiling for the Macintosh (excluding any platform-specific, or "edge," code you may elect to include), you'll find that it will come up on other platforms easily as well.

Microsoft Visual C++® Cross-Development Edition for Macintosh (Visual C++ for Mac™) provides a set of Windows NT– or Windows 95–hosted tools for recompiling your Windows code for the Motorola 680x0 and PowerPC processors, and a portability library that implements Windows on the Macintosh. This allows you to develop GUI applications with a single source code base (written to the Win32® API) and implement it on Microsoft Windows or Apple Macintosh platforms.

Figure 1, below, illustrates how Visual C++ for Mac works. Your source code is edited, compiled, and linked on a Windows NT– or Windows 95–based (Intel) host machine. The tools create 68000 and PowerPC native code and Macintosh resources. An Ethernet-based or serial transport layer (TL) moves the resulting binaries to a Macintosh target machine running remotely. The Macintosh application is started on the Macintosh and debugged remotely from the Windows-based machine.

Now that Apple has two different Macintosh architectures to contend with (Motorola 680x0 and PowerPC), portability is particularly important.

Porting can involve several steps, depending on whether you are working with old 16-bit applications or with new 32-bit sources. In general, the steps to a Macintosh port are as follows:

1. Make your application more portable by following some general portability guidelines. This will help ensure portability not only to the 680x0-based Macintosh machines, but also to the newer, more powerful PowerPC machines that are based on a RISC chip.
2. Port your application from Windows 16-bit code to 32-bit code. This may be the most complex and time-consuming part of the job.
3. Segregate the parts of your application that are unique to Windows from similar implementations that are specific to the Macintosh. This may involve using conditional compilation, or it may involve changing the source tree for your project (a minimal sketch of the conditional-compilation approach follows this list).
4. Port your Win32 API code to the Macintosh by using the portability library for the Macintosh and Visual C++ for compiling, linking, and debugging.
5. Use the Microsoft Foundation Class Library (MFC) version 4.0 to implement new functionality such as OLE 2.0 containers, servers, and clients, or database support using open database connectivity (ODBC). Code written using MFC is highly portable to the Macintosh.
6. Write Macintosh-specific code to take advantage of unique Macintosh features, such as Apple Events or Publish and Subscribe.
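As noted in step 3, conditional compilation is the usual way to keep platform-specific "edge" code out of the shared sources. The fragment below is a minimal sketch of that idea only; it assumes the Macintosh cross-development build defines a preprocessor symbol such as _MAC (treat that macro name, and the wrapper function itself, as assumptions rather than details taken from this article).

    /* Shared source file: platform-specific behavior is confined to one
       small wrapper selected by the preprocessor. */
    const char* PlatformName(void)
    {
    #ifdef _MAC
        /* Macintosh-only code (Apple Events, Publish and Subscribe, and so on)
           would be limited to blocks like this one. */
        return "Macintosh";
    #elif defined(_WIN32)
        return "Win32";
    #else
        return "Win16";
    #endif
    }

Keeping the conditional blocks this small is what makes step 3 manageable: the rest of the source base stays common to every platform.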
The chief challenge among the families of Windows operating systems is the break from 16 bits (Windows 3.11 and Windows for Workgroups 3.11, the operating system with integrated networking) to 32 bits (Windows NT and Windows 95). In general, 16-bit and 32-bit code bases are somewhat incompatible unless they are written using MFC. Developers have the choice of branching their sources into two trees or migrating everything to 32 bits. Once the Win32 choice has been made, how are legacy platforms to be supported (that is, machines still running Windows 3.11)? The obvious choice is to use the Win32s® API libraries, which thunk 32-bit calls down to their 16-bit counterparts.

Developers who want their applications to take advantage of the hot new RISC hardware, such as DEC Alpha AXP machines, can use the special multiple-platform editions of Visual C++. These include versions for the MIPS R4000 series of processors as well as the aforementioned DEC Alpha AXP chip and the Motorola PowerPC. These toolsets run under Windows NT 3.51 and create highly optimized native Win32 applications for DEC Alpha and Motorola PowerPC platforms.

Developers who have recompiled their Win32 sources using these toolsets are amazed at how simple it is. Since the operating system is identical on all platforms, and the tools are identical, little work has to be done in order to achieve a port. The key difference between the RISC machines and Intel is the existence of a native 64-bit integer, which is far more efficient than on 32-bit (that is, Intel) processors.

Microsoft works closely with two third-party UNIX tools providers, Bristol Technology and Mainsoft Corporation, to allow developers to recompile their Win32-based or MFC-based applications for UNIX. Developers seeking additional information should contact those companies directly.

You'll have to decide early on whether to write to the native API (Win32) or to MFC. In general, you'll find that MFC applications port more quickly than Win32 applications, because one of the intrinsic benefits of an application framework is that it abstracts the code away from the native operating system to some extent. This abstraction is like an insurance policy for you. However, developers frequently have questions about MFC, such as:

∙ What if I need an operating system service that isn't part of the framework?
Call the Win32 API directly. MFC never prevents you from calling any function in the Win32 API directly. Just precede your function call with the global scope operator (::). A short sketch of this appears after these questions.

∙ I don't know C++. Can I still use MFC?
Sure. MFC is based on C++, but you can mix C and C++ code seamlessly.

∙ How can I get started using MFC?
Start by taking some classes and/or reading some books. Visual C++ ships with a fine tutorial on MFC (Scribble). Then, check out the MFC Migration Kit (available on CompuServe or, for a modest shipping and handling fee, from Microsoft). It will help you migrate your C-based application code to MFC and C++.
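As a concrete illustration of the first answer, the sketch below calls a plain Win32 function from inside an MFC class by prefixing it with the global scope operator. The class name and the choice of ::GetTickCount are illustrative assumptions, not code from the article.

    #include <afxwin.h>   // MFC core and standard components

    class CStartupLogger : public CWnd
    {
    public:
        void LogStartupTime()
        {
            // ::GetTickCount() is a raw Win32 API call; the global scope
            // operator makes clear that it is not an MFC member function.
            DWORD ticks = ::GetTickCount();

            CString msg;
            msg.Format(_T("System had been up for %lu ms at startup"), ticks);
            AfxMessageBox(msg);
        }
    };

Nothing else about the class changes; framework calls and direct API calls coexist in the same function.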
All porting will be easier if you begin today writing more portable programs. Following some basic portability guidelines will make your code less platform-specific.

Never assume anything. Particularly, don't make assumptions about the sizes of types, the state of the machine at any time, byte ordering, or alignment.

Don't assume the size of primitive types, because these have different sizes on different processors. For example, an int is two bytes in Win16 and four bytes in Win32. At all costs, avoid code that relies on the size of a type. Use sizeof() instead. To determine the offset of a field in a structure, use the offsetof() macro. Don't try to compute this manually.

Use programmatic interfaces to access all system or hidden "objects," for example, the stack or heap.

Parsing data types to extract individual bytes or even bits can cause problems when porting from Windows to the Macintosh unless you are careful to write code that doesn't assume any particular byte order. LIMITS.H contains constants that can be used to help write platform-independent macros to access individual bytes in a word.
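A minimal sketch of those guidelines in code: sizeof() and offsetof() replace hand-computed sizes and offsets, and a byte-extraction macro built from the CHAR_BIT and UCHAR_MAX constants in LIMITS.H works by shifting and masking, so it does not depend on how the bytes happen to be ordered in memory. The NTH_BYTE macro and the RECORD structure are made up for this sketch.

    #include <limits.h>   /* CHAR_BIT, UCHAR_MAX */
    #include <stddef.h>   /* offsetof, size_t    */

    /* Byte n (0 = least significant) of an unsigned long, computed
       arithmetically rather than by peeking at memory, so byte order
       and byte width are irrelevant. */
    #define NTH_BYTE(value, n) \
        ((unsigned char)(((value) >> ((n) * CHAR_BIT)) & UCHAR_MAX))

    typedef struct tagRECORD {      /* hypothetical record, for illustration */
        unsigned long  id;
        unsigned short flags;
        char           name[32];
    } RECORD;

    size_t RecordSize(void)  { return sizeof(RECORD); }         /* never a hard-coded constant */
    size_t NameOffset(void)  { return offsetof(RECORD, name); } /* never computed by hand      */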
Avoid assembly language. This may seem obvious, because nothing could be less portable than assembly language. Compilers, such as Microsoft Visual C++, that provide inline assemblers make it easy to slip in a little assembler code to speed things up. If you want portable code, however, avoid this temptation. It may not be necessary: modern compilers can often generate code as good as hand-tuned native assembler code. Our own research at Microsoft indicates that performance problems are more often the result of poor algorithms than of poor code generation. Indeed, with RISC machines, hand-tuned native assembler code may actually be worse than machine-generated code, due to the complexity of instruction scheduling and register selection.

Write all routines in C first; then, if you absolutely need to rewrite one in assembler, be sure to leave both implementations in your sources, controlled by conditional compiles, and keep both up to date.

A major goal of American National Standards Institute (ANSI) C/C++ is to provide a portable implementation of the language. Theoretically, code written to strict ANSI C compliance is completely portable to any compiler that implements the standard correctly. Microsoft Visual C++ provides a compiler option (/Za) to enable strict ANSI compatibility checking.

Microsoft Visual C++ also provides some language features that go beyond ANSI C, such as four-character constants and single-line comments. Programs that use the Microsoft C extensions should be portable to all other implementations of Microsoft Visual C++. Thus, you can write programs that use four-character constants, for example, and know that your program is portable to any 16-bit or 32-bit Microsoft Windows platform or to the Macintosh.

Compilers normally align structures based on the target machine architecture; some RISC machines, such as the MIPS R4000, are particularly sensitive to alignment. Alignment faults may generate run-time errors or, instead, may silently and seriously degrade the performance of your application. For portability, therefore, avoid packing structures. Limit packing to hardware interfaces and to compatibility issues such as file formats and on-disk structures.

Using function prototypes is mandatory for fully portable code. All functions should be prototyped, and the prototype should exactly match the actual function declaration.

Following the guidelines above will make your code a lot more portable. However, if you have 16-bit Windows code, your first step is to make it work properly under Win32. This will require additional changes in your sources.

Code written for Win32 can run on any version of Windows, including on the Macintosh using the portability library. Portable code should compile and execute properly on any platform. Of course, if you use APIs that function only under Windows NT, they will not work when your application runs under Windows 3.x. For example, threads work under Windows NT but not under Windows 3.11. Those kinds of functionality differences will have to be accounted for in the design of your application.

Chief among the differences between Win16 and Win32 is linear addressing. That means pointers are now 32 bits wide and the keywords near and far are no longer supported. It also means that code that assumes segmented memory will break under Win32.

In addition to pointers, handles and graphic coordinates are now 32 bits. WINDOWS.H will resolve many of these size differences for you, but some work is still necessary.

The recommended strategy for getting your application running under Win32 is to recompile for 32 bits, noting error messages and warnings. Next, replace complex procedures and assembly language routines with stub procedures. Then, make your main program work properly using the techniques above. Finally, replace each stubbed-out procedure with a portable version.
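One place where the wider types surface immediately is message handling. WM_COMMAND, for example, packs its pieces differently in Win16 and Win32, so code that pulls the control ID, notification code, and window handle apart by hand needs a conditional block like the sketch below (or the message-cracker macros in WINDOWSX.H). The layouts shown follow the usual Win16 and Win32 documentation, but the surrounding window procedure is only a skeleton for illustration.

    #include <windows.h>

    /* WM_COMMAND parameter unpacking that survives the 16-bit to 32-bit move. */
    LRESULT CALLBACK MainWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_COMMAND)
        {
    #ifdef _WIN32
            WORD idCtl   = LOWORD(wParam);     /* control or menu ID     */
            WORD wmEvent = HIWORD(wParam);     /* notification code      */
            HWND hwndCtl = (HWND)lParam;       /* control window handle  */
    #else
            WORD idCtl   = wParam;             /* Win16 packed these differently */
            WORD wmEvent = HIWORD(lParam);
            HWND hwndCtl = (HWND)LOWORD(lParam);
    #endif
            /* ...dispatch on idCtl and wmEvent here... */
            (void)idCtl; (void)wmEvent; (void)hwndCtl;
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }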
After you successfully convert your Windows-based program from 16 bits to 32 bits, you're ready to embark on porting it to the Macintosh. Because significant differences exist between the two platforms, this task can appear daunting. Before you begin to port your application, you need to understand these differences better. The Macintosh is differentiated from Windows in three general areas:

∙ Programming model differences
∙ Processor differences
∙ User interface (UI) differences

These areas of difference are described below. Porting issues that accompany these differences are discussed in the section titled "Porting from Win32 to the Macintosh."

The Windows and Macintosh APIs are completely different. For example:

∙ The event models are different. In Windows, you dispatch messages to WindowProcs and use DefWindowProc to handle messages in which you're not specifically interested. On the Macintosh, you have one big main event loop to handle all possible events.
∙ Windows uses the concept of child windows; the Macintosh has no child windows.
∙ Windows-based applications can draw using either pens or brushes. Macintosh applications can use only pens.
∙ Controls in Windows are built-in window classes. On the Macintosh, controls are unrelated to windows.
∙ Windows allows for 256 binary raster operations; the Macintosh allows for only 16.

Because of these differences, porting a Windows-based application to the Macintosh can be a monumental task without powerful tools.

Windows has always run on Intel x86 processors (until Windows NT), and the Macintosh has run on Motorola 680x0 processors (the PowerPC-based Macintosh is, of course, now available as well). Differences between the processor families include addressing and byte ordering, in addition to the more expected differences such as opcodes, instruction sets, and the name and number of registers.

The Intel 8086 processor, from which subsequent 80x86 processors are descended, used 16-bit addresses, which unfortunately allowed only 65,536 bytes of memory to be addressed. To allow the use of more memory, Intel implemented a segmented memory architecture that addressed one megabyte (2^20 bytes) of memory using an unsigned 16-bit segment register and an unsigned 16-bit offset. This original Intel scheme has been extended to allow much larger amounts of memory to be addressed, but most existing Intel-based programming still relies on separating code and data into 64K segments.

Although all Intel x86 processors since the 80386 have used 32-bit addressing, for compatibility reasons Microsoft Windows 3.x is actually a 16-bit application, and all Microsoft Windows-based applications had to be written as 16-bit applications. That meant, for example, that most pointers and handles were 16 bits wide. With the advent of Microsoft Windows NT, which is a true 32-bit operating system, all native applications are 32-bit applications, which means that pointers and handles are 32 bits wide. Because Windows NT uses linear addressing, programs can share up to 4 gigabytes of memory.

In contrast, the Motorola 68000 and PowerPC processors have always provided the ability to address a "flat" 32-bit memory space. In theory, a flat memory space of this kind simplifies memory addressing. In practice, because 4-byte addresses are too large to use all the time, Macintosh code is generally divided into segments no larger than 32K.

Microsoft Windows and Windows NT run only on so-called "little-endian" machines—processors that place the least significant byte first and the most significant byte last. In contrast, the Motorola 680x0 and PowerPC (a so-called "big-endian" architecture) place the most significant byte first, followed by the next most significant byte, and so on, with the least significant byte last.

Compilers normally handle all details of byte ordering for your application program. Nevertheless, well-written portable code should never depend on the order of bytes.
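One way to honor that rule is to move multibyte values to and from files or network buffers one byte at a time, in an order you define, rather than copying structures or casting pointers. The helpers below fix the external format as little-endian and do the packing arithmetically, so they behave identically on the Intel and Motorola processors described above; the function names are made up for this sketch.

    /* Store and load a 32-bit value in a fixed little-endian external format.
       Shifting and masking works the same on big- and little-endian CPUs. */
    void PutU32(unsigned char* out, unsigned long value)
    {
        out[0] = (unsigned char)( value        & 0xFF);
        out[1] = (unsigned char)((value >> 8)  & 0xFF);
        out[2] = (unsigned char)((value >> 16) & 0xFF);
        out[3] = (unsigned char)((value >> 24) & 0xFF);
    }

    unsigned long GetU32(const unsigned char* in)
    {
        return  (unsigned long)in[0]
             | ((unsigned long)in[1] << 8)
             | ((unsigned long)in[2] << 16)
             | ((unsigned long)in[3] << 24);
    }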
Microsoft Windows and the Macintosh present quite different user interfaces in many key areas, including menus, filenames, and multiple-document interface (MDI) applications.

Only one menu bar exists on the Macintosh, and it is always in the same place, regardless of the number or arrangement of windows on the screen. The "active window" contains the menu, which changes dynamically as necessary when different windows are made active. Windows, on the other hand, gives each top-level window its own menu. In addition, under MDI, each child window can also have its own menu. MDI is discussed in greater detail below.

Macintosh applications generally have an "Apple menu" (the leftmost menu) that contains all the installed Desk Accessories and usually an About entry for the application. Under System 7, the extreme right side of the Macintosh menu bar contains an icon for Apple's Balloon Help and the Application menu for switching between applications.

Windows-based applications always have a System menu at the upper-left corner of their top-level window. This menu contains system-level functions for sizing, moving, and closing the window, as well as an item that calls the Task Manager for switching applications.

Generally, Windows-based applications provide keyboard equivalents in their menus. These are the underlined letters in each menu entry that the user can select with the keyboard in lieu of the mouse. This, however, is convention rather than requirement. Although some Macintosh applications have such equivalents, most do not.

Filenames and pathnames represent one of the most fundamental differences between Windows and the Macintosh, as well as perhaps the most difficult one to deal with. Many programmers report that dealing with filenames is the area of porting in which the most time and energy is spent. Your Windows-based application probably already handles (and expects) filenames such as "C:\ACCTG\DATA\SEPT93.DAT." Applications for the MS-DOS and Windows operating systems are bound by the traditional 8.3 filename format. Macintosh applications, on the other hand, can handle filenames such as "September, 1993 Accounting Data."

MDI windows allow multiple child windows within the borders of a top-level window (the "MDI frame"). Many Windows-based applications, such as the Microsoft Word word processor for Windows, are MDI applications. Characteristic of MDI applications are clipped child windows that can be minimized to an icon within the MDI frame. Each MDI child window can also have its own menu.

The Macintosh does not support MDI windows. An application can have multiple windows open; those windows, however, cannot be made into icons, and they share a common menu. Depending on the application, this difference may necessitate significant redesign for a Macintosh port.

Finally, you can keep doing what you know how to do best, writing to the Windows API, and still allow for versions of your application that run on other platforms. Visual C++ now gives you special versions that allow you to do this. Keeping your code portable, thinking about portability all the time, and using the right tools will help you make the multiple-platform jump as effortless as possible.

Chinese translation: Multiple-platform development is a hot issue today.