Foreign-Language Literature and Translations for Computer Science Majors

Computer Science and Technology: "Image Segmentation by Using Threshold Techniques" and Other Thesis Foreign-Literature Translations with Originals

Graduation Project (Thesis) Foreign Literature Translation. Chinese titles of the source material: 1. Image Segmentation by Using Threshold Techniques; 2. A Review of Image Segmentation with the Maximum Between-Class Variance (Otsu) Algorithm. Major: Computer Science and Technology. Translation date: 2017-02-14. Thesis title: Development of Automatic Image Segmentation Software Based on Genetic Algorithms. Translated paper (1): Image Segmentation by Using Threshold Techniques. Translated paper (2): A Review on Otsu Image Segmentation Algorithm.

Image Segmentation by Using Threshold Techniques

Abstract: This paper studies image segmentation using five thresholding techniques, namely the mean method, the P-tile algorithm, the histogram-dependent technique (HDT), the edge maximization technique (EMT), and a visual technique, and compares them with one another in order to select the best thresholding technique for segmenting an image.

These techniques were applied to three satellite images, chosen to provide the basic guesses for the thresholds used to segment the images.

Keywords: image segmentation, threshold, automatic threshold

1 Introduction

Segmentation algorithms are based on one of two basic properties of intensity values: discontinuity and similarity.

The first category partitions an image based on abrupt changes in intensity, such as the edges in the image.

The second category partitions an image into regions that are similar according to a set of predefined criteria.

Histogram thresholding methods fall into this category.

This paper studies the second category (thresholding techniques), and in that context gives a brief introduction to the related research.

Threshold segmentation techniques can be divided into three different classes. First, local techniques are based on the local properties of pixels and their neighborhoods.

Second, global techniques segment an image using global information about the image (for example, the image histogram or global texture properties).

Third, split, merge, and region-growing techniques use the notions of homogeneity and geometric proximity together in order to obtain good segmentation results.

Finally, in the field of image analysis, image segmentation is commonly used to partition pixels into regions in order to determine the composition of an image [1][2].

They proposed an adaptive thresholding method based on a two-dimensional (2-D) histogram and multiresolution analysis (MRA), which reduces the complexity of computing the 2-D histogram while improving the search accuracy of multiresolution thresholding.

Such a method combines the flexibility and efficiency with which multiresolution thresholding searches for thresholds over gray levels and spatial correlation with the excellent segmentation results achieved by 2-D histogram thresholding.
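As a hedged illustration of global histogram thresholding of the kind surveyed above, the following minimal Java sketch computes a threshold with the maximum between-class-variance (Otsu) criterion, the algorithm reviewed in the second translated paper, and then binarizes the image. The 2-D int-array image representation and 8-bit gray range are assumptions made for the example, not part of the original papers.

// Minimal sketch: Otsu's global threshold on an 8-bit grayscale image
// stored as a 2-D int array (values 0..255). Illustrative only.
public class OtsuThreshold {

    // Returns the gray level that maximizes the between-class variance.
    public static int otsuThreshold(int[][] image) {
        int[] hist = new int[256];
        int total = 0;
        for (int[] row : image) {
            for (int v : row) {
                hist[v]++;
                total++;
            }
        }

        long sumAll = 0;
        for (int t = 0; t < 256; t++) sumAll += (long) t * hist[t];

        long sumBackground = 0;
        int weightBackground = 0;
        double bestVariance = -1.0;
        int bestThreshold = 0;

        for (int t = 0; t < 256; t++) {
            weightBackground += hist[t];
            if (weightBackground == 0) continue;
            int weightForeground = total - weightBackground;
            if (weightForeground == 0) break;

            sumBackground += (long) t * hist[t];
            double meanBackground = (double) sumBackground / weightBackground;
            double meanForeground = (double) (sumAll - sumBackground) / weightForeground;

            // Between-class variance for this candidate threshold.
            double variance = (double) weightBackground * weightForeground
                    * (meanBackground - meanForeground) * (meanBackground - meanForeground);
            if (variance > bestVariance) {
                bestVariance = variance;
                bestThreshold = t;
            }
        }
        return bestThreshold;
    }

    // Binarize: pixels above the threshold become 255, the rest 0.
    public static int[][] binarize(int[][] image, int threshold) {
        int[][] out = new int[image.length][];
        for (int y = 0; y < image.length; y++) {
            out[y] = new int[image[y].length];
            for (int x = 0; x < image[y].length; x++) {
                out[y][x] = image[y][x] > threshold ? 255 : 0;
            }
        }
        return out;
    }
}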

Foreign Literature Translation for Computer Science Majors 6

Foreign literature translation (to be rendered into about 2,000 Chinese characters):

As research laboratories become more automated, new problems are arising for laboratory managers. Rarely does a laboratory purchase all of its automation from a single equipment vendor. As a result, managers are forced to spend money training their users on numerous different software packages while purchasing support contracts for each. This suggests a problem of scalability. In the ideal world, managers could use the same software package to control systems of any size, from single instruments such as pipettors or readers to large robotic systems with up to hundreds of instruments. If such a software package existed, managers would only have to train users on one platform and would be able to source software support from a single vendor.

If automation software is written to be scalable, it must also be flexible. Having a platform that can control systems of any size is far less valuable if the end user cannot control every device type they need to use. Similarly, if the software cannot connect to the customer's Laboratory Information Management System (LIMS) database, it is of limited usefulness. The ideal automation software platform must therefore have an open architecture to provide such connectivity.

Two strong reasons to automate a laboratory are increased throughput and improved robustness. It does not make sense to purchase high-speed automation if the controlling software does not maximize throughput of the system. The ideal automation software, therefore, would make use of redundant devices in the system to increase throughput. For example, let us assume that a plate-reading step is the slowest task in a given method. It would make sense that if the system operator connected another identical reader into the system, the controller software should be able to use both readers, cutting the total throughput time of the reading step in half. While resource pooling provides a clear throughput advantage, it can also be used to make the system more robust. For example, if one of the two readers were to experience some sort of error, the controlling software should be smart enough to route all samples to the working reader without taking the entire system offline. Now that one embodiment of an ideal automation control platform has been described, let us see how the use of C++ helps make achieving this ideal possible.

DISCUSSION

C++: An Object-Oriented Language

Developed in 1983 by Bjarne Stroustrup of Bell Labs, C++ helped propel the concept of object-oriented programming into the mainstream. The term "object-oriented programming language" is a familiar phrase that has been in use for decades. But what does it mean? And why is it relevant for automation software? Essentially, a language that is object-oriented provides three important programming mechanisms: encapsulation, inheritance, and polymorphism.

Encapsulation is the ability of an object to maintain its own methods (or functions) and properties (or variables). For example, an "engine" object might contain methods for starting, stopping, or accelerating, along with properties for "RPM" and "Oil pressure". Further, encapsulation allows an object to hide private data from any entity outside the object. The programmer can control access to the object's data by marking methods or properties as public, protected, or private.
This access control helps abstract away the inner workings of a class while making it obvious to a caller which methods and properties are intended to be used externally.

Inheritance allows one object to be a superset of another object. For example, one can create an object called Automobile that inherits from Vehicle. The Automobile object has access to all non-private methods and properties of Vehicle, plus any additional methods or properties that make it uniquely an automobile.

Polymorphism is an extremely powerful mechanism that allows various inherited objects to exhibit different behaviors when the same named method is invoked upon them. For example, let us say our Vehicle object contains a method called CountWheels. When we invoke this method on our Automobile, we learn that the Automobile has four wheels. However, when we call this method on an object called Bus, we find that the Bus has 10 wheels.

Together, encapsulation, inheritance, and polymorphism help promote code reuse, which is essential to meeting our requirement that the software package be flexible. A vendor can build up a comprehensive library of objects (a serial communications class, a state machine class, a device driver class, etc.) that can be reused across many different code modules. A typical control software vendor might have 100 device drivers. It would be a nightmare if for each of these drivers there were no building blocks for graphical user interface (GUI) or communications to build on. By building and maintaining a library of foundation objects, the vendor will save countless hours of programming and debugging time.

All three tenets of object-oriented programming are leveraged by the use of interfaces. An interface is essentially a specification that is used to facilitate communication between software components, possibly written by different vendors. An interface says, "if your code follows this set of rules then my software component will be able to communicate with it." In the next section we will see how interfaces make writing device drivers a much simpler task.

C++ and Device Drivers

In a flexible automation platform, one optimal use for interfaces is in device drivers. We would like our open-architecture software to provide a generic way for end users to write their own device drivers without having to divulge the secrets of our source code to them. To do this, we define a simplified C++ interface for a generic device, as shown here:

class IDevice {
public:
    virtual string GetName() = 0;   // Returns the name of the device
    virtual void Initialize() = 0;  // Called to initialize the device
    virtual void Run() = 0;         // Called to run the device
};

In the example above, a C++ class (or object) called IDevice has been defined. The prefix I in IDevice stands for "interface". This class defines three public virtual methods: GetName, Initialize, and Run. The virtual keyword is what enables polymorphism, allowing the executing program to run the methods of the inheriting class. When a virtual method declaration is suffixed with = 0, there is no base class implementation. Such a method is referred to as "pure virtual". A class like IDevice that contains only pure virtual functions is known as an "abstract class", or an "interface".
The IDevice definition, along with appropriate documentation, can be published to the user community, allowing developers to generate their own device drivers that implement the IDevice interface. Suppose a thermal plate sealer manufacturer wants to write a driver that can be controlled by our software package. They would use inheritance to implement our IDevice interface and then override the methods to produce the desired behavior:

class CSealer : public IDevice {
public:
    virtual string GetName() { return "Sealer"; }
    virtual void Initialize() { InitializeSealer(); }
    virtual void Run() { RunSealCycle(); }
private:
    void InitializeSealer();
    void RunSealCycle();
};

Here the user has created a new class called CSealer that inherits from the IDevice interface. The public methods, those that are accessible from outside of the class, are the interface methods defined in IDevice. One, GetName, simply returns the name of the device type that this driver controls. The other methods, Initialize() and Run(), call private methods that actually perform the work. Notice how the private keyword is used to prevent external objects from calling InitializeSealer() and RunSealCycle() directly.

When the controlling software executes, polymorphism will be used at runtime to call the GetName, Initialize, and Run methods in the CSealer object, allowing the device defined therein to be controlled.

DoSomeWork()
{
    // Get a reference to the device driver we want to use
    IDevice& device = GetDeviceDriver();
    // Tell the world what we're about to do.
    cout << "Initializing " << device.GetName();
    // Initialize the device
    device.Initialize();
    // Tell the world what we're about to do.
    cout << "Running a cycle on " << device.GetName();
    // Away we go!
    device.Run();
}

The code snippet above shows how the IDevice interface can be used to generically control a device. If GetDeviceDriver returns a reference to a CSealer object, then DoSomeWork will control sealers. If GetDeviceDriver returns a reference to a pipettor, then DoSomeWork will control pipettors. Although this is a simplified example, it is straightforward to imagine how the use of interfaces and polymorphism can lead to great economies of scale in controller software development. Additional interfaces can be generated along the same lines as IDevice. For example, an interface perhaps called ILMS could be used to facilitate communication to and from a LIMS.

The astute reader will notice that the claim that any third party can develop drivers simply by implementing the IDevice interface is slightly flawed. The problem is that any driver that the user writes, like CSealer, would have to be linked directly to the controlling software's executable to be used. This problem is solved by a number of existing technologies, including Microsoft's COM or .NET, or by CORBA. All of these technologies allow end users to implement abstract interfaces in standalone components that can be linked at runtime rather than at design time. The details are beyond the scope of this article.

Chinese translation (excerpt): As research laboratories become more automated, new problems are arising for laboratory managers.
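
The resource pooling and failover behavior described in the article's introduction (using two identical plate readers to double throughput and routing samples around a failed reader) is language neutral. Purely as an illustrative sketch, written in Java rather than the article's C++ and with hypothetical type and method names, a controller-side pool might look roughly like this:

import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical reader pool: hand out whichever reader is currently idle and
// healthy, so adding a second reader increases throughput and a failed
// reader is skipped instead of taking the whole system offline.
public class ReaderPool {

    public interface PlateReader {
        String getName();
        boolean isHealthy();
        void read(String plateId);
    }

    private final Queue<PlateReader> idleReaders = new ArrayDeque<>();

    public synchronized void register(PlateReader reader) {
        idleReaders.add(reader);
        notifyAll();
    }

    // Waits until a healthy reader is free, runs the plate, then returns
    // the reader to the pool.
    public void readPlate(String plateId) throws InterruptedException {
        PlateReader reader = acquireHealthyReader();
        try {
            reader.read(plateId);
        } finally {
            release(reader);
        }
    }

    private synchronized PlateReader acquireHealthyReader() throws InterruptedException {
        while (true) {
            PlateReader candidate = idleReaders.poll();
            if (candidate != null && candidate.isHealthy()) {
                return candidate;
            }
            // Unhealthy readers are dropped from the pool; if none are idle,
            // wait until one is released or registered.
            if (candidate == null) {
                wait();
            }
        }
    }

    private synchronized void release(PlateReader reader) {
        idleReaders.add(reader);
        notifyAll();
    }
}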

[Computer Science Literature Translation] Engineering Workstations

Appendix 1: Translated Foreign Material - Engineering Workstations

In terms of raw performance, engineering workstations fall roughly between PCs and large minicomputers, although as both PCs and workstations keep growing more capable, the distinctions among the three are becoming ever harder to draw.

Nevertheless, engineering workstations do have several advantages over both PCs and traditional time-sharing (that is, minicomputer) technology.

Compared with PCs, workstations usually have more powerful CPUs and can support more main memory, although PCs do overlap low-end workstations in capability.

Unlike PCs, workstations provide a multi-user, multitasking operating system as a standard feature.

OS/2 and UNIX are available for PCs, especially those based on the Intel 80386.

However, the operating system most widely used on PCs is still MS-DOS.

Multitasking systems have several advantages over single-tasking systems.

First, the user can run several programs at the same time in a way that is transparent to the applications.

Desktop accessories and RAM-resident programs on a PC can, admittedly, give the user a kind of primitive multitasking, sufficient to run a background print spooler and programs of that sort.

However, they may not be transparent to applications, and they cannot provide important features such as interprocess communication and support for multiple concurrent users.

For today's engineering applications, perhaps even more important is the PC's lack of large physical memory and of virtual memory.

Virtual memory matters for large applications whose data sets are so large that the programs simply cannot run entirely in physical memory.

Without virtual memory, even simple tasks such as editing a large file can become painfully slow or outright impossible.

In addition, many applications become more complex because they must buffer data or use overlays to page different parts of the application into and out of physical memory.

Finally, the user interface of most workstations is more advanced than that of most PCs; one notable exception is the user interface of the Apple Macintosh.

A computer's user interface, and the programmable interface that connects to it, determine how sophisticated an application's interface can be.

Powerful development tools let programmers create intuitive user interfaces.

Although workstations are more powerful than PCs, the same usually cannot be said when they are compared with modern minicomputers such as Digital Equipment Corporation's (DEC) VAX-8000 series.

Computer Science Java Foreign Literature Translation (English Original)

English original:

Title: Business Applications of Java. Author: Erbschloe, Michael. Business Applications of Java -- Research Starters Business, 2008. Database: Research Starters - Business.

Business Applications of Java

This article examines the growing use of Java technology in business applications. The history of Java is briefly reviewed along with the impact of open standards on the growth of the World Wide Web. Key components and concepts of the Java programming language are explained, including the Java Virtual Machine. Examples of how Java is being used by e-commerce leaders are provided, along with an explanation of how Java is used to develop data warehousing, data mining, and industrial automation applications. The concept of metadata modeling and the use of Extendable Markup Language (XML) are also explained.

Keywords: Application Programming Interfaces (API's); Enterprise JavaBeans (EJB); Extendable Markup Language (XML); HyperText Markup Language (HTML); HyperText Transfer Protocol (HTTP); Java Authentication and Authorization Service (JAAS); Java Cryptography Architecture (JCA); Java Cryptography Extension (JCE); Java Programming Language; Java Virtual Machine (JVM); Java2 Platform, Enterprise Edition (J2EE); Metadata

Overview

Open standards have driven the e-business revolution. Networking protocol standards, such as Transmission Control Protocol/Internet Protocol (TCP/IP), HyperText Transfer Protocol (HTTP), and the HyperText Markup Language (HTML) Web standards, have enabled universal communication via the Internet and the World Wide Web. As e-business continues to develop, various computing technologies help to drive its evolution.

The Java programming language and platform have emerged as major technologies for performing e-business functions. Java programming standards have enabled portability of applications and the reuse of application components across computing platforms. Sun Microsystems' Java Community Process continues to be a strong base for the growth of the Java infrastructure and language standards. This growth of open standards creates new opportunities for designers and developers of applications and services (Smith, 2001).

Creation of Java Technology

Java technology was created as a computer programming tool in a small, secret effort called "the Green Project" at Sun Microsystems in 1991. The Green Team, fully staffed at 13 people and led by James Gosling, locked themselves away in an anonymous office on Sand Hill Road in Menlo Park, cut off from all regular communications with Sun, and worked around the clock for 18 months. Their initial conclusion was that at least one significant trend would be the convergence of digitally controlled consumer devices and computers. A device-independent programming language code-named "Oak" was the result.

To demonstrate how this new language could power the future of digital devices, the Green Team developed an interactive, handheld home-entertainment device controller targeted at the digital cable television industry. But the idea was too far ahead of its time, and the digital cable television industry wasn't ready for the leap forward that Java technology offered them.

As it turns out, the Internet was ready for Java technology, and just in time for its initial public introduction in 1995, the team was able to announce that the Netscape Navigator Internet browser would incorporate Java technology ("Learn about Java," 2007).

Applications of Java

Java uses many familiar programming concepts and constructs and allows portability by providing a common interface through an external Java Virtual Machine (JVM). A virtual machine is a self-contained operating environment, created by a software layer that behaves as if it were a separate computer. Benefits of creating virtual machines include better exploitation of powerful computing resources and isolation of applications to prevent cross-corruption and improve security (Matlis, 2006).

The JVM allows computing devices with limited processors or memory to handle more advanced applications by calling up software instructions inside the JVM to perform most of the work. This also reduces the size and complexity of Java applications because many of the core functions and processing instructions were built into the JVM. As a result, software developers no longer need to re-create the same application for every operating system. Java also provides security by instructing the application to interact with the virtual machine, which served as a barrier between applications and the core system, effectively protecting systems from malicious code.

Among other things, Java is tailor-made for the growing Internet because it makes it easy to develop new, dynamic applications that could make the most of the Internet's power and capabilities. Java is now an open standard, meaning that no single entity controls its development and the tools for writing programs in the language are available to everyone. The power of open standards like Java is the ability to break down barriers and speed up progress.

Today, you can find Java technology in networks and devices that range from the Internet and scientific supercomputers to laptops and cell phones, from Wall Street market simulators to home game players and credit cards. There are over 3 million Java developers and now there are several versions of the code. Most large corporations have in-house Java developers. In addition, the majority of key software vendors use Java in their commercial applications (Lazaridis, 2003).

Applications

Java on the World Wide Web

Java has found a place on some of the most popular websites in the world and the uses of Java continue to grow. Java applications not only provide unique user interfaces, they also help to power the backend of websites. Two e-commerce giants that everybody is probably familiar with (eBay and Amazon) have been Java pioneers on the World Wide Web.

eBay

Founded in 1995, eBay enables e-commerce on a local, national and international basis with an array of Web sites (including the eBay marketplaces, PayPal, and Skype) that bring together millions of buyers and sellers every day. You can find it on eBay, even if you didn't know it existed. On a typical day, more than 100 million items are listed on eBay in tens of thousands of categories. Recent listings have included a tunnel boring machine from the Chunnel project, a cup of water that once belonged to Elvis, and the Volkswagen that Pope Benedict XVI owned before he moved up to the Popemobile.

More than one hundred million items are available at any given time, from the massive to the miniature, the magical to the mundane, on eBay, the world's largest online marketplace.

eBay uses Java almost everywhere. To address some security issues, eBay chose Sun Microsystems' Java System Identity Manager as the platform for revamping its identity management system. The task at hand was to provide identity management for more than 12,000 eBay employees and contractors. Now more than a thousand eBay software developers work daily with Java applications. Java's inherent portability allows eBay to move to new hardware to take advantage of new technology, packaging, or pricing, without having to rewrite Java code ("eBay drives explosive growth," 2007).

Amazon (a large seller of books, CDs, and other products) has created a Web Service application that enables users to browse their product catalog and place orders. One example uses a Java application that searches the Amazon catalog for books whose subject matches a user-selected topic. The application displays ten books that match the chosen topic, and shows the author name, book title, list price, Amazon discount price, and the cover icon. The user may optionally view one review per displayed title and make a buying decision (Stearns & Garishakurthi, 2003).

Java in Data Warehousing & Mining

Although many companies currently benefit from data warehousing to support corporate decision making, new business intelligence approaches continue to emerge that can be powered by Java technology. Applications such as data warehousing, data mining, Enterprise Information Portals (EIP's), and Knowledge Management Systems (which can all comprise a business intelligence application) are able to provide insight into customer retention, purchasing patterns, and even future buying behavior.

These applications can not only tell what has happened but why and what may happen given certain business conditions, allowing "what if" scenarios to be explored. As a result of this information growth, people at all levels inside the enterprise, as well as suppliers, customers, and others in the value chain, are clamoring for subsets of the vast stores of information such as billing, shipping, and inventory information, to help them make business decisions. While collecting and storing vast amounts of data is one thing, utilizing and deploying that data throughout the organization is another.

The technical challenges inherent in integrating disparate data formats, platforms, and applications are significant. However, emerging standards such as the Application Programming Interfaces (API's) that comprise the Java platform, as well as Extendable Markup Language (XML) technologies, can facilitate the interchange of data and the development of next generation data warehousing and business intelligence applications. While Java technology has been used extensively for client side access and for presentation layer challenges, it is rapidly emerging as a significant tool for developing scaleable server side programs. The Java2 Platform, Enterprise Edition (J2EE) provides the object, transaction, and security support for building such systems.

Metadata Issues

One of the key issues that business intelligence developers must solve is that of incompatible metadata formats. Metadata can be defined as information about data or simply "data about data."

In practice, metadata is what most tools, databases, applications, and other information processes use to define, relate, and manipulate data objects within their own environments. It defines the structure and meaning of data objects managed by an application so that the application knows how to process requests or jobs involving those data objects. Developers can use this schema to create views for users. Also, users can browse the schema to better understand the structure and function of the database tables before launching a query.

To address the metadata issue, a group of companies (including Unisys, Oracle, IBM, SAS Institute, Hyperion, Inline Software and Sun) have joined to develop the Java Metadata Interface (JMI) API. The JMI API permits the access and manipulation of metadata in Java with standard metadata services. JMI is based on the Meta Object Facility (MOF) specification from the Object Management Group (OMG). The MOF provides a model and a set of interfaces for the creation, storage, access, and interchange of metadata and metamodels (higher-level abstractions of metadata). Metamodel and metadata interchange is done via XML and uses the XML Metadata Interchange (XMI) specification, also from the OMG. JMI leverages Java technology to create an end-to-end data warehousing and business intelligence solutions framework.

Enterprise JavaBeans

A key tool provided by J2EE is Enterprise JavaBeans (EJB), an architecture for the development of component-based distributed business applications. Applications written using the EJB architecture are scalable, transactional, secure, and multi-user aware. These applications may be written once and then deployed on any server platform that supports J2EE. The EJB architecture makes it easy for developers to write components, since they do not need to understand or deal with complex, system-level details such as thread management, resource pooling, and transaction and security management. This allows for role-based development where component assemblers, platform providers and application assemblers can focus on their area of responsibility, further simplifying application development.

EJB's in the Travel Industry

A case study from the travel industry helps to illustrate how such applications could function. A travel company amasses a great deal of information about its operations in various applications distributed throughout multiple departments. Flight, hotel, and automobile reservation information is located in a database being accessed by travel agents worldwide. Another application contains information that must be updated with credit and billing history from a financial services company. Data is periodically extracted from the travel reservation system databases to spreadsheets for use in future sales and marketing analysis.

Utilizing J2EE, the company could consolidate application development within an EJB container, which can run on a variety of hardware and software platforms, allowing existing databases and applications to coexist with newly developed ones. EJBs can be developed to model various data sets important to the travel reservation business, including information about customer, hotel, car rental agency, and other attributes.

Data Storage & Access

Data stored in existing applications can be accessed with specialized connectors.
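
As a minimal, hedged sketch of the component style described in the Enterprise JavaBeans section above, the bean below models one piece of the travel-reservation example. The class name, method, and return value are illustrative assumptions, and EJB 3 style annotations are used here rather than the deployment descriptors of the J2EE era the article discusses:

import javax.ejb.Stateless;

// Hypothetical stateless session bean for the travel-reservation example.
// The container supplies threading, pooling, transactions, and security,
// so the component code stays focused on business logic.
@Stateless
public class ReservationBean {

    public String reserveFlight(String customerId, String flightNumber) {
        // In a real application this would update the reservation database,
        // typically through JPA or a connector to an existing system.
        return "Reservation confirmed for customer " + customerId
                + " on flight " + flightNumber;
    }
}
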
Integration and interoperability of these data sources is further enabled by the metadata repository that contains metamodels of the data contained in the sources, which then can be accessed and interchanged uniformly via the JMI API. These metamodels capture the essential structure and semantics of business components, allowing them to be accessed and queried via the JMI API or to be interchanged via XML. Through all of these processes, the J2EE infrastructure ensures the security and integrity of the data through transaction management and propagation and the underlying security architecture.

To consolidate historical information for analysis of sales and marketing trends, a data warehouse is often the best solution. In this example, data can be extracted from the operational systems with a variety of Extract, Transform and Load tools (ETL). The metamodels allow EJBs designed for filtering, transformation, and consolidation of data to operate uniformly on data from diverse data sources, as the bean is able to query the metamodel to identify and extract the pertinent fields. Queries and reports can be run against the data warehouse that contains information from numerous sources in a consistent, enterprise-wide fashion through the use of the JMI API (Mosher & Oh, 2007).

Java in Industrial Settings

Many people know Java only as a tool on the World Wide Web that enables sites to perform some of their fancier functions such as interactivity and animation. However, the actual uses for Java are much more widespread. Since Java is an object-oriented language like C++, the time needed for application development is minimal. Java also encourages good software engineering practices with clear separation of interfaces and implementations as well as easy exception handling.

In addition, Java's automatic memory management and lack of pointers remove some leading causes of programming errors. Most importantly, application developers do not need to create different versions of the software for different platforms. The advantages available through Java have even found their way into hardware. The emerging new Java devices are streamlined systems that exploit network servers for much of their processing power, storage, content, and administration.

Benefits of Java

The benefits of Java translate across many industries, and some are specific to the control and automation environment. For example, many plant-floor applications use relatively simple equipment; upgrading to PCs would be expensive and undesirable. Java's ability to run on any platform enables the organization to make use of the existing equipment while enhancing the application.

Integration

With few exceptions, applications running on the factory floor were never intended to exchange information with systems in the executive office, but managers have recently discovered the need for that type of information. Before Java, that often meant bringing together data from systems written on different platforms in different languages at different times. Integration was usually done on a piecemeal basis, resulting in a system that, once it worked, was unique to the two applications it was tying together. Additional integration required developing a brand new system from scratch, raising the cost of integration.

Java makes system integration relatively easy. Foxboro Controls Inc., for example, used Java to make its dynamic-performance-monitor software package Internet-ready. This software provides senior executives with strategic information about a plant's operation. The dynamic performance monitor takes data from instruments throughout the plant and performs various mathematical and statistical calculations on them, resulting in information (usually financial) that a manager can more readily absorb and use.

Scalability

Another benefit of Java in the industrial environment is its scalability. In a plant, embedded applications such as automated data collection and machine diagnostics provide critical data regarding production-line readiness or operation efficiency. These data form a critical ingredient for applications that examine the health of a production line or run. Users of these devices can take advantage of the benefits of Java without changing or upgrading hardware. For example, operations and maintenance personnel could carry a handheld, wireless, embedded-Java device anywhere in the plant to monitor production status or problems.

Even when internal compatibility is not an issue, companies often face difficulties when suppliers with whom they share information have incompatible systems. This becomes more of a problem as supply-chain management takes on a more critical role, which requires manufacturers to interact more with offshore suppliers and clients. The greatest efficiency comes when all systems can communicate with each other and share information seamlessly. Since Java is so ubiquitous, it often solves these problems (Paula, 1997).

Dynamic Web Page Development

Java has been used by both large and small organizations for a wide variety of applications beyond consumer oriented websites. Sandia, a multiprogram laboratory of the U.S. Department of Energy's National Nuclear Security Administration, has developed a unique Java application. The lab was tasked with developing an enterprise-wide inventory tracking and equipment maintenance system that provides dynamic Web pages. The developers selected Java Studio Enterprise 7 for the project because of its Application Framework technology and Web Graphical User Interface (GUI) components, which allow the system to be indexed by an expandable catalog. The flexibility, scalability, and portability of Java helped to reduce development time and costs (Garcia, 2004).

Issue

Java Security for E-Business Applications

To support the expansion of their computing boundaries, businesses have deployed Web application servers (WAS). A WAS differs from a traditional Web server because it provides a more flexible foundation for dynamic transactions and objects, partly through the exploitation of Java technology. Traditional Web servers remain constrained to servicing standard HTTP requests, returning the contents of static HTML pages and images or the output from executed Common Gateway Interface (CGI) scripts.

An administrator can configure a WAS with policies based on security specifications for Java servlets and manage authentication and authorization with Java Authentication and Authorization Service (JAAS) modules. An authentication and authorization service can be written in Java code or interface to an existing authentication or authorization infrastructure. For a cryptography-based security infrastructure, the security server may exploit the Java Cryptography Architecture (JCA) and Java Cryptography Extension (JCE). To present the user with a usable interaction with the WAS environment, the Web server can readily employ a form of "single sign-on" to avoid redundant authentication requests.
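
A minimal, hedged sketch of the JAAS login flow that such configurations rely on follows; the login configuration entry name ("WebAppLogin"), the callback handler, and the demo credentials are illustrative assumptions, not details from the article:

import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasLoginSketch {
    public static void main(String[] args) {
        try {
            // Supplies the user ID and password when the configured login
            // modules ask for them.
            CallbackHandler handler = callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName("demoUser");
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword("demoPass".toCharArray());
                    }
                }
            };
            // "WebAppLogin" is a hypothetical entry in the JAAS login
            // configuration; swapping the modules behind it replaces the
            // authenticator without recompiling the application.
            LoginContext context = new LoginContext("WebAppLogin", handler);
            context.login();                        // runs the configured login modules
            Subject subject = context.getSubject(); // the authenticated identity
            System.out.println("Authenticated principals: " + subject.getPrincipals());
            context.logout();
        } catch (LoginException e) {
            System.err.println("Authentication failed: " + e.getMessage());
        }
    }
}
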
A single sign-on preserves user authentication across multiple HTTP requests so that the user is not prompted many times for authentication data (i.e., user ID and password).

Based on the security policies, JAAS can be employed to handle the authentication process with the identity of the Java client. After successful authentication, the WAS security collaborator consults with the security server. The WAS environment authentication requirements can be fairly complex. In a given deployment environment, all applications or solutions may not originate from the same vendor. In addition, these applications may be running on different operating systems. Although Java is often the language of choice for portability between platforms, it needs to marry its security features with those of the containing environment.

Authentication & Authorization

Authentication and authorization are key elements in any secure information handling system. Since the inception of Java technology, much of the authentication and authorization issues have been with respect to downloadable code running in Web browsers. In many ways, this had been the correct set of issues to address, since the client's system needs to be protected from mobile code obtained from arbitrary sites on the Internet. As Java technology moved from a client-centric Web technology to a server-side scripting and integration technology, it required additional authentication and authorization technologies.

The kind of proof required for authentication may depend on the security requirements of a particular computing resource or specific enterprise security policies. To provide such flexibility, the JAAS authentication framework is based on the concept of configurable authenticators. This architecture allows system administrators to configure, or plug in, the appropriate authenticators to meet the security requirements of the deployed application. The JAAS architecture also allows applications to remain independent from underlying authentication mechanisms. So, as new authenticators become available or as current authentication services are updated, system administrators can easily replace authenticators without having to modify or recompile existing applications.

At the end of a successful authentication, a request is associated with a user in the WAS user registry. After a successful authentication, the WAS consults security policies to determine if the user has the required permissions to complete the requested action on the servlet. This policy can be enforced using the WAS configuration (declarative security) or by the servlet itself (programmatic security), or a combination of both.

The WAS environment pulls together many different technologies to service the enterprise. Because of the heterogeneous nature of the client and server entities, Java technology is a good choice for both administrators and developers. However, to service the diverse security needs of these entities and their tasks, many Java security technologies must be used, not only at a primary level between client and server entities, but also at a secondary level, from served objects. By using a synergistic mix of the various Java security technologies, administrators and developers can make not only their Web application servers secure, but their WAS environments secure as well (Koved, 2001).

Conclusion

Open standards have driven the e-business revolution. As e-business continues to develop, various computing technologies help to drive its evolution. The Java programming language and platform have emerged as major technologies for performing e-business functions. Java programming standards have enabled portability of applications and the reuse of application components. Java uses many familiar concepts and constructs and allows portability by providing a common interface through an external Java Virtual Machine (JVM). Today, you can find Java technology in networks and devices that range from the Internet and scientific supercomputers to laptops and cell phones, from Wall Street market simulators to home game players and credit cards.

Java has found a place on some of the most popular websites in the world. Java applications not only provide unique user interfaces, they also help to power the backend of websites. While Java technology has been used extensively for client side access and in the presentation layer, it is also emerging as a significant tool for developing scaleable server side programs.

Since Java is an object-oriented language like C++, the time needed for application development is minimal. Java also encourages good software engineering practices with clear separation of interfaces and implementations as well as easy exception handling. Java's automatic memory management and lack of pointers remove some leading causes of programming errors. The advantages available through Java have also found their way into hardware. The emerging new Java devices are streamlined systems that exploit network servers for much of their processing power, storage, content, and administration.

Chinese translation (excerpt). Title: Business Applications of Java.

Computer Science: Overview of Group Policy - Graduation Thesis Foreign Literature Translation with Original

Graduation Project (Thesis) Foreign Literature Translation. Chinese title: Overview of Group Policy. English title: Overview of Group Policy. Major: Computer Science. Translation date: 2017-02-14. Authors: Darren Mar-Elia, Derek Melber, William R. Stanek. Publisher: Microsoft Press. Publication date: 2005-05-25. Section: Chapter 1, pages 3-12.

Overview of Group Policy

In this chapter we introduce Group Policy.

You will learn what Group Policy does, how it is used in domain and workgroup settings, and what infrastructure is required to implement it.

If you are building a network environment around the Active Directory directory service, you need Group Policy.

It is as simple as that.

There is no doubt about it, no question at all.

The real question you face is how to make the most of what Group Policy offers, given your organization's structure and needs.

Why? Because Group Policy means your life as an administrator becomes easier.

Microsoft uses the term Group Policy to describe the technology that allows you to define policy settings together and apply them to discrete collections of users and computers.

In practice, Group Policy is a set of policy settings intended to simplify the management of common, repetitive, and unique tasks that would be difficult to carry out by hand but can be automated (for example, deploying new software or enforcing which programs may be installed on a computer).

Related information
■ For information about DNS architecture, see Chapter 26 of Microsoft Windows Server 2003 Inside Out (Microsoft Press, 2004).
■ For information about Active Directory architecture, see Chapter 32 of Microsoft Windows Server 2003 Inside Out.
■ For information about implementing Group Policy, see Chapter 4 of this book.

Understanding Group Policy

Group Policy provides a convenient and efficient way to manage computer and user settings.

What Group Policy Does

With Group Policy, you can manage the settings of thousands of users or computers in the same way you would manage the settings of a single user or computer, without ever leaving your desk.

Computer Science Chinese-English Translation (Foreign Literature Translation)

English reference literature and translation

Linux - The Operating System of the Network Era

For many people, the fact that Linux served as the main operating system of the huge workstation cluster that produced the special effects for "Titanic" would already count as a full display of its talents. For Linux, however, this is only one piece of news among many. Recently the number of manufacturers announcing support for Linux has grown by the day, and users' enthusiasm for Linux is running higher than ever. What glamour, then, does this operating system, free for more than seven years, possess that has won it the favor of so many users and of such important software and hardware manufacturers as Oracle, Informix, HP, Sybase, Corel, Intel, Netscape, and Dell?

1. The background and characteristics of Linux

Linux is "free software": free meaning that users can obtain the program and its source code freely and can use them freely, including revising or copying them. It is a product of the network era: numerous technical staff completed its research and development together over the Internet, countless users tested it and removed faults, and users can conveniently add extension functions of their own. As the most outstanding piece of free software, Linux has the following characteristics:

(1) It fully follows the POSIX standard and is a network operating system that extends and supports all AT&T and BSD Unix characteristics. Because it inherits Unix's outstanding design philosophy and has a clean, robust, efficient, and stable kernel, with all key code written by Linus Torvalds and other outstanding programmers and without any Unix code from AT&T or Berkeley, Linux is not Unix, yet Linux and Unix are fully compatible.

(2) It is a true multitasking, multi-user system with built-in network support that can link seamlessly with NetWare, Windows NT, OS/2, Unix, and so on. In comparative evaluations its networking has tested as the fastest among various kinds of Unix. It supports many kinds of file systems at the same time, such as FAT16, FAT32, NTFS, Ext2FS, and ISO9660.

(3) It can run on many kinds of hardware platforms, including processors such as Alpha, SunSparc, PowerPC, and MIPS, and support for various kinds of new peripheral hardware can be obtained rapidly from the numerous programmers distributed around the globe.

(4) Its hardware requirements are low, so very good performance can be obtained on lower-grade machines; what deserves particular mention is Linux's outstanding stability, with running times often measured in years.

2. Main applications of Linux

At present, the application of Linux mainly includes:

(1) Internet/Intranet: this is the area where Linux is used most at present. It can offer all Internet services, including Web servers, FTP servers, Gopher servers, SMTP/POP3 mail servers, Proxy/Cache servers, DNS servers, and so on. The Linux kernel supports IP aliasing, PPP, and IP tunneling; these functions can be used to set up virtual hosts, virtual services, VPNs (virtual private networks), and the like. The Web server run on Linux is mainly Apache, whose market share in 1998 was 49 percent, far exceeding the combined share of several big companies such as Microsoft and Netscape.

(2) Because Linux has outstanding networking ability, it can be used in large-scale distributed computing, for instance animation production, scientific calculation, and database and file servers.

(3) As a full Unix implementation that runs on low-end platforms, it is applied extensively in teaching and research at universities and colleges of all levels; the Mexican government, for example, has announced that primary and secondary schools across the whole country will deploy Linux and offer Internet service to students.

(4) Desktop and office applications. The number of users in this area is currently far smaller than for Microsoft Windows. The reason lies not merely in the fact that the quantity of desktop application software for Linux is far smaller than for Windows, but also in the fact that, as free software, it has almost no advertising behind it (even though the functionality of StarOffice is not second to MS Office, few people actually know of it).

3. Can Linux become a major operating system?

In the face of pressure from users that strengthens day by day, more and more commercial companies are porting their applications to the Linux platform. The comparatively important events of 1998 were as follows. (1) Compaq and HP decided, at the request of their users, to ship Linux preinstalled on their servers, and IBM and Dell also promised to offer customized Linux systems to users. (2) Lotus announced that the next edition of Notes would include a special edition for Linux. (3) Corel ported its famous WordPerfect to Linux and released it for free; Corel also plans to move its other graphics-processing products to the Linux platform completely. (4) The main database producers Sybase, Informix, Oracle, CA, and IBM have already ported their database products to Linux, or have finished beta editions; among them, Oracle and Informix also offer technical support for their products.

4. The gratifying thing is that some farsighted domestic corporations have already begun trying hard to change the current situation. Stone Co. not long ago announced a large investment to develop an Internet/Intranet solution with Linux as the platform, to launch Stone's system-integration business around this core, and at the same time to set up a nationwide Linux technical-support organization, taking the lead in promoting the application and development of free software in China. Other domestic computer companies are likewise devoted to popularizing Linux-related software and hardware application systems. It is believed that, as understanding of Linux deepens, more and more enterprises will join the ranks of Linux users, and more software will be ported to the Linux platform. Meanwhile, domestic universities should take Linux as the reference version when upgrading their existing Unix course content, starting from analysis of the source code and revision of the kernel, in order to train a large number of senior Linux talents and improve our country's own operating system.

Only by truly mastering the operating system can our country's software industry escape its present passive state of sedulous imitation and being led by the nose by others, and create the conditions for fundamentally revitalizing our software industry.

Chinese translation (excerpt): Linux, the operating system of the network era. For many people, using Linux as the main operating system of the huge workstation cluster that completed the special effects for "Titanic" would already count as stealing the show.

Computer Science Foreign Material Translation: A Brief History of Microcomputer Development

Appendix: Foreign Literature and Translation

Progress in Computers

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term logic family came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognized as an activity in its own right. I remember that we had some difficulty in organizing a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops. This led to a range of chips known as the 7400 series. The gates and flip flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organization, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry "going the other way". As time goes on people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks. Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on, designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instruction Set Computer.

Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of computer farm.

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed. In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognized need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.

Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

Shortage of Electrons

Although shortage of electrons has not so far appeared as an obvious limitation, in the long term it may become so. Perhaps this is where the exploitation of non-conventional CMOS will lead us. However, some interesting work has been done, notably by Haroon Ahmed and his team working in the Cavendish Laboratory, on the direct development of structures in which a single electron more or less makes the difference between a zero and a one. However, very little progress has been made towards practical devices that could lead to the construction of a computer. Even with exceptionally good luck, many tens of years must inevitably elapse before a working computer based on single electron effects can be contemplated.

A Brief History of Microcomputer Development (Chinese translation, excerpt): The first stored-program computers began to appear around 1950; among them was the EDSAC, the delay storage automatic electronic calculator that we built at Cambridge University in the summer of 1949.

[Computer Science Literature Translation] Higher-Level Programming for the 21st Century

Foreign Literature Reading and Translation

Chapter 1: English Original

Scripting: Higher Level Programming for the 21st Century

1 Introduction

For the last fifteen years a fundamental change has been occurring in the way people write computer programs. The change is a transition from system programming languages such as C or C++ to scripting languages such as Perl or Tcl. Although many people are participating in the change, few people realize that it is occurring and even fewer people know why it is happening. This article is an opinion piece that explains why scripting languages will handle many of the programming tasks of the next century better than system programming languages.

Scripting languages are designed for different tasks than system programming languages, and this leads to fundamental differences in the languages. System programming languages were designed for building data structures and algorithms from scratch, starting from the most primitive computer elements such as words of memory. In contrast, scripting languages are designed for gluing: they assume the existence of a set of powerful components and are intended primarily for connecting components together. System programming languages are strongly typed to help manage complexity, while scripting languages are typeless to simplify connections between components and provide rapid application development.

Scripting languages and system programming languages are complementary, and most major computing platforms since the 1960's have provided both kinds of languages. The languages are typically used together in component frameworks, where components are created with system programming languages and glued together with scripting languages. However, several recent trends, such as faster machines, better scripting languages, the increasing importance of graphical user interfaces and component architectures, and the growth of the Internet, have greatly increased the applicability of scripting languages. These trends will continue over the next decade, with more and more new applications written entirely in scripting languages and system programming languages used primarily for creating components.

2 Scripting languages

Scripting languages such as Perl [9], Python [4], Rexx [6], Tcl [8], Visual Basic, and the Unix shells represent a very different style of programming than system programming languages. Scripting languages assume that there already exists a collection of useful components written in other languages. Scripting languages aren't intended for writing applications from scratch; they are intended primarily for plugging together components. For example, Tcl and Visual Basic can be used to arrange collections of user interface controls on the screen, and Unix shell scripts are used to assemble filter programs into pipelines. Scripting languages are often used to extend the features of components but they are rarely used for complex algorithms and data structures; features like these are usually provided by the components. Scripting languages are sometimes referred to as glue languages or system integration languages.

In order to simplify the task of connecting components, scripting languages tend to be typeless: all things look and behave the same so that they are interchangeable. For example, in Tcl or Visual Basic a variable can hold a string one moment and an integer the next. Code and data are often interchangeable, so that a program can write another program and then execute it on the fly.
Scripting languages are often string-oriented, since this provides a uniform representation for many different things.A typeless language makes it much easier to hook together components. There are no a priori restrictions on how things can be used, and all components and values are represented in a uniform fashion. Thus any component or value can be used in any situation; components designed for one purpose can be used for totally different purposes never foreseen by the designer. For example, in the Unix shells, all filter programs read a stream of bytes from an input and write a string of bytes to an output; any two programs can be connected together by attaching the output of one program to the input of the other. The following shell command stacks three filters together to count the number of lines in the selection that contain the word "scripting":select | grep scripting | wcThe select program reads the text that is currently selected on the display and prints it on its output; the grep program reads its input and prints on its output the lines containing "scripting"; the wc program counts the number of lines on its input. Each of these programs can be used in numerous other situations to perform different tasks.The strongly typed nature of system programming languages discourages reuse. Typing encourages programmers to create a variety of incompatible interfaces ("interfaces are good; more interfaces are better"). Each interface requires objects of specific types and the compiler prevents any other types of objects from being used with the interface, even if that would be useful. In order to use a new object with an existing interface, conversion code must be written to translate between the type of the object and the type expected by the interface. This in turn requires recompiling part or all of the application, which isn't possible in the common case where the application is distributed in binary form.To see the advantages of a typeless language, consider the following Tcl command:button .b -text Hello! -font {Times 16} -command {puts hello}This command creates a new button control that displays a text string in a 16-point Times font and prints a short message when the user clicks on the control. It mixes six different types of things in a single statement: a command name (button), a button control (.b), property names (-text, -font, and -command), simple strings (Hello! and hello), a font name (Times 16) that includes a typeface name (Times) and a size in points (16), and a Tcl script (puts hello). Tcl represents all of these things uniformly with strings. In this example the properties may be specified in any order and unspecified properties are given default values; more than 20 properties were left unspecified in the example.The same example requires 7 lines of code in two methods when implemented in Java. With C++ and Microsoft Foundation Classes, it requires about 25 lines of code in three procedures (see [7]for the code for these examples). Just setting the font requires several lines of code in Microsoft Foundation Classes:CFont *fontPtr = new CFont();fontPtr->CreateFont(16, 0, 0,0,700, 0, 0, 0, ANSI_CHARSET,OUT_DEFAULT_PRECIS,CLIP_DEFAULT_PRECIS, DEFAULT_QUALITY,DEFAULT_PITCH|FF_DONTCARE, "Times New Roman");buttonPtr->SetFont(fontPtr);Much of this code is a consequence of the strong typing. In order to set the font of a button, its SetFont method must be invoked, but this method must be passed a pointer to a CFont object. 
This in turn requires a new object to be declared and initialized. In order to initialize the CFont object its CreateFont method must be invoked, but CreateFont has a rigid interface that requires 14 different arguments to be specified. In Tcl, the essential characteristics of the font (typeface Times, size 16 points) can be used immediately with no declarations or conversions. Furthermore, Tcl allows the behavior for the button to be included directly in the command that creates the button, while C++ and Java require it to be placed in a separately declared method.(In practice, a trivial example like this would probably be handled with a graphical development environment that hides the complexity of the underlying language: the user enters property values in a form and the development environment outputs the code. However, in more complex situations such as conditional assignment of property values or interfaces generated programmatically, the developer must write code in the underlying language.)It might seem that the typeless nature of scripting languages could allow errors to go undetected, but in practice scripting languages are just as safe as system programming languages. For example, an error will occur if the font size specified for the button example above is a non-integer string such as xyz. The difference is that scripting languages do their error checking at the last possible moment, when a value is used. Strong typing allows errors to be detected at compile-time, so the cost of run-time checks is avoided. However, the price to be paid for this efficiency is restrictions on how information can be used: this results in more code and less flexible programs.Another key difference between scripting languages and system programming languages is th at scripting languages are usually interpreted whereas system programming languages are usually compiled. Interpreted languages provide rapid turnaround during development by eliminating compile times. Interpreters also make applications more flexible by allowing users to program the applications at run-time. For example, many synthesis and analysis tools for integrated circuits include a Tcl interpreter; users of the programs write Tcl scripts to specify their designs and control the operation of the tools. Interpreters also allow powerful effects to be achieved by generating code on the fly. For example, a Tcl-based Web browser can parse a Web page by translating the HTML for the page into a Tcl script using a few regular expression substitutions. It then executes the Tcl script to render the page on the screen.Scripting languages are less efficient than system programming languages, in part because they use interpreters instead of compilers but also because their basic components are chosen for power and ease of use rather than an efficient mapping onto the underlying hardware. For example, scripting languages often use variable-length strings in situations where a system programming language would use a binary value that fits in a single machine word, and scripting languages often use hash tables where system programming languages use indexed arrays.Fortunately, the performance of a scripting language isn't usually a major issue. 
Applications for scripting languages are generally smaller than applications for system programming languages, and the performance of a scripting application tends to be dominated by the performance of thecomponents, which are typically implemented in a system programming language.Scripting languages are higher level than system programming languages, in the sense that a single statement does more work on average. A typical statement in a scripting language executes hundreds or thousands of machine instructions, whereas a typical statement in a system programming language executes about five machine instructions (see Figure 1). Part of this difference is because scripting languages use interpreters, which are less efficient than the compiled code for system programming languages. But much of the difference is because the primitive operations in scripting languages have greater functionality. For example, in Perl it is about as easy to invoke a regular expression substitution as it is to invoke an integer addition. In Tcl, a variable can have traces associated with it so that setting the variable causes side effects; for example, a trace might be used to keep the variable's value updated continuously on the screen. Because of the features described above, scripting languages allow very rapid development for applications that are gluing-oriented.To summarize, scripting languages are designed for gluing applications. They provide a higher level of programming than assembly or system programming languages, much weaker typing than system programming languages, and an interpreted development environment. Scripting languages sacrifice execution speed to improve development speed.中文翻译脚本语言:21世纪的高级编程语言1.简介在过去的十五年里,人们编写计算机程序的方法发生了根本的转变。
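As a rough companion to the comparison above (the article notes that the Tcl button example takes about 7 lines of Java in two methods but does not show that code), here is a hedged sketch of what the Java version might look like with Swing; the class name and the choice of Swing APIs are assumptions added for illustration, not code from the original paper.

import javax.swing.JButton;
import java.awt.Font;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Illustrative sketch only: a Swing counterpart to the Tcl button example above.
public class HelloButton {
    static JButton createButton() {
        JButton b = new JButton("Hello!");
        b.setFont(new Font("Times", Font.PLAIN, 16));      // typeface Times, 16 points
        b.addActionListener(new ActionListener() {         // behavior attached separately
            public void actionPerformed(ActionEvent e) {
                System.out.println("hello");
            }
        });
        return b;
    }
}

Counting only the bodies of createButton and actionPerformed, this is roughly the "7 lines in two methods" the article refers to, which illustrates its point that the strongly typed version is longer and more rigid than the one-line Tcl command.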

计算机科学与技术 外文翻译 英文文献 中英对照

附件1：外文资料翻译译文

大容量存储器

由于计算机主存储器的易失性和容量的限制，大多数计算机都配有附加的存储设备，称为大容量存储系统，包括磁盘、CD和磁带。相对于主存储器，大容量存储系统的优点是非易失、容量大、成本低，并且在许多情况下，为了归档的需要可以把存储介质从计算机上取下。术语"联机"和"脱机"通常分别用于描述已连接到计算机和未连接到计算机的设备。联机意味着设备或信息已经与计算机连接，计算机使用它时不需要人的干预；脱机意味着设备或信息在与计算机相连之前需要人的干预，例如需要将设备接通电源，或者需要把包含该信息的介质插入某个机械装置。大容量存储系统的主要缺点是它们通常需要机械运动，因此存取时间比主存储器长得多，因为主存储器的所有操作都由电子器件完成。

1. 磁盘

今天，我们使用得最多的一种大容量存储器是磁盘。磁盘中装有薄的可以旋转的盘片，盘片表面覆有磁介质以存储数据。盘片的上方和（或）下方安装有读/写磁头，当盘片旋转时，每个磁头在盘片表面上扫过一个圆圈，这个圆圈称为磁道，磁道分布在盘片的上下两个表面。通过重新定位读/写磁头，可以访问不同的同心圆磁道。通常，一个磁盘存储系统由若干个安装在同一根轴上的盘片组成，盘片之间有足够的距离，使磁头可以在盘片之间滑动。在一个磁盘中，所有的磁头是一起移动的。因此，当磁头移动到新的位置时，就可以存取新的一组磁道。每一组磁道称为一个柱面。

因为一个磁道能包含的信息可能比我们一次操作所需要的多，所以每个磁道被划分成若干个弧段，称为扇区，记录在每个扇区上的信息是连续的二进制位串。传统的磁盘上每个磁道分为同样数目的扇区，而每个扇区也包含同样数目的二进制位（所以，盘片中心附近存储的二进制位密度要比靠近盘片边缘的大）。因此，一个磁盘存储系统由许多独立的扇区组成，每个扇区都可以作为独立的二进制位串存取；盘片表面上的磁道数目和每个磁道上的扇区数目，对于不同的磁盘系统可能都不相同。扇区大小一般不超过几KB，典型值为512字节或1024字节。
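下面给出一个简短的Java示意代码（非原文内容，仅作补充说明），按照上文描述的"柱面-磁头（磁道）-扇区"几何结构估算磁盘容量；其中的柱面数、磁头数、每磁道扇区数等数值都是假设的示例参数。

// 示意代码：按盘片几何结构估算磁盘容量（所有数值均为假设的示例参数）
public class DiskGeometry {
    public static void main(String[] args) {
        long cylinders = 1024;        // 柱面数（假设值）
        long headsPerCylinder = 16;   // 每个柱面包含的磁道数，即磁头数（假设值）
        long sectorsPerTrack = 63;    // 每个磁道的扇区数（假设值）
        long bytesPerSector = 512;    // 扇区大小，典型值为512或1024字节

        long totalSectors = cylinders * headsPerCylinder * sectorsPerTrack;
        long capacityBytes = totalSectors * bytesPerSector;

        System.out.println("总扇区数：" + totalSectors);
        System.out.println("容量（MB）：" + capacityBytes / (1024 * 1024));
    }
}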

计算机外文翻译英文文献中英版仓库管理系统(WMS)

Warehouse Management Systems (WMS).The evolution of warehouse management systems (WMS) is very similar to that of many other software solutions. Initially a system to control movement and storage of materials within a warehouse, the role of WMS is expanding to including light manufacturing, transportation management, order management, and complete accounting systems. To use the grandfather of operations-related software, MRP, as a comparison, material requirements planning (MRP) started as a system for planning raw material requirements in a manufacturing environment. Soon MRP evolved into manufacturing resource planning (MRPII), which took the basic MRP system and added scheduling and capacity planning logic. Eventually MRPII evolved into enterprise resource planning (ERP), incorporating all the MRPII functionality with full financials and customer and vendor management functionality. Now, whether WMS evolving into a warehouse-focused ERP system is a good thing or not is up to debate. What is clear is that the expansion of the overlap in functionality between Warehouse Management Systems, Enterprise Resource Planning, Distribution Requirements Planning, Transportation Management Systems, Supply Chain Planning, Advanced Planning and Scheduling, and Manufacturing Execution Systems will only increase the level of confusion among companies looking for software solutions for their operations.Even though WMS continues to gain added functionality, the initial core functionality of a WMS has not really changed. The primary purpose of a WMS is to control the movement and storage of materials within an operation and process the associated transactions. Directed picking, directed replenishment, and directed putaway are the key to WMS. The detailed setup and processing within a WMS can vary significantly from one software vendor to another, however the basic logic will use a combination of item, location, quantity, unit of measure, and order information to determine where to stock, where to pick, and in what sequence to perform these operations.Do You Really Need WMS?Not every warehouse needs a WMS. Certainly any warehouse could benefit from some of the functionality but is the benefit great enough to justify the initial and ongoing costs associated with WMS? Warehouse Management Systems are big, complex, data intensive, applications. They tend to require a lot of initial setup, a lot of system resources to run, and a lot of ongoing data management to continue to run. That’s right, you need to "manage" your warehouse "management" system. Often times, large operations will end up creating a new IS department with the sole responsibility of managing the WMS.The Claims:WMS will reduce inventory!WMS will reduce labor costs!WMS will increase storage capacity!WMS will increase customer service!WMS will increase !The Reality:The implementation of a WMS along with automated data collection will likely give you increases in accuracy, reduction in labor costs (provided the labor required to maintain the system is less than the labor saved on the warehouse floor), and a greater ability to service the customer by reducing cycle times. Expectations of inventory reduction and increased storage capacity are less likely. While increased accuracy and efficiencies in the receiving process may reduce the level of required, the impact of this reduction will likely be negligible in comparison to overall inventory levels. The predominant factors that control inventory levels are , lead times, and demand variability. 
It is unlikely that a WMS will have a significant impact on any of these factors. And while a WMS certainly provides the tools for more organized storage which may result in increased storage capacity, this improvement will be relative to just how sloppy your pre-WMS processes were.Beyond labor efficiencies, the determining factors in deciding to implement a WMS tend to be more often associated with the need to do something to service your customers that your current system does not support (or does not support well) such asfirst-in-first-out, cross-docking, automated pick replenishment, wave picking, lot tracking, yard management, automated data collection, automated material handling equipment, etc.SetupThe setup requirements of WMS can be extensive. The characteristics of each item and location must be maintained either at the detail level or by grouping similar items and locations into categories. An example of item characteristics at the detail level would include exact dimensions and weight of each item in each unit of measure the item is stocked (each, cases, pallets, etc) as well as information such as whether it can be mixed with other items in a location, whether it is rack able, max stack height, max quantity per location, hazard classifications, finished goods or raw material, fast versus slow mover, etc. Although some operations will need to set up each item this way, most operations will benefit by creating groups of similar products. For example, if you are a distributor of music CDs you would create groups for single CDs, and double CDs, maintaining the detailed dimension and weight information at the group level and only needing to attach the group code to each item. You would likely need to maintain detailed information on special items such as boxed sets or CDs in special packaging. You would also create groups for the different types of locations within your warehouse. An example would be to create three different groups (P1, P2, P3) for the three different sized forward picking locations you use for your CD picking. You then set up the quantity of single CDs that will fit in a P1, P2, and P3 location, quantity of double CDsthat fit in a P1, P2, P3 location etc. You would likely also be setting up case quantities, and pallet quantities of each CD group and quantities of cases and pallets per each reserve storage location group.If this sounds simple, it is…well… sort of. In reality most operations have a much more diverse product mix and will require much more system setup. And setting up the physical characteristics of the product and locations is only part of the picture. You have set up enough so that the system knows where a product can fit and how many will fit in that location. You now need to set up the information needed to let the system decide exactly which location to pick from, replenish from/to, and put away to, and in what sequence these events should occur (remember WMS is all about “directed” movement). You do this by assigning specific logic to the various combinations of item/order/quantity/location information that will occur.Below I have listed some of the logic used in determining actual locations and sequences.Location Sequence. This is the simplest logic; you simply define a flow through your warehouse and assign a sequence number to each location. In order picking this is used to sequence your picks to flow through the warehouse, in put away the logic would look for the first location in the sequence in which the product would fit.Zone Logic. 
By breaking down your storage locations into zones you can direct picking, put away, or replenishment to or from specific areas of your warehouse. Since zone logic only designates an area, you will need to combine this with some other type oflogic to determine exact location within the zone.Fixed Location. Logic uses predetermined fixed locations per item in picking, put away, and replenishment. Fixed locations are most often used as the primary picking location in piece pick and case-pick operations, however, they can also be used for secondary storage.Random Location. Since computers cannot be truly random (nor would you want them to be) the term random location is a little misleading. Random locations generally refer to areas where products are not stored in designated fixed locations. Like zone logic, you will need some additional logic to determine exact locations.First-in-first-out (FIFO).Directs picking from the oldest inventory first.Last-in-first-out (LIFO).Opposite of FIFO. I didn't think there were any real applications for this logic until a visitor to my site sent an email describing their operation that distributes perishable goods domestically and overseas. They use LIFO for their overseas customers (because of longer in-transit times) and FIFO for their domestic customers.Pick-to-clear. Logic directs picking to the locations with the smallest quantities on hand. This logic is great for space utilization.Reserved Locations. This is used when you want to predetermine specific locations to put away to or pick from. An application for reserved locations would be cross-docking, where you may specify certain quantities of an inbound shipment be moved to specific outbound staging locations or directly to an awaiting outbound trailer.Maximize Cube. Cube logic is found in most WMS systems however it is seldom used. Cube logic basically uses unit dimensions to calculate cube (cubic inches per unit) and then compares this to the cube capacity of the location to determine how much will fit. Now if the units are capable of being stacked into the location in a manner that fills every cubic inch of space in the location, cube logic will work. Since this rarely happens in the real world, cube logic tends to be impractical.Consolidate. Looks to see if there is already a location with the same product stored in it with available capacity. May also create additional moves to consolidate like product stored in multiple locations.Lot Sequence. Used for picking or replenishment, this will use the lot number or lot date to determine locations to pick from or replenish from.It’s very common to combine multiple logic methods to determine the best location. For example you may chose to use pick-to-clear logic within first-in-first-out logic when there are multiple locations with the same receipt date. You also may change the logic based upon current workload. During busy periods you may chose logic that optimizes productivity while during slower periods you switch to logic that optimizes space utilization.Other Functionality/ConsiderationsWave Picking/Batch Picking/Zone Picking. Support for various picking methods varies from one system to another. In high-volume fulfillment operations, picking logiccan be a critical factor in WMS selection. See my article on for more info on these methods.Task Interleaving. Task interleaving describes functionality that mixes dissimilar tasks such as picking and put away to obtain maximum productivity. 
Used primarily in full-pallet-load operations, task interleaving will direct a lift truck operator to put away a pallet on his/her way to the next pick. In large warehouses this can greatly reduce travel time, not only increasing productivity, but also reducing wear on the lift trucks and saving on energy costs by reducing lift truck fuel consumption. Task interleaving is also used with cycle counting programs to coordinate a cycle count with a picking or put away task.Integration with Automated Material Handling Equipment. If you are planning on using automated material handling equipment such as carousels, ASRS units, AGNS, pick-to-light systems, or separation systems, you’ll want to consider this during the software selection process. Since these types of automation are very expensive and are usually a core component of your warehouse, you may find that the equipment will drive the selection of the WMS. As with automated data collection, you should be working closely with the equipment manufacturers during the software selection process.Advanced Shipment Notifications (ASN). If your vendors are capable of sending advanced shipment notifications (preferably electronically) and attaching compliance labels to the shipments you will want to make sure that the WMS can use this toautomate your receiving process. In addition, if you have requirements to provide ASNs for customers, you will also want to verify this functionality.Yard Management. Yard management describes the function of managing the contents (inventory) of trailers parked outside the warehouse, or the empty trailers themselves. Yard management is generally associated with cross docking operations and may include the management of both inbound and outbound trailers.Labor Tracking/Capacity Planning. Some WMS systems provide functionality related to labor reporting and capacity planning. Anyone that has worked in manufacturing should be familiar with this type of logic. Basically, you set up standard labor hours and machine (usually lift trucks) hours per task and set the available labor and machine hours per shift. The WMS system will use this info to determine capacity and load. Manufacturing has been using capacity planning for decades with mixed results. The need to factor in efficiency and utilization to determine rated capacity is an example of the shortcomings of this process. Not that I’m necessarily against capacity planning in warehousing, I just think most operations don’t really need it and can avoid the disappointment of trying to make it work. I am, however, a big advocate of labor tracking for individual productivity measurement. Most WMS maintain enough data to create productivity reporting. Since productivity is measured differently from one operation to another you can assume you will have to do some minor modifications here (usually in the form of ).Integration with existing accounting/ERP systems. Unless the WMS vendor has already created a specific interface with your accounting/ERP system (such as those provided by an approved business partner) you can expect to spend some significant programming dollars here. While we are all hoping that integration issues will be magically resolved someday by a standardized interface, we isn’t there yet. Ideally you’ll want an integrator that has already integrated the WMS you chose with the business software you are using. Since this is not always possible you at least want an integrator that is very familiar with one of the systems.WMS + everything else = ? 
As I mentioned at the beginning of this article, a lot of other modules are being added to WMS packages. These would include full financials, light manufacturing, transportation management, purchasing, and sales order management. I don’t see this as a un ilateral move of WMS from an add-on module to a core system, but rather an optional approach that has applications in specific industries such as 3PLs. Using ERP systems as a point of reference, it is unlikely that this add-on functionality will match the functionality of best-of-breed applications available separately. If warehousing/distribution is your core business function and you don’t want to have to deal with the integration issues of incorporating separate financials, order processing, etc. you may find these WMS based business systems are a good fit.Implementation TipsOutside of the standard “don’t underestimate”, “thoroughly test”, “train, train, train” implementation tips that apply to any business software installation ,it’s important t o emphasize that WMS are very data dependent and restrictive by design. That is, you need to have all of the various data elements in place for the system to function properly. And, when they are in place, you must operate within the set parameters.When implementing a WMS, you are adding an additional layer of technology onto your system. And with each layer of technology there is additional overhead and additional sources of potential problems. Now don’t take this as a condemnation of Warehouse Management Systems. Coming from a warehousing background I definitely appreciate the functionality WMS have to offer, and, in many warehouses, this functionality is essential to their ability to serve their customers and remain competitive. It’s just important t o note that every solution has its downsides and having a good understanding of the potential implications will allow managers to make better decisions related to the levels of technology that best suits their unique environment.仓库管理系统(WMS )仓库管理系统(WMS )的演变与许多其他软件解决方案是超级相似的。
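To make the combination of picking rules described above a little more concrete, the following short Java sketch (not part of the original text) selects a pick location using first-in-first-out logic and breaks ties on the receipt date with pick-to-clear logic; the Location record and its fields are assumptions invented for the example and do not correspond to any real WMS API.

import java.time.LocalDate;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch only: FIFO picking with pick-to-clear as the tie-breaker.
public class PickLogic {
    record Location(String id, LocalDate receiptDate, int quantityOnHand) {}

    // Oldest stock first (FIFO); among equally old locations, prefer the one
    // with the smallest quantity on hand (pick-to-clear), freeing it up sooner.
    static Optional<Location> choosePickLocation(List<Location> candidates) {
        return candidates.stream()
                .filter(loc -> loc.quantityOnHand() > 0)
                .min(Comparator.comparing(Location::receiptDate)
                        .thenComparingInt(Location::quantityOnHand));
    }
}

In a real system this decision would of course also take zones, fixed locations, and reserved locations into account, as the text describes.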

计算机专业毕业设计外文翻译--JSP内置对象

附录1 外文参考文献（译文）

JSP内置对象

有些对象不用声明就可以在JSP页面的Java程序片和表达式部分使用，这就是JSP的内置对象。

JSP的内置对象有：request、response、session、application、out。其中response和request对象是JSP内置对象中较重要的两个，这两个对象提供了对服务器和浏览器通信方法的控制。

直接讨论这两个对象前，要先对HTTP协议，即World Wide Web的底层协议，做简单介绍。

World Wide Web是怎样运行的呢？在浏览器上键入一个正确的网址后，若一切顺利，网页就出现了。

使用浏览器从网站获取HTML页面时,实际在使用超文本传输协议。

HTTP规定了信息在Internet上的传输方法,特别是规定吧浏览器与服务器的交互方法。

从网站获取页面时,浏览器在网站上打开了一个对网络服务器的连接,并发出请求。

服务器收到请求后回应,所以HTTP协议的核心就是“请求和响应”。

一个典型的请求通常包含许多头,称作请求的HTTP头。

头提供了关于信息体的附加信息及请求的来源。

其中有些头是标准的,有些和特定的浏览器有关。

一个请求还可能包含信息体,例如,信息体可包含HTML表单的内容。

在HTML表单上单击Submit键时，该表单使用METHOD="POST"或METHOD="GET"方式提交，输入表单的内容都被发送到服务器上。

该表单内容就由POST方法或GET方法在请求的信息体中发送。

服务器发送请求时,返回HTTP响应。

响应也有某种结构,每个响应都由状态行开始,可以包含几个头及可能的信息体,称为响应的HTTP头和响应信息体,这些头和信息体由服务器发送给客户的浏览器,信息体就是客户请求的网页的运行结果,对于JSP 页面,就是网页的静态信息。

用户可能已经熟悉状态行,状态行说明了正在使用的协议、状态代码及文本信息。

例如,若服务器请求出错,则状态行返回错误及对错误描述,比如HTTP/1.1 404 Object Not Found。
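下面给出一个示意性的JSP片段（非原文内容），演示上文描述的内置对象request和response的典型用法：读取请求方法和HTTP请求头，并在必要时返回404状态；页面中的参数名等细节只是为说明而假设的。

<%-- 示意代码：使用JSP内置对象request和response（参数名"name"为假设的示例） --%>
<%
    String method = request.getMethod();                // GET或POST
    String userAgent = request.getHeader("User-Agent");  // 读取一个HTTP请求头

    response.setContentType("text/html;charset=UTF-8");  // 通过response设置响应头
    if (request.getParameter("name") == null) {
        response.sendError(404, "Object Not Found");      // 对应上文提到的404状态行
        return;
    }
%>
<html><body>
请求方法：<%= method %><br/>
浏览器标识：<%= userAgent %>
</body></html>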

论文外文文献翻译

以下是一篇700字左右的论文外文文献翻译：

原文题目：The Role of Artificial Intelligence in Medical Diagnostics: A Review

原文摘要：In recent years, there has been a growing interest in the use of artificial intelligence (AI) in the field of medical diagnostics. AI has the potential to improve the accuracy and efficiency of medical diagnoses, and can assist clinicians in making treatment decisions. This review aims to examine the current state of AI in medical diagnostics, and discuss its advantages and limitations. Several AI techniques, including machine learning, deep learning, and natural language processing, are discussed. The review also examines the ethical and legal considerations associated with the use of AI in medical diagnostics. Overall, AI has shown great promise in improving medical diagnostics, but further research is needed to fully understand its potential benefits and limitations.

AI在医学诊断中发挥的作用：一项综述

近年来，人工智能（AI）在医学诊断领域的应用引起了越来越多的关注。

计算机专业毕业设计论文外文文献中英文翻译——java对象

1. Introduction to Objects
1.1 The progress of abstraction
All programming languages provide abstractions. It can be argued that the complexity of the problems you're able to solve is directly related to the kind and quality of abstraction.

By "kind" I mean, "What is it that you are abstracting?" Assembly language is a small abstraction of the underlying machine. Many so-called "imperative" languages that followed (such as FORTRAN, BASIC, and C) were abstractions of assembly language.

These languages are big improvements over assembly language, but their primary abstraction still requires you to think in terms of the structure of the computer rather than the structure of the problem you are trying to solve.

The programmer must establish the association between the machine model (in the "solution space," which is the place where you're modeling that problem, such as a computer) and the model of the problem that is actually being solved (in the "problem space," which is the place where the problem exists). The effort required to perform this mapping, and the fact that it is extrinsic to the programming language, produces programs that are difficult to write and expensive to maintain, and as a side effect created the entire "programming methods" industry. The alternative to modeling the machine is to model the problem you're trying to solve.
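To make the contrast between the "solution space" and the "problem space" a bit more tangible, here is a very small Java sketch (not taken from this excerpt): the object is named after a concept in the problem space, a light, rather than after anything in the machine.

// Illustrative sketch: the program is organized around a problem-space concept.
public class Light {
    private boolean lit;                 // the object keeps its own state

    public void on()  { lit = true;  }
    public void off() { lit = false; }
    public boolean isOn() { return lit; }

    public static void main(String[] args) {
        Light hallLight = new Light();   // send messages to the object
        hallLight.on();
        System.out.println("Light is on: " + hallLight.isOn());
    }
}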

计算机中英论文

Understanding Web Addresses

You can think of the World Wide Web as a network of electronic files stored on computers all around the world. Hypertext links these resources together. Uniform Resource Locators or URLs are the addresses used to locate these files. The information contained in a URL gives you the ability to jump from one web page to another with just a click of your mouse. When you type a URL into your browser or click on a hypertext link, your browser is sending a request to a remote computer to download a file. What does a typical URL look like? Here are some examples: / The home page for study English. ftp:///pub/ A directory of files at MIT available for downloading. news:rec.gardens.roses A newsgroup on rose gardening. The first part of a URL (before the two slashes) tells you the type of resource or method of access at that address. For example:
http - a hypertext document or directory
gopher - a gopher document or menu
ftp - a file available for downloading or a directory of such files
news - a newsgroup
telnet - a computer system that you can log into over the Internet
WAIS - a database or document in a Wide Area Information Search database
file - a file located on a local drive (your hard drive)

【计算机专业文献翻译】信息系统的管理

基本上每一台计算机都能连接到网络中，一台计算机要么是客户端，要么是服务器。服务器通常更强大，分工也更专门，因为它存储着网络中其他机器需要使用的数据。作为客户端的个人计算机在需要数据时随时可以访问服务器。如果网络中的计算机既充当服务器又充当客户端，这样的网络称作对等（点对点）网络。
传输介质必须经过仔细选择，权衡每种介质的优点和缺点，这个选择决定了网络的速度。更换已经安装好的网络介质通常非常昂贵。最常用的传输介质是电缆、光纤、无线电波、光和红外线。
本科生毕业设计(论文)外文资料译文
(2009届)
论文题目
基于Javamail的邮件收发系统
学生姓名
学号
专业
计算机科学与技术
班级
指导教师
职称
讲师、副教授
填表日期
2008年 12月 10 日
信息科学与工程学院教务科制
外文资料翻译(译文不少于2000汉字)
1.所译外文资料:信息系统的管理Managing Information Systems
数据共享是网络的重要应用之一。网络可以共享交易数据、搜索和查询数据、信息、公告板、日历、团队和个人信息数据、备份等。在进行交易时，连接公司各台计算机的中央数据库保存着现有库存和销售的数据。如果数据存储在一个中央数据库中，查询结果便可以从中获取。电子邮件已经成为同事之间最常用的信息共享方式之一。
由于无线电波、光和红外线等介质在空气中传播信号，它们作为传输介质不需要电缆。
传输能力，即一种传输介质一次能够传输的数据量，在不同介质之间差别很大；介质的材料不同，安装所需的人工也不同。传输介质有时会组合使用：远距离之间采用高速的传输介质，而在同一幢大楼内则使用速度较慢但成本较低的介质。
连接设备包括网卡（NIC），即在计算机和网络之间进行信号传输和转换的局域网（LAN）卡。其他常用设备用于连接不同的网络，特别是当两个网络使用不同传输介质的时候。使用一个对众多用户都开放的系统很重要，比如Windows NT、Office 2000、Novell、UNIX。
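为帮助理解上文提到的客户端/服务器式的数据共享，下面给出一个极简的Java示意代码（非原文内容）：服务器在本机的一个端口上等待连接并返回一条数据，客户端连接后读取这条数据；端口号5000和返回的内容都是假设的示例。

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// 示意代码：最简单的客户端/服务器数据共享（端口与数据内容均为假设值）
public class TinyServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000);
             Socket client = server.accept();
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println("shared record #1");   // 把共享数据发送给客户端
        }
    }
}

class TinyClient {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("localhost", 5000);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            System.out.println("收到：" + in.readLine());
        }
    }
}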

计算机类外文文献翻译---Java核心技术

本科毕业论文外文文献及译文文献、资料题目:Core Java™ V olume II–AdvancedFeatures文献、资料来源:著作文献、资料发表(出版)日期:2008.12.1院(部):计算机科学与技术学院专业:网络工程班级:姓名:学号:指导教师:翻译日期:外文文献:Core Java™ Volume II–Advanced Features When Java technology first appeared on the scene, the excitement was not about a well-crafted programming language but about the possibility of safely executing applets that are delivered over the Internet (see V olume I, Chapter 10 for more information about applets). Obviously, delivering executable applets is practical only when the recipients are sure that the code can't wreak havoc on their machines. For this reason, security was and is a major concern of both the designers and the users of Java technology. This means that unlike other languages and systems, where security was implemented as an afterthought or a reaction to break-ins, security mechanisms are an integral part of Java technology.Three mechanisms help ensure safety:•Language design features (bounds checking on arrays, no unchecked type conversions, no pointer arithmetic, and so on).•An access control mechanism that controls what the code can do (such as file access, network access, and so on).•Code signing, whereby code authors can use standard cryptographic algorithms to authenticate Java code. Then, the users of the code can determine exactly who created the code and whether the code has been altered after it was signed.Below, you'll see the cryptographic algorithms supplied in the java.security package, which allow for code signing and user authentication.As we said earlier, applets were what started the craze over the Java platform. In practice, people discovered that although they could write animated applets like the famous "nervous text" applet, applets could not do a whole lot of useful stuff in the JDK 1.0 security model. For example, because applets under JDK 1.0 were so closely supervised, they couldn't do much good on a corporate intranet, even though relatively little risk attaches to executing an applet from your company's secure intranet. It quickly became clear to Sun that for applets to become truly useful, it was important for users to be able to assign different levels of security, depending on where the applet originated. If an applet comes from a trusted supplier and it has not been tampered with, the user of that applet can then decide whether to give the applet more privileges.To give more trust to an applet, we need to know two things:•Where did the applet come from?•Was the code corrupted in transit?In the past 50 years, mathematicians and computer scientists have developed sophisticated algorithms for ensuring the integrity of data and for electronic signatures. The java.security package contains implementations of many of these algorithms. Fortunately, you don't need to understand the underlying mathematics to use the algorithms in the java.security package. In the next sections, we show you how message digests can detect changes in data files and how digital signatures can prove the identity of the signer.A message digest is a digital fingerprint of a block of data. For example, the so-called SHA1 (secure hash algorithm #1) condenses any data block, no matter how long, into a sequence of 160 bits (20 bytes). As with real fingerprints, one hopes that no two messages have the same SHA1 fingerprint. Of course, that cannot be true—there are only 2160 SHA1 fingerprints, so there must be some messages with the same fingerprint. But 2160is so large that the probability of duplication occurring is negligible. How negligible? 
According to James Walsh in True Odds: How Risks Affect Your Everyday Life (Merritt Publishing 1996), the chance that you will die from being struck by lightning is about one in 30,000. Now, think of nine other people, for example, your nine least favorite managers or professors. The chance that you and all of them will die from lightning strikes is higher than that of a forged message having the same SHA1 fingerprint as the original. (Of course, more than ten people, none of whom you are likely to know, will die from lightning strikes. However, we are talking about the far slimmer chance that your particular choice of people will be wiped out.)A message digest has two essential properties:•If one bit or several bits of the data are changed, then the message digest also changes.• A forger who is in possession of a given message cannot construct a fake message that has the same message digest as the original.The second property is again a matter of probabilities, of course. Consider the following message by the billionaire father:"Upon my death, my property shall be divided equally among my children; however, my son George shall receive nothing."That message has an SHA1 fingerprint of2D 8B 35 F3 BF 49 CD B1 94 04 E0 66 21 2B 5E 57 70 49 E1 7EThe distrustful father has deposited the message with one attorney and the fingerprint with another. Now, suppose George can bribe the lawyer holding the message. He wants to change the message so that Bill gets nothing. Of course, that changes the fingerprint to a completely different bit pattern:2A 33 0B 4B B3 FE CC 1C 9D 5C 01 A7 09 51 0B 49 AC 8F 98 92Can George find some other wording that matches the fingerprint? If he had been the proud owner of a billion computers from the time the Earth was formed, each computing a million messages a second, he would not yet have found a message he could substitute.A number of algorithms have been designed to compute these message digests. The two best-known are SHA1, the secure hash algorithm developed by the National Institute of Standards and Technology, and MD5, an algorithm invented by Ronald Rivest of MIT. Both algorithms scramble the bits of a message in ingenious ways. For details about these algorithms, see, for example, Cryptography and Network Security, 4th ed., by William Stallings (Prentice Hall 2005). Note that recently, subtle regularities have been discovered in both algorithms. At this point, most cryptographers recommend avoiding MD5 and using SHA1 until a stronger alternative becomes available. (See /rsalabs/node.asp?id=2834 for more information.) The Java programming language implements both SHA1 and MD5. The MessageDigest class is a factory for creating objects that encapsulate the fingerprinting algorithms. It has a static method, called getInstance, that returns an object of a class that extends the MessageDigest class. This means the MessageDigest class serves double duty:•As a factory class•As the superclass for all message digest algorithmsFor example, here is how you obtain an object that can compute SHA fingerprints:MessageDigest alg = MessageDigest.getInstance("SHA-1");(To get an object that can compute MD5, use the string "MD5" as the argument to getInstance.)After you have obtained a MessageDigest object, you feed it all the bytes in the message by repeatedly calling the update method. For example, the following code passes all bytes in a file to the alg object just created to do the fingerprinting:InputStream in = . . 
.int ch;while ((ch = in.read()) != -1)alg.update((byte) ch);Alternatively, if you have the bytes in an array, you can update the entire array at once:byte[] bytes = . . .;alg.update(bytes);When you are done, call the digest method. This method pads the input—as required by the fingerprinting algorithm—does the computation, and returns the digest as an array of bytes.byte[] hash = alg.digest();The program in Listing 9-15 computes a message digest, using either SHA or MD5. You can load the data to be digested from a file, or you can type a message in the text area.Message SigningIn the last section, you saw how to compute a message digest, a fingerprint for the original message. If the message is altered, then the fingerprint of the altered message will not match the fingerprint of the original. If the message and its fingerprint are delivered separately, then the recipient can check whether the message has been tampered with. However, if both the message and the fingerprint were intercepted, it is an easy matter to modify the message and then recompute the fingerprint. After all, the message digest algorithms are publicly known, and they don't require secret keys. In that case, the recipient of the forged message and the recomputed fingerprint would never know that the message has been altered. Digital signatures solve this problem.To help you understand how digital signatures work, we explain a few concepts from the field called public key cryptography. Public key cryptography is based on the notion of a public key and private key. The idea is that you tell everyone in the world your public key. However, only you hold the private key, and it is important that you safeguard it and don't release it to anyone else. The keys are matched by mathematical relationships, but the exact nature of these relationships is not important for us. (If you are interested, you can look it up in The Handbook of Applied Cryptography at http://www.cacr.math.uwaterloo.ca/hac/.)The keys are quite long and complex. For example, here is a matching pair of public andprivate Digital Signature Algorithm (DSA) keys.Public key:Code View:p:fca682ce8e12caba26efccf7110e526db078b05edecbcd1eb4a208f3ae1617ae01f35b91a47e6df 63413c5e12ed0899bcd132acd50d99151bdc43ee737592e17q: 962eddcc369cba8ebb260ee6b6a126d9346e38c5g:678471b27a9cf44ee91a49c5147db1a9aaf244f05a434d6486931d2d14271b9e35030b71fd7 3da179069b32e2935630e1c2062354d0da20a6c416e50be794ca4y:c0b6e67b4ac098eb1a32c5f8c4c1f0e7e6fb9d832532e27d0bdab9ca2d2a8123ce5a8018b8161 a760480fadd040b927281ddb22cb9bc4df596d7de4d1b977d50Private key:Code View:p:fca682ce8e12caba26efccf7110e526db078b05edecbcd1eb4a208f3ae1617ae01f35b91a47e6df 63413c5e12ed0899bcd132acd50d99151bdc43ee737592e17q: 962eddcc369cba8ebb260ee6b6a126d9346e38c5g:678471b27a9cf44ee91a49c5147db1a9aaf244f05a434d6486931d2d14271b9e35030b71fd73 da179069b32e2935630e1c2062354d0da20a6c416e50be794ca4x: 146c09f881656cc6c51f27ea6c3a91b85ed1d70aIt is believed to be practically impossible to compute one key from the other. That is, even though everyone knows your public key, they can't compute your private key in your lifetime, no matter how many computing resources they have available.It might seem difficult to believe that nobody can compute the private key from the public keys, but nobody has ever found an algorithm to do this for the encryption algorithms that are in common use today. 
If the keys are sufficiently long, brute force—simply trying all possible keys—would require more computers than can be built from all the atoms in the solar system, crunching away for thousands of years. Of course, it is possible that someone could come up withalgorithms for computing keys that are much more clever than brute force. For example, the RSA algorithm (the encryption algorithm invented by Rivest, Shamir, and Adleman) depends on the difficulty of factoring large numbers. For the last 20 years, many of the best mathematicians have tried to come up with good factoring algorithms, but so far with no success. For that reason, most cryptographers believe that keys with a "modulus" of 2,000 bits or more are currently completely safe from any attack. DSA is believed to be similarly secure.Figure 9-12 illustrates how the process works in practice.Suppose Alice wants to send Bob a message, and Bob wants to know this message came from Alice and not an impostor. Alice writes the message and then signs the message digest with her private key. Bob gets a copy of her public key. Bob then applies the public key to verify the signature. If the verification passes, then Bob can be assured of two facts:•The original message has not been altered.•The message was signed by Alice, the holder of the private key that matches the public key that Bob used for verification.You can see why security for private keys is all-important. If someone steals Alice's private key or if a government can require her to turn it over, then she is in trouble. The thief or a government agent can impersonate her by sending messages, money transfer instructions, and so on, that others will believe came from Alice.The X.509 Certificate FormatTo take advantage of public key cryptography, the public keys must be distributed. One of the most common distribution formats is called X.509. Certificates in the X.509 format are widely used by VeriSign, Microsoft, Netscape, and many other companies, for signing e-mail messages, authenticating program code, and certifying many other kinds of data. The X.509 standard is part of the X.500 series of recommendations for a directory service by the international telephone standards body, the CCITT.The precise structure of X.509 certificates is described in a formal notation, called "abstract syntax notation #1" or ASN.1. Figure 9-13 shows the ASN.1 definition of version 3 of the X.509 format. The exact syntax is not important for us, but, as you can see, ASN.1 gives a precise definition of the structure of a certificate file. The basic encoding rules, or BER, and a variation, called distinguished encoding rules (DER) describe precisely how to save this structure in abinary file. That is, BER and DER describe how to encode integers, character strings, bit strings, and constructs such as SEQUENCE, CHOICE, and OPTIONAL.中文译文:Java核心技术卷Ⅱ高级特性当Java技术刚刚问世时,令人激动的并不是因为它是一个设计完美的编程语言,而是因为它能够安全地运行通过因特网传播的各种applet。
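The digest walkthrough above shows the relevant calls only as fragments scattered through the prose. The following self-contained sketch simply assembles the same API usage described in the text (MessageDigest.getInstance, update, digest); the input file name and the hex formatting are assumptions added for the example and are not from the book.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

// Illustrative sketch assembling the MessageDigest calls described above.
// The file name "message.txt" is an assumed example input.
public class DigestExample {
    public static void main(String[] args) throws Exception {
        MessageDigest alg = MessageDigest.getInstance("SHA-1");
        try (InputStream in = new FileInputStream("message.txt")) {
            int ch;
            while ((ch = in.read()) != -1) {
                alg.update((byte) ch);        // feed the message one byte at a time
            }
        }
        byte[] hash = alg.digest();           // pad, compute, and return the fingerprint
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02X ", b));
        }
        System.out.println("SHA-1: " + hex.toString().trim());
    }
}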

计算机专业外文文献翻译

毕业设计(论文)外文文献翻译(本科学生用)题目:Plc based control system for the music fountain 学生姓名:_ ___学号:060108011117 学部(系): 信息学部专业年级: _06自动化(1)班_指导教师: ___职称或学位:助教__20 年月日外文文献翻译(译成中文1000字左右):【主要阅读文献不少于5篇,译文后附注文献信息,包括:作者、书名(或论文题目)、出版社(或刊物名称)、出版时间(或刊号)、页码。

提供所译外文资料附件(印刷类含封面、封底、目录、翻译部分的复印件等,网站类的请附网址及原文】英文节选原文:Central Processing Unit (CPU) is the brain of a PLC controller. CPU itself is usually one of the microcontrollers. Aforetime these were 8-bit microcontrollers such as 8051, and now these are 16-and 32-bit microcontrollers. Unspoken rule is that you’ll find mostly Hitachi and Fujicu microcontrollers in PLC controllers by Japanese makers, Siemens in European controllers, and Motorola microcontrollers in American ones. CPU also takes care of communication, interconnectedness among other parts of PLC controllers, program execution, memory operation, overseeing input and setting up of an output. PLC controllers have complex routines for memory checkup in order to ensure that PLC memory was not damaged (memory checkup is done for safety reasons).Generally speaking, CPU unit makes a great number of check-ups of the PLC controller itself so eventual errors would be discovered early. You can simply look at any PLC controller and see that there are several indicators in the form. of light diodes for error signalization.System memory (today mostly implemented in FLASH technology) is used by a PLC for a process control system. Aside form. this operating system it also contains a user program translated forma ladder diagram to a binary form. FLASH memory contents can be changed only in case where user program is being changed. PLC controllers were used earlier instead of PLASH memory and have had EPROM memory instead of FLASH memory which had to be erased with UV lamp and programmed on programmers. With the use of FLASH technology this process was greatly shortened. Reprogramming a program memory is done through a serial cable in a program for application development.User memory is divided into blocks having special functions. Some parts of a memory are used for storing input and output status. The real status of an input is stored either as “1”or as “0”in a specific memory bit/ each input or output has one corresponding bit in memory. Other parts of memory are used to store variable contents for variables used in used program. For example, time value, or counter value would be stored in this part of the memory.PLC controller can be reprogrammed through a computer (usual way), but also through manual programmers (consoles). This practically means that each PLC controller can programmed through a computer if you have the software needed for programming. Today’s transmission computers are ideal for reprogramming a PLC controller in factory itself. This is of great importance to industry. Once the system is corrected, it is also important to read the right program into a PLC again. It is also good to check from time to time whether program in a PLC has not changed. This helps to avoid hazardous situations in factory rooms (some automakers have established communication networks which regularly check programs in PLC controllers to ensure execution only of good programs). Almost every program for programming a PLC controller possesses various useful options such as: forced switching on and off of the system input/outputs (I/O lines),program follow up in real time as well as documenting a diagram. This documenting is necessary to understand and define failures and malfunctions. Programmer can add remarks, names of input or output devices, and comments that can be useful when finding errors, or with system maintenance. 
Adding comments and remarks enables any technician (and not just a person who developed the system) to understand a ladder diagram right away. Comments and remarks can even quote precisely part numbers if replacements would be needed. This would speed up a repair of any problems that come up due to bad parts. The old way was such that a person who developed a system had protection on the program, so nobody aside from this person could understand how it was done. Correctly documented ladder diagram allows any technician to understand thoroughly how system functions.Electrical supply is used in bringing electrical energy to central processing unit. Most PLC controllers work either at 24 VDC or 220 VAC. On some PLC controllers you’ll find electrical supply as a separate module. Those are usually bigger PLC controllers, while small and medium series already contain the supply module. User has to determine how much current to take from I/O module to ensure that electrical supply provides appropriate amount of current. Different types of modules use different amounts of electrical current. This electrical supply is usually not used to start external input or output. User has to provide separate supplies in starting PLC controller inputs because then you can ensure so called “pure” supply for the PLC controller. With pure supply we mean supply where industrial environment can not affect it damagingly. Some of the smaller PLC controllers supply their inputs with voltage from a small supply source already incorporated into a PLC.中文翻译:从结构上分,PLC分为固定式和组合式(模块式)两种。
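The excerpt explains that the CPU keeps the status of every input and output as a single bit in memory and repeatedly executes the user program against those bits. Purely as an illustration (this is not from the original text and is not real PLC firmware), the following Java sketch simulates one simplified scan cycle: read inputs into an image table, evaluate one example rung, then update the output image; the rung logic chosen here is an arbitrary assumption.

import java.util.BitSet;

// Simplified, illustrative simulation of a PLC scan cycle.
// The example rung (output 0 = input 0 AND NOT input 1) is an assumption, not from the text.
public class ScanCycle {
    private final BitSet inputImage = new BitSet(16);   // one bit per input, as described above
    private final BitSet outputImage = new BitSet(16);  // one bit per output

    void scanOnce(boolean[] physicalInputs) {
        // 1. Read physical inputs into the input image table
        for (int i = 0; i < physicalInputs.length; i++) {
            inputImage.set(i, physicalInputs[i]);
        }
        // 2. Execute the user program (a single example rung here)
        outputImage.set(0, inputImage.get(0) && !inputImage.get(1));
        // 3. Update physical outputs from the output image table
        System.out.println("Output 0 = " + outputImage.get(0));
    }

    public static void main(String[] args) {
        new ScanCycle().scanOnce(new boolean[] { true, false });
    }
}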

计算机专业外文文献及翻译--微软Visual Studio

微软Visual Studio1微软Visual StudioVisual Studio 是微软公司推出的开发环境,Visual Studio可以用来创建Windows平台下的Windows应用程序和网络应用程序,也可以用来创建网络服务、智能设备应用程序和Office 插件。

Visual Studio是一个来自微软的集成开发环境IDE,它可以用来开发由微软视窗,视窗手机,Windows CE、.NET框架、.NET精简框架和微软的Silverlight支持的控制台和图形用户界面的应用程序以及Windows窗体应用程序,网站,Web应用程序和网络服务中的本地代码连同托管代码。

Visual Studio包含一个由智能感知和代码重构支持的代码编辑器。

集成的调试工作既作为一个源代码级调试器又可以作为一台机器级调试器。

其他内置工具包括一个窗体设计的GUI应用程序,网页设计师,类设计师,数据库架构设计师。

它有几乎各个层面的插件增强功能,包括增加对支持源代码控制系统(如Subversion和Visual SourceSafe)并添加新的工具集设计和可视化编辑器,如特定于域的语言或用于其他方面的软件开发生命周期的工具(例如Team Foundation Server的客户端:团队资源管理器)。

Visual Studio通过"语言服务"的方式支持不同的编程语言：只要存在相应的语言服务，代码编辑器和调试器就可以（在不同程度上）支持几乎任何编程语言。

内置的语言包括C/C++（通过Visual C++）、VB.NET（通过Visual Basic .NET）、C#（通过Visual C#）和F#（自Visual Studio 2010起）；对其他语言（如M、Python和Ruby等）的支持，可通过安装单独的语言服务获得。

它也支持XML/XSLT、HTML/XHTML、JavaScript和CSS。面向特定语言的Visual Studio版本也是存在的：微软Visual Basic、Visual J#、Visual C#和Visual C++。

微软免费提供Visual Studio 2010各组件的"Express"（速成）版本：Visual Basic、Visual C#、Visual C++和Visual Web Developer。

Visual Studio 2010、2008年和2005专业版,以及Visual Studio 2005的特定语言版本(Visual Basic、C++、C#、J#),通过微软的下载DreamSpark计划,对学生免费。

2 架构
Visual Studio本身并不内置对任何编程语言、解决方案或工具的支持。

相反,它允许插入各种功能。

特定的功能是以VSPackage（VS包）的形式编写的代码。

安装后，这些功能以服务（Service）的形式提供给IDE。

IDE提供三项服务：SVsSolution提供枚举项目和解决方案的能力；SVsUIShell提供窗口和用户界面功能（包括标签页、工具栏和工具窗口）；SVsShell负责VSPackage的注册。

此外,IDE还可以负责协调和服务之间实现通信。

所有的编辑器、设计器、项目类型和其他工具都是以VSPackage的形式实现的。

Visual Studio 使用COM访问VSPackage。

Visual Studio SDK中还包括托管包框架（MPF），它是一组围绕COM接口的托管封装，允许用任何兼容CLI的语言来编写VSPackage。

然而,MPF并不提供所有的Visual Studio COM 功能。

对特定编程语言的支持是通过专门的VSPackage来提供的，这种VSPackage称为语言服务（Language Service）。

语言服务定义了各种接口，VSPackage的实现可以实现这些接口，从而添加对多种功能的支持。

可以用这种方式添加的功能包括语法着色、语句自动完成、括号匹配、参数信息工具提示、成员列表以及后台编译的错误标记。

如果实现了相应的接口，该语言就可以使用这些功能。

语言服务需要针对每种语言分别实现。

其实现可以重用该语言的解析器或编译器的代码。

语言服务可以在本机代码或托管代码实现。

对于本机代码，可以使用本地COM接口或Babel框架（Visual Studio SDK的一部分）。

对于托管代码，MPF提供了用于编写托管语言服务的封装。

Visual Studio不包括任何源头控制内建支援,但它定义了两种可供选择的源代码控制系统的方法可以用IDE集成。

一个源代码控制VSPackage可以提供自己的定制的用户界面。

与此相反，使用MSSCCI（Microsoft源代码控制接口）的源代码管理插件提供了一组函数，用来实现各项源代码控制功能，同时使用标准的Visual Studio用户界面。

MSSCCI最初用于将Visual SourceSafe 6.0集成进来，后来通过Visual Studio SDK开放。

Visual Studio .NET 2002使用MSSCCI 1.1，Visual Studio .NET 2003使用MSSCCI 1.2。

Visual Studio 2005、2008和2010使用MSSCCI 1.3版,增加了重命名和删除的支持以及异步传输。

Visual Studio支持运行(每一个都有它自己的一套VSPackage)多个实例的环境。

这些实例使用不同的注册表配置单元来存储它们的配置状态和区别他们的AppID(应用程序ID)。

每个实例由特定于AppID的.exe文件启动，该文件选择AppID、设置注册表根配置单元并启动IDE。

为某个AppID注册的VSPackage会与该AppID下的其他VSPackage集成在一起。

Visual Studio的各种产品版本,是使用不同的AppID。

在Visual Studio速成版产品都设有自己的AppIds,但标准,专业和团队套件产品共享相同的AppID。

因此，Express版本可以与其他版本并行安装；而其他各个版本则不同，它们会更新同一个安装。

专业版包含的VSPackage是标准版的超集，团队套件包含的VSPackage又是前面各版本的超集。

Visual Studio 2008中的Visual Studio Shell就利用了这一AppID系统。

3 特点
3.1 代码编辑器
Visual Studio像任何其他的集成开发环境一样，包括一个支持语法高亮和代码自动完成的代码编辑器；智能感知不仅适用于变量、函数和方法，也适用于循环和查询等语言结构。

在开发网站和Web应用程序时,智能感知是由内部语言支持的,当然XML、层叠样式表和JavaScript也同样支持。

智能感知建议会以无模式列表框的形式自动弹出，覆盖在代码编辑器上方。

从Visual Studio 2008起，该列表框可以暂时变为半透明，以便看到被它遮住的代码。

代码编辑器是用于所有支持的语言。

Visual Studio代码编辑器还支持设置书签以便在代码中快速导航。其他辅助手段包括折叠代码块和渐进式搜索，此外还有普通的文本搜索和正则表达式搜索（在计算机科学中，正则表达式是指用来描述或匹配一系列符合某个句法规则的字符串的表达式；在很多文本编辑器或其他工具里，正则表达式通常被用来检索和/或替换符合某个模式的文本内容）。

代码编辑器还包括一个多项目剪贴板和任务列表。

代码编辑器支持代码片段,它保存模板重复的代码,也可以被插入到正在进行这项工作到的代码和项目自定义中。

一个代码片段管理工具也是这样建立的。

这些工具是在浮动窗口显示,当这个窗口不被使用或者停在屏幕一侧时,可以将它设置成自动隐藏。

在Visual Studio代码编辑器也支持代码重构包括参数重新排序,变量和方法的重命名,界面的提取和内部成员属性的封装等等。

Visual Studio提供了背景编译(也称为增量编译)。

在编写代码的同时，Visual Studio会在后台对其进行编译，以便就语法和编译错误提供反馈，这些错误用红色波浪线标出。

警告标有绿色下划线。

背景编译不生成可执行代码,因为它需要一个不同的编译器而不是一个生成可执行代码的编译器。

后台编译最初是随Microsoft Visual Basic语言一起推出的，但现在已经扩展到了所有内置语言。

3.2调试器Visual Studio包含一个调试器既可以作为一个源代码级调试器工作,并作为机器级调试器工作。

它可工作在托管代码以及本机代码,可用Visual Studio支持的任何语言调试应用程序。

此外,它也可以附加到正在运行的进程,监测和调试这些进程。

如果源代码的运行过程是可用的,它就会显示代码的运行。

如果源代码是不可用,它可以显示反汇编。

Visual Studio调试器还可以创建内存转储以及负荷调试它们。

多线程程序也支持。

调试器可以被配置为一个应用程序,运行在Visual Studio环境之外。

调试器可以设置(允许执行被暂时停止的位置)和监视(用于监视变量的值执行进度)断点。

断点是有条件的,这意味着他们条件满足时触发。

代码可以加强,即一次只运行一条(源代码)。

它可以步进它里面的功能来调试,或者步过,即执行机构的功能。

它还支持"编辑并继续"，即允许在调试过程中编辑代码，但该功能只支持32位，不支持64位。

在调试时,如果鼠标指针徘徊在任何变量,其当前值显示在工具提示(“数据提示”),如果需要的话,它也可以修改。

在编码和调试时，Visual Studio调试器允许在Immediate（即时）工具窗口中手动调用某些函数。

方法的参数也可以在即时窗口中提供。

4设计Visual Studio包括一个可视化设计,以帮助开发主机的应用程序。

这些工具包括:4.1 Windows窗体设计器Windows窗体设计器是用Windows窗体构建图形用户界面应用程序。

它包括一个UI调色板部件和一些可以在窗体表面拖拽的控件(包括按钮,进度条,标签,布局容器和其他控制),布局可以通过控制其他容器的框架控件或锁定到窗体的一面来改变。

显示数据的控件(如文本框,列表框,网格视图等)都可以绑定到数据源,如数据库或查询。

UI是用一个事件驱动的编程模型与代码关联的。

设计器会生成C#或应用程序代码。

4.2 WPF设计器
WPF设计器，代号为Cider，随Visual Studio 2008首次引入。

像Windows窗体设计器一样它支持拖拽。

它是用来提交用户界面对象的Windows Presentation Foundation。

它支持所有功能,包括WPF的数据绑定和自动布局管理。

它为UI生成的XAML代码。

生成的XAML文件与Microsoft Expression Design（一款面向设计人员的产品）兼容。

XAML代码是联系在一起的代码使用代码隐藏模型。

4.3 网页设计器/开发
Visual Studio还包括一个网站编辑器和网页设计器，允许通过拖放部件来设计网页。

它用于开发ASP.NET应用程序，并支持HTML、CSS和JavaScript。

它使用代码隐藏（code-behind）模型与代码关联。

从Visual Studio 2008起，网页设计器所使用的布局引擎与Microsoft Expression Web共用。

此外还有对ASP.NET MVC技术的支持，可作为单独的下载获得。
