Abstract View of System Components
Cen Huameng, Graduation Design Foreign-Literature Translation: Benefits of SVC and STATCOM in Power System Applications
Guangxi University of Science and Technology, Graduation Design Foreign-Literature Translation. School: School of Electrical and Information Engineering. Major: Electrical Engineering and Automation. Class: Electrical 101. Student No.: 201000307027. Name: Cen Huameng. Supervisor: Zhou Xiaohua. December 28, 2013.

Benefits of SVC and STATCOM for Electric Utility Application

Abstract -- An examination of the behavior of SVCs and STATCOMs in electric power systems is presented. The paper is based on analytical and simulation analysis, and its conclusions can be used as power industry guidelines. We explain the principal structures of SVCs and STATCOMs, the models for dynamic studies, and the impact of these devices on steady state voltage and transient voltage stability. A sensitivity analysis is provided which shows the impact of SVCs and STATCOMs with regard to network strength. Harmonic issues, space requirements, and price considerations are also briefly addressed.

Index Terms -- Dynamic Compensators, SVC, STATCOM, Short Circuit Capacity, Gain Sensitivity, Short Term Voltage Stability, Harmonics

I. INTRODUCTION
Shunt-connected static var compensators (SVCs) are used extensively to control the AC voltage in transmission networks. Power electronic equipment such as the thyristor controlled reactor (TCR) and the thyristor switched capacitor (TSC) have gained a significant market, primarily because of their well-proven robustness in supplying dynamic reactive power with fast response time and low maintenance. With the advent of high power gate turn-off thyristors and transistor devices (GTO, IGBT, ...), a new generation of power electronic equipment, the STATCOM, shows great promise for application in power systems [3,4]. This paper aims to explain the benefits of SVCs and STATCOMs for application in utility power systems. The installation of a large number of SVCs and the experience gained from recent STATCOM projects throughout the world motivate us to clarify certain aspects of these devices.

II. BASIC DESCRIPTION
This section briefly explains the basic configuration of SVCs and STATCOMs.

A. SVC
The compensator normally includes a thyristor-controlled reactor (TCR), thyristor-switched capacitors (TSCs) and harmonic filters. It might also include mechanically switched shunt capacitors (MSCs), in which case the term static var system is used. The harmonic filters (for the TCR-produced harmonics) are capacitive at fundamental frequency. The TCR is typically larger than the TSC blocks so that continuous control is realized. Other possibilities are fixed capacitors (FCs) and thyristor switched reactors (TSRs). Usually a dedicated transformer is used, with the compensator equipment at medium voltage. The transmission side voltage is controlled, and the Mvar ratings are referred to the transmission side. The rating of an SVC can be optimized to meet the required demand. The rating can be symmetric or asymmetric with respect to inductive and capacitive reactive power. For example, the rating can be 200 Mvar inductive and 200 Mvar capacitive, or 100 Mvar inductive and 200 Mvar capacitive.

B. STATCOM
The voltage-sourced converter (VSC) is the basic electronic part of a STATCOM; it converts the dc voltage into a three-phase set of output voltages with the desired amplitude, frequency, and phase. There are different methods of realizing a voltage-sourced converter for power utility application. Based on harmonic and loss considerations, pulse width modulation (PWM) or multiple converters are used. Inherently, STATCOMs have a symmetrical rating with respect to inductive and capacitive reactive power. For example, the rating can be 100 Mvar inductive and 100 Mvar capacitive. For an asymmetric rating, STATCOMs need a complementary reactive power source.
This can be realized, for example, with MSCs.

VII. THE FUNCTIONAL RATING CONCEPT
Traditionally, SVCs of a common design have been used to handle different types of network problems. The trend today, however, is to tailor SVCs to their intended use. This is important in order to make SVCs cost efficient. For steady state voltage support, i.e., to follow the daily load pattern, bulk reactive power combined with stepless, smooth voltage control is desired. Vernier voltage regulation can be provided by a TCR running in parallel with harmonic filters. The bulk reactive power is provided by mechanically switched capacitor banks (MSCs) or reactors (MSRs) governed by the SVC controller. Thus SVCs serve the purpose of continuously maintaining a smooth voltage, piloting the MSC switching. If the task is to support a system limited by post-contingency voltage instability or unacceptable voltage levels, a large amount of quickly controllable reactive power is needed for a short duration. An SVC with additional TSCs is an excellent choice. Post-recovery voltage support may also be necessary; this is then preferably provided by MSCs governed by the SVC. For temporary overvoltages, large inductive reactive power is needed for a short period of time. The standard TCR has some short-time overcurrent capability. This capacity can easily be extended by lowering its steady state operating temperature and by "undersizing" the reactors.

A. Enhanced SVCs
The SVC characteristic at depressed voltage can be efficiently improved by adding an extra TSC. This branch is intended to operate only during undervoltage conditions. It can be added without introducing additional cost in other parts of the SVC. Most important, the current rating or the voltage capability of the power transformer does not need to be increased. Power transformers allow large overcurrents for a limited time (IEEE C57-115 can be used as a guide for available capacity); in many cases three times overload in current is available for 10 seconds. The additional TSC rating is typically in the range of 50 to 100% of the SVC rating.

B. SVC Short Term Overload
The maximum power from an SVC at a given voltage is determined by its reactance. No overload capacity is available unless the reactance is lowered, e.g., by adding a TSC. For overvoltages, however, the SVC reactance is no longer the limiting factor; instead, the current in components defines the limit. In most cases the thyristors set the limit. The design is made so that the thyristors run at the maximum allowed temperature at maximum steady state system voltage. A margin to destructive temperatures is reserved in order to handle fault cases. The Forbes 500 kV static var system near Duluth, Minnesota, USA is an interesting example of an enhanced SVC [5].

VIII. HARMONICS
Both SVCs and STATCOMs generate harmonics. The TCR of an SVC is a harmonic current source. Network harmonic voltage distortion occurs as a result of these currents entering the power system. The STATCOM is a harmonic voltage source. Network voltage harmonic distortion occurs as a result of voltage division between the STATCOM phase impedance and the network impedance. The major harmonic generation in SVCs is at low frequencies; above the 15th harmonic the contribution is normally small. At lower frequencies the generation is large and filters are needed. SVCs normally have at least 5th and 7th harmonic filters.
The filter rating is in the range of 25-50% of the TCR size. STATCOMs with PWM operation have their major harmonic generation at higher frequencies. The major contributions are at odd multiples of the PWM switching frequency; at even multiples the levels are lower. The harmonic generation decays with increasing frequency. STATCOMs might also generate harmonics in the same spectra as conventional SVCs. The magnitudes depend on the converter topology and on the modulation and switching frequency used. In most cases STATCOMs, as well as SVCs, require harmonic filters.

IX. FOOTPRINT
More and more frequently, the footprint available for prospective STATCOMs or SVCs is restricted. The trend, as in many other fields, is more capacity in less space. Requirements for extremely tight designs, however, result in higher costs. In general the footprint issue does not seem to hinder the utilization of STATCOMs or SVCs, but occasionally a STATCOM has been preferred based on an anticipated smaller footprint. When comparing SVCs with STATCOMs, it is tempting to assume that the latter will fit within a much smaller footprint, as the passive reactive elements (air core reactors and high voltage capacitor banks) are "replaced" with semiconductor assemblies. In the authors' opinion, however, this assumption remains to be proved in practice. The main reason is that the voltage-sourced converter concepts applied in STATCOMs to date have been built with several (even as many as eight) inverter bridges in parallel. This design philosophy implies many current paths, high fault currents and complex magnetic interfaces between the converters and the grid. All in all, not all STATCOMs come out downsized compared to SVCs. Also, the higher losses in the STATCOM will require substantially larger cooling equipment. However, as STATCOM technology evolves, including the use of very compact inverter assemblies with series-connected semiconductor devices and with pulse width modulation, there is a definite potential for downsizing. In the case of SVCs, the industry has a long history of product development in which, when necessary, measures have been taken to downsize the installation. Such measures include elevated design of apparatus, stacking of components (reactors and capacitors), vertical orientation of busbars and the use of non-magnetic material in nearby structures. In a few extreme cases iron core reactors have been utilized in order to allow installation in very tight premises. In addition, the development of much higher power density in high power thyristors and capacitors contributes to physically smaller SVCs.

X. LIFE CYCLE OR EVALUATED COSTS
It is the authors' experience that the investment cost of SVCs is today substantially lower than that of comparable STATCOMs. As STATCOMs provide improved performance, they will be the choice in cases where this can be justified, such as flicker compensation at large electric arc furnaces or in combination with active power transfer (back-to-back DC schemes). The two concepts cannot be compared on a subsystem basis, but it is clear that the cost of the turn-off semiconductor devices used in VSC schemes must come down significantly for the overall cost to favor the STATCOM. In other industries using high power semiconductors, such as electrical traction and drives, the mainstream transition to VSC technology was completed long ago, and it is reasonable to believe that transmission applications, benefiting from traction and drive developments, will follow.
Although the semiconductor volumes in these fields are relatively small, there is potential for the cost of STATCOMs to come down. Apart from the losses, the life cycle cost of STATCOMs and SVCs will be driven by the effort required for operation and maintenance. Both technologies can be considered maintenance free; only 1-2 man-days of maintenance with a minimum of equipment are expected as an annual average. Maintenance is primarily needed for auxiliary systems such as the converter cooling and building systems. In all, the difference in the cost of these efforts between STATCOM and SVC will be negligible.

XI. LOSSES
The primary losses in SVCs are in the step-down transformers, the thyristor-controlled air core reactors and the thyristor valves. For STATCOMs the losses in the converter bridges dominate. For both technologies the long-term losses will depend on the specific operation of each installation. The evaluation of investments in transmission increasingly includes the costs over the entire life cycle, not only the initial investment, so losses become increasingly important. With a typical evaluation at $3000/kW (based on 30 years), and additional average evaluated losses of, say, 300 kW (compared to an SVC), the additional burden on the STATCOM is significant. The evaluated losses at full output contribute significantly to this, but with less weight on these the difference will be much smaller. Here the evolution does not help the STATCOM, as its adequate performance is assumed to be achieved with high frequency PWM, implying that the losses will be quite high even at small reactive power output. We expect most utilities to operate their facilities close to zero Mvar output, in order to have SVCs or STATCOMs available for dynamic voltage support. In these cases both technologies will operate with well below 0.5% losses (based on the step-down transformer rating). However, the losses will typically increase quite rapidly if the operating point is offset from zero. This is valid for both SVCs and STATCOMs. SVCs will frequently operate with both switched capacitors and controlled reactors at the same time, while the converter losses of STATCOMs will increase rapidly with output current. The losses of STATCOMs at rated output will be higher than for comparable SVCs.

XII. CONCLUSIONS
We have examined the performance of SVCs and STATCOMs in electric power systems. Based on the analytical and simulation studies, the impact of SVCs and STATCOMs on the studied power system is presented. It was shown that both devices significantly improve the transient voltage behavior of power systems. Though SVCs and STATCOMs work on different principles, their impact on increasing power system transmission capacity can be comparable. Specifically, we describe "enhanced" SVCs with voltage recovery performance similar to STATCOMs. Other issues such as losses, footprint and harmonics must be examined for each scenario for an optimum investment.

Benefits of SVC and STATCOM in Power System Applications. Abstract: An examination of the performance of SVCs and STATCOMs in electric power systems is presented.
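As a numerical aside on Sections VII.B and XI, the C sketch below contrasts the maximum capacitive output of an SVC, which is set by its susceptance and therefore falls with the square of the voltage, with that of a current-limited STATCOM, which falls only linearly, and it prices the 300 kW of extra average losses quoted in Section XI at the $3000/kW evaluation figure. The 200 Mvar rating and the 0.8 per-unit depressed voltage are illustrative assumptions, not values taken from the paper.

#include <stdio.h>

int main(void)
{
    /* Assumed figures: a 200 Mvar capacitive SVC and a 200 Mvar STATCOM,
       both rated at 1.0 per-unit bus voltage.                            */
    const double q_rated_mvar = 200.0;
    const double v_pu         = 0.8;   /* depressed voltage after a fault */

    /* SVC: output limited by its capacitive susceptance, Q = B * V^2,
       so it falls with the square of the voltage (Section VII.B).        */
    double q_svc = q_rated_mvar * v_pu * v_pu;

    /* STATCOM: output limited by converter current, Q = V * I_max,
       so it falls only linearly with voltage.                            */
    double q_statcom = q_rated_mvar * v_pu;

    printf("At %.1f pu voltage: SVC about %.0f Mvar, STATCOM about %.0f Mvar\n",
           v_pu, q_svc, q_statcom);

    /* Section XI's loss evaluation: 300 kW of extra average losses
       priced at $3000/kW over 30 years.                                  */
    printf("Evaluated cost of 300 kW of extra losses: $%.0f\n", 300.0 * 3000.0);
    return 0;
}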
SSD2 Electronic Textbook, Unit 1
1.1 Overview of Computer Systems
This section provides a top-level view of the different components in a computer system. You will also obtain a basic understanding of how a computer works using its sub-components.
The modern computer operates in a similar fashion. Input can be sent to a computer through the keyboard or mouse. The computer then processes the input, stores the result, and displays the result via the monitor, speaker, printer, or other output devices. For example, when you request a web page by typing in its URL (Uniform Resource Locator), "", the computer processes your input by fetching the requested page over the Internet. It then displays the fetched page on your monitor as output.
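A minimal sketch of this input-process-output cycle, written in C as an assumed illustration (it is not part of the SSD2 text): the program takes input from the keyboard, processes and stores a result in memory, and sends the result to the monitor.

#include <stdio.h>

int main(void)
{
    double celsius;

    /* Input: data arrives from the keyboard. */
    printf("Enter a temperature in Celsius: ");
    if (scanf("%lf", &celsius) != 1)
        return 1;

    /* Process and store: the CPU computes a result and keeps it in memory. */
    double fahrenheit = celsius * 9.0 / 5.0 + 32.0;

    /* Output: the result is displayed on the monitor. */
    printf("%.1f C is %.1f F\n", celsius, fahrenheit);
    return 0;
}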
Foreign Literature on the Simulation Design of Optical Sensor Arrays Based on OptiSystem
基于optisystem光学传感器阵列的仿真设计的国外文献IntroductionOptical sensors are widely used in various fields, such as environmental monitoring, medical diagnosis, and industrial applications. Optical sensor arrays, which consist of multiple sensor elements, offer significant advantages in terms of sensitivity, selectivity, and multiplexing capabilities. However, designing and optimizing optical sensor arrays can be a challenging task due to the complex interactions between light and the sensor elements.Simulation tools provide an effective way to study the performance of optical sensor arrays without the need for expensive and time-consuming experimental setups. OptiSystem, a comprehensive software package for designing and simulating optical communication systems, offers a powerful platform for simulating optical sensor arrays. In this review, we provide an overview of the simulation design of optical sensor arrays based on OptiSystem, including the key features of the software, the simulation techniques used, and the applications of optical sensor arrays in different fields.Overview of OptiSystemOptiSystem is a versatile software package developed by Optiwave Systems Inc. It provides a range of tools for designing and simulating optical communication systems, including optical sensors. The software enables users to create complex optical systems by combining optical components such as sources, detectors, fibers, and splitters, and simulate the performance of these systems under different conditions.Key features of OptiSystem include a user-friendly graphical interface, a wide range of built-in optical components, advanced simulation algorithms, and comprehensive data analysis tools. The software also supports a variety of measurement and analysis techniques, such as power spectral density analysis, eye diagram analysis, and bit error rate analysis.Simulation techniques for optical sensor arraysSimulation design of optical sensor arrays based on OptiSystem typically involves the following steps:1. System modeling: The first step in designing an optical sensor array is to model the system architecture using OptiSystem's graphical interface. This involves selecting the appropriate optical components, arranging them in the desired configuration, and setting the operating parameters of the system.2. Light propagation simulation: Once the system architecture is defined, the next step is to simulate the propagation of light through the optical sensor array. OptiSystem uses ray tracing and beam propagation techniques to calculate the transmission and reflection of light at each sensor element, taking into account factors such as refractive index, absorption, and scattering.3. Sensor response simulation: After simulating the light propagation, the next step is to model the response of the sensor elements to the incident light. OptiSystem provides a range of models for different types of sensors, such as photodiodes, photoconductors, and photomultipliers, allowing users to accurately predict the output signal of each sensor element.4. Signal processing and analysis: Finally, the simulated output signals from the sensor elements can be processed and analyzed using OptiSystem's data analysis tools. This allows users to extract useful information from the sensor array, such as the intensity of the incident light, the wavelength of the light, and the spatial distribution of the light.Applications of optical sensor arraysOptical sensor arrays have a wide range of applications in various fields, including:1. 
Environmental monitoring: Optical sensor arrays can be used to detect pollutants in air and water, monitor the quality of soils, and track environmental changes over time. For example, optical sensor arrays have been used to detect heavy metals in water, monitor greenhouse gases in the atmosphere, and measure the concentration of nutrients in soils.2. Medical diagnosis: Optical sensor arrays can be used for non-invasive medical diagnostics, such as monitoring blood glucose levels, detecting cancer cells, and imaging internal organs. For example, optical sensor arrays have been used to analyze blood samples for diseases, monitor the oxygen saturation in tissues, and image the retinal blood vessels in the eye.3. Industrial applications: Optical sensor arrays can be used for quality control, process monitoring, and product inspection in industrial settings. For example, optical sensor arrays have been used to inspect the surface roughness of machined parts, monitor the temperature of manufacturing processes, and detect defects in semiconductor wafers. ConclusionOptical sensor arrays offer significant advantages in terms of sensitivity, selectivity, and multiplexing capabilities, making them an attractive technology for a wide range of applications. Simulation tools such as OptiSystem provide a powerful platform for designing and optimizing optical sensor arrays, allowing users to study the performance of these systems in a virtual environment before implementing them in real-world applications.In this review, we have provided an overview of the simulation design of optical sensor arrays based on OptiSystem, including the key features of the software, the simulation techniques used, and the applications of optical sensor arrays in different fields. By leveraging the capabilities of OptiSystem, researchers and engineers can develop innovative optical sensor arrays that address pressing challenges in environmental monitoring, medical diagnosis, and industrial applications.。
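As a rough illustration of step 3 of the simulation workflow above (sensor response simulation), the C sketch below treats each array element as an ideal photodiode whose photocurrent equals responsivity times incident optical power. The array size, power levels, and 0.9 A/W responsivity are assumptions for the example; OptiSystem's built-in detector models account for far more effects (noise, bandwidth, bias).

#include <stdio.h>

#define N_ELEMENTS 4

int main(void)
{
    /* Assumed incident optical power on each sensor element, in watts. */
    const double power_w[N_ELEMENTS] = { 1e-6, 2e-6, 0.5e-6, 4e-6 };

    /* Assumed photodiode responsivity in A/W. */
    const double responsivity = 0.9;

    /* Ideal photodiode model: photocurrent = responsivity * optical power. */
    for (int i = 0; i < N_ELEMENTS; i++) {
        double current_a = responsivity * power_w[i];
        printf("element %d: %.2e W -> %.2e A\n", i, power_w[i], current_a);
    }
    return 0;
}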
Software Engineering: Computer-Based System Engineering
Software Engineering, 6th edition. Chapter 2
Problems of systems engineering
Large systems are usually designed to solve 'wicked' problems
Systems engineering requires a great deal of co-ordination across disciplines
What is a system?
A purposeful collection of inter-related components working together towards some common objective.
A system may include software, mechanical, electrical and electronic hardware and be operated by people.
To introduce the concept of emergent system properties such as reliability and security
To explain why the systems environment must be considered in the system design process
Emergent properties are a consequence of the relationships between system components
They can therefore only be assessed and measured once the components have been integrated into a system
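A small worked example of an emergent property, offered as an assumed illustration rather than material from the slides: the reliability of a chain of components that must all work is the product of the component reliabilities, so it can only be assessed for the integrated system and is lower than any individual component's figure.

#include <stdio.h>

int main(void)
{
    /* Assumed per-component reliabilities over some mission time. */
    const double r[] = { 0.99, 0.97, 0.995, 0.98 };
    const int n = sizeof r / sizeof r[0];

    /* Series system: it works only if every component works, so system
       reliability is the product of the component reliabilities.        */
    double r_system = 1.0;
    for (int i = 0; i < n; i++)
        r_system *= r[i];

    printf("System reliability: %.4f\n", r_system);   /* about 0.9364 */
    return 0;
}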
Conceptual, Preliminary, and Detailed Design
Research Technology Development and Application
System Specification Conceptual Design Review
Preliminary System Design
Operational Requirements
Mission profile or scenario
Performance and related parameters
Utilization requirement
Effectiveness requirements
Operational life cycle
QFD Steps (detailed steps)
1. Derive top-level product requirements or technical characteristics from customer needs (Product Planning Matrix; see the numeric sketch after this list).
2. Develop product concepts to satisfy these requirements.
3. Evaluate product concepts to select the best one (Concept Selection Matrix).
4. Partition the system concept into subsystems or assemblies and flow down technical characteristics to these subsystems or assemblies.
5. Derive lower-level product requirements (assembly or part characteristics) and specifications from the subsystem/assembly requirements (Assembly/Part Deployment Matrix).
For critical assemblies or parts, flow down the lower-level product requirements (assembly or part characteristics) to process planning. Determine the manufacturing process steps needed to meet these assembly or part characteristics. Based on these process steps, determine set-up requirements, process controls and quality controls to assure achievement of these critical assembly or part characteristics.
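To illustrate the Product Planning Matrix of step 1, the C sketch below computes technical-characteristic priorities as importance-weighted sums of relationship strengths on the conventional 9/3/1 scale. The needs, characteristics, and weights are made-up assumptions.

#include <stdio.h>

#define N_NEEDS 3
#define N_CHARS 3

int main(void)
{
    /* Assumed customer-need importance ratings (1-5 scale). */
    const int importance[N_NEEDS] = { 5, 3, 4 };

    /* Assumed relationship strengths between needs (rows) and technical
       characteristics (columns): 9 strong, 3 medium, 1 weak, 0 none.     */
    const int rel[N_NEEDS][N_CHARS] = {
        { 9, 3, 0 },
        { 1, 9, 3 },
        { 0, 3, 9 },
    };

    /* Priority of a characteristic = sum over needs of importance * strength. */
    for (int c = 0; c < N_CHARS; c++) {
        int priority = 0;
        for (int need = 0; need < N_NEEDS; need++)
            priority += importance[need] * rel[need][c];
        printf("technical characteristic %d: priority %d\n", c, priority);
    }
    return 0;
}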
Development and Application of a Fingerprint Verification System Based on LabVIEW
Development and Application of a Fingerprint Verification System Based on LabVIEW. Liu Zhu (Shanghai Draeger Medical Instrument Co., Ltd., Shanghai 201321). Abstract: Enterprises place increasingly strict requirements on the security and traceability of the application software used on production test equipment. To meet them, a software system that verifies operators by fingerprint recognition was developed.
The host PC of the test equipment uses LabVIEW, a virtual-instrument platform based on a graphical, modular programming model, as the development platform, and uses a .NET dynamic link library as the indirect access interface. This enables integration and secondary development of a miniature fingerprint scanner and, combined with database technology, implements storage, query, and retrieval of fingerprint data.
Experiments show that the fingerprint verification system runs securely, reliably, and stably, can be integrated into production test equipment, and meets the enterprise's production requirements.
Keywords: fingerprint recognition; virtual instrument; dynamic link library; integration; fingerprint scanner; database. CLC number: TP311.1. Document code: A. Article ID: 1009-3044(2021)10-0246-03.
Development and Application of Fingerprint Verification System Based on LabVIEW. LIU Zhu (Shanghai Draeger Medical Instrument Co., Ltd., Shanghai 201321, China). Abstract: To meet requirements for the security and traceability of the application software on production test equipment, a fingerprint verification and validation system for test operators was designed and developed. The host computer of the test equipment uses LabVIEW, a virtual instrument platform based on a graphical, modular development language, together with .NET dynamic link library interface technology, to integrate a miniature fingerprint acquisition instrument through its SDK in secondary development, and uses a database to save, query, and transfer the fingerprint data. Test results show that the verification and validation system runs securely, reliably, and stably and can be integrated into test equipment, meeting the requirements of industry. Key words: fingerprint verification; LabVIEW; dynamic link library; integration; fingerprint acquisition instrument; database
1 Background. At present, before products leave the factory, manufacturing enterprises require engineers to test the relevant product parameters with test equipment, in strict accordance with national standards and regulations, so as to produce test reports and conclusions and thereby judge product quality.
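The verification flow described in the abstract might be organized as in the C-style sketch below. The function names (capture_fingerprint, db_load_templates, match_score) and the matching threshold are hypothetical placeholders standing in for the fingerprint SDK and database calls of the actual LabVIEW/.NET implementation, which is not shown in this excerpt.

#include <stdbool.h>
#include <stdio.h>

#define MAX_OPERATORS   100
#define MATCH_THRESHOLD 0.80   /* assumed acceptance threshold */

/* Hypothetical placeholders for the fingerprint SDK and database layer. */
typedef struct { unsigned char data[512]; } Template;
extern bool   capture_fingerprint(Template *live);               /* sensor   */
extern int    db_load_templates(Template stored[], int max);     /* database */
extern double match_score(const Template *a, const Template *b); /* matcher  */

/* Returns the index of the matching enrolled operator, or -1 on failure. */
int verify_operator(void)
{
    Template live, stored[MAX_OPERATORS];

    if (!capture_fingerprint(&live))
        return -1;                          /* acquisition failed */

    int n = db_load_templates(stored, MAX_OPERATORS);
    for (int i = 0; i < n; i++)
        if (match_score(&live, &stored[i]) >= MATCH_THRESHOLD)
            return i;                       /* operator identified */

    return -1;                              /* no enrolled fingerprint matched */
}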
English Paper (Operating System)
Operating SystemAbstract: Operating system is to manage all the hardware resources of computer system resources, including software and data resources; controlprocedures; improve the human-machine interface; provide support forother applications, so that all the resources of the computer systemto maximize the role, to provide users with convenient effective,friendly service interface.The operating system controls otherprogram run, the management system management system resources andprovides the operation contact surface for the user system software'sset. The operating system carries such as the management and thedisposition memory, the decision system resources supply and demandprecedence, control the input-output equipment, the operation networkand the management filing system and so on basic business.This article will make the simple analysis and the elaboration for the computer operation system's function, the development and theclassification.Keywords:operating system function development classification1.Functions of The Operating System·Managing computer systems hardware, software, data and other resources,the allocation of resources to minimize manual work and human intervention onthe machine, the computer automatically play efficiency.Operating System·Coordinating the process of using a variety of resources but also the relationship between the use of various resources of the computer scheduling and reasonable, high-speed devices and low-speed operation of equipment with each other.·Provideing users an environment to use computer systems to facilitate the use of computer system components or features. Operating system through its own program, the computer system features a variety of resources provided by the abstract form which is equivalent to the function of the operating system, and vividly demonstrated, provides users easy access to the computer.2.The Development of Operating System2.1 Manual Phasethe computer at this stage, the main components are the tube, slow operation, no software and no operating system. Direct access to machine language programming, the machine completely manual operation, the first program will be prepared in advance the input tape into the machine, and then enter the machine to start the program and data into the computer, and then run through the switch to start to calculate the completion , the printer output. Users must be very professional and technical personnel to achieve control of the computer. 2.2 Batch PhaseSince the mid-20th century, 50, the computer's main components replaced by the transistor, the speed has been greatly improved, when the software is also rapidly expanding, there has been an early operating system, which is submittedearlier to the user program management control procedures and batch software.2.3 Multiprogramming System StageWith the small-scale integrated circuits in the wide application of computer systems, CPU's speed greatly improved, in order to improve CPU utilization, the introduction of multi-channel programming technology, and the emergence of specialized hardware to support multi agency procedures during this period, in order to further improve the efficiency of CPU utilization, there has been multi-channel batch systems, time-sharing systems, etc., resulting in a more powerful regulatory process, and quickly developed into a computer, an important branch of science, is the operating system. 
Collectively referred to as the traditional operating system.2.4 Stage of Modern Operating SystemsLarge-scale, rapid VLSI rapid development, there have been microprocessors, computer architecture allows more optimized operation of the computer to further improve the speed, and volume is greatly reduced, for personal computers and portable computers emerged and spread . Its biggest advantage is clearly structured, comprehensive, can be adapted to the needs of multiple applications and operating use.3.Classification of The Operating SystemFrom the point of use can be divided into special and general categories. Proprietary operating system is a special thing for the control and management of the operating system, such as the use of modern mobile phone operating system, such systems generally appear embedded in the way of hardware for a particularOperating Systemway. General-purpose operating system with perfect function, able to adapt to the needs of a variety of purposes.From the perspective of stand-alone and network operating systems can be divided into stand-alone and network operating system. Stand-alone operating system is the environment for the design of stand-alone computer systems, it is only that the resources of the local system management functions. Single-user operating system is a more specific stand-alone operating system, it is for a machine, a user's operating system, its basic features is an operation can only support one user to run, the system has all the resources of the users exclusively, the entire computer system the user has absolute control.From a functional point of view can be divided into batch systems, time-sharing system, real-time systems, network systems, distributed systems. Batch systems, time-sharing system and real-time systems are mostly computer system operating environment, and then run two operating systems environment isa multi-computer system.3.1 Batch SystemThe basic characteristics of the batch system is the "bulk." About to be handed over to a number of computer processing operations organized into the queue in batches to the computer automatically handled by the job queue sequentially. It can be divided into single-channel and multi-channel batch system batch system. Single-channel system can only transferred to a batch processing operations, including running in the computer, other operations on secondary storage, it is similar to single-user operating system. Computer processing operations in the run, the time the major consumption has two aspects, one is consumed in the CPUprogram execution, the other is consumed in the input output. As the speed of input and output devices the implementation of the program relative to CPU speed is much slower, cause the computer to input and output when the CPU is idle. In order to improve the efficient use of COU, there has been multi-channel batch system. It is a batch system with single-channel difference is in the computer memory can have more than one job exists, the scheduler according to a predetermined strategy, select a job to run the CPU processing resources allocated to it, when dealing with the job to enter the input and output operations When the possession of the release of the CPU, the scheduler's memory from other operations to be processed to select a CPU to execute, so to improve CPU utilization.3.2 Time-sharing SystemTime-sharing refers to two or more turns of events in time by one using the computer system resources. 
If in a system with multiple users sharing a computer, then such a system as time-sharing system. Time-sharing units of time called time slots, a time slice is usually tens of milliseconds. In a time-sharing systems, often to connect to dozens or even hundreds of terminals, each user terminal in their own control of their operations running. Through the operating system's management, will in turn be allocated to different CPU users, if a user job assigned to him another time in the film in to continue. At this time the CPU is assigned to another user operation.3.3 Real-time SystemsReal-time processing in real time and quickly give the results. Real-time systems are generally designed using the time-driven method, the system canOperating Systemtimely respond to the events at any time and timely manner. Real-time system is divided into real-time control system and real-time processing system. Real-time control systems commonly used in industrial control, and aircraft, missile launchers and other military aspects of automatic control. Real-time processing system commonly used in the book airline tickets, flight search and accounting transactions between banks and other systems.3.4 Network Operating SystemWith the rapid development of computer technology and network technology has improved steadily, in different regions with independent processing power of multiple computer systems connected by communication facilities, the sharing of resources to form computer networks, become a more open work environment. The network operating system also came into being. In addition to the network operating system has all the features of stand-alone operating system, it also has a network resource management capabilities to support network applications running.3.5 Distributed Operating SystemDistributed operating system is a distributed computer system configuration for the operating system. Distributed computer systems and computer networks, multiple computer systems connected by communication networks, sharing resources, but the difference is the system of each computer does not have primary and secondary points, the computer system has a relative autonomy, access to user sub ah shared resources, shared resources do not need to know which machine is located, if required, the system of multiple computers can collaborate togetherto complete a task, a task that can be divided into several sub-tasks to multiple computers distributed simultaneously executed in parallel. A commercial operating system often includes a batch system, time-sharing system, real-time systems, network systems, distributed systems, and many other features. Different operating systems according to their location and use of user-oriented, on the strength of the various functions will be different.References :[1]Wang Yuqin,et al. Computer Operating System[M]. Beijing University Press,2004.[2]Yao Aiguo,et al.Introduction Computer Science[M].Wuhan University Press,2006.[3]JIANG Tongqiang,et al. Computer English[M]. Tsinghua University Press,2009.。
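To put a number on the CPU-idle argument of Section 3.1, here is a standard textbook approximation (not taken from this essay): if each job waits for I/O a fraction p of the time and n independent jobs are kept in memory, the CPU is busy whenever at least one job is not waiting, so utilization is roughly 1 - p^n. The C sketch below tabulates this for an assumed p of 0.8.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Assumed: each job spends 80% of its time waiting for I/O. */
    const double p = 0.8;

    /* CPU utilization with n jobs in memory: approximately 1 - p^n. */
    for (int n = 1; n <= 5; n++)
        printf("%d job(s): CPU utilization ~ %.0f%%\n",
               n, (1.0 - pow(p, n)) * 100.0);
    return 0;
}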
Modular verification of software components in C
Modular Verification ofSoftware Components in CSagar Chaki Edmund Clarke Alex GroceCarnegie Mellon University{chaki|emc|agroce}@Somesh Jha,University of Wisconsinjha@Helmut Veith,Technische Universit¨a t M¨u nchenveith@in.tum.de(Invited Paper)This research was supported by the ONR under Grant No.N00014-01-1-0796,by the NSF under Grant R-9803774, CCR-0121547and CCR-0098072,by the ARO under Grant No.DAAD19-01-1-0485,the Austrian Science Fund Project NZ29-INF,the EU Research and Training Network GAMES and graduate student fellowships from Microsoft and NSF.Any opinions,findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF or the United States Government.AbstractWe present a new methodology for automatic verification of C programs againstfinite state machine specifications.Our approach is compositional,naturally enabling us to decompose the verification oflarge software systems into subproblems of manageable complexity.The decomposition reflects themodularity in the software design.We use weak simulation as the notion of conformance between theprogram and its specification.Following the counterexample guided abstraction refinement(CEGAR)paradigm,our tool MAGICfirst extracts afinite model from C source code using predicate abstractionand theorem proving.Subsequently,weak simulation is checked via a reduction to Boolean satisfiability.MAGIC is able to interface with several publicly available theorem provers and SAT solvers.We reportexperimental results with procedures from the Linux kernel and the OpenSSL toolkit.Index TermsSoftware Engineering,Formal Methods,Verification.I.I NTRODUCTIONState machines have been recognized repeatedly as an important artifact in the software development process;in fact,variants of state machines have been proposed for virtually all software engineering methodologies,including,most notably,Statecharts[1]and the UML[2]. 
The sustained success of state machines in software engineering stems from the fact that state machines provide for both a concise mathematical theory,and an intuitive semantics of system behavior which naturally allows for visualization,hierarchy,and abstraction.Traditionally,state machines have been mainly used in the design phase of the software life-cycle;they are intended to guide and constrain the implementation and the test phase,and may later be reused for documentation purposes.In most cases,however,the assertion that a state machine safely abstracts an existing implementation is kept implicit and informal.With the rise of Internet-based technologies,the significance of state machines has only increased.In particular,security protocols and communication protocols are naturally specified in terms of state machines[3],[4],[5].Similar applications of state machines can be found in other safety-critical domains including medicine and aerospace.Moreover,the dramatic change of focus from relatively monolithic systems to highly distributed and heterogeneous systems whose development cycles are interdependent,callsfor new specification methodologies;for example,on August2002,IBM,Microsoft,and BEA announced the publication of three specifications(WS-Coordination,WS-Specification, BPEL4WS[6])which”collectively describe how to reliably define,create and connect multiple business processes in a Web services environment”.We foresee state machines being used for contracts describing software capabilities.In both cases–protocol specification and distributed computation–we observe that state machines are no longer just tools for internal use,but are being introduced increasingly into the public domain.In this paper,we describe our tool MAGIC(M odular A nalysis of pro G rams I n C)[7]which is capable of verifying whether a state machine(or,more precisely,a labeled transition system)is a safe abstraction of a C procedure;the C procedure in turn may invoke other procedures which are themselves specified in terms of state machines.Our approach has a number of tangible benefits:•Utility.The capability of MAGIC to formally verify the correctness of state-machine speci-fications closes an evident gap in many software development methodologies,most notably, but not only,for security-related system features.In the future,we envision that tools based on ideas from MAGIC will assist the contracting process with third party software providers.•Compositionality.MAGIC verification can be used early on during the development cycle,as specifications can be plugged in for missing system positionality evidently fosters concurrent development by independent groups of developers.•Complexity.State-space explosion[3]remains the bottleneck of most automated verification tools.Due to compositionality,the size of the individual system parts to be verified by MAGIC remains manageable,as demonstrated by our experiments.Moreover,the verification process in MAGIC is reduced to computing a weak simulation relation betweenfinite state systems, for which we can provide highly efficient algorithms.•Flexibility.Internally,MAGIC uses several theorem provers and SAT solvers.The open design of MAGIC facilitates the easy integration of new and improved tools from this quickly developing area.Consequently,we believe that MAGIC like tools have the potential to become indispensable in the software engineering process.In the rest of this section we describe the technical contributions of this paper.beled Transition Systems as 
Specification MechanismIn the literature,several variants of state machines have been investigated;purely state-based formalisms such as Kripke structures[3]are often used to model and specify systems.For the MAGIC framework,however,we employ labeled transition systems(LTS),which are similar to Kripke structures but for the fact that state transitions are labeled by actions.From a theoretical point of view the presence of actions does not increase the expressive power of LTS over Kripke structures.In our experience,however,it is more natural for designers and software engineers to express the desired behavior of systems using a combination of states and actions.For example,the fact that a lock has been acquired or released can be expressed naturally by lock and unlock actions.In the absence of actions,the natural alternative is to introduce a new variable indicating the status of the lock,and update it accordingly.The LTS approach certainly is more intuitive,and allows both for a simpler theory and for an easier specification process. Some sample LTSs used in our framework are shown in Figure4.A formal definition will be given in Section III.The use of LTSs is also motivated by work in concurrency.Process algebras like CCS[8], CSP[9]and theπ-calculus[10]have been used widely to formally reason about message passing concurrent systems.In these formalisms,actions are crucial for modeling the sending and receiving of messages across channels.Process algebras lead very naturally to LTSs.Thus, even though we currently only analyze sequential programs,we believe that the use of LTSs will facilitate a smooth transition to concurrent message-passing programs in the future.B.Procedure AbstractionsThe goal of MAGIC is to verify whether the implementation of a system is safely abstracted by its specification.To this end,MAGIC verifies individual procedures against the respective LTS. In our implementation,it is possible to handle a group of procedures with a dag-like call graph as a single procedure by inlining;therefore,for simplicity,we speak only of single procedures in this paper.In practice,it often happens that single procedures perform quite different tasks for certain settings of their parameters.In our approach,this phenomenon is accounted for by allowing multiple LTSs to represent a single procedure.The selection among these LTSs is achieved byguards,i.e.,formulas which describe the conditions on the procedure parameters under which a certain LTS is applicable.This gives rise to the notion of procedure abstraction(PA);formally a PA for a procedure proc is a tuple d,l where:•d is the declaration for proc,as it appears in a C headerfile.•l is afinite list g1,M1 ,..., g n,M n where each g i is a guard formula ranging over the parameters of proc,and each M i is an LTS with a single initial state.The procedure abstraction expresses that proc conforms to one LTS chosen among the M i’s. 
More precisely,proc conforms to M i if the corresponding guard g i evaluates to true over the actual arguments passed to proc.We require that the guard formulas g i be mutually exclusive so that the choice of M i is unambiguous.positionalityThe general goal of MAGIC is to prove that a user-defined PA for proc is valid.The role of PAs in this process is twofold:1)A target PA is used to describe the desired behavior of the procedure proc.2)To assist the verification process,we employ valid PAs(called the assumption PAs)forlibrary routines used by proc.Thus,PAs can be seen both as conclusions and as assumptions of the verification process. Consequently,our methodology yields a scalable and compositional approach for verifying large software systems.Figure1illustrates this by depicting the call graph of an implementation and the steps involved in verifying it.In order to verify baz we need only assumption PAs for the other library routines.For bar we additionally use the PA for baz as an assumption PA while for foo we employ the PAs of both bar and baz as assumptions.Note that due to the sound compositional principles on which MAGIC is based upon,no particular ordering of these verification steps is required.Assumption PAs are not only important for compositionality,they are in fact essential for handling recursive library routines.Since MAGIC inlines all library routines for which assumption PAs are unavailable,it would be unable to proceed if the assumption PA for a recursive library routine was absent.Without loss of generality we will assume throughout this paper that the targetPA contains only one guard G Spec and one LTS M Spec.To achieve the result in full generality,.the described algorithm can be iterated for each guard of M SpecD.Algorithms and Tool DescriptionThe MAGIC tool follows the CEGAR paradigm[11],[12],[13],[14]that can be summarized as follows:•Step1:Model Creation.Extract an LTS M Imp from proc using the assumed PAs,the guard G Spec and a set of predicates.In MAGIC,the model is computed from the controlflow graph(CFG)of the program in combination with an abstraction method called predicate abstraction[12],[15],[16].To decide properties such as equivalence of predicates,we use theorem provers.The details of this step are described in Section IV.•Step2:Verification.Check if M Spec safely abstracts M Imp.If this is the case,the verification successfully terminates;otherwise,extract a counterexample and perform step3.In MAGIC, the verification step amounts to checking whether a weak simulation relation(cf.Section III) holds between M Spec and M Imp.We reduce weak simulation to the satisfiability of a certain Boolean formula,thus utilizing highly efficient SAT procedures.The details of this step are described in Section V.•Step3:Validation.Check whether the counterexample extracted in step2is valid.If this is the case,then we have found an actual bug and the verification terminates unsuccessfully.Otherwise construct an explanation for the spuriousness of the counterexample and proceed to Step4.•Step4:Refie the explanation from the previous step to construct an improved set of predicates.Return to Step1to extract a more precise M Imp using the new set of predicates instead of the old one.The new predicate set is constructed in such a way as to guarantee that all spurious counterexamples encountered so far will not appear in any future iteration of this loop.At its current stage of development,MAGIC can perform all the above steps in an automated manner.The input to MAGIC consists 
of(i)a set of preprocessed ANSI-Cfiles representing proc and(ii)a set of specificationfiles containing textual descriptions of M Spec,G Spec and a set of predicates for abstraction.The textual descriptions of LTSs are given using an extended version of the FSP notation by Magee and Kramer[17].For example,the LTS Do A shown in Figure4 is described textually as follows:A1=(a->A2),A2=(return{}->STOP).E.Tool OverviewThe schematic in Figure2explains the software architecture of MAGIC.Model Creation is handled by Stage I of the program.In this stage,the inputfiles are parsed and the controlflow graph(CFG)of the C program is constructed.Simplifications are made so that the resulting CFG only has simple statements and side-effect free expressions.Finally,M Imp is extracted from the annotated CFG using the assumed PAs,G Spec and the predicates.As described later,this process requires the use of theorem provers.MAGIC can interact with several public domain theorem provers,such as Simplify[18],CVC[19],ICS[20],CVC Lite[21],and CPROVER[22].Verification is performed in Stage II.As mentioned above,weak simulation here is reduced to a form of Boolean satisfiability.MAGIC can interface with several publicly available SAT solvers, such as Chaff[23],FGRASP[24]and SATO[25].We also have our own efficient SAT solver implementation which leverages the specific nature of SAT formulas that arise in this stage toVerificationUnsuccessful Fig.2.Overall architecture of MAGIC.deliver better performance than the public domain solvers.The verification process is presented in Section V in more detail.If the verification step fails,MAGIC generates an appropriate counterexample and checks its validity in Stage III.If the counterexample is found to be spurious,an improved set of predicates is computed in Stage IV and the entire process is repeated from Stage I.Stages III and IV are completely automated and require the use of theorem provers.In this paper we focus on model creation and verification;details about counterexample validation and abstraction refinement are presented elsewhere[26].The rest of this paper is organized as follows:In Section II we present related work.This is followed in Section III by some basic definitions that are used in the rest of this article.In Section IV we describe in detail the model construction procedure used in MAGIC to extract LTS models from C programs.Section V describes how we check weak simulation between M Spec and M Imp using Boolean satisfiability.In Section VI we present a broad range of benchmarks and results that we have used to evaluate MAGIC.Finally,in Section VII we give an overviewof various ongoing and future research directions that are relevant to MAGIC.II.R ELATED W ORKDuring the last years advances in verification methodology as well as in computing power have promoted renewed interest in software verification.The resulting systems–most notably Bandera[27]and Java PathFinder[28],[29],ESC Java[30],SLAM[31],BLAST[32]and MC[33],[34]–are increasingly able to handle industrial software.Among the six mentioned systems,thefirst three focus on Java,while the last three all deal with C.Java verification is quite different from C,because object orientation,garbage collection and the logical memory model require specific analysis methods.Among the C verification tools,MC(which stands for meta-compilation)has a distinguished place because it amounts to a form of pattern matching on the source code,with surprisingly good results for scanning relatively simple errors in large 
amounts of code.SLAM and BLAST are closely related tools,whose technicalflavor is most akin to ours.SLAM is primarily optimized to analyze device drivers,and is going to be included in the Windows development cycle.In contrast to SLAM which uses symbolic algorithms,BLAST is an on-the-fly reachability analysis tool.MAGIC is the only tool which uses LTS as specification formalism,and weak simulation as the notion of conformance.This choice reflects the area of security currently being our primary application domain.Except for MC and ESC Java,the above-mentioned tools are based on variations of model checking[3],[35],and they all require abstraction methods to alleviate the state explosion problem,most notably data abstraction[36]and the more generally predicate abstraction[16]. The abstraction method used in SLAM and BLAST is closest to ours.However,due to compositionality,we can afford to invest more computing power into computing abstractions, and are therefore able to improve on Cartesian abstraction[37].Generally,we believe that the form of compositionality provided by MAGIC is unique among existing software verification systems.Virtually all systems that use abstraction interface with theorem provers for various purposes. The software architecture of MAGIC is designed as to facilitate the integration of various theorem provers.In addition,MAGIC is the only tool which leverages the enormous success of SAT procedures in hardware verification[38]in software verification.SAT procedures have been successfully used for checking validity of software specifications(expressed in a relationalcalculus)[39],[40],[41].III.D EFINITIONSIn this section we present some basic definitions that will be used in the rest of this article.beled Transition SystemsA labeled transition system(LTS)M is a4-tuple S,init,Σ,T ,where(i)S is afinite non-empty set of states,(ii)init∈S is the initial state,(iii)Σis afinite set of actions(alphabet), and(iv)T⊆S×Σ×S is the transition relation.We assume that there is a distinguished state STOP∈S which has no outgoing transitions, i.e.,∀s∈S,∀a∈Σ,(STOP,a,s)∈T.We will write s a−→t to mean(s,a,t)∈T and denotethe set{t|s a−→t}by Succ(s,a).B.ActionsIn accordance with existing practice,we use actions to denote externally visible behaviors of systems being analyzed,e.g.acquiring a lock.Actions are atomic,and are distinguished simply by their names.Often,the presence of an action indicates a certain behavior which is achieved by a sub-procedure in the implementation.Since we are analyzing C,a procedural language, we model the termination of a procedure(i.e.,a return from the procedure)by a special class of actions called return actions.Every return action r is associated with a unique return value RetVal(r).Return values are either integers or void.We denote the set of all return actions whose return values are integers by IntRet and the special return action whose return value is void by VoidRet.All actions which are not return actions are called basic actions.A distinguished basic action τdenotes the occurrence of an unobservable internal event.In this article we only consider procedures that terminate by returning.In particular,we do not handle constructs like setjmp and longjmp.Furthermore,since LTSs are used to model procedures,any LTS S,init,Σ,T must obey the following condition:∀s∈S,s a−→STOP iff a is a return action.C.Conformance via Weak SimulationIn the context of LTS,simulation[8]is the natural notion of conformance between a specification LTS and an implementation pared to 
conformance notions based on trace containment [11], simulation has the additional advantage that it is computationally less expensive to check. Among the many technical variants of simulation [8], we choose weak simulation as our notion of conformance because it allows for asynchrony between the LTSs, i.e., one step of the specification LTS may correspond to multiple steps of the implementation. This feature of weak simulation is crucial to our approach, because one step in M_Spec typically corresponds to multiple steps in M_Imp.

D. Weak Simulation

Let M1 = (S1, init1, Σ, T1) and M2 = (S2, init2, Σ, T2) be two LTSs with the same alphabet. A relation R ⊆ S1 × S2 is called a weak simulation iff it obeys the following two conditions for all s1 ∈ S1, t1 ∈ S1 and s2 ∈ S2:
1) If (s1, s2) ∈ R, a ≠ τ and s1 -a-> t1, then there exists t2 ∈ S2 such that s2 -a-> t2 and (t1, t2) ∈ R.
2) If (s1, s2) ∈ R and s1 -τ-> t1, then at least one of the following two conditions holds:
a) (t1, s2) ∈ R
b) there exists t2 ∈ S2 such that s2 -τ-> t2 and (t1, t2) ∈ R.
We say that LTS M2 weakly simulates M1 (denoted by M1 ≼ M2) if there exists a weak simulation relation R ⊆ S1 × S2 such that (init1, init2) ∈ R.

E. Algorithm for Computing Weak Simulation

The existence of a weak simulation relation between M1 and M2 can be checked efficiently by reducing the problem to an instance of Boolean satisfiability [42]. Interestingly, the SAT instances produced by this method always belong to a restricted class of SAT formulas known as the weakly negated HORN formulas. In contrast to general SAT (which has no known polynomial-time algorithm), satisfiability of weakly negated HORN formulas can be solved in linear time [43]. As part of MAGIC, we have implemented an online linear-time HORNSAT algorithm [44]. MAGIC can also interface with public-domain general SAT solvers like Chaff [23], FGRASP [24] and SATO [25].

IV. MODEL CONSTRUCTION

Let M_Spec = (S_Spec, init_Spec, Σ_Spec, T_Spec) and the assumption PAs be {PA1, ..., PAk}. In this section we show how to extract M_Imp from proc using the assumption PAs, the guard G_Spec and the predicates. The extraction of M_Imp relies on several principles:
• Every state of M_Imp models a state during the execution of proc; consequently every state is composed of a control and a data component.
• The control components intuitively represent values of the program counter, and are formally obtained from the CFG of proc.
• The data components are abstract representations of the memory state of proc. These abstract representations are obtained using predicate abstraction.
• The transitions between states in M_Imp are derived from the transitions in the control flow graph, taking into account the assumption PAs and the predicate abstraction. This process involves reasoning about C expressions, and therefore requires the use of a theorem prover.

S0: int x, y = 8;
S1: if (x == 0) {
S2:   do_a();
S4:   if (y < 10) {
S6:     return 0;
      } else {
S7:     return 1;
      }
    } else {
S3:   do_b();
S5:   if (y > 5) {
S8:     return 2;
      } else {
S9:     return 3;
      }
    }
Fig. 3. A simple proc we use as a running example.

Without loss of generality, we can assume that there are only five kinds of statements in proc: assignments, call-sites, if-then-else branches, goto and return. In our implementation, we use the CIL [45] tool to transform arbitrary C programs into the above format. Note that call-sites correspond to library routines called by proc for which assumed PAs are available. We assume the absence of indirect function calls and pointer dereferences in the lhs's of assignments. In reality, MAGIC handles these constructs by using aliasing information conservatively [26]. We denote by Stmt the set of statements of proc and by Exp the set of all pure (side-effect free) C expressions over the variables of proc. As a running example of proc, we use the C program shown in Figure 3. It invokes two library routines do_a and do_b. Let the guard and LTS list in the assumption PA for do_a be (TRUE, Do_A). This means that under all invocation conditions, do_a is safely abstracted by the LTS Do_A. Similarly, the guard and LTS list in the assumption PA for do_b is (TRUE, Do_B). The LTSs Do_A and Do_B are described in Figure 4. Also, we use G_Spec = TRUE and M_Spec = Spec (shown in Figure 4).

Fig. 4. The LTSs in the assumption PAs for do_a and do_b (the LTSs Do_A, Do_B and Spec). The VoidRet action is denoted by return{}.

A. Initial abstraction with control flow automata

The construction of M_Imp begins with the construction of the control flow automaton (CFA) of proc. The states of a CFA correspond to control points in the program. The transitions between states in the CFA correspond to the control flow between their associated control points in the program. Thus, a CFA of a program is a conservative abstraction of the program's control flow, i.e. it allows a superset of the possible traces of the program. Formally, the CFA is a 4-tuple (S_CF, I_CF, T_CF, L) where:
• S_CF is a set of states.
• I_CF ∈ S_CF is an initial state.
• T_CF ⊆ S_CF × S_CF is a set of transitions.
• L : S_CF \ {FINAL} → Stmt is a labeling function.
S_CF contains a distinguished FINAL state. The transitions between states reflect the flow of control between their labeling statements: L(I_CF) is the initial statement of proc, and (s1, s2) ∈ T_CF iff one of the following conditions holds:
• L(s1) is an assignment, call-site or goto with L(s2) as its unique successor.
• L(s1) is a branch with L(s2) as its then or else successor.
• L(s1) is a return statement and s2 = FINAL.
Example 1: The CFA of our example program is shown in Figure 5. Each non-final state is labeled by the corresponding statement label (the FINAL state is labeled by FINAL). Henceforth we will refer to each CFA state by its label. Therefore the states of the CFA in Figure 5 are S0 ... S9, FINAL, with S0 being the initial state.

Fig. 5. The CFA for our example program. Each non-FINAL state is labeled the same as its corresponding statement. The initial state is labeled S0. The states are also labeled with inferred predicates when P = {p1, p2}, where p1 = (y < 10) and p2 = (y > 5).

B. Predicate inference

Since the construction of M_Imp from proc involves predicate abstraction, it is parameterized by a set of predicates P. The main challenge in predicate abstraction is to identify the set P that is necessary for proving the given property. In our framework we require P to be a subset of the branch predicates in proc. Therefore we sometimes refer to P or subsets of P simply as a set of branches. The construction of M_Imp associates with each state s of the CFA a finite subset of Exp derived from P, denoted by P_s. The process of constructing the P_s's from P is known as predicate inference and is described by the algorithm PredInfer in Figure 6. Note that P_FINAL is always ∅. The algorithm uses a procedure for computing the weakest precondition WP [11], [46], [47] of a predicate p relative to a given statement. Consider a C assignment statement a of the form v = e. Let ϕ be a pure C expression (ϕ ∈ Exp). Then the weakest precondition of ϕ with respect to a, denoted by WP[a]{ϕ}, is obtained from ϕ by replacing every occurrence of v in ϕ with e. Note that we need not consider the case where a pointer appears in the lhs of a, since we have disallowed such constructs from appearing in proc.

Input: Set of branch statements P
Output: Set of P_s's associated with each CFA state
Initialize: for all s ∈ S_CF, P_s := ∅
Forever do
  For each s ∈ S_CF do
    If L(s) is an assignment statement and L(s') is its successor,
      for each p' ∈ P_s' add WP[L(s)]{p'} to P_s
    Else if L(s) is a branch statement with condition c:
      if L(s) ∈ P, then add c to P_s;
      if L(s') is a 'then' or 'else' successor of L(s), then P_s := P_s ∪ P_s'
    Else if L(s) is a call-site or a 'goto' statement with successor L(s'),
      P_s := P_s ∪ P_s'
    Else if L(s) returns expression e and r ∈ Σ_Spec ∩ IntRet,
      add the expression (e == RetVal(r)) to P_s
  If no P_s was modified in the For loop, then exit
Fig. 6. The algorithm PredInfer that MAGIC uses for predicate inference.

The weakest precondition is clearly an element of Exp as well. Note that PredInfer may not terminate in the presence of loops in the CFA. However, this does not mean that our approach is incapable of handling C programs containing loops. In practice, we force termination of PredInfer by limiting the maximum size of any P_s. Using the resulting P_s's, we can compute the states and transitions of the abstract model as described later. Irrespective of whether PredInfer was terminated forcefully or not, M_Imp is guaranteed to be a safe abstraction of proc. We have found this approach to be very effective in practice. A similar algorithm was proposed by Dams and Namjoshi [48].
Example 2: Consider the CFA described in Example 1. Suppose P contains the two branches S4 and S5. Then PredInfer begins with P_S4 = {(y < 10)} and P_S5 = {(y > 5)}. From this it obtains P_S2 = {(y < 10)} and P_S3 = {(y > 5)}. This leads to P_S1 = {(y < 10), (y > 5)}. Then P_S0 = {WP[y = 8]{y < 10}, WP[y = 8]{y > 5}} = {(8 < 10), (8 > 5)}. Since we ignore predicates that are trivially TRUE or FALSE, P_S0 = ∅. Since the return actions in Spec have return values {0, 2}, P_S6 = {(0 == 0), (0 == 2)}, which is again ∅. Similarly, P_S7 = P_S8 = P_S9 = P_FINAL = ∅. Figure 5 shows the CFA with each state s labeled by P_s.

C. Predicate valuation and concretization

So far we have described a method for computing the initial abstraction (the CFA) and a set of predicates associated with each location in the program. The states of the abstract system M_Imp correspond to the various possible valuations of the predicates in each location. Formally, for a CFA node s suppose P_s = {p1, ..., pk}. Then a valuation V of P_s is a function from P_s to the set {TRUE, FALSE}. Alternately, one can view the valuation V as a Boolean vector (v1, ..., vk) of size k, where each vi is the result of applying the function V to the predicate pi. Let V_s be the set of all predicate valuations of P_s. Note that the size of V_s is exponential in the size of P_s. The predicate concretization function Γ_s : V_s → Exp is defined as follows. Given a valuation V = (v1, ..., vk) ∈ V_s, Γ_s(V) = p1^v1 ∧ ... ∧ pk^vk, where pi^TRUE = pi and pi^FALSE = ¬pi. As a special case, if P_s = ∅, then V_s = {⊥} and Γ_s(⊥) = TRUE.
Example 3: Consider the CFA described in Example 1 and the inferred predicates as explained in Example 2. Recall that P_S1 = {(y < 10), (y > 5)}. Suppose V1 = (0, 1) and V2 = (1, 0). Then Γ_S1(V1) = (¬(y < 10)) ∧ (y > 5) and Γ_S1(V2) = (y < 10) ∧ (¬(y > 5)).

D. States of M_Imp

Each state s ∈ S_CF gives rise to a set of states of M_Imp, denoted by IS_s. In addition, M_Imp has a unique initial state INIT. The definition of IS_s consists of the following sub-cases:
Abstract An Architecture to Support Dynamic Composition of Service Components
An Architecture to Support Dynamic Composition of Service ComponentsDavid Mennie, Bernard PagurekSystems and Computer EngineeringCarleton University1125 Colonel By Drive, Ottawa, ON, Canada, K1S 5B6{dmennie, bernie}@sce.carleton.caAbstractThe creation of composite services from service components at runtime can be achieved using several different techniques. In the first approach, two or more components collaborate while each component remains distinct, and potentially distributed, within a network. To facilitate this, a new common interface must be constructed at runtime which allows other services to interact with this set of collaborating service components as if it was a single service. The construction of this interface can be realized with the support of a service composition architecture. In the second approach, a new composite service is formed where all of the functionality of that service is contained in a single new component. This new service must be a valid service, capable of the basic set of operations that all other services can carry out. Our goal is to design an architecture to support the runtime creation of composite services, within a specific service domain, using existing technologies and without the need for a complex compositional language. We make extensive use of a modified Jini infrastructure to overcome many of the shortcomings of the JavaBeans component model. In this paper, we compare techniques for dynamic service composition and discuss the requirements of an infrastructure that would be needed to support these approaches. We also provide some insight into the types of applications that would be enabled by this architecture.Keywords: Dynamic service composition, runtime component assembly, component-based services1IntroductionOne of the goals of component-oriented programming has traditionally been to facilitate the break up of cumbersome and often difficult to maintain applications into sets of smaller, more manageable components [7]. This can be done either statically at design-time or load-time, or dynamically at runtime. Selecting ready-made components to construct an application is sufficient for a relatively straightforward system with specific operations that are not likely to change frequently. However, if the system has a loosely defined set of operations to carry out, components must be able to be upgraded dynamically or composed at runtime. It is this need for dynamic software composition that we will examine in this paper. 2Defining the ProblemOur research group has previously approached the problem of dynamic software composition as it relates to high-availability systems. Before we define the approach to runtime composition taken in this paper, we will briefly describe our previous experiences in the area of software hot-swapping.2.1 Software Hot-swappingSoftware hot-swapping is defined as the process of upgrading software components at runtime in systems which cannot be brought down easily, cannot be switched offline for long periods of time, or cannot wait for software to be recompiled once changes are made [3]. These systems include critical high-availability systems such as control systems and many less-critical but still soft real-time, data-oriented systems such as telecommunications systems and network management applications. An infrastructure that supports software hot-swapping must take into account many factors. 
There are synchronization and timing issues such as when an upgrade can occur and the maximum time window allowed for an upgrade. The size and definition of the incremental swap unit or module must be defined. The series of transactions required to carry out how that unit can be dynamically introduced into a running system must be defined. The stateof the system must be known at all times. Placing the target system in a state where a swap can occur, capturing the state prior to the swap, swapping the module, restoring the system state, and then switching the system over to the new swapped module must all be handled. A failure recovery mechanism must also be in place to rollback an unsuccessful swap without affecting the execution of the running system. System performance must not be compromised and additional side-effects of the swapping process must be minimized. We have developed a prototype infrastructure to support this approach to dynamic software upgrading which takes into account the aforementioned issues. It is described in more detail in another paper [3]. Projects such as SOFA/DCUP [9] have also provided infrastructures to support dynamic component updating in running applications.2.2 MotivationWhile component composition at runtime in high-availability systems poses some interesting challenges, it is focussed on a specific type of system. Many systems cannot be classified as high-availability. For this reason, a more generic composition infrastructure that is not over-complicated with the difficult timing, synchronization, and transactional concerns of a hot-swapping solution would be more appropriate.The composition architecture described in this paper is dedicated to the creation of composite network services from service components. The research is motivated by that fact that in many areas of network computing the need for more complex services is rapidly increasing. As standardized means for service lookup and deployment become available, the ability to compose composite services out of service components becomes more realistic. The rest of this paper is devoted exclusively to defining and developing a tailored, dynamic service composition solution.3Dynamic Service CompositionDynamic service composition differs from other forms of software composition since it deals exclusively with network services. Network services are individual components, which can be distributed within a network environment, that provide a specific set of well-defined operations. There are two main alternative approaches to dynamic service composition. In the first approach, multiple service components communicate with one another as separate entities to provide an enhanced service that is accessible through a common interface. In the second approach, service components are combined at runtime to provide this enhanced service as a single, self-contained entity. The choice of “communication” or “integration” must be made at the time a composite service is requested by the user. Our architecture will provide some support for automating this choice based on the requirements data collected for a particular composition scenario.The first step in creating any composite service is to locate the service components that provide the functionality that is to be placed in the new service. To facilitate this process, all service components must be stored in a component directory that can be accessed at runtime. Searches of this directory must be tailored to the compositional attributes of the components. 
In other words, each component must have a clear description of the operations it can carry out, what methods (if any) can be extracted from it or used in the creation of a composite, and the input and output requirements of the component. Once the appropriate service components are located, we must determine the type of dynamic composition we will perform. There are various tradeoffs associated with the selection of a particular composition method. In many cases, more than one method is possible. However, the selection of the best method should be based on how the composite service will be used and the efficiency requirements of the resulting service. Another objective is to minimize unanticipated service behaviors once the composite is complete.3.1 Creating a Composite Service InterfaceThe following approach can be used if a high-level of software performance for the service composite is not required. The idea is to create a new interface that will make a set of collaborating services appear as a single composite service. We have made use of an extended Facade design pattern [4] to help us create this interface. The Facade pattern is intended to provide a unified interface to a set of components and handle the delegation of incoming requests to the appropriate component. However, this is done staticallyat design-time. Our Dynamic Facade pattern facilitates the updating of the composite service interface as components are added at runtime. If the services taking part in the composite service are not co-located on the same network node and are instead distributed throughout the network, messages will need to be sent between the components via the interface. We are currently examining the potential for a distributedDynamic Facade pattern to delegate incoming requests to the appropriate service components even if those components are not located on the same network node. In a distributed composite service, we would expect a slight decrease in response time or operational performance. Figure 1a illustrates the realization of a composite service interface.The primary advantage of this technique is the speed at which a composite can be created. This is due to the fact that a new component does not need to be constructed. In other words, no code needs to be moved or integrated from any of the components involved in order for the composite service to function.This technique is also referred to as interface fusion since the interfaces of each service componentinvolved are merged into a single new interface. However, the interfaces of services 1 to n cannot simply be “glued” together. Modifications will be required to properly direct incoming messages to the appropriate components and in the proper sequence.(a) Composite service interface (b) Stand-alone composite serviceFigure 1: Composite services3.2 Creating a Stand-alone Composite ServiceIf the performance of the composite service is more critical, creating a stand-alone composite service is a better solution than interface fusion. Performance may be improved since all of the code of the composite service is located on the same node (see Figure 1b). There are two primary means of creating a stand-alone composite service.One approach leads to the dynamic assembly of service components in a way that is analogous to an assembly of pipes and filters. Jackson and Zave also adopt this technique in their distributed featurecomposition (DFC) architecture [6]. 
As in DFC, our architecture has the typical advantages of the pipe-and-filter architectural style. The main advantage is all service components can remain independent. This means they do not need to share state information and they are not aware or dependent on other servicecomponents. They behave compositionally and the set of service components making up a composite service can be changed at runtime.Figure 2a shows a basic configuration for a set of service components to be assembled. The input to the composite service is sent to the first service component, which in turn, sends its output to the input of the next service component in the chain. Obviously, each service component must be capable of handling the input it is given. A different result may be obtained if the components are re-ordered. The order in which components are assembled and the input requirements and output results of each component arespecified in the service specification of each component. This service specification is physically stored with the component since the infrastructure will need to read it prior to determining if it will be required in a given composition scenario. Figure 2b shows the potential for more complex interconnections of service components. In this case, the operations performed by service component 2 are required several times in sequence. A loopback data flow can be used to achieve this without the need to chain several replications of the same component in sequence. Support for a loopback feature must be provided by the service component. This capability is also documented in the service specification of the component.The second approach to creating a new stand-alone service is shown in Figure 2c. Here, the service logic, or code, of each service component is assembled within a new composite service. In general, all of the code from each component cannot be reused since certain methods are specific to an individual service or are not useful in the context of a composite service. For this reason, composable methods are identifiedin the service specification of each component. The appropriate sections of the service specifications from each component involved are also assembled to form the service specification of the composite service.The runtime creation of a new and functional service specification ensures that this new composite service has all of the basic attributes of any other service. This upholds the widely accepted principal that the composite should itself be composable [12].(a) Serial chaining of service components(c) A new assembly of composablemethods in a single component(b) More complex component interconnectionFigure 2: Fig. 2a and 2b show various potential pipe-and-filter assemblies of service components within astand-alone composite service. Fig. 2c illustrates an assembly of composable methods fromseveral service components into a single new service containing a single body of code.The primary advantage of a stand-alone composite service is it can be reused and composed easily with other services. Reuse of a composite service interface is more difficult since the service components providing the functionality are not contained in a single entity. Another advantage is the new composite service will execute at a higher level of performance with regard to internal message transmission since all of the code is executing in the same location.Constructing a stand-alone composite service at runtime is a very complex undertaking. 
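To make the serial chaining of Figure 2a concrete, the following minimal sketch runs an input through a sequence of components, each consuming the output of the previous one. The ServiceComponent interface and the sample components are hypothetical and are not part of the paper's actual Jini/JavaBeans implementation.

import java.util.Arrays;
import java.util.List;

// Hypothetical pipe-and-filter style composition: each component consumes the
// output of the previous one, as in the serial chaining of Figure 2a.
interface ServiceComponent {
    String process(String input);
}

public class SerialChainDemo {
    /** Runs the input through each component in order and returns the final output. */
    static String runChain(List<ServiceComponent> chain, String input) {
        String data = input;
        for (ServiceComponent component : chain) {
            data = component.process(data);   // output of one stage feeds the next stage
        }
        return data;
    }

    public static void main(String[] args) {
        ServiceComponent trim = s -> s.trim();          // sample filter components
        ServiceComponent upper = s -> s.toUpperCase();
        ServiceComponent tag = s -> "[composite] " + s;
        System.out.println(runChain(Arrays.asList(trim, upper, tag), "  hello service  "));
        // prints: [composite] HELLO SERVICE
    }
}

In the architecture described above, the equivalent wiring is driven by each component's service specification (input requirements, output results and chaining order) rather than being hard-coded as in this sketch.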
While many of the processes common to both forms of dynamic service composition are still present (refer back to section3), other challenges exist. The largest of these is to create a new functional service and successfully deploy it in a relatively short period of time. While the process of combining runtime services could be performed prior to when the service is actually needed, with the composite stored in a library for future use, we are more interested in determining to what extent the runtime construction of a composite service for immediate use is feasible.Now that we are familiar with the terminology and processes of dynamic service composition, we can determine the requirements of a service composition architecture that will support the creation of such services.4 Proposed ArchitectureWe stated earlier that our goal is to design an architecture that makes use of existing technologies. We can justify this choice, over implementing a proprietary solution, since our prototype will not support components that were not originally designed to function as part of a composite service. This does not mean that all possible compositions have to be envisioned before the component can be designed. We still allow the content of the composite service to be determined at runtime. We simply define a set of requirements that each service component must satisfy in order for it to be used in our architecture.4.1Basic RequirementsThe most critical element of a composable service component is the service specification. The service specification contains the inputs, outputs, dependencies and constraints of the service, in addition to a detailed description of the operations it performs. Another important feature is the composable service component must be easy to locate and retrieve. This is also facilitated by the service specification, which isexamined by the infrastructure during service lookup and retrieval. Finally, the code of each service component can be fully reusable or partially reusable. As we mentioned earlier, all composable methods must be clearly specified in the service specification of the component. It will be impossible for the infrastructure to determine at runtime if code within a service component can be reused unless it has been explicitly labeled as reusable.An architecture supporting dynamic service composition must have a repository or library of composable service components. This library must allow a service, matching a well-defined set of attributes, to be retrieved in a relatively short period of time. To achieve this, the architecture must have a means of examining the service specification of a service component. This will require that the service specification be written using a well-structured description language to allow for straightforward parsing of the specification file. Finally, the architecture will require a valid component model in order to support component composition.4.2Selection of Available Technologies and Required ExtensionsTechnologies are currently available that can be used to create an architecture to support dynamic service composition within a service domain. 
However, many of these technologies provide only a partial solution and must be extended and customized to work under the conditions outlined in this paper –conditions which were not necessarily anticipated during the design of these technologies.We have purposely avoided developing or using a compositional language similar to Lava [7] or CHAIMS [1] since these languages are more appropriate for solving the problem of generic software composition. A compositional language facilitates the following activities: it allows components to be defined within existing non-segmented code, it allows the state and behavior of a component to be inherited by another component, it allows components to be dynamically adapted, or it allows components to be modified at runtime [2]. However, we are not interested in “componentizing” a monolithic software system. This is a completely different body of research with challenges that we feel will hinder our particular interests in dynamic composition of components. Our approach does not facilitate the restructuring of a previously designed system into smaller, more manageable components but rather allows a new system to be designed so it can exploit the advantages of a dynamic component-based architecture.We have chosen to use Jini connection technology [11], developed by Sun Microsystems, as our service repository and retrieval system. We chose Jini because it is a distributed computing technology that facilitates the lookup and deployment of service components by providing the necessary networking infrastructure and distributed programming facilities. We have created a Jini service called the Composition Manager to oversee all aspects of dynamic service composition.We assume the reader is familiar with the basic concepts in the JavaBeans component model. However, we will briefly describe the key features of the Extensible Runtime Containment and Services Protocol (ERCSP) for JavaBeans [10] since it provides the facilities we will use for dynamic composition. The ERCSP standard extension provides an API that enables Beans to interconnect at runtime. It enables a Bean to interrogate its environment for certain capabilities and available services. This allows the Bean to dynamically adjust its behavior to the container or context in which it finds itself. The API consists of two parts: a logical containment hierarchy for Beans components and a method of discovering the services that are provided by Beans within such a hierarchy. The containment hierarchy enables grouping of Beans in a logical manner which can easily be navigated. This grouping is established through the use of a BeanContext container. A BeanContext can of course contain other BeanContexts thus allowing for any arbitrary grouping of components. The Services API within the ERCSP gives Beans a standard mechanism to discover which services other Beans may provide and to connect to these Beans to make use of those services. Beans can use introspection to find each other’s capabilities.Figure 3a shows how a JavaBean can be nested within a Jini service to create a service component. In this way we can use Jini for component storage and retrieval while taking advantage of the compositional features of the JavaBeans component model. We have shown the service specification in this diagram to highlight the enhancements we have made to the JavaBean. 
Figure 3b shows how a BeanContext can be introduced from one service component into another service component at runtime to create a stand-alone composite service. Figure 3c shows that a more reusable stand-alone composite service can be created where all of the code is contained within a single JavaBean.
The eXtensible Markup Language (XML) is used to format data into structured information containing both content and semantic meaning [5]. XML provides a convenient and highly effective way to encode a service specification in such a way that the Composition Manager can quickly determine the attributes of that service and the operations it can perform. Figure 4a shows how we have enhanced the Jini Lookup Service to include an XML parser. This will allow us to read the service specification stored in each service component at runtime. A limited example of an XML service specification is shown in Figure 4b.
Figure 4: Fig. 4a shows the addition of an XML parsing facility to the standard Jini Lookup Service needed to interpret service specifications. Fig. 4b shows a simple XML service specification:
<?XML version="1.0" ?>
<SERVICE>
  <DESCRIPTION>
    <NAME>Call Forward Unconditional</NAME>
    <VENDOR>Carleton University</VENDOR>
    <VERSION>1.3.2</VERSION>
    <PROTOCOL>H.323</PROTOCOL>
  </DESCRIPTION>
  <PROPERTIES>
    <COMPOSABLE>Yes</COMPOSABLE>
    <INPUTS>Caller, Call Agent</INPUTS>
    <OUTPUTS>Callee, Gatekeeper</OUTPUTS>
    <CHAINING_ORDER>First</CHAINING_ORDER>
    <COMPOSABLE_METHODS>Forward, Log Call Info</COMPOSABLE_METHODS>
  </PROPERTIES>
</SERVICE>
5 Applications
There are many potential applications for our proposed dynamic service composition architecture. Of particular interest to our research group is the creation of Internet Protocol (IP) telephony services and Intelligent Network (IN) services within the Public Switched Telephone Network (PSTN) [8]. Dynamic service composition has created exciting opportunities for the development of a new service architecture for these telecommunication infrastructures. IN services are designed to add new service logic and data to the existing forms of switched telecommunication networks. The IN platform provides greater flexibility for service creation and allows services to be customized to suit the exact requirements of a particular customer. IN-based services rely on service-independent building blocks (SIBs), which are the smallest units in service creation. SIBs are reusable and can be chained together in various combinations to create services. They are defined to be independent of the specific service they are performing and the technologies used in their realization. SIBs were not originally designed to take advantage of object-oriented concepts, and this is one area where the Jini and JavaBean-based service components, described earlier, could be advantageous. Another enhancement our architecture will provide over a SIB-based implementation is runtime component assembly and deployment. Decisions on which service components will be assembled are made dynamically based on user requirements and are not predetermined at design time as in the SIB approach.
It is a commonly held view within the telecommunication industry that IP-based networks will not replace the PSTN in the short-term [8]. The gradual migration towards IP networks, however, will require hybrid services that can operate in a variety of network environments.
In order to accelerate the integration these new networks with the existing infrastructure, services may need to be provided by a variety of parties including the vendors of the network equipment. Vendors are aware of the issues related to protocol convergence and therefore will be involved in the development of the majority of hybrid services. The architecture proposed in this paper allows services developed by vendors, service providers, and individual users to be integrated together assuming they are compliant with a basic set of agreed upon compositional requirements.5.1ExampleIn order to illustrate the types of services that are enabled by our architecture, we will describe a composite IP telephony service that makes use of data, multimedia, and e-commerce service components.A single service provider will deploy this composite service to the display of a telephone handset or wireless phone. This service will allow a customer to use their phone to make a hotel reservation in a city with which they are unfamiliar.In this scenario, a customer interacts with a reservation service, which is downloaded by the hotel company, over the Internet using the phone. The user proceeds to search for a hotel in the city of interest. Once a hotel is found, the customer is asked if they are interested in seeing a map of the area surrounding the hotel or a list of attractions. This information may or may not be provided by the hotel company itself. If another service provider is able to offer these services, the user may want to use them without having to go through the hassle of signing up for each service individually. Ideally, the user is not even aware that a third party is providing the service.We assume that before these additional services can be provided, the hotel company will have to make agreements with a set of service component providers to gain access to certain types of information components. These information components could then be used to create a composite service that would be deployed to the customer. Alternatively, the customer could subscribe to a travel service that would provide a list of additional information services that the user could access. In the later case, one possibility would be for the dynamic service composition architecture provided by the travel service to deploy a composite service interface, involving the information components requested by the user, to the phone set. If large amounts of multimedia data are required, as would be the case with an interactive map, a stand-alone composite service may be created and deployed to the phone set instead. If the customer wants to make a reservation with the hotel or pay a deposit on the room, a secure e-commerce transaction component may also be a part of the composite service that is ultimately deployed.6Conclusions and Future WorkThis paper presents two approaches to dynamic composition of service components and a proposed software architecture to support these techniques. A composite service interface can be created if the composition needs to be carried out in a relatively short period of time. However, there is limited reuse potential for this service since the service components involved in the composite service may be distributed on several network nodes. The advantage of interface fusion is that it is relatively straightforward to assemble the interface and deploy the composite service.The second method is to create a stand-alone composite service. 
This is considerably more difficult since code must be physically moved from one component into another. It takes a much longer time to construct a stand-alone composite service since the service specification must be created from the service specifications of the member components and a completely new component must be assembled. The advantage, however, is that this new service can be stored in a service component directory for future use. It is a valid service component just like its member service components and therefore has reuse potential.Currently, a prototype of our composable service architecture is under development. We are instrumenting our system with performance metrics and plan to carry out a scalability analysis in an effort to quantify the limitations of our proposed solution. We realize that scalability is already an issue with Jini。
Research on Full-Text Retrieval Technology Based on Java
Abstract: With the continuous development of information technology, information systems are applied ever more widely, and the requirements placed on information management systems keep rising.
Designing a full-text retrieval system with Java, currently one of the most popular programming languages, can effectively solve the problems that today's information systems face.
The paper first analyzes the principles of the Lucene architecture, then designs a full-text retrieval system framework on that basis, and finally presents the implementation of the retrieval module together with part of its code.
The work is of practical value to information management personnel.
Keywords: Java; full-text retrieval; Lucene. CLC number: TP399. Document code: A. Research of Full-text Retrieval Based on Java. LEI Yan-rui (Hainan College of Software Technology, Qionghai 571400, China). Abstract: With the development of information technology, information systems are used more and more widely, and the requirements on information management systems keep growing. Using JAVA, one of the most popular programming languages, to build a full-text retrieval system can effectively solve the problems of information systems. The paper first analyzes the principles of the Lucene architecture, then designs a full-text retrieval system framework on that basis, and finally gives the implementation of the retrieval module along with part of the code. This plays a positive, promoting role for information management staff. Key words: Java; full-text retrieval; Lucene. Article number: 1003-5850(2013)05-0076-03. Received: 2013-01-11; revised: 2013-04-09. About the author: LEI Yan-rui, female, born 1980, lecturer; research interests: database applications, software development, higher vocational education.
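The abstract above states that the paper gives an implementation of the retrieval module together with part of its code. As a rough illustration of the kind of Lucene indexing and keyword search such a module performs, here is a minimal sketch written against a Lucene 3.x-style API; the index directory, field name and sample text are invented for the example and are not taken from the paper, and newer Lucene versions change several of these class names and signatures.

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class FullTextDemo {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File("index"));   // index location (made up)
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);

        // Indexing: add one document with a full-text "content" field.
        IndexWriterConfig cfg = new IndexWriterConfig(Version.LUCENE_36, analyzer);
        IndexWriter writer = new IndexWriter(dir, cfg);
        Document doc = new Document();
        doc.add(new Field("content", "Java full-text retrieval with Lucene",
                          Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);
        writer.close();

        // Searching: parse a keyword query and print the matching documents.
        IndexSearcher searcher = new IndexSearcher(IndexReader.open(dir));
        Query query = new QueryParser(Version.LUCENE_36, "content", analyzer).parse("retrieval");
        for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
            System.out.println(searcher.doc(hit.doc).get("content"));
        }
        searcher.close();
    }
}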
Abstract
The present paper describes an intelligent system, AUTOPROMOD, developed for automatic modeling of progressive dies. The proposed system utilizes interfacing of AutoCAD and AutoLISP for automatic modeling of die components and die assembly. The system comprises eight modules, namely DBMOD, STRPRMOD, BPMOD, PPMOD, BBDSMOD, TBDSMOD, BDAMOD and TDAMOD. The system modules work in tandem with knowledge-based system (KBS) modules developed for the design of progressive die components. The system allows the user first to model the strip-layout and then utilizes output data files generated during the execution of the KBS modules of die components for automatic modeling of the progressive die. An illustrative example is included to demonstrate the usefulness of the proposed system. The main feature of the system is its facility of interfacing die design with modeling. A semiskilled and even an unskilled worker can easily operate the system for generation of drawings of die components and die assembly. System modules are implementable on a PC having AutoCAD software, and thus its low cost of implementation makes it affordable for small and medium sized stamping industries.
Academic Paper Writing Examination Questions
可编辑修改精选全文完整版学术论文写作考试题1.What is term paper?In the university grade stage. It is usually accomplished under the guidance of experience teachers to gain the final credit.2.Define the readability of thesis.The text is smoothly, simple, clear chart, well-organized order and brief conclusion. 3.What are the principles and methods of selecting a subject of study?Focused up-to-date under control4.How is the first-hand source distinguished from the second-hand source?F is original opinions S is the original view reviews and comments5.What are the 4 kinds of note in the subject selection?Summary Paraphrase Direct Quotation Comment6.What are the two main kinds of outline? In what subjects do they cater to respectively?Mixed outline: used in humanities and social sciencesNumerical outline: used in science7.Give reasons of submitting a research proposalFirst, you have a good topic.Second, you have the ability to complete the paper.Third, you have a feasible research plan.8.How many components are there in the research proposal? What are they? Title Introduction Literature review Method Result Discussion Preliminary bibliography9.What is the use of literature review?Understand the background.Familiar the problemsHave a ability of preminary assessment and comprehensive the literature.10.What is abstract?Abstract is a concise and comprehensive summary or conclusion.11.What are the main components of abstract?Objective or purpose Process and methods Results Conclusion12.What is the use of conclusion in the thesis?It emphasized the most important ideas or conclusion clearly in this paper.13.What parties is the acknowledgment usually addressed to?For the tutor and teachers who give suggestion, help and support.For the sponsorFor the company or person which provide the dataFor other friends14.Specify MLA formatIt is widely used in the field of literature, history and so on.Pay attention in the original of the Reference.15.Specify Chicago formatThe subject of general format, used for books, magazines and so on.Divided into the humanities style and the author data system.16.Define footnotes.Also called the note at the end of the page. Appeared in the bottom of every page. 17.Define end-notes.Also called Concentrated note or end-notes appear in thetext.18.M:monographA: choose an article from the proceedings.J: academic journalD: academic dissertationR: research reportC: collected papersN: newspaper article19.Tell briefly about the distinctions between thesis and dissertation.Dissertation defined as a long essay that you do as part of a degree or other qualification. It refers to B.AThesis defined as a long piece of writing, based on your own ideas and research, that you do as part of a university degree. 
It refers to Ph.D.20.What are the general features of the thesis title?As much as possible use nouns, prep, general phrase and so on.The title can be used to express an Non-statement sentence.The first letter of the notional word in the title should be capital.Be cautious using abbreviations and try not to use punctuation marks.Remove unnecessary articles and extra descriptive words.21.What is the introduction of the research proposal concerned with?Research question Rationale Method FindingsDesign sample instruments22.How is abstract defined to American national standards institute?It is a concise summary of your work.Abstract should state the objectives of the project describe the methods used, summarize the significant findings and state the implications of the findings.23.How is thesis statement understood?It usually at the final part of the introduction in order that the readers could understood the central idea as quickly as possible. It is the point of view and attitude of the statement.1. Have a brief comment upon the study of ESPSpecial use English also called English for specific purpose. It includes tourism English, finance English, medical English, business English, engineering English, etc. In the 1960s, ESP is divided into scientific English, business English and social sciences, each branch can be divided into professional English and academic English.2. What is the research methods of literature?The external research : from society, history, age, environment and so on relationship to study.The internal research: from the works of rhyme, text, images, symbols and specific level to composed the text.3.Have a brief comment upon the study of interpretation.At present, people in the academia mainly focus on these topics, such as interpreting training, interpreting practices and so on. According to its mean of transfer, interpretation can be divided for simultaneous interpretation, consecutive interpretation, whispering interpretation; According to different occasions and interpretation, it can be divided into the meeting interpretation, contact interpretation, media interpretation,etc.4.What is the analytic method in the study of linguistics?In linguistics, analytic method means to make some analysisand decomposition on the various elements of a language according to different research purposes and requirements, and to separate them from the interconnected entirety respectively and extract general and special method.5.In what respects is phonetics studies in the current research?Study on the phonology remains to be further studied, such as Chinese language learning and English phonology, phonological number is still worth discussing. Comparative study of phonology is worth advocating. The combination of researching and teaching for phonetics is also a major focus of current research.6. What is the deductive in linguistics?Deduction is the method to deduce from the general to the special, namely from the general principles of known to conclusions about the individual objects. he deductive method is also known as the study of testing hypothesis.1.What is term paper?2.Define the readability of thesis.3.What are the principles and methods of selecting a subject of study?4.How is the first-hand source distinguished from the second-hand source?5.What are the 4 kinds of note in the subject selection?6.What are the two main kinds of outline? 
SYSTEM, SYSTEM COMPONENTS THEREOF AND METHOD FOR CARRYING OUT AN OBJECT RELATED ELECTRONIC DATA TRANSFER
Patent title: SYSTEM, SYSTEM COMPONENTS THEREOF AND METHOD FOR CARRYING OUT AN OBJECT RELATED ELECTRONIC DATA TRANSFER
Inventor: RICHTER, WOLFGANG
Application number: EP03730129.8; filing date: 2003-05-27
Publication number: EP1512104A2; publication date: 2005-03-09
Abstract: The invention relates to carrying out an object-related data transfer electronically, especially in connection with the transfer of objects via a control point from a first area into a second area, and with the transfer and use of objects, especially electronic devices, e.g. in the context of surrender of use. The aim of the invention is to provide solutions which can be used to carry out or ensure, in an improved manner, commitments with regard to the transfer/surrender of goods, services, proof, or the co-ordination and control processes related thereto. According to the invention, a data transfer device associated with an object, especially connected to and integrated with said object, produces a signal which, in the surrounding area of the object, can be introduced into a second data transfer device; on the basis of signal-processing observation of a reception event detected by the second data transfer device, a data set indicative of said object is generated, and a service related to the object can be co-ordinated by means of said data set.
Applicant: IDENT TECHNOLOGY AG
Address: Hubertusstrasse 38, 82131 Gauting, DE; nationality: DE
Agent: Rössig, Rolf
Foreign-Language Literature on Android Development
AndroidAndroid, as a system, is a Java-based operating system that runs on the Linux kernel. The system is very lightweight and full featured. Android applications are developed using Java and can be ported rather easily to the new platform. If you have not yet downloaded Java or are unsure about which version you need, I detail the installation of the development environment in Chapter 2. Other features of Android include an accelerated 3-D graphics engine based on hardware support, database support powered by SQLite, and an integrated web browser.If you are familiar with Java programming or are an OOP developer of any sort, you are likely used to programmatic user interface UI development—that is, UI placement which is handled directly within the program code. Android, while recognizing and allowing for programmatic UI development, also supports the newer, XML-based UI layout. XML UI layout is a fairly new concept to the average desktop developer. I will cover both the XML UI layout and the programmatic UI development in the supporting chapters of this book.One of the more exciting and compelling features of Android is that, because of its architecture, third-party applications—including those that are “home grown”—are executed with the same system priority as those that are bundled with the core system. This is a major departure from most systems, which give embedded system apps a greater execution priority than the thread priority available to apps created by third-party developers. Also, each application is executed within its own thread using a very lightweight virtual machine.Aside from the very generous SDK and the well-formed libraries that are available to us to develop with, the most exciting feature for Android developers is that we now have access to anything the operating system has access to. In other words, if you want to create an application that dials the phone, you have access to the phone’s dialer; if you want to create an application that utilizes the phone’s internalGPS if equipped, you have access to it. The potential for developers to create dynamic and intriguing applications is now wide open.On top of all the features that are available from the Android side of the equation, Google has thrown in some very tantalizing features of its own. Developers of Android applications will be able to tie their applications into existing Google offerings such as Google Maps and the omnipresent Google Search. Suppose you want to write an application that pulls up a Google map of where an incoming call is emanating from, or you want to be able to store common search results with your contacts; the doors of possibility have been flung wide open with Android.Chapter 2 begins your journey to Android development. You will learn the how’s and why’s of using specific development environments or integrated development environments IDE, and you will download and install the Java IDE Eclipse.Application ComponentsA central feature of Android is that one application can make use of elements of other applications provided those applications permit it. For example, if your application needs to display a scrolling list of images and another application has developed a suitable scroller and made it available to others, you can call upon that scroller to do the work, rather than develop your own. Your application doesn't incorporate the code of the other application or link to it. 
Rather, it simply starts up that piece of the other application when the need arises.For this to work, the system must be able to start an application process when any part of it is needed, and instantiate the Java objects for that part. Therefore, unlike applications on most other systems, Android applications don't have a single entry point for everything in the application no main function, for example. Rather, they have essential components that the system can instantiate and run as needed. There are four types of components:ActivitiesAn activity presents a visual user interface for one focused endeavor the user can undertake. For example, an activity might present a list of menu items users can choose from or it might display photographs along with their captions. A text messaging application might have one activity that shows a list of contacts to send messages to, a second activity to write the message to the chosen contact, and other activities to review old messages or change settings. Though they work together to form a cohesive user interface, each activity is independent of the others. Each one is implemented as a subclass of the Activity base class.An application might consist of just one activity or, like the text messaging application just mentioned, it may contain several. What the activities are, and how many there are depends, of course, on the application and its design. Typically, one of the activities is marked as the first one that should be presented to the user when the application is launched. Moving from one activity to another is accomplished by having the current activity start the next one.Each activity is given a default window to draw in. Typically, the window fills the screen, but it might be smaller than the screen and float on top of other windows. An activity can also make use of additional windows — for example, a pop-up dialog that calls for a user response in the midst of the activity, or a window that presents users with vital information when they select a particular item on-screen.The visual content of the window is provided by a hierarchy of views — objects derived from the base View class. Each view controls a particular rectangular space within the window. Parent views contain and organize the layout of their children. Leaf views those at the bottom of the hierarchy draw in the rectangles they control and respond to user actions directed at that space. Thus, views are where the activity's interaction with the user takes place.For example, a view might display a small image and initiate an action when the user taps that image. Android has a number of ready-made views that you can use — including buttons, text fields, scroll bars, menu items, check boxes, and more.A view hierarchy is placed within an activity's window by the method. The content view is the View object at the root of the hierarchy. See the separate User Interface document for more information on views and the hierarchy.ServicesA service doesn't have a visual user interface, but rather runs in the background for an indefinite period of time. For example, a service might play background music as the user attends to other matters, or it might fetch data over the network or calculate something and provide the result to activities that need it. Each service extends the Service base class.A prime example is a media player playing songs from a play list. The player application would probably have one or more activities that allow the user to choose songs and start playing them. 
However, the music playback itself would not be handled by an activity because users will expect the music to keep playing even after they leave the player and begin something different. To keep the music going, the media player activity could start a service to run in the background. The system would then keep the music playback service running even after the activity that started it leaves the screen.It's possible to connect to bind to an ongoing service and start the service if it's not already running. While connected, you can communicate with the service through an interface that the service exposes. For the music service, this interface might allow users to pause, rewind, stop, and restart the playback.Like activities and the other components, services run in the main thread of the application process. So that they won't block other components or the user interface, they often spawn another thread for time-consuming tasks like music playback. See Processes and Threads, later.Broadcast receiversA broadcast receiver is a component that does nothing but receive and react tobroadcast announcements. Many broadcasts originate in system code — for example, announcements that the timezone has changed, that the battery is low, that a picture has been taken, or that the user changed a language preference. Applications can also initiate broadcasts — for example, to let other applications know that some data has been downloaded to the device and is available for them to use.An application can have any number of broadcast receivers to respond to any announcements it considers important. All receivers extend the BroadcastReceiver base class.Broadcast receivers do not display a user interface. However, they may start an activity in response to the information they receive, or they may use the NotificationManager to alert the user. Notifications can get the user's attention in various ways — flashing the backlight, vibrating the device, playing a sound, and so on. They typically place a persistent icon in the status bar, which users can open to get the message.Content providersA content provider makes a specific set of the application's data available to other applications. The data can be stored in the file system, in an SQLite database, or in any other manner that makes sense. The content provider extends the ContentProvider base class to implement a standard set of methods that enable other applications to retrieve and store data of the type it controls. However, applications do not call these methods directly. Rather they use a ContentResolver object and call its methods instead. A ContentResolver can talk to any content provider; it cooperates with the provider to manage any interprocess communication that's involved.See the separate Content Providers document for more information on using content providers.Whenever there's a request that should be handled by a particular component, Android makes sure that the application process of the component is running, startingit if necessary, and that an appropriate instance of the component is available, creating the instance if necessary.Key Skills & Concepts●Creating new Android projects●Working with Views●Using a TextView●Modifying the fileCreating Your First Android Project in EclipseTo start your first Android project, open Eclipse. When you open Eclipse for the first time, it opens to an empty development environment see Figure 5-1, which is where you want to begin. Your first task is to set up and name the workspace for your application. 
Choose File | New | Android Project, which will launch the New Android Project wizard.CAUTION Do not select Java Project from the New menu. While Android applications are written in Java, and you are doing all of your development in Java projects, this option will create a standard Java application. Selecting Android Project enables you to create Android-specific you do not see the option for Android Project, this indicates that the Android plugin for Eclipse was not fully or correctly installed. Review the procedure in Chapter 3 for installing the Android plugin for Eclipse to correct this.The New Android Project wizard creates two things for youA shell application that ties into the Android SDK, using the file, and ties the project into the Android Emulator. This allows you to code using all of the Android libraries and packages, and also lets you debug your applications in the proper environment.Your first shell files for the new project. These shell files contain some of the vital application blocks upon which you will be building your programs. In much the same way as creating a Microsoft application in Visual Studio generates some Windows-created program code in your files, using the Android Project wizard in Eclipse generates your initial program files and some Android-created code. In addition, the New Android Project wizard contains a few options, shown next, that you must set to initiate your Android project. For the Project Name field, for purposes of this example, use the title HelloWorldText. This name sufficiently distinguishes this Hello World project from the others that you will be creating in this the Contents area, keep the default selections: the Create New Project in Workspace radio button should be selected and the Use Default Location check box should be checked. This will allow Eclipse to create your project in your default workspace directory. The advantage of keeping the default options is that your projects are kept in a central location, which makes ordering, managing, and finding these projects quite easy. For example, if you are working in a Unix-based environment, this path points to your $HOME directory.If you are working in a Microsoft Windows environment, the workspace path will be C:/Users/<username>/workspace, as shown in the previous illustration. However, for any number of reasons, you may want to uncheck the Use Default Location check box and select a different location for your project. One reason you may want to specify a different location here is simply if you want to choose a location for this specific project that is separate from other Android projects. For example, you may want to keep the projects that you create in this book in a different location from projects that you create in the future on your own. If so, simply override the Location option to specify your own custom location directory for this project.。
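As a concrete illustration of the Activity and View concepts described above (and of the "Using a TextView" skill listed among the key concepts), here is a minimal sketch of an Android Activity that builds its user interface programmatically with a single TextView. The package, class name and greeting text are illustrative and are not taken from the book's HelloWorldText project.

package com.example.hello;   // illustrative package name

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

// A minimal Activity: the system instantiates it and calls onCreate(),
// where the view hierarchy is built programmatically instead of in XML.
public class HelloActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView text = new TextView(this);   // a leaf View that draws text
        text.setText("Hello, Android!");
        setContentView(text);                 // place the view at the root of the window
    }
}

An equivalent layout could also be declared in XML, as the chapter notes; the programmatic form is shown here only because it is the shortest complete example.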
Composition of reactive system components
In this paper we will use examples from a case study of an established benchmark for formal methods, the steam boiler system, to illustrate the techniques of abstract and compositional specification using the Object Calculus. A description of the steam boiler system can be found in [2], together with different approaches to formal specification of it. The purpose of the system is to produce a flow of steam from the boiler water tank, without letting the tank boil dry or overflow. Failures in the measuring devices involved (flow monitors on the water feed lines, steam level sensor and water level sensor) and the water pumps must be handled by an appropriate change of mode of the controller; in emergency situations this may involve a shutdown of the control system. Figure 1 shows the main components of the system.
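To make the idea of mode changes concrete, the following hypothetical sketch selects an operating mode from the failure status of the measuring devices and pumps. The mode names follow the common formulation of the steam boiler benchmark; the paper's own Object Calculus specification may structure the controller differently.

// Hypothetical sketch of the controller's mode-selection logic for the steam
// boiler benchmark; mode names and failure combinations are illustrative.
public class BoilerController {
    enum Mode { NORMAL, DEGRADED, RESCUE, EMERGENCY_STOP }

    /** Chooses the operating mode from the current failure status of the devices. */
    static Mode selectMode(boolean waterLevelSensorFailed,
                           boolean steamSensorFailed,
                           boolean pumpOrFlowMonitorFailed,
                           boolean waterLevelCritical) {
        if (waterLevelCritical || (waterLevelSensorFailed && steamSensorFailed)) {
            return Mode.EMERGENCY_STOP;   // safe operation cannot be guaranteed: shut down
        }
        if (waterLevelSensorFailed) {
            return Mode.RESCUE;           // estimate the water level from the other readings
        }
        if (steamSensorFailed || pumpOrFlowMonitorFailed) {
            return Mode.DEGRADED;         // keep running with the remaining devices
        }
        return Mode.NORMAL;
    }

    public static void main(String[] args) {
        System.out.println(selectMode(false, false, true, false));  // prints DEGRADED
    }
}

A formal specification states such mode-transition rules abstractly and proves properties about them, rather than fixing one concrete implementation as this sketch does.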
Structural Modeling Techniques in Systems Engineering
System Engineering
Outline of This Lecture
1. Fundamentals of structural modeling  2. Interpretive structural models
1. Work procedure  2. Application example: an integrated strategy for population control
3. Case analysis
Systems Engineering (System Engineering, SE)
Lecture 03: Structural Modeling Techniques
– Clarify the system structure and identify the causes of problems
Industrial Engineering (IE)
Ge Peng, Doctor of Engineering
Email: ge_peng@ Mobile: 13183834137 Office: Room 516, Business Administration Building Mailing address: Business School of Sichuan University, Building 14#
Static structural modeling methods
1. Interpretive structural modeling (ISM) 2. Relevance tree methods
Problem tree, objective tree, decision tree
Dynamic structural modeling methods
1. System dynamics (SD) structural modeling
Structural models take many forms
2. Interpretive Structural Modeling (ISM)
2.1 Work procedure
Workflow (from the lecture's flow chart): a list of system elements is drawn up, an adjacency matrix model is built from it, the reachability matrix model is computed, and from these a structural model and then an interpretive structural model are derived, feeding into the decision report; element identification and interpretation are done by the human brain, while the matrix computations are done by computer.
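To make the matrix steps concrete, the following is a minimal sketch (not from the lecture) of the core ISM computation: deriving the reachability matrix from an adjacency matrix by Boolean transitive closure. The 4-element adjacency matrix in main is invented for the example.

// Derives the reachability matrix M from an adjacency matrix A by adding the
// identity (A + I) and taking the Boolean transitive closure (Warshall-style).
public class IsmReachability {
    static boolean[][] reachability(boolean[][] a) {
        int n = a.length;
        boolean[][] m = new boolean[n][n];
        for (int i = 0; i < n; i++) {
            System.arraycopy(a[i], 0, m[i], 0, n);
            m[i][i] = true;                       // add the identity: A + I
        }
        for (int k = 0; k < n; k++)               // Boolean transitive closure
            for (int i = 0; i < n; i++)
                if (m[i][k])
                    for (int j = 0; j < n; j++)
                        m[i][j] = m[i][j] || m[k][j];
        return m;
    }

    public static void main(String[] args) {
        // Hypothetical 4-element system with influences 1->2, 2->3, 3->4
        boolean[][] a = {
            {false, true,  false, false},
            {false, false, true,  false},
            {false, false, false, true },
            {false, false, false, false}
        };
        for (boolean[] row : reachability(a)) {
            StringBuilder sb = new StringBuilder();
            for (boolean v : row) sb.append(v ? "1 " : "0 ");
            System.out.println(sb.toString().trim());
        }
    }
}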
Design and Implementation of an Augmented Reality Video Subsystem in Java
Information Technology, June 2009. Received: 2008-12-05. About the authors: Li Chang (1978-), male, master's student, research interests: augmented reality and robot teleoperation; Zhang Ping (1964-), male, PhD, professor, doctoral supervisor, research fields: robotics, network-based remote control; Huang Xiaobing (1971-), male, PhD candidate, main research fields: robotics, software engineering.
Design and Implementation of a Java-Based Augmented Reality Video Subsystem. Li Chang 1,2, Zhang Ping 2, Huang Xiaobing 2 (1. Computer Center, School of Computer and Communication, Changsha University of Science & Technology, Changsha 410004, China; 2. School of Computer Science and Engineering, South China University of Technology, Guangzhou 510640, China). Abstract: The video subsystem, which is responsible for displaying the real scene together with the virtual objects, is an important part of an augmented reality system.
The performance of the video subsystem is a key factor in whether augmented reality achieves the desired effect.
This article describes a video subsystem for an augmented reality system that is built on common software design patterns and implemented with advanced Java graphics and imaging technology.
By applying suitable Java multithreading techniques and optimizing the display-logic algorithm, efficient rendering of the real scene and virtual objects is achieved; experimental results and performance statistics are also given.
Keywords: augmented reality; video subsystem design; Java graphics and image processing; Java multithreading. CLC number: TN919; TP391. Document code: A. Article ID: 1005-1228(2009)03-0026-04. Computer and Information Technology, Vol. 17, No. 3, Jun. 2009.
The Design and Implementation of a Video Subsystem for an Augmented Reality System in Java. LI Chang 1,2, ZHANG Ping 2, HUANG Xiao-bing 2 (1. School of Computer and Communication, Changsha University of Science & Technology, Changsha 410004, China; 2. School of Computer Science and Engineering, South China University of Technology, Guangzhou 510640, China). Abstract: One of the important components in an AR system is the video subsystem, which displays the real video stream and the virtual objects. An AR system written in Java places high performance demands on the video subsystem. This article introduces the design and implementation of a video subsystem for AR based on common design patterns and advanced Java graphics technologies. By choosing the right multithreaded programming techniques and better video display logic, the video subsystem can achieve highly efficient display performance. Finally, experimental effects and performance statistics are given. Key words: augmented reality; video subsystem; virtual object display; Java graphics; Java multithreading.
Augmented reality (AR) technology [1] places computer-generated virtual objects into a scene that reflects the real world, supplementing and enhancing the real world.
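As a rough illustration of the kind of multithreaded display logic the paper describes (this sketch is not the authors' code), the panel below lets one background thread publish frames while the Swing event-dispatch thread paints the latest frame and overlays a virtual object; the simulated frame source stands in for a real camera API.

// Toy AR video panel: a capture thread publishes frames through a volatile
// reference, and paintComponent draws the newest frame plus a virtual overlay.
import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;

public class ArVideoPanel extends JPanel {
    private volatile BufferedImage latestFrame;   // written by capture thread, read by EDT

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        BufferedImage frame = latestFrame;
        if (frame != null) {
            g.drawImage(frame, 0, 0, getWidth(), getHeight(), null);  // real scene
        }
        g.setColor(Color.RED);                     // the "virtual object" overlay
        g.fillOval(getWidth() / 2 - 20, getHeight() / 2 - 20, 40, 40);
    }

    // Simulated capture loop: publishes a new frame roughly 25 times per second.
    void startCapture() {
        Thread capture = new Thread(() -> {
            int tick = 0;
            while (true) {
                BufferedImage img = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
                Graphics2D g2 = img.createGraphics();
                g2.setColor(new Color((tick * 5) % 255, 80, 120));
                g2.fillRect(0, 0, 320, 240);
                g2.dispose();
                latestFrame = img;                  // publish the new frame
                repaint();                          // schedule a repaint on the EDT
                tick++;
                try { Thread.sleep(40); } catch (InterruptedException e) { return; }
            }
        }, "capture");
        capture.setDaemon(true);
        capture.start();
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame f = new JFrame("AR video subsystem sketch");
            ArVideoPanel panel = new ArVideoPanel();
            f.add(panel);
            f.setSize(400, 320);
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.setVisible(true);
            panel.startCapture();
        });
    }
}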
Memory Layout for a Simple Batch System
The OS was always resident in memory. Its major task was to transfer control automatically from one job to the next. Operators sorted programs into batches with similar requirements, which were run as a group, and the output was sent back to the programmer. With direct access to several jobs, the OS could perform job scheduling to use resources and perform tasks more efficiently.
Resource allocator – manages and allocates resources. Control program – controls the execution of user programs and the operation of I/O devices. Kernel – the one program running at all times (all else being application programs).
Computer System Components
1. Hardware – provides basic computing resources (CPU, memory, I/O devices).
2. Operating system – controls and coordinates the use of the hardware among the various application programs for the various users.
3. Applications programs – define the ways in which the system resources are used to solve the computing problems of the users.
4. Users (people, machines, other computers).
Abstract View of System Components
Operating System Definitions
User convenience and responsiveness. Can adopt technology developed for larger operating systems; often individuals have sole use of the computer and do not need advanced CPU utilization or protection features. May run several different types of operating systems (Windows, MacOS, UNIX, Linux). PC operating systems were neither multiuser nor multitasking. Goal – maximize user convenience and responsiveness.
Multiprogrammed Batch Systems
Several jobs are kept in main memory at the same time, and the CPU is multiplexed among them.
OS Features Needed for Multiprogramming
I/O routines supplied by the system. Memory management – the system must allocate memory to several jobs. CPU scheduling – the system must choose among several jobs ready to run. Allocation of devices.
Operating system goals:
Execute user programs and make solving user problems easier.
Make the computer system convenient to use.
Use the computer hardware in an efficient manner. It is an important part of almost every computer system.
Systems designed for graceful degradation are also called fail-soft systems.
Parallel Systems (Cont.)
Symmetric multiprocessing (SMP) – each processor runs an identical copy of the operating system.
What is an Operating System?
A program that acts as an intermediary between a user of a computer and the computer hardware.
Introduction
What is an Operating System?
Mainframe Systems
Desktop Systems
Multiprocessor Systems
Distributed Systems
Clustered Systems
Real-Time Systems
Handheld Systems
Computing Environments
Resident monitor
Initial control is in the monitor; control transfers to the job; when the job completes, control transfers back to the monitor.
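A toy sketch (not from the slides) of this hand-off: the monitor holds initial control, transfers it to each queued job in turn, and regains it when the job completes; the Job interface and the job bodies are invented for the example.

// Toy model of a resident monitor's automatic job sequencing: the monitor keeps
// control, hands it to one job at a time, and takes it back when the job ends.
import java.util.ArrayDeque;
import java.util.Queue;

public class ResidentMonitor {
    interface Job { void run(); }                 // a "job" is just some user code here

    public static void main(String[] args) {
        Queue<Job> batch = new ArrayDeque<>();
        batch.add(() -> System.out.println("job 1: compile payroll program"));
        batch.add(() -> System.out.println("job 2: run payroll program"));
        batch.add(() -> System.out.println("job 3: print report"));

        // Initial control is in the monitor; control transfers to each job in turn
        // and returns to the monitor when the job completes.
        while (!batch.isEmpty()) {
            Job next = batch.poll();
            System.out.println("monitor: transferring control to next job");
            next.run();
            System.out.println("monitor: job completed, control returned");
        }
        System.out.println("monitor: batch finished");
    }
}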
The important aspect of job scheduling is the ability to multiprogram, which increases CPU utilization by keeping several jobs in memory simultaneously.
Advantages of parallel systems:
Increased throughput. Economical (peripherals, storage, and power can be shared). Increased reliability –
graceful degradation (the ability to continue providing service proportional to the level of surviving hardware).
A job is swapped in and out of memory to the disk. On-line communication between the user and the system is provided; when the operating system finishes the execution of one command, it seeks the next "control statement" from the user's keyboard. An on-line system must be available for users to access data and code.
Mainframe Systems
The first computers used to tackle many commercial and scientific applications.
Desktop Systems
Personal computers – computer system dedicated to a single user.
I/O devices – keyboards, mice, display screens, small printers.
Batch Systems – run only one application at a time. Reduce setup time by batching similar jobs (to speed up processing). Automatic job sequencing – automatically transfers control from one job to another.
Time-Sharing Systems – Interactive Computing
The CPU is multiplexed among several jobs that are kept in memory and on disk (the CPU is allocated to a job only if the job is in memory).
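A minimal sketch (not from the slides) of such multiplexing: a fixed time quantum is handed to each ready job in turn until all jobs finish; the job names and CPU-time values are invented for illustration.

// Toy round-robin simulation: the "CPU" is given to each ready job for one
// fixed quantum at a time until every job has finished.
import java.util.ArrayDeque;
import java.util.Queue;

public class RoundRobinDemo {
    static final int QUANTUM = 2;   // time units per turn

    public static void main(String[] args) {
        // job name -> remaining CPU time; values invented for the example
        Queue<String[]> ready = new ArrayDeque<>();
        ready.add(new String[] {"editor", "3"});
        ready.add(new String[] {"compiler", "5"});
        ready.add(new String[] {"mail", "1"});

        while (!ready.isEmpty()) {
            String[] job = ready.poll();
            int remaining = Integer.parseInt(job[1]);
            int slice = Math.min(QUANTUM, remaining);
            remaining -= slice;
            System.out.println(job[0] + " runs for " + slice + " unit(s), "
                    + remaining + " remaining");
            if (remaining > 0) {
                job[1] = Integer.toString(remaining);
                ready.add(job);                    // back to the end of the ready queue
            }
        }
    }
}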