软件测量程序中英文对照外文翻译文献
软件测试部分中英文对照
Test design : 测试设计
Test driver : 测试驱动
Testing environment : 测试环境
Artifact : 工件
Automated Testing : 自动化测试
Architecture : 构架
Assertion checking : 断言检查
Audit : 审计
Application under test (AUT) : 所测试的应用程序
Hotfix : 热补丁
G11N(Globalization) : 全球化
Gap analysis : 差距分析
Garbage characters : 乱码字符
Glossary : 术语表
Glass-box testing : 白箱测试或白盒测试
GUI(Graphical User Interface) : 图形用户界面
Decision coverage : 判定覆盖
Debug : 调试
Defect : 缺陷
Defect density : 缺陷密度
Deployment : 部署
Desk checking : 桌前检查
Blocking bug : 阻碍性错误
Bottom-up testing : 自底向上测试
Branch coverage : 分支覆盖
Brute force testing : 强力测试
Bug : 错误
Bug report : 错误报告
Load testing : 负载测试
Maintenance : 维护
软件应用中英文对照外文翻译文献
软件应用中英文对照外文翻译文献(文档含英文原文和中文翻译)

原文:The Design and Implementation of Single Sign-on Based on Hybrid Architecture

Abstract—To solve the problem of users having to log on repeatedly to applications that are built on a hybrid architecture and deployed in different domains, a single sign-on architecture is proposed. On the basis of analyzing the advantages and disadvantages of existing single sign-on models, and by combining key technologies such as Web Service, Applet and reverse proxy, two core problems are resolved: single sign-on across mixed B/S and C/S applications, and cross-domain single sign-on. Meanwhile, the security and performance of the architecture are well protected, since reverse proxy and related encryption technologies are adopted. The results show that the architecture performs well and is widely applicable, and it will soon be applied in practice.

Index Terms—single sign-on, web service, cross domain, reverse proxy, B/S, C/S

INTRODUCTION

With the information society, people enjoy huge benefits from technological progress, but at the same time they also face the test of information security. As the number of systems a user needs to log in to increases, the user has to set many user names and passwords, which are easily confused, so the possibility of error increases. Most users therefore reuse the same user name and password, which increases the possibility that the authentication information is illegally intercepted or destroyed, and security is reduced accordingly. For administrators, more systems mean more corresponding user databases and database privileges, which increases management complexity. A single sign-on system is proposed as a solution to this problem. With single sign-on, a unified identity authentication system and a unified rights management system can be established. This not only improves system efficiency and safety, but is also user-friendly and reduces the burden on administrators.

TABLE 1 The comparison of a variety of single sign-on models

Broker Model — Implementability: large transformation of the old systems is required. Manageability: enables centralized management.
Agent Model — Implementability: a new agent must be added for each old system; transplantation is relatively simple. Manageability: more difficult to control.
Agent and Broker Model — Implementability: transplantation is simple, and the old systems need only limited transformation. Manageability: enables centralized management.
Gateway Model — Implementability: a dedicated gateway is needed to access the various applications. Manageability: easy to manage, but the databases behind the different gateways need to be synchronized.
Token Model — Implementability: relatively simple to implement. Manageability: new components must be added, which increases the management burden.

Single sign-on means that when a user needs to access services provided by different applications in a distributed environment, the user signs on only once in that environment and does not need to re-sign on to the various application systems [1]. There are now many products and solutions that implement SSO, such as Microsoft Passport and IBM WebSphere Portal Server. Although these SSO products handle the single sign-on function well, most of them are complex and inflexible. Currently, the typical models to achieve SSO include the broker model, agent model, agent and broker model, gateway model and token model [2]. Table 1 analyses the implementability and manageability of these models.
Based on the above comparison, the agent and broker model has the advantages of both centralized management and requiring few revisions to the original application services, so it is adopted as the basis for this architecture. To integrate information and applications well, and with the in-depth application of B/S-mode software, the concept of the enterprise portal has emerged, which offers a good way to solve this problem. An enterprise portal provides business users with a single integrated access point to information and applications, and completes or assists a variety of interactive behaviors. The corresponding portal system software provides services for developing, deploying and managing portal applications. An enterprise information portal covers the portal itself, content management, data integration, single sign-on and much other content.

SYSTEM CONSTRUCTION OF SINGLE SIGN-ON BASED ON WEB SERVICE AND A HYBRID ARCHITECTURE

The system consists of multiple trust domains. Each trust domain has many application servers of B/S architecture and, in addition, application servers of C/S architecture. All the applications are bound together through a unified portal to achieve the single sign-on function. It can be seen that this architecture is based on the agent and broker model: the unified portal plays the broker role, and the various applications play the agent role. The B/S applications have the client side of the SSO Agent installed, the unified portal has the server side of the SSO Agent installed, and the two sides interact through these agents. In addition, as shown in Fig 1, the authentication server externally provides an LDAP authentication interface. The note (token) authentication Web Service server provides interfaces for adding, deleting, editing and querying single sign-on notes, while the permission Web Service server provides the corresponding authority information of each system, so that unified rights management is achieved for the application systems accessed through the unified portal.

The system supports cross-domain access: users of domain D1 can access applications in domain D2, and users of domain D2 can access applications in domain D1. At the same time, the system supports single sign-on between applications of different structures: after accessing application A of B/S structure, a user can access application E of C/S structure without re-entering the user name and password, and vice versa.

The whole structure of single sign-on is shown in Fig 1.

Figure 1: The Structure of Single Sign-on

A. The login process

The whole single sign-on process is shown in Fig 2. The specific steps are described below:

1) The user logs in through the client browser to access application A. The SSO Client of system A intercepts the request and redirects the URL to the landing page of the Unified Portal System.

2) The user enters the user name and password, and the Unified Portal System submits them to the authentication server for authentication. If the information is correct, the Unified Portal System automatically generates a note, saves the note and the user's role ID locally, and calls the create-note interface of the Web Service to insert the information.

3) The Unified Portal System returns a page listing the application resources to the user. The user clicks any one application system (e.g. system A).
The SSO Client of application system A reads the note information and calls the query-note interface of the Web Service. If the note is consistent and within its time limit, the client obtains the role information of the user in application A and logs the user in to application A. At the same time, it calls the update-note interface of the note certification Web Service to update the login time of the current note, and then calls the interface of the user rights Web Service to get the user's permission information for the corresponding application system.

4) If the user finishes accessing application A, exits, and clicks the link of application B, the system behaves in the same way as in step 3).

5) If the user has finished accessing all the required applications and performs the log-off operation, the system mainly calls the delete-note interface to destroy the corresponding note information.

Figure 2: The whole process of Single Sign-on

B. The solution of cross-domain problems

Traditional single sign-on implementations generally use a cookie to store the client-side note, but the properties of cookies restrict them to hosts under the same domain, and a distributed application system cannot guarantee that all hosts are under the same domain. The current system therefore does not store the note information on the client side but places it directly as a parameter of each application link. Note verification is completed by the SSO Client of the application calling the corresponding interface of the Web Service.

Software services are provided on the Web through the Simple Object Access Protocol (SOAP), described with WSDL files and registered via UDDI [3]. As shown in Fig 3, after the user finds the WSDL description of a service through UDDI, the application can call one or more operations of the Web service through SOAP. The biggest characteristic of Web Service is that it is cross-platform: whether the application uses the B/S or C/S structure, and whether it is implemented with J2EE or .NET, it can access the Web Service as long as it is given the Web Service server's IP address and interface name.

The following is the process by which this system achieves cross-domain access:

1) The user logs in to the Unified Portal system successfully.

2) The user accesses application A within trusted domain D1, completes the access and then exits the application.

3) The user clicks the URL of application B, within trusted domain D2, in the resource list of the Unified Portal.

4) The SSO Client of application B intercepts the request, gets the note behind the URL, and calls the query-note interface of the Web Service.

5) The query interface of the Web Service returns the validity information of this note to the SSO Client.

6) The SSO Client redirects to application B, and the user accesses application B.

Figure 3: Web Service Structure
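The interception in steps 4) and 5) is typically implemented as a request filter inside each protected application. The following is a minimal, hypothetical Java sketch of such an SSO Client filter; the class names, the "note" parameter name, the portal URL and the NoteService stub are illustrative assumptions, not code from the described system.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical SSO Client filter: intercepts requests to a protected
    // application, reads the note parameter carried on the URL and asks the
    // note-authentication Web Service whether it is valid.
    public abstract class SsoClientFilter implements Filter {

        // Assumed shape of the note-authentication Web Service client;
        // in the real system this would be a generated SOAP stub.
        public interface NoteService {
            boolean queryNote(String note);   // query-note interface
            void updateNote(String note);     // update-note interface
        }

        // Supplies the Web Service stub; left abstract in this sketch.
        protected abstract NoteService noteService();

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            String note = request.getParameter("note");
            if (note != null && noteService().queryNote(note)) {
                // Valid note: refresh its login time and let the request through.
                noteService().updateNote(note);
                chain.doFilter(req, res);
            } else {
                // No valid note: redirect to the Unified Portal login page.
                response.sendRedirect("https://portal.example.com/login?service="
                        + request.getRequestURL());
            }
        }

        public void destroy() { }
    }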
C. The solution of single sign-on between B/S and C/S structures

As we know, the implementation principles of B/S and C/S applications are quite different. In this system, B/S applications can be accessed by clicking their URLs on the application-resource-list page of the Unified Portal. Because of browser security restrictions, a web page is not allowed to call local exe files directly, so an indirect way is needed to call C/S applications. This article uses an Applet to call local exe files, implemented as follows.

For all C/S applications, a common Agent is created. This Agent acts as an interceptor, which means that after a C/S application joins the Unified Portal system it must be accessed through the browser. (Note that the original B/S and C/S applications do not use the same authentication method. For C/S applications to join the unified portal framework and achieve single sign-on, unified authentication management is needed, and the amount of change should be kept to a minimum. This system therefore creates an authentication code that requires no user name and password for all applications accessed through the unified portal, and login is performed on the certified landing page of the unified portal system. Once a user has logged in to the unified portal system through the browser, he can access any application, including applications of both B/S and C/S architecture. To ensure the security of the C/S applications, the original authentication is still used when the user opens an application directly from its desktop shortcut.)

All C/S applications use the same Applet URL. The parameters received by this common Applet include the note, the application name, and the unified login name and password. If a user has not performed the login operation before, the first visit to a C/S application is intercepted and redirected to the login page of the Unified Portal system for sign-on. If the user has logged in before, then when he visits a C/S application, this Agent calls the note-validation interface of the Web Service to validate the note that was transferred. If the validation is successful, the Applet object is downloaded to the user's machine and executed. In order to change the original applications as little as possible, the method of this article is to open the login window of the corresponding application through the Applet. The code is as follows:

    public void OpenExe(String appName) throws IOException {
        // Launch the local C/S client executable by name.
        Runtime rn = Runtime.getRuntime();
        Process p = rn.exec("c:\\" + appName + ".exe");
    }

After the login window of the application has been opened, the Applet operates as follows:

1) The Applet calls the low-level Windows API through JNI to obtain the handles of the user-name input box, the password input box and the login button of the login window.

2) It locates the user-name input box and sends the unified login name, then locates the password input box and sends the password. (The password information is arbitrary; in order to distinguish this from the case where the user clicks a shortcut and logs in directly, a code is also sent indicating that the system is being accessed through the unified portal without password authentication.) It then locates the login button and sends the click event.

3) Finally, the Applet minimizes the IE window, and the related windows of the application are brought to the front.

This is the implementation of single sign-on for C/S applications. Application code that has not been changed at all joins the Unified Portal system in a loosely coupled way. It should be explained that, because of the security restrictions of the Applet JVM, the Applet cannot directly call native Windows dlls in the user's local System32 directory. The method used here is first to write, in C or C++, the class that obtains the corresponding input boxes and button of the login window, and to generate a JNIWindowUtil.dll file (JNIWindowUtil is a user-defined dll name). The dll is placed in the same directory as the Applet; when the Applet is downloaded to the client side, the dll is downloaded to the user's local System32 directory at the same time.
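A minimal sketch of how the Java side of such a JNI bridge might be declared is shown below; the class mirrors the JNIWindowUtil.dll mentioned above, but the native method names and signatures are assumptions for illustration only.

    // Hypothetical Java-side declaration of the JNI bridge backed by JNIWindowUtil.dll.
    // The native method names and signatures below are illustrative assumptions.
    public class JNIWindowUtil {

        static {
            // Loads JNIWindowUtil.dll, which was downloaded next to the Applet
            // into the local System32 directory.
            System.loadLibrary("JNIWindowUtil");
        }

        // Returns the native window handle (HWND) of the application's login
        // window, or 0 if the window cannot be found.
        public static native long findLoginWindow(String windowTitle);

        // Types the given text into the user-name or password edit control
        // identified by controlIndex inside the window.
        public static native void sendText(long hwnd, int controlIndex, String text);

        // Sends a click event to the login button of the window.
        public static native void clickLoginButton(long hwnd);
    }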
As noted above, the Applet process also needs to execute the statement System.loadLibrary("JNIWindowUtil"); after completing the above steps, the Applet can really use JNI internally to achieve the corresponding functions.

D. Authentication server

In old systems, user authentication information is usually stored in a database, but this architecture uses LDAP to store user information. LDAP, short for Lightweight Directory Access Protocol, is a standard directory access protocol in a simplified form. It also defines the way data is organized; it is a de facto standard directory service based on the TCP/IP protocol, with distributed information access and data manipulation functions. LDAP uses a distributed directory information tree structure. It can organize and manage the information of various users effectively and provide safe and efficient directory access.

Compared with a database, LDAP is intended for applications that read far more than they write, while a database is designed to support a large number of write operations. LDAP supports relatively simple transactions, whereas a database is designed to handle a large number of varied transactions. Cross-domain queries mainly read data, and the modification frequency is very low; cross-domain access does not require a large transaction load, so compared with a database, LDAP is the ideal choice: more effective and simpler. This framework is applied to a large bank; the bank's systems may belong to different regions, and users may come from different regions. In order to achieve distributed management, three management levels are used, namely the bank headquarters, the provincial branches and the city branches, as shown in Fig 4:

Figure 4: LDAP Authentication Structure

Directory replication and directory referral are the most important technologies in the LDAP protocol. It can be seen from the figure that the LDAP servers of the provincial and city branches copy their data from the level above — not a simple copy of all information, but only the data relevant to their own region. Because the users of a particular application system mostly belong to the same region, this implementation can greatly simplify the management of the directory service and improve the efficiency of information retrieval. When a user from outside the region uses the system, his user information cannot be retrieved from the regional LDAP server, so the LDAP servers of other regions need to be queried; an upward referral query is therefore used: the provincial branch server is searched first and, if nothing is found, the query is referred further up to the bank headquarters server until the appropriate user information is found.

For management within a regional city branch, the Single Master/Multi Slave LDAP directory replication model is used. When a directory user queries directory information, the Master LDAP Server and the Slave LDAP Servers (there can be more than one slave) can all serve the directory, depending on which directory server the user sends the request to.
When the user requests an update of the directory information, in order to ensure that the Master LDAP Server and the Slave LDAP Servers hold the same directory content, the directory information needs to be replicated; this is achieved through the LDAP Replica server. When the number of directory users increases or system performance needs to be improved, simply adding Slave LDAP servers to the system is immediately effective in improving performance, and the whole directory service system achieves good load balancing.

E. Permissions Web Server

Access control technology began in the era when computers started providing shared data. Previously, people used computers mainly to submit run-code written by the user or to process the user's own data files; users did not share much data, and there was no need to control access to data. When computers came to hold users' shared data, the subject of access control naturally came to the fore.

Currently, the widely used access control models use, or refer to, the role-based access control model (Role-Based Access Control, RBAC) which arose in the early 1990s. The success of the RBAC model lies in inserting the concept of a "role" between subject and object, which effectively decouples the subject from the corresponding object (permission) and adapts well to the instability of the association between subjects and objects.

The RBAC model includes four basic elements, namely users (U), roles (R), sessions (S) and permissions (P); derived models also include constraints (C). The basic idea is to assign access rights to roles, and then to assign roles to users; in a session, a user gains access rights through roles. The relationships between the elements are: a user can have multiple roles, and a role can be granted to multiple users; a role can have multiple permissions, and a permission can be granted to multiple roles; a user can have multiple sessions, but a session is bound to only one user; a session can have multiple roles, and a role can be shared by multiple sessions at the same time; constraints are specific restrictions that act on these relationships, as shown in Fig 5. This system uses this mature permission access control model (a minimal code sketch of these relationships is given after the list of shortcomings below).

Rights management not only protects the safety of the system but also facilitates management. Currently, the rights management module is mostly integrated into business systems by reusing code and database structures. Such a framework has the following shortcomings:

1) Once the permission system is modified, the maintenance cost is very high. This is the general shortcoming of code reuse and database-structure reuse: once revised, the code and database structure of every business system must be updated, while ensuring that existing data can make a smooth transition. Some processes may require manual intervention, which is a "painful" thing for developers and maintenance personnel.

2) Permission data is not easy to manage. One has to enter the permission management module of each business system to manage the corresponding rights, which is complex to operate and not intuitive.

3) For different architectures and different software operating environments, different permission systems must be developed and maintained. For example, B/S and C/S systems must each develop their own rights management system.
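To make the RBAC relationships described above concrete, the following is a minimal, hypothetical Java sketch; the class and field names are illustrative and are not taken from the unified rights system itself.

    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical sketch of the RBAC elements: users, roles, permissions and sessions.
    class Permission {
        final String name;                 // e.g. a menu, button or link identifier
        Permission(String name) { this.name = name; }
    }

    class Role {
        final String name;
        final Set<Permission> permissions = new HashSet<Permission>();  // role-permission: many-to-many
        Role(String name) { this.name = name; }
    }

    class User {
        final String loginName;
        final Set<Role> roles = new HashSet<Role>();                    // user-role: many-to-many
        User(String loginName) { this.loginName = loginName; }
    }

    // A session is bound to exactly one user and activates a subset of that user's roles.
    class Session {
        final User user;
        final Set<Role> activeRoles = new HashSet<Role>();
        Session(User user) { this.user = user; }

        // The user gains access rights only through the roles active in this session.
        boolean isAllowed(Permission p) {
            for (Role r : activeRoles) {
                if (r.permissions.contains(p)) {
                    return true;
                }
            }
            return false;
        }
    }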
This paper argues that the most common functions of the permission system can be abstracted from the business systems to form an independent system, the "unified rights system". A business system only retains rights queries, reading of the common data, and control of the fine-grained rights specific to that system (such as menus, buttons and links), as shown in Fig 1.

How is unified rights management achieved? This paper argues that there are two implementations: one is to use Web services to provide the rights data; the other is to use a Mobile Agent to provide the permission data. However, the running and maintenance costs of the second are higher, and it is more difficult to implement than Web services, so this architecture uses Web services to provide the authority data of the various systems in a unified way.

A business system uses the Web service client interfaces to query data and obtain the shared permission data. The client is just a port; the specific implementation code is placed in the "unified rights system". These client interfaces are introduced into the business system as a package. If the client interfaces are kept unchanged, modification and upgrade of the unified rights system will not affect the business systems. Users and permissions are managed uniformly through the Web pages of the "unified rights system", and the user's single sign-on is achieved. The biggest advantage of Web services is the integration of data between heterogeneous systems: it breaks the restrictions of the B/S and C/S structures, and there is no difference between the Windows and Linux platforms.

SYSTEM SECURITY ANALYSIS

1) Interception of the user name and password. The system uses the SSL protocol for user login authentication and for sending the user name and password to the Applet object, which ensures the confidentiality and integrity of the information during transmission. Meanwhile, because the key is hard to obtain and is time-limited, man-in-the-middle attacks on the transmitted information can be effectively prevented.

2) Replay attacks. Many systems use time stamps to avoid replay attacks. However, this approach requires the computer clocks of the communicating parties to be synchronized, which is difficult to achieve; moreover, if the two sides' clocks occasionally fall out of synchronization, correct information may be mistakenly discarded as replayed information, while incorrect replayed information may be accepted as the latest. Based on the above, this system uses a simple method F agreed in advance between the query interface of the Web Service and the SSO Client or Agent of each application system. The parameter value is a random string X. The whole process of note validation is shown in Fig 6:

a) When the user accesses application system A, the SSO Client of system A intercepts the request and calls the query interface of the Web Service; the input parameters are a random string X and the corresponding note.

b) The Web Service server receives system A's call and compares the note with the note information in its session queue. If the queue contains the note, it returns the value of F(X) to show that validation is successful; if not, it returns 'failed' to show that validation has failed.

c) The SSO Client of application A receives the return information from the Web Service server and compares the returned value with the F(X) it computes itself. If the two are the same, it redirects to system A; otherwise the visit is not allowed.
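A minimal sketch of this challenge-response check is given below. It assumes that the agreed method F is an HMAC-SHA1 over the random string X with a pre-shared key; the paper does not specify F, so this choice and all names in the sketch are illustrative assumptions.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.security.SecureRandom;

    // Hypothetical challenge-response check between an SSO Client and the
    // note-authentication Web Service. F(X) is assumed to be HMAC-SHA1 of the
    // random challenge X under a key agreed in advance by both sides.
    public class NoteChallenge {

        private static final String SHARED_SECRET = "agreed-in-advance"; // assumption

        // The agreed one-way function F.
        static byte[] f(String x) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(SHARED_SECRET.getBytes("UTF-8"), "HmacSHA1"));
            return mac.doFinal(x.getBytes("UTF-8"));
        }

        // Client side: build a fresh random challenge for each call.
        static String newChallenge() {
            byte[] raw = new byte[16];
            new SecureRandom().nextBytes(raw);
            StringBuilder sb = new StringBuilder();
            for (byte b : raw) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }

        // Client side: validate the server's answer against the locally computed F(X).
        static boolean serverAnswerIsValid(String x, byte[] serverAnswer) throws Exception {
            return java.util.Arrays.equals(f(x), serverAnswer);
        }
    }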
The random string is different for each interaction with the Web Service server, so replay attacks can be limited very well.

3) Reverse proxy technology. A reverse proxy is a substitute: one reverse proxy server stands in for N identical application servers. External visitors to the application only see the reverse proxy server and cannot see the multiple application servers behind it, which improves the security of the application system.

Through the above analysis, this system can provide users with a safe Web environment.

SYSTEM PERFORMANCE ANALYSIS

First, apart from using SSL encryption for transmitting the user name and password, the interactions between servers and between the user and the servers are transmitted over the HTTP protocol. The SSL encryption and decryption process requires a lot of system resources and severely reduces machine performance, so this protocol should not be used to transmit too much data. Since the data that needs to be encrypted is small — only a user ID value (the note) — the performance of using MD5 for encryption is quite satisfactory.

Second, when a user accesses any application system of any domain, he is redirected to the Unified Portal system for identity authentication, or directed to the Web Service server for note validation. The user only needs to sign on when he is authenticated for the first time. When the visitor volume is large, a user switching to a new application system can easily encounter an interruption, that is, a single sign-on failure. This phenomenon has two causes: either the server load is too large, or the network bandwidth is not enough. The solution to an overloaded server is to use a server cluster. A cluster is made up of multiple servers and, as a unified resource, provides a single system service to the outside. In this system, besides using reverse proxy technology to improve the security of access to the applications, the more important capability is that the reverse proxy helps implement the load balancing of the cluster. The whole structure of the reverse proxy is shown in Fig 7:

In Fig 7, the reverse proxy server R not only provides a cache for the application servers A1, A2 and A3 behind it, but also provides the corresponding interface to implement the load-balancing algorithm. That is, by scanning the CPU, memory and I/O conditions of servers A1, A2 and A3, it can distribute an arriving request to the server that currently has the best performance.

Using LoadRunner 8.1, stress tests were run on the system before and after the reverse proxy was introduced. The test results are shown in Fig 8. It can be seen from Fig 8 that, at the beginning, when the number of concurrent users is not large, the performance with and without the reverse proxy is similar; but as the number of concurrent users gradually increases, the performance difference between the two becomes more and more evident. With 100 concurrent users, the response time of the system using the reverse proxy is almost twice as fast as that of the system without it.

The Web Service server of the system needs to store the note information, so when a Web Service server cluster is used, attention must be paid to the following problem: the different servers of the cluster use different JVMs, so an object in one JVM cannot be accessed directly by another JVM.
There are two methods to resolve this problem (a code sketch of the Memcache option follows at the end of this subsection):

1) Put the object in the Session, and configure the cluster to use the Session replication model.

2) Use Memcache: put the object in Memcache, and have all servers get the object from Memcache. This is equivalent to opening a public memory area that every server can access.

Furthermore, business systems frequently need to get rights data through the Web services, which puts higher requirements on system performance. Two measures have been taken to improve performance:

1) The authority data server handles requests using a "time-sharing" pattern. If the relevant data were always calculated in real time, the server's limited resources could not guarantee a timely response, and the system would slow down. The "time-sharing pattern for authority data" solves this problem: when the system data changes (for example, a new operation is authorized to a role), the system automatically determines the affected users, recalculates the relevant authority data, and saves it to a designated field of the database. When a business system requests data, only the simple action of "reading the specified data from the designated database field" is run, which greatly speeds up the system response.

2) A cache structure is designed, because relying solely on the time-sharing model is not enough to ...
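A minimal sketch of the Memcache option in 2) of the first list above is given below, using the spymemcached client as an assumed example; the host, port, key prefix and expiry handling are illustrative and not taken from the paper.

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    // Hypothetical shared note store: every server in the Web Service cluster
    // reads and writes notes through the same Memcached instance instead of
    // keeping them inside its own JVM.
    public class SharedNoteStore {

        private final MemcachedClient client;

        public SharedNoteStore(String host, int port) throws java.io.IOException {
            this.client = new MemcachedClient(new InetSocketAddress(host, port));
        }

        // Save a note with an expiry time in seconds (the note's time limit).
        public void saveNote(String note, String userId, int expirySeconds) {
            client.set("note:" + note, expirySeconds, userId);
        }

        // Returns the user bound to the note, or null if the note is unknown or expired.
        public String queryNote(String note) {
            return (String) client.get("note:" + note);
        }

        public void deleteNote(String note) {
            client.delete("note:" + note);
        }
    }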
【计算机专业文献翻译】软件测试
附录2 英文文献及其翻译

原文:Software testing

This chapter is about testing programs with the objective of establishing the presence of system defects. You will understand the following: testing techniques that are geared to discovering program faults, guidelines for interface testing, specific approaches to object-oriented testing, and the principles of CASE tool support for testing. Topics covered include defect testing, integration testing, object-oriented testing and testing workbenches.

The goal of defect testing is to discover defects in programs. A successful defect test is a test which causes a program to behave in an anomalous way. Tests show the presence, not the absence, of defects. Only exhaustive testing can show that a program is free from defects; however, exhaustive testing is impossible.

Tests should exercise a system's capabilities rather than its components. Testing old capabilities is more important than testing new capabilities. Testing typical situations is more important than testing boundary value cases.

Black-box testing is an approach to testing where the program is considered as a 'black box'. The program test cases are based on the system specification, so test planning can begin early in the software process. The tester looks for inputs causing anomalous behaviour and for outputs which reveal the presence of defects.

Equivalence partitioning. Input data and output results often fall into different classes where all members of a class are related. Each of these classes is an equivalence partition where the program behaves in an equivalent way for each class member. Test cases should be chosen from each partition.
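To illustrate the equivalence partitioning idea above, the following is a small, hypothetical Java/JUnit sketch; the component under test (a 4-6 digit identifier validator) and its partitions are an assumed example, not taken from the chapter.

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    // Hypothetical component under test: accepts identifiers of 4 to 6 digits.
    class IdentifierValidator {
        static boolean isValid(String id) {
            return id != null && id.matches("[0-9]{4,6}");
        }
    }

    // One test case is chosen from each equivalence partition, plus the
    // boundaries between partitions.
    public class IdentifierValidatorTest {

        @Test
        public void tooShortPartition() {
            assertFalse(IdentifierValidator.isValid("123"));      // fewer than 4 digits
        }

        @Test
        public void validPartitionAndBoundaries() {
            assertTrue(IdentifierValidator.isValid("1234"));      // lower boundary
            assertTrue(IdentifierValidator.isValid("12345"));     // interior value
            assertTrue(IdentifierValidator.isValid("123456"));    // upper boundary
        }

        @Test
        public void tooLongPartition() {
            assertFalse(IdentifierValidator.isValid("1234567"));  // more than 6 digits
        }

        @Test
        public void nonNumericPartition() {
            assertFalse(IdentifierValidator.isValid("12a45"));    // wrong character class
        }
    }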
Structural testing. Sometimes it is called white-box testing. Test cases are derived according to the program structure, and knowledge of the program is used to identify additional test cases. The objective is to exercise all program statements, not all path combinations.

Path testing. The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once. The starting point for path testing is a program flow graph, whose nodes represent program decisions and whose arcs represent the flow of control. Statements with conditions are therefore nodes in the flow graph. The flow graph describes the program control flow: each branch is shown as a separate path, and loops are shown by arrows looping back to the loop condition node. The flow graph is used as a basis for computing the cyclomatic complexity:

Cyclomatic complexity = Number of edges - Number of nodes + 2

The number of tests needed to test all control statements equals the cyclomatic complexity, and the cyclomatic complexity equals the number of conditions in a program. The measure is useful if used with care, but it does not imply adequacy of testing: although all paths are executed, all combinations of paths are not executed. Test cases should be derived so that all of these paths are executed, and a dynamic program analyser may be used to check that the paths have been executed.

Integration testing. Integration testing tests complete systems or subsystems composed of integrated components. It should be black-box testing, with tests derived from the specification. The main difficulty is localising errors; incremental integration testing reduces this problem. Testing approaches: architectural validation — top-down integration testing is better at discovering errors in the system architecture; system demonstration — top-down integration testing allows a limited demonstration at an early stage in the development; test observation — both approaches have problems, and extra code may be required to observe tests.

Interface testing takes place when modules or sub-systems are integrated to create larger systems. The objective is to detect faults due to interface errors or invalid assumptions about interfaces. It is particularly important for object-oriented development, as objects are defined by their interfaces. Interface misuse: a calling component calls another component and makes an error in its use of its interface, e.g. parameters in the wrong order. Interface misunderstanding: a calling component embeds incorrect assumptions about the behaviour of the called component. Timing errors: the called and the calling component operate at different speeds and out-of-date information is accessed.

Interface testing guidelines: design tests so that parameters to a called procedure are at the extreme ends of their ranges; always test pointer parameters with null pointers; design tests which cause the component to fail; use stress testing in message passing systems; in shared memory systems, vary the order in which components are activated.

Stress testing exercises the system beyond its maximum design load. Stressing the system often causes defects to come to light, and it tests the system's failure behaviour: systems should not fail catastrophically, so stress testing checks for unacceptable loss of service or data. It is particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.

Object-oriented testing. The components to be tested are object classes that are instantiated as objects. They are of larger grain than individual functions, so approaches to white-box testing have to be extended, and there is no obvious 'top' to the system for top-down integration and testing. Testing levels: testing operations associated with objects, testing object classes, testing clusters of cooperating objects, and testing the complete OO system.

Object class testing. Complete test coverage of a class involves testing all operations associated with an object, setting and interrogating all object attributes, and exercising the object in all possible states. Inheritance makes it more difficult to design object class tests, as the information to be tested is not localized. For the weather station object interface, test cases are needed for all operations; use a state model to identify state transitions for testing.

Object integration. Levels of integration are less distinct in object-oriented systems. Cluster testing is concerned with integrating and testing clusters of cooperating objects. Clusters are identified using knowledge of the operation of objects and of the system features that are implemented by these clusters. Use-case or scenario-based testing is based on user interactions with the system; it has the advantage that it tests system features as experienced by users. Thread testing tests the system's response to events as processing threads through the system. Object interaction testing tests sequences of object interactions that stop when an object operation does not call on services from another object.

Scenario-based testing. Identify scenarios from use-cases and supplement these with interaction diagrams that show the objects involved in the scenario. Consider the scenario in the weather station system where a report is generated: input of a report request with an associated acknowledgement and a final output of a report. This can be tested by creating raw data and ensuring that it is summarised properly; the same raw data is used to test the WeatherData object.
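As a sketch of the scenario-based test just described (generating a report from raw weather data), the following hypothetical JUnit test illustrates the idea; the WeatherStation and WeatherData classes and their methods are assumptions for illustration, not the chapter's actual design.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical classes standing in for the weather station example.
    class WeatherData {
        private double sum = 0;
        private int count = 0;
        void record(double temperature) { sum += temperature; count++; }
        double averageTemperature() { return count == 0 ? 0 : sum / count; }
    }

    class WeatherStation {
        private final WeatherData data = new WeatherData();
        void collect(double temperature) { data.record(temperature); }
        // Scenario: a report request produces a summary of the raw data.
        String report() { return "avg=" + data.averageTemperature(); }
    }

    public class WeatherReportScenarioTest {

        @Test
        public void reportSummarisesRawDataCorrectly() {
            WeatherStation station = new WeatherStation();
            double[] rawTemperatures = {10.0, 12.0, 14.0};
            for (double t : rawTemperatures) {
                station.collect(t);
            }
            // The same raw data is used to check both the report and WeatherData.
            assertEquals("avg=12.0", station.report());
        }
    }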
Testing workbenches. Testing is an expensive process phase. Testing workbenches provide a range of tools to reduce the time required and the total testing costs. Most testing workbenches are open systems, because testing needs are organisation-specific; they are difficult to integrate with closed design and analysis workbenches. Testing workbench adaptation: scripts may be developed for user interface simulators, and patterns for test data generators; test outputs may have to be prepared manually for comparison; special-purpose file comparators may be developed.

Key points: Test the parts of a system which are commonly used rather than those which are rarely executed. Equivalence partitions are sets of test cases where the program should behave in an equivalent way. Black-box testing is based on the system specification. Structural testing identifies test cases which cause all paths through the program to be executed. Test coverage measures ensure that all statements have been executed at least once. Interface defects arise because of specification misreading, misunderstanding, errors or invalid timing assumptions. To test object classes, test all operations, attributes and states. Integrate object-oriented systems around clusters of objects.

译文:软件测试 本章的目标是介绍通过测试程序发现程序中的缺陷的相关技术。
测试操作软件中英文对照
目录:
1 Satcon电源逆变系统操作主界面
2 写入界面/Writer
2.1 控制/Control
2.2 测试/Test
2.3 功能/Feature
2.4 光伏/PV
2.5 组件/Components
2.6 额定值/Ratings
2.7 直流电压保护/Vdc Prot
2.8 直流电流保护/Idc Prot
2.9 交流电压保护/Vac Prot
2.10 交流电压保护2/Vac Prot 2
2.11 交流电流保护/Iac Prot
2.12 交流电流保护2/Iac2 Prot
2.13 接地保护/Gnd Prot
2.14 接地保护2/Gnd Prot 2
2.15 频率保护/Frequency Prot
2.16 寄存器/Regs
2.17 寄存器2/Regs 2
2.18 电压比例/V Scaling
2.19 电流比例/I Scaling
2.20 其他比例/Misc. Scaling
2.21 校准/Calibrate
2.22 温度/Thermal
2.23 温度2/Thermal 2
2.24 Modbus通讯/Modbus
2.25 系列号/Serial
2.26 外部控制/Ext Control
3 读取界面/Reader
3.1 测量/Meters
3.2 相/Phases
3.3 每个机器/Per Unit
3.4 电流调节/I Regs
3.5 电压调节/V Regs
3.6 软件/Software
3.7 计时器/Timers
3.8 温度/Thermal
3.9 直流输入路数1/String 1
3.10 直流输入路数2/String 2
3.11 直流输入每路千瓦时1/String kWHr 1
3.12 直流输入每路千瓦时2/String kWHr 2
3.13 数字信号输入/DIN
3.14 数字信号输出/DOUT
3.15 模拟信号输入/AIN
4 点阵显示/Bit Show
4.1 状态/Status
4.2 现场可编程门阵列/FPGA
4.3 输入/输出/INPUTS/OUTPUTS
4.4 故障1-3/Fault 1-3
4.5 故障4-6/Fault 4-6
4.6 故障7/Fault 7
4.7 比例/CAL
4.8 点阵显示8/Bit Show 8
-------------------------------------------------------------------------------------------------------------- 311 Satcon 电源逆变系统操作主界面简写注释:PCS : power conditioning system 电源逆变系统 PLL :phase locked loop 锁相环路 volts : voltages 做名称是电压 做单位是伏特 amps : amperes 安培 rdy : ready 准备 pwr :power 功率操作Satcon 电源逆变系统操作软件点阵显示写入界面读取界面其他窗口停机 重启 启动 停止 转到本地控制 转到远程控制 唤醒待机 直流输入电压 直流连接电流 没有故障 锁相环路失效 输入功率锁相环路没锁定输出电流输出视在功率输出有功功率功率因素直流输入没准备好输出电网端没准备好 没准备好CR1开路CR2开路 无功率输出门极关闭AC 断路器断开PI (Kp ,Ti ): P=proportional 比例 对应参数为Kp , I=intergral 积分 对应参数为Ti DSP :Digital Signal Processor 数字信号处理器停机 重启 启动 停止连接电源逆变系统 断开电源逆变系统 电源逆变系统通讯设置用户级别服务端级别系统级别 退出软件比例积分调节(Kp ,Ti ) 设置与DSP 通讯的参数从DSP 上传设置到电脑从电脑下载设置到DSP2 写入界面/ writer 2.1 控制/Controlcmd : command 指令,控制 2.2 测试/Test功率控制模式指令单位详细步骤保存运行使能无功功率控制模式 有功功率控制 功率因素控制无功功率控制 直流电压控制最大dq 电流 出厂默认100%有功功率升降速度无功功率升降速度保存所有设置到电脑和DSP 从电脑读取设置 从DSP 读取设置 清除设置测试模式选择闭合输出门极测试_选择 外部输入掩码开环测试中电压控制 闭环测试中电压控制风扇转速测试有功电流控制cct : circuit 电路open cct test 开环测试 short cct test 闭环测试 ext :external 外部 2.3 功能/featurebw :bandwidth 带宽estop :emergency stop 紧急停止 2.4 光伏电压/PV急停_重置自动重置间隔时间锁相环路带宽自动重置最大尝试次数自动重置锁死时间输出接触器断开延时 保存参数 初始化指令重新接入电网延时min :minimum 最小值,最小 PV :photovoltage 光电压 光伏mppt :maximum power point tracking 最大功率点跟踪 2.5 组件/components直流输入门槛电压 超过门槛电压后直流输入延时低功率限定 低功率延时预充电电压最小值 功率最小改变值直流电压操作每步间隔时间 功率改变延时Mppt 中直流电压改变最小值 Mppt 中直流电压改变最大值风扇转速可(1)否(0)调节 逆变桥数IGBT 开关频率死区时间输出滤波电感值输出滤波电容大小有(0)无(1)变压器三角型电压反馈1/星型接0直流输入路数freq:frequency 频率cap:capacity电容容量num:number 数量2.6 额定值/ratings额定输出功率额定输出频率额定输出电压额定逆变器输出电压欧洲机型CE 1,UL 0 变压器电压抽头比例语言接地类型:1接地,0 浮地inv:inverter 逆变器这里指逆变模块IGBTIGBT:Insulated Gate Bipolar Transistor 绝缘栅双极型晶体管euro:europe 欧洲2.7 直流电压保护/Vdc Protvolt :voltage 电压inst :instantaneous 瞬间,即刻的 Prot :protection 保护 2.8 直流电流保护/Idc Prot直流输入电压过压保护直流输入过压保护延时直流输入电压欠压保护门限直流输入欠压保护延时直流过压保护门限 直流过压保护延时直流欠压保护门限直流欠压保护延时直流过压瞬间保护门限直流欠压瞬间保护下限直流输入过流保护门限 直流输入过流保护延时直流输入瞬间过流保护门限2.9 交流电压保护/Vac Prot2.10 交流电压保护2/Vac Prot 2输出电压瞬间过压保护门限输出电压过压快速保护门限 输出电压过压快速保护延时输出过压慢速保护门限输出过压慢速保护延时输出欠压快速保护下限 输出欠压快速保护延时输出欠压慢速保护下限 输出欠压慢速保护延时输出电压不平衡保护门限 输出电压不平衡保护延时2.11 交流电流保护/Iac Prot2.12 交流电流保护2/Iav2 ProtUL :Underwriters Laboratories 美国保险商实验室 说这个机器是UL 的是说这个机器要符合UL 认证标准 同理CE 的是欧盟标准输出过流保护门限 输出过流保护延时中线过流保护门限 中线过流保护延时电流不平衡保护门限电流不平衡保护延时IGBT 过流保护门限 IGBT 过流保护延时UL 型号触发保护最小输出功率UL 型号触发保护的电流不平衡百分比值IGBT 硬件过流(短路)保护门限2.13 对地保护/Gnd ProtGFDI :2.14 对地保护2/Gnd Prot 2漏电流快速保护门限漏电流快速保护延时漏电流慢速保护门限 漏电流慢速保护延时接地直流保险保护动作电压 接地直流保险保护动作延时对地阻抗保护门限阻值 对地阻抗保护触发延时接地漏电流保护门限电流接地漏电流保护触发延时浮地阻抗测量传感器失效判定门限值Gnd:ground 地,接地Max:maximum 最大值2.15 频率保护/Frequency Prot超频保护门限超频保护延时欠频保护门限低频保护延时频率瞬间变动保护限定频率瞬间变动保护时间限定lmt:limit 限制限定tmlmt:time limit 时间限定2.16 寄存器/Regs(一般不用动)直流输入电压调节器增益直流输入电压调节器时间常量输出线电流调节器增益输出线电流调节器时间常量抗孤岛效应增益系数直流电流正反馈系数IGBT电流调节器增益IGBT电流调节器时间常量Reg:regulator 调节器应该指的是运放Cur:current 电流2.17 调节2/Regs 2(一般不用动)temp :temperature 温度 2.18 电压比例/V Scalingfdbk :feedback 反馈 2.19 电流比例/I Scaling功率调节器比例增益 功率调节器积分增益散热片温度调节器比例增益 散热片温度调节器积分增益散热片过温调节器比例增益 散热片温度调节器积分增益直流输入电压反馈串联电阻 IGBT 电压反馈串联电阻 输出电压反馈串联电阻IGBT 电压反馈比率输出电压反馈比率输出电压反馈并联电阻IGBT 电压反馈并联电阻直流输入电压反馈并联电阻2.20 其他比例/Misc.ScalingMisc.: miscellaneous 其他,各式各样 2.21 校准/Calibrate接地电流LEM 反馈比率 接地电流LEM 反馈负载电阻直流电流LEM 反馈比直流电流反馈负载电阻IGBT 电流LEM 反馈比IGBT 电流反馈负载电阻输出电流CT 反馈比率 中线电流CT 反馈比率 输出电流反馈负载电阻中线电流反馈负载电阻温度反馈串联电阻 温度反馈并联电阻直流输入路数电流LEM 反馈比率 直流输入路数电流反馈负载电阻Vdgim 浮地阻抗取样反馈串联电阻 Vdgim 反馈并联电阻接地保险电压取样串联电阻 接地保险电压取样并联电阻第二浮地阻抗取样反馈串联电阻 第二浮地阻抗取样反馈并联电阻2.22 温度/Thermal2.23 温度2/Thermal 2校准反馈 实际测量到的IGBT 的电压 实际测量到的直流输入电压 实际测量到的A 相电压实际测量到的B 相电压实际测量到的C 相电压温度过低保护门限风扇空气温度过低保护门限 散热片温度过低保护门限 风扇停止温度设定温度反馈路数温度过高保护门限风扇空气温度过高保护门限散热片温度过高保护门限风扇运转温度设定风扇最小转速PWR: power 功率FANON:fan on 风扇运转2.24 Modbus通讯/Modbus 2.25 系列号/serial 
100%功率下风扇运转温度5%功率下风扇运转温度Modbus传输波特率Modbus访问代码Modbus数据位访问代码1 Modbus奇偶校验Modbus停止位从站地址nv: nonvolatile 非易失性 chksum :checksum 校验和 prgm: program 程序 2.26 外部控制/Ext control系列号1输出频率跟踪 允许非易失性写入F206 校验和保存 程序校验和保存模拟功率因素控制1:外部继电器和4-20ma 功率因素控制 0:禁止外部控制最大功率因数范围功率控制开关1 功率控制开关2功率控制开关3功率控制开关4Ext: external 外部的 PF: power factor 功率因素 rng:range 范围 SW: switch 开关3 读取界面(只读参数)3.1 测量/meters3.2 相/Phases直流输入电压 直流连接电压 直流连接电流 输入功率IGBT 电压 IGBT 电流 输出线电压 输出线电流输出有功功率 输出无功功率 总兆瓦小时 输出视在功率 功率因素总千瓦小时 总瓦小时 直流输入准备时间中线电流 直流对地电流 对地阻抗 交流侧准备时间操作状态 电脑指令 远程指令 接地保险电压IGBT a相电流IGBT b相电流IGBT c相电流风扇转速2组IGBT a相电流2组IGBT b相电流2组IGBT c相电流输出线电流a相输出线电流c相输出线电流b相电流不平衡输出电压a相输出电压b相输出电压c相电压不平衡ab相之间输出电压bc相之间输出电压ca相之间输出电压锁相环路错误锁相环路频率3.3 每个机器/Per Unit平均直流输入电压平均直流电压平均直流电流平均IGBT电压平均IGBT电流d轴平均IGBT电流q轴平均IGBT电流平均输出电压平均输出电流d轴平均输出电流q轴平均输出电流平均输入功率平均输出功率平均无功功率平均视在功率平均输出频率平均频率错误平均中线电流功率因素d轴平均IGBT电压q轴平均IGBT电压简写注释avg: average 平均值err :error 错误3.4 电流调节/I Regs3.5 电压调节/V RegsRef :reference 参考 3.6 软件/Softwared 轴输出电流指令 d 轴输出电流错误 d 轴输出电流反馈 d 轴d 轴输出维持电流 d 轴电压控制 q 轴输出电流指令 q 轴输出电流指令q 轴输出电流错误 q 轴 q 轴输出维持电流 q 轴电压控制d 轴IGBT 电流错误 d 轴IGBT 平均电流 d 轴IGBT 电流错误 d 轴IGBTd 轴IGBT 维持电流 d 轴 q 轴IGBT 控制 q 轴IGBT 平均电流q 轴IGBT 电流错误 q 轴IGBT q 轴IGBT 维持电流 q 轴参考电压 反馈电压 错误电压 维持电压瞬间电压 直流参考电流d 轴输出电压控制q 轴输出电压控制 平均输入功率平均输出功率直流电压最低值 直流电压最高值REV :revision 版本 param :parameter 参数 calc :calculation 计算 ini :initial 初始化3.7 计时器/Timersecs: seconds 秒 cntr: control 控制FPGA 版本 F206版本 F240版本程序校验和计算 206程序校验和计算 参数校验和计算 数据校验和计算程序校验和保F206校验和保存参数校验和读取数据校验和读取最大dq 电流 仪表校验和保存 参数校验和保存d 轴电流升降速度控制 数据库初始化OK1数据库初始化OK2千瓦时控制AC 侧准备时间 直流输入准备时间 时间控制 尝试次数锁定标记 校验和错误 故障总和计时器1 风扇重置计时器 串行通讯接口校验和错误错误5 PWM 载波计数Modbus 超时 0功率时间限制串行通讯接口超时sci :Serial Communications Interface 串行通讯接口 pwm :Pulse Width Modulation 脉宽调制 3.8 温度/Thermalcab :cabinet 箱,橱柜3.9 输入路数1/Strings 13.10 输入路数2/Strings 2风扇室空气温度 控制箱温度 散热片温度1风扇空气温度信号电压控制箱温度信号电压散热片温度信号电压1散热片最大温度 风扇转速 温度限定直流输入第一路电流直流输入所有路数平均电流直流输入第17路电流直流输入所有路数平均电流3.11路数千瓦小时1/String kWHr 1第一路千瓦时3.12 路数千瓦小时2/String kWHr 23.13 数字信号输入/DINDIN :digital signal in 数字信号输入 DS :disconnector 断路器 CR :contactor 接触器 STAT :status 状态LAC :AC reactor 交流电抗器 TX: transformer 变压器 CB :AC breaker 交流断路器 TSW :temperature switch 温度开关 SURGE: surge suppressor 浪涌抑制 3.14数字信号输出/DOUT直流断路器 门开关或急停 直流接触器输入 接地保护监测风扇2状态 电抗器1温度开关 变压器温度开关 1组IGBT 保险2组IGBT 保险 交流接触器 电抗器2温度开关 交流断路器风扇状态 急停 1组逆变模块温度开关 1组逆变模块温度开关直流浪涌抑制器交流浪涌抑制器 操作开关选择 风扇1温度开关风扇2温度开关数字信号输入功率 数字信号输入功率因素DOUT: digital signal out 数字信号输出 3.15 模拟信号输入/AINAIN: analog signal in 模拟信号输入 NEUT: neutral 中线4 点阵显示界面4.1状态/Status数字信号输出1A 相输出电压模拟信号 A 相输出电流模拟信号IGBT 输入电压模拟信号 A 相输入电流模拟信号对地漏电流模拟信号 中线电流模拟信号 直流侧火线电流模拟信号直流输入电压浮地阻抗电压直流连接电压 路数1路数2 温度测量电压测试10:没有初始化1 :锁相环路被禁止2 :锁相环路没锁定3:正序(逆变发电)4 :输出端没有准备好5 :直流输入没有准备好6:没有准备好7:没有故障8:没有停机9 :没有运行10 :无功率输出11 :没有接入电网12:门极信号测试关闭13 :开环测试模式关闭14 :闭环测试模式关闭15:无参数4.2 现场可编程门阵列/FPGA0:IGBT光纤驱动信号反馈11:IGBT光纤驱动信号反馈22:IGBT光纤驱动信号反馈33:IGBT光纤驱动信号反馈4 4 :IGBT光纤驱动信号反馈5 5:IGBT光纤驱动信号反馈620:直流电源OK 21:硬件过流1 OK 22:硬件过流2 OK23:24:IGBT 光纤驱动信号A25:IGBT光纤驱动信号B 26:IGBT光纤驱动信号C 27:IGBT光纤驱动信号A2 28:IGBT光纤驱动信号B2 29:IGBT光纤驱动信号C232:重启33:门极信号允许34:门极测试35:运行指示灯36:故障指示灯37:38:逆变模块2错误掩码39:逆变器模块2允许简写注释:HW: hardware 硬件OC:over current 过流en: enable 使能flt: fault 故障,错误msk:mask 掩码4.3 输入/输出/INPUTS/OUTPUTS0:直流断路器断开1:门打开2: 直流接触器断开3:GFDI接地错误4:风扇2 OK 5:电抗器温度过高6:隔离变压器温度过高7:逆变模块1烧保险8:逆变模块2烧保险9:交流接触器断开10:电抗器2温度过高11:电路切断器12:风扇1 OK 13:急停14:逆变模块1温度过高15:逆变模块2温度过高16:直流浪涌抑制器17:交流浪涌抑制器18:操作选择开关19:风扇1温度开关20:风扇2温度开关32:门极信号复位指令33:断开交流接触器指令34:断开预充电回路指令35:断开直流接触器指令36:没有发电37:故障38:切断电路保护指令39:GFDI复位信号40:风扇继电器41:风扇2脉宽调制42:门极信号关闭43:辅助多路传输144:辅助多路传输245:多路模拟开关3 46:多路模拟开关4 47:风扇1脉宽调制简写注释:xfmr:transformer 变压器brbk:breaker 断路器SS:surge suppressor 浪涌抑制器Prechg:precharge 
预充电Amux:auxiliary multiplex 多路复用4.4 故障1-3/Fault 1-30:直流输入没准备好1:联网输出端没有准备好2:停止指令3:停机指令4:急停5:供电电源太低被停止6:电流太小被停止7:保护延时故障8:门被打开9:直流断路器断开10:电路切断器断开11:DPCB板故障12:硬件故障13:IGBT故障14:温度故障16:直流输入过压17:直流输入欠压18:直流过压19:直流欠压20:直流输入接地故障21:联网输出端慢速过压22:联网输出端快速过压23:联网输出端慢速欠压24:联网输出端快速欠压25:电压不平衡26:频率过高27:频率慢速过低28:频率快速过低29:中线过流30:联网输出端瞬间过压32:程序校验和33:fpga版本34:数据拷贝135:数据拷贝2 36:参数A拷贝1 37:参数A拷贝2 38:参数A拷贝239:参数B拷贝240:电压反馈比例41:电流反馈比例42:IGBT电流反馈43:额定值改变44:F206版本45:f206校验和46:非易失性ram故障47 :fpga故障简写注释:OV:over voltage 过压UV: under voltage 欠压4.5 故障4-6/Fault 4-60:+5V隔离电源1: +5V DPCB板电源2:+15V DPCB板电源3:-15V DPCB板电源4:看门狗5:浪涌抑制器6:1组逆变模块保险7:2组逆变模块保险8:1组逆变模块温度9:2组逆变模块温度10:变压器温度11:电抗器温度12:预充电故障13:测试模式14:开环测试15:闭环测试16:A相门极信号反馈17:B相门极信号反馈18:C相门极信号反馈19:A2相门极信号反馈20:B2相门极信号反馈21:C2相门极信号反馈22:直流输入过流23:直流输入瞬间过流24:直流瞬间欠压25:直流瞬间过压26:IGBT软件过流27:1组IGBT硬件过流28:2组IGBT硬件过流29:联网输出端过流30:电流不平衡32:控制箱温度过高33:风扇室空气温度过高34:散热片1温度过高35:散热片2温度过高36:散热片3温度过高37:散热片4温度过高38:散热片5温度过高39:散热片6温度过高40:控制箱温度过低41:风扇空气温度过低42:散热片1温度过低43:散热片2温度过低44:散热片3温度过低45:散热片4温度过低46:散热片5温度过低47:散热片6温度过低简写注释:iso:insolated power 隔离电源DPCB:digital power control board 数字电源控制板SW: software软件hi:high 高htsnk:heatsink散热片lo:low 低4.6 故障7/Fault 70:风扇1故障1:风扇2故障2:直流接触器没有断开3:直流接触器没有闭合4:交流接触器没有断开5:交流接触器没有闭合7:8:4.7 比例/CAL0:A相输出感应电压和实际测量电压相差过大1:IGBT电压检测相位错误2:输出电流检测CT相位错误3:IGBT电流检测LEM相位错误9:B相输出感应电压和实际测量电压相差过大10:C相输出感应电压和实际测量电压相差过大11:IGBT感应电压和实际测量电压相差过大12:直流感应电压和实际测量电压相差过大13:直流输入感应电压和实际测量电压相差过大4.8 点阵显示8/Bit Show 80:功率因素0 1:功率因素12:功率因素2 3:功率因素34:功率控制指令0 5:功率控制指令1 6:功率控制指令2 7:功率控制指令3简写注释:PF: power factor 功率因素Pcmd:power command 功率控制。
关于软件测试的外国文献
软件测试是软件开发过程中至关重要的一环,而外国文献中关于软件测试的研究和实践也非常丰富。
下面我将从不同角度介绍一些相关的外国文献,以便更全面地了解软件测试的最新发展。
1. "Software Testing Techniques" by Boris Beizer:这本经典著作详细介绍了软件测试的各种技术和方法,包括黑盒测试、白盒测试、基于模型的测试等。
它提供了许多实用的指导和案例,对软件测试的理论和实践都有很深入的探讨。
2. "Testing Computer Software" by Cem Kaner, Jack Falk, and Hung Q. Nguyen:这本书介绍了软件测试的基础知识和常用技术,包括测试计划的编写、测试用例设计、缺陷管理等。
它强调了测试的全过程管理和质量保证,对于软件测试初学者来说是一本很好的入门指南。
3. "The Art of Software Testing" by Glenford J. Myers, Corey Sandler, and Tom Badgett:这本书从理论和实践的角度探讨了软件测试的艺术。
它介绍了测试的基本原则和策略,以及如何设计有效的测试用例和评估测试覆盖率。
这本书对于提高测试人员的思维和技巧非常有帮助。
4. "Foundations of Software Testing" by Aditya P. Mathur:这本书系统地介绍了软件测试的基本概念、技术和方法。
它涵盖了测试过程的各个阶段,包括需求分析、测试设计、执行和评估。
这本书还提供了丰富的案例和练习,帮助读者深入理解和应用软件测试的原理和技术。
5. "Software Testing: Principles and Practices" by Srinivasan Desikan and Gopalaswamy Ramesh:这本书介绍了软件测试的原则、实践和工具。
软件系统开发中英文对照外文翻译文献
软件系统开发中英文对照外文翻译文献(文档含英文原文和中文翻译) 软件工程中的过程处理模型 沃尔特·斯卡基 摘要 软件系统从起初的开发、维护,再到从一个版本升级到另一个版本,经历了一系列阶段。
这篇文章归纳和整理了一些描述如何开发软件系统的方法。
从传统的软件生命周期的背景和定义出发,即大多数教科书所讨论的,并且目前的软件开发实践所遵循的软件生命周期,接着讨论作为目前软件工程技术基石的更全面的软件开发模型。
关键词:软件生命周期;模型;原型1 前言软件业的发展最早可追溯到开发大型软件项目的显式模型,那是在二十世纪五十年代和六十年代间。
总体而言,这些早期的软件生命周期模型的唯一目的就是提供一个合理的概念计划来管理软件系统的开发。
因此,这种计划可以作为一个基础规划,组织,人员配备,协调,预算编制,并指导软件开发活动。
自20世纪60年代,出现了许多经典的软件生命周期的描述(例如,霍西尔1961年,罗伊斯1970年,博伊姆1976年,迪斯塔索1980年,斯卡基1984年,萨默维尔1999年)。
罗伊斯(1970)使用现在生活中熟悉的“瀑布”图表,提出了周期的概念,这个图表概括了开发大型软件系统是多么的困难,因为它涉及复杂的工程任务,而这些任务在完成之前可能需要不断地返工。
这些图表也通常在介绍性发言中被采用,主要针对开发大型软件系统的人们(例如,定制软件的客户),他们可能不熟悉各种各样的技术问题但还是要必须解决这些问题。
这些经典的软件生命周期模型通常包括以下一些活动内容:● 系统启动/规划:系统从何而来?在大多数情况下,不论现有的信息处理机制以前是自动的、手工的,还是非正式的,新系统都会取代或补充它们。
● 需求分析和说明书:阐述一个新的软件系统将要开发的问题:其业务能力,其所达到的性能特点,支持系统运行和维护所需的条件。
● 功能或原型说明:潜在确定计算的对象,它们的属性和关系,改变这些对象的操作,约束系统行为的限制等。
●划分与选择:给出需求和功能说明书,将系统分为可管理的模块,它们是逻辑子系统的标志,然后确定是否有对应于这些模块的新的,现有的,或可重复使用的软件系统可以复用。
软件开发中英文对照外文翻译文献
软件开发中英文对照外文翻译文献(文档含英文原文和中文翻译)译文:基于UG二次开发的电大型复杂腔体仿真软件开发 摘要---基于UG(Unigraphics)的二次开发实现了射击与弹跳射线(SBR)仿真软件。
射线跟踪的核心算法基于UG内置的经过优化的非均匀有理B样条(NURBS)曲面求交算法,因此无需网格剖分即可获得非常高的射线路径跟踪精度,从而保持了原有腔体模型的精度。
该方法对任意复杂的腔体同样有效,因为即使是遮挡(屏蔽)判断过程也能正常进行。
腔体的几何建模与其散射仿真被集成到一个统一的平台中,形成了一个易用、综合、通用的复杂腔体电磁建模环境。
本文利用所开发的软件对复杂腔体的散射进行建模,并给出了一些数值结果以展示其准确性和效率。关键词--电大型复杂腔体; 雷达截面; UG的二次开发; 射击和弹跳射线(SBR); 射线跟踪 I.介绍 电大型复杂腔体(如进气道或排气道、二面角或三面角反射器等)的雷达截面(RCS)分析,是计算电磁学中最重要的课题之一。
对于电大型复杂的腔体结构,只有基于高频方法(如射击和弹跳射线(SBR)[1][2][3])的手段才是合适的。
传统上,采用SBR分为三个步骤:首先,用CAD软件对腔体建模并对内壁表面进行网格剖分,然后导出网格结果信息;其次,通过射线与表面求交(ray-surface intersection)和遮挡计算找到表面上射线的反射点;最后,由腔体出射的射线计算RCS。
虽然这种基于网格的射线跟踪在理论上可用于任意形状的腔体,但它存在在复杂腔体中射线路径建立不准确的缺点,导致RCS计算精度较差。
对于电大型复杂腔体,由于腔体建模与RCS计算相分离、仿真过程复杂,射线跟踪的效率很低。
为了解决这些问题,需要一个强大的CAD软件,在同一平台上对电大型复杂腔体进行建模并计算其RCS。
开发的软件具有以下优势: 1)腔体建模和RCS计算集成在UG中,因此仿真过程大大简化。
2)无需进行表面网格剖分,即可在任意形状的腔体中以高精度和高效率进行射线跟踪。
3)开发的软件对腔体和角反射器等凹形反射结构的电磁散射具有通用性。
下面将讨论基于UG二次开发的新软件中新颖的射线跟踪方法,以及RCS仿真结果。
GPS水准测量外文翻译文献
GPS水准测量外文翻译文献(文档含中英文对照即英文原文和中文翻译)

Analyzing the Deformations of a Bridge Using GPS and Leveling Data

Abstract. The aim of this study is analyzing the 1D (vertical) and 3D movements of a highway viaduct, which crosses over a lake, using GPS and leveling measurement data separately as well as their combination. The data are acquired from measurement campaigns, which include GPS sessions and precise leveling measurements, performed at six-month intervals within two years.

In the 1D analysis of the (vertical) deformations, the height differences derived from GPS data and from leveling data were evaluated. While combining the two height-difference sets (GPS derived and leveling derived) in the third stage of the 1D analysis, Variance Component Estimation (VCE) techniques according to Helmert's approach and Rao's Minimum Norm Quadratic Unbiased Estimation (MINQUE) approach have been used. In the 3D analysis of the deformations with only GPS data, the classical S-transformation method was employed.

The theoretical aspects of each method used in the data analyses of this study are summarized. The analysis results of the deformation inspections of the highway viaduct are discussed and, from the results, an optimal way of combining GPS data and leveling data to provide reliable inputs to deformation investigations is investigated.

Keywords. GPS, Leveling, Deformation Analysis, Variance Component Estimation, S-Transformation

1 Introduction

It is of considerable importance that the movements of an engineering structure stay within certain limits for the safety of the community depending on it. To determine whether an engineering structure is safe to use or not, its movements are monitored and possible deformations are detected from the analysis of observations. An appropriate observation technique, which can be geodetic or non-geodetic (geotechnical-structural) according to the classification in Chrzanowski and Chrzanowski (1995), is chosen considering the physical conditions of the observed structure (its shape, size, location and so on), environmental conditions (the geologic properties of the ground it is founded on, tectonic activities of the region, common atmospheric phenomena around the structure and so on), the type of monitoring (continuous or static) and the measuring accuracy required to be able to recognize significant movements. Until the beginning of the 1980's, conventional measurement techniques were used for detecting the deformations of large engineering structures. After that, the advances in space technologies and their geodetic applications provided impetus for their use in deformation measurements (Erol and Ayan (2003)). The GPS positioning technique has the big benefit of high-accuracy 3D positioning; however, the vertical position is the least accurately determined component due to the inherent geometric weakness of the system and atmospheric errors (Featherstone et al. (1998)).

Therefore, using the GPS measurement technique in deformation measurements at millimeter-level accuracy requires some special precautions, such as using forced-centering equipment, applying special measuring techniques like the rapid static method for short baselines, or designing special equipment for precise antenna height readings (see Erol and Ayan (2003) for their use in practice).
In some cases, even these special precautions remain insufficient and hence the GPS measurements need to be combined with another measurement technique to improve their accuracy in the height component.

In the geodetic evaluation of deformations, static observations obtained by terrestrial and/or GPS techniques are subject to a two-epoch analysis. The two-epoch analysis basically consists of independent Least Squares Estimation (LSE) of the single epochs and geometrical detection of deformations between epochs. Detailed explanations of the methods based on this fundamental idea are found in Niemeier et al. (1982), Chen (1983), Gründig et al. (1985), Fraser and Gründig (1985), Chrzanowski and Chen (1986), Caspary (1987), Cooper (1987), Biacs (1989), Teskey and Biacs (1990), Chrzanowski et al. (1991).

Here, the aim is analyzing the 1D and 3D deformations of an engineering structure using GPS and leveling measurement data. During the 1D deformation analysis, three different approaches were performed separately. In the first and second approaches, height differences from precise leveling measurements and from GPS measurements respectively were input into the analysis algorithm. In the third approach the combination of the height differences from both techniques was evaluated for vertical deformation. While combining the two measurement sets, Helmert Variance Component Estimation (HVCE) and Minimum Norm Quadratic Unbiased Estimation (MINQUE) techniques were used. The 3D deformation analysis with only GPS measurements was accomplished using the S-transformation technique. The theories behind the deformation analysis and variance component estimation methods used are summarized in the following. Thereafter the optimal solution for combining the GPS and precise leveling data, to improve the GPS derived heights and hence provide reliable inputs for the deformation investigations, is discussed.

The highway viaduct whose deformations were inspected in this study is 2160 meters long and crosses over a lake on 110 piers. It is located in an active tectonic region very close to the North Anatolian Fault (NAF). With the aim of monitoring its deformations, four measurement campaigns including GPS sessions and precise leveling measurements were carried out at six-month intervals. The session plans were prepared appropriately for each campaign on a pre-positioned deformation network.

2 Deformation Analysis Using Height Differences

In general, the classical (geometrical) deformation analysis is evaluated in three steps in a geodetic network. In the first step, the observations, which were recorded at epoch t1 and epoch t2, are adjusted separately according to the free network adjustment approach. During the computations, all point heights are assumed to be subject to change and the same approximate point heights are used in the adjustment computation of each epoch. The computations are repeated until all outliers are eliminated.

In the second step, a global test procedure is applied to check the stability assumptions of the network points during the interval. In the global test, a combined free adjustment is applied to both epoch measurements. In this adjustment computation, the partial-trace minimum solution is applied on the stable points (see Erol and Ayan (2003)).

(1); (2); (3)

where the symbols denote the degrees of freedom after the first, second and third adjustment computations respectively. Equation (1) and equation (2) represent the free adjustment computations of the first and second epochs, and equation (3) describes the combined free adjustment.
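Purely as a sketch of what the quantities referred to in equations (1)–(4) commonly look like in a two-epoch height analysis (standard notation is assumed here, not necessarily the paper's own symbols: $\mathbf{v}_i$, $\mathbf{P}_i$, $f_i$ are the residuals, weights and degrees of freedom of the single-epoch adjustments and $\mathbf{v}$, $\mathbf{P}$, $f$ those of the combined adjustment):

$$s_1^2=\frac{\mathbf{v}_1^{\mathsf T}\mathbf{P}_1\mathbf{v}_1}{f_1},\qquad s_2^2=\frac{\mathbf{v}_2^{\mathsf T}\mathbf{P}_2\mathbf{v}_2}{f_2},\qquad s^2=\frac{\mathbf{v}^{\mathsf T}\mathbf{P}\mathbf{v}}{f},$$

$$T=\frac{\left(\mathbf{v}^{\mathsf T}\mathbf{P}\mathbf{v}-\mathbf{v}_1^{\mathsf T}\mathbf{P}_1\mathbf{v}_1-\mathbf{v}_2^{\mathsf T}\mathbf{P}_2\mathbf{v}_2\right)/\left(f-f_1-f_2\right)}{\left(\mathbf{v}_1^{\mathsf T}\mathbf{P}_1\mathbf{v}_1+\mathbf{v}_2^{\mathsf T}\mathbf{P}_2\mathbf{v}_2\right)/\left(f_1+f_2\right)}.$$

In this form the test statistic compares the extra quadratic form produced by forcing both epochs into one common adjustment with the pooled variance of the single epochs, and is compared against an F-distribution quantile.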
From the results found in equations (1), (2) and (3), the test value is determined as in equation (4). This test value is independent of the datum and it follows the F-distribution. The test value is compared with the critical value, which is selected from the Fisher-distribution table according to r (rank) and the degrees of freedom, for the S = 1 − α (0.95) confidence level. If the test value is smaller than the critical value, the null hypothesis is accepted, i.e. the heights of the points which were assumed not to have changed are indeed unchanged. On the other hand, if the test value exceeds the critical value, there is at least one unstable point in the group of points which had been assumed as stable in the global test procedure. Then the need for localization of the deformations is recognized, and the combined free adjustment and global test are repeated until only the stable points are left in the set.

In the last step of the analysis, the following testing procedure is applied to the height changes. Similar to the previous steps, test values are calculated for all network points except the stable ones and compared with the critical value of F from the Fisher-distribution table.

(5)

If the test value exceeds the critical value, it is concluded that the change in height is significant. Otherwise, it is concluded that the height change d is not significant and is caused by random measurement errors.

2.1 Variance Component Estimation

In the method of Least Squares (LS), the weights of the observations are the essential prerequisite for correctly estimating the unknown parameters. The purpose of variance component estimation (VCE) is basically to find realistic and reliable variances of the measurements for constructing the appropriate a-priori covariance matrix of the observations. Improper stochastic modeling can lead to systematic deviations in the results, and these results may then appear to include significant deformations. Methods for estimating variance and covariance components, within the context of the LS adjustment, have been intensively investigated in the statistical and geodetic literature. The methods developed so far can be categorized as follows (see Crocetto et al. (2000)):

Functional models
Stochastic models
Estimation approaches

Concerning variance component estimation, a first solution to the problem was provided by Helmert in 1924, who proposed a method for unbiased variance estimates (Helmert (1924)). In 1970, an independent solution was derived by Rao, who was unaware of Helmert's method; it was called the minimum norm quadratic unbiased estimation (MINQUE) method (Rao (1970)). Under the assumption of normally distributed observations, the Helmert and Rao MINQUE approaches are equivalent.

2.1.1 Helmert Approach in VCE

A full derivation of the Helmert technique and the computational model of variance component estimation is given in Grafarend (1984). A summary of the mathematical model is given below (Kızılsu (1998)). The Helmert equation:

(6)

The matrix expression of equation (6) is given in (7)

(7)

where u is the number of measurement groups.

(8) (9) (10)

where tr(·) is the trace operator, N is the global normal equation matrix including all measurements; ni, Pi, Ni, vi are the number of measurements, the assigned weight matrix, the normal equation matrix and the residuals of group i respectively; and the remaining symbol is the estimated variance factor. It can be seen that ci is a function of Pi; on the other hand, Pi is also a function of the estimated variance factor. Because of this hierarchy, the Helmert solution is an iterative computation.
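As a sketch of the iteration this leads to (a commonly used simplified form of Helmert's estimator in which correlations between the measurement groups are neglected; the redundancy symbol $r_i$ is an assumption of this sketch rather than notation from the text above):

$$\hat\sigma_i^{2}=\frac{\mathbf{v}_i^{\mathsf T}\mathbf{P}_i\mathbf{v}_i}{r_i},\qquad r_i=n_i-\operatorname{tr}\!\left(\mathbf{N}^{-1}\mathbf{N}_i\right),\qquad \mathbf{P}_i^{\text{(new)}}=\frac{\mathbf{P}_i}{\hat\sigma_i^{2}}.$$

The groups are re-weighted with $\mathbf{P}_i^{\text{(new)}}$, the adjustment is repeated, and the cycle continues until every $\hat\sigma_i^{2}$ converges to one.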
The step-by-step computation algorithm of Helmert Variance Component Estimation is given below:

1. Before the adjustment, a unique weight is selected for each of the measurement groups. At the start of the iterative procedure, the weights of the measurement groups can be chosen equal to one (P1 = P2 = … = Pu = 1).
2. Using the a-priori weights, the separate normal equations (N1, N2, …, Nu) for each measurement group and the general normal equation (N) are composed. Here, the general normal equation is the summation of the group normal equations: N = N1 + N2 + … + Nu.
3. The adjustment process is started, in which the unknowns and residuals are calculated. (11) (12)
4. The Helmert equation is generated (equation (6)).
5. The variance components in the Helmert equation and the new weights are calculated. (13)
6. If the variance component of every group (i = 1, 2, …, u) is equal to one, the iteration is stopped. If not, the procedure is repeated from the second step using the new weights. The iterations are continued until the variances reach one.

2.1.2 MINQUE Approach in VCE

The general theory and algorithms of the minimum norm quadratic unbiased estimation procedure are described in Rao (1971) and Rao and Kleffe (1988). This statistical estimation method has been implemented and proven useful in various applications, not only for evaluating the variance-covariance matrix of the observations, but also for modelling the error structure of the observations (Fotopoulos (2003)). The theory of Minimum Norm Quadratic Estimation is widely regarded as one of the best estimators. Its application to a levelling network has been explained in Chen and Chrzanowski (1985), and it was used for GPS baseline data processing in Wang et al. (1998).

MINQUE is classified as a quadratic-based approach in which a quadratic estimator is sought that satisfies the minimum norm optimality criterion. Given the Gauss-Markov functional model v = Ax − b, where b and v are the vectors of the observations and residuals respectively, the selected stochastic model for the data and the variance-covariance matrix are expressed as follows.

(14) (15)

where only variance components are to be estimated. Such a model is used extensively for many applications, including (1984), Caspary (1987) and Fotopoulos and Sideris (2003). The MINQUE problem is reduced to the solution of the following system

(16)

S is a k×k symmetric matrix and each element of S is computed from the expression (17), i, j = 1, 2, …, k, where tr(·) is the trace operator and Q(·) is a positive definite cofactor matrix for each group of observations. R is a symmetric matrix defined by (18), where I is an identity matrix, A is an appropriate design matrix of full column rank and Cb is the covariance matrix of the observations. The vector q contains the quadratic forms (19), where vi are the estimated observational residuals for each group of observations bi. As a result, equation (16) can be written as in (20).

For a first run through the MINQUE algorithm, the a-priori values of the variance factors can be chosen equal to one. The resulting estimates can then be used as "new" a-priori values and the MINQUE procedure is repeated. Performing this process iteratively is referred to as iterative MINQUE (IMINQUE).
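In compact matrix form, the MINQUE system sketched in equations (16)–(20) is often written as follows (a sketch in standard notation, which may not be identical to the symbols used above):

$$\mathbf{S}\,\hat{\boldsymbol\theta}=\mathbf{q},\qquad s_{ij}=\operatorname{tr}\!\left(\mathbf{R}\,\mathbf{Q}_i\,\mathbf{R}\,\mathbf{Q}_j\right),\qquad q_i=\hat{\mathbf{v}}^{\mathsf T}\mathbf{C}_b^{-1}\mathbf{Q}_i\mathbf{C}_b^{-1}\hat{\mathbf{v}},$$

$$\mathbf{R}=\mathbf{C}_b^{-1}\left(\mathbf{I}-\mathbf{A}\left(\mathbf{A}^{\mathsf T}\mathbf{C}_b^{-1}\mathbf{A}\right)^{-1}\mathbf{A}^{\mathsf T}\mathbf{C}_b^{-1}\right),\qquad \mathbf{C}_b=\sum_{k}\theta_k\,\mathbf{Q}_k,$$

where the vector $\hat{\boldsymbol\theta}$ collects the variance components to be estimated and $\hat{\mathbf{v}}$ the adjusted residuals.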
The iteration is repeated until all variance factor estimates approach unity. The final estimated variance component values can be calculated by

$\hat{\sigma}_i^{2}=\prod_{a=0}^{n}\left(\hat{\sigma}_i^{2}\right)_a$   (21)

3 Deformation Analysis Using GPS Results

3.1 S-Transformation

The datum consistency between different epochs can be obtained by employing the S-transformation, and the moving points are also determined by applying this transformation (see Baarda (1973); Strang van Hees (1982)). The S-transformation is an operation used for the transition from one datum to another without carrying out a new adjustment computation. In other words, the S-transformation transforms the unknown parameters, which were determined in one datum, from the current datum to the new datum together with their cofactor matrix. The S-transformation is similar to the free adjustment computations. The equations that give the transition from datum i to datum k are given below (Demirel (1987), Welsch (1993)).

(22) (23) (24)

where I is the identity matrix and Ek is the datum determining matrix, whose diagonal elements are 1 for the datum determining points and 0 for the other points.

(25) (26)

where xi0, yi0, zi0 are the approximate coordinates of the points shifted to the mass center of the control network and i = 1, 2, 3, …, p, with p the number of points.

In conventional 3D geodetic networks, the number of datum defects due to outer parameters is 7. However, in GPS networks the number of datum defects is 3, namely the shifts along the three axis directions (Welsch (1993)). On the other hand, the number of datum defects is exactly known in conventional networks, as it is related to the measurements performed on the network, whereas this number cannot be known exactly in GPS networks because of error sources such as atmospheric effects, the use of different antenna types, the use of different satellite ephemerides in very long baseline measurements, and the orientation of the antennas to the local north (see Blewitt (1990)).

3.2 Global Test Using S-Transformation

A control network is composed of datum points and deformation points. With the help of the datum points, the control network, which is measured in the ti and tj epochs, is transformed to the same datum. While inspecting the significant movements of the points, a consistent datum transformation is necessary. Because of this, first of all, the networks which are going to be compared with each other are adjusted in some datum, for instance using the free adjustment technique. After applying this technique, the coordinates of the control network points measured in epoch ti are divided into two groups: f (datum) points and n (deformation) points.

(27) (28)

where xi is the vector of parameters and the accompanying matrix is its cofactor matrix in datum i. The transformation is accomplished from datum i to datum k with the help of Ek, the datum determining matrix. This transformation is as given in equations (29) and (30).

(29) (30)

The operations given in equations (27)–(30) are repeated for the transformation from datum j to datum k. In this way, datum i and datum j can be transformed into the same datum k with the help of the datum points. As a result, the vectors of coordinate unknowns and their cofactor matrices are found for the datum points in the same datum k. With the global (congruency) test, it is determined whether there are any significant movements in the datum points or not.
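A sketch of the datum transition described by equations (22)–(24) and (29)–(30), in a commonly used notation (the matrix $\mathbf{G}$, whose columns span the datum defect of the network, is an assumption of this sketch):

$$\mathbf{x}_k=\mathbf{S}_k\,\mathbf{x}_i,\qquad \mathbf{Q}_{x_k}=\mathbf{S}_k\,\mathbf{Q}_{x_i}\,\mathbf{S}_k^{\mathsf T},\qquad \mathbf{S}_k=\mathbf{I}-\mathbf{G}\left(\mathbf{G}^{\mathsf T}\mathbf{E}_k\mathbf{G}\right)^{-1}\mathbf{G}^{\mathsf T}\mathbf{E}_k,$$

where $\mathbf{E}_k$ is the datum determining matrix described above. Applying the same $\mathbf{S}_k$ to both epochs places their coordinates and cofactor matrices in a common datum before the congruency test.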
For the global test of the datum points, the H0 null hypothesis and the T test value are (Pelzer (1971), Caspary (1987), Fraser and Gründig (1985))

(H0 null hypothesis) (31)
(displacement vector) (32)
(cofactor matrix of df) (33)
(quadratic form) (34)
(pooled variance factor) (35)
(test value) (36)

where the degree of freedom of Rf is h = uf − d, uf is the number of unknowns for the datum points, d is the datum defect, and the pseudo-inverse of the cofactor matrix of the displacement vector is used in the quadratic form. If the test value exceeds the critical value, it is decided that there is a significant deformation in the datum-point part of the control network.

If, as a result of the global test, it is decided that there is deformation in one part of the datum points, the determination of the significant point movements using the S-transformation (localization of the deformations) is started (Chrzanowski and Chen (1986), Fraser and Gründig (1985)). In this step, it is assumed that each of the datum points might have undergone a change in position. For each point, the group of datum points is divided into two parts: the first part includes the datum points which are assumed as stable, and the second part includes the one point which is assumed as unstable. All the computation steps explained above are repeated one by one for each datum point. In this way, all of the points are tested according to whether they are stable or not. In the end, the true datum points are derived (Caspary (1987), Demirel (1987)).

3.3 Determining the Deformation Values

After determining the significant point movements as in section 3.2, the block of datum points which do not have any deformation is determined. With the help of these datum points, both epochs are shifted to the same datum and the deformation values are computed as explained below (Cooper (1987)).

The deformation vector for point P is:

(37)

the magnitude of the vector is:

(38)

To determine the significance of these deformation vectors, which are computed according to the above equations, the H0 null hypothesis is set up as given below

(39)

and the test value:

(40)

This test value is compared with the critical value. If the test value exceeds the critical value, it is concluded that there is a significant 3D deformation at point P.
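In a commonly used notation, the single-point test of equations (37)–(40) can be sketched as follows ($\hat{\mathbf{x}}_P^{(i)}$, $\hat{\mathbf{x}}_P^{(j)}$ are the adjusted coordinates of point P in the two epochs and $s_0^2$ a pooled variance factor; these symbols are assumptions of the sketch):

$$\mathbf{d}_P=\hat{\mathbf{x}}_P^{(j)}-\hat{\mathbf{x}}_P^{(i)},\qquad |\mathbf{d}_P|=\sqrt{d_x^{2}+d_y^{2}+d_z^{2}},$$

$$H_0:\ \mathrm{E}(\mathbf{d}_P)=\mathbf{0},\qquad T=\frac{\mathbf{d}_P^{\mathsf T}\mathbf{Q}_{d_P}^{-1}\mathbf{d}_P}{3\,s_0^{2}},$$

with T compared against the appropriate F-distribution quantile for three coordinate components.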
4 Numerical Example

In this study, the deformations of a highway viaduct called Karasu were investigated using GPS and precise leveling data. It is 2160 m long and located in the west of Istanbul, Turkey, on one part of the European Transit Motorway. The first 1000 meters of the viaduct cross over the Buyukmece Lake, and the piers of this part of the structure were constructed in the lake (see Figure 1). The viaduct consists of two separate tracks (northern and southern) and was constructed on 110 piers (each track has 55 piers). The distance between two piers is 40 meters and there is one deformation point on every 5 piers.

The deformation measurements of the viaduct involved four measurement campaigns, which include GPS measurements and precise leveling measurements, performed at six-month intervals. Before the measurement campaigns, a well designed local geodetic network had been established in order to investigate the deformations of the structure (see Figure 1). It has 6 reference points around the viaduct and 24 deformation points on it. The deformation points are established exactly at the tops of the piers, which are expected to be the most stable locations on the body of the viaduct.

The network was measured using GPS in each of the sessions, which were planned carefully for each campaign, and precise leveling was applied between the network points. During the GPS sessions, the static measurement method was applied and dual frequency receivers were used with forced centering equipment. The leveling measurements were carried out using a Koni 007 precise level with two invar rods. The relative accuracies of the point positions are at the millimeter level. The heights derived from the precise leveling measurements have accuracies between 0.2–0.8 millimeters.

The results of the evaluations of the three height difference sets using conventional deformation analysis are seen in Figures 3, 4 and 5. In the third evaluation, height differences from both the GPS and leveling techniques are used and the deformation analysis is applied. Having different accuracies for the height sets derived from the two techniques is a very important consideration at this point. Therefore, the stochastic information between these measurement groups (relative to each other) has to be derived. For computing the weights of the measurement groups, the MINQUE and Helmert Variance Component Estimation (HVCE) techniques have been employed. Figures 2a and 2b show the results of employing the VCE techniques. Although similar results were reached with both VCE techniques after the same number of iterations, the MINQUE results were chosen and applied in combining the data sets, because the MINQUE technique provided a smoother trend in comparison to the Helmert technique.

It should be noted that the deformations in both tracks showed the same character in the analysis results; therefore only the height differences belonging to the northern-track points are given in the graphs here.

The results of the evaluations of the three height difference sets using conventional deformation analysis confirm each other. According to Figures 3, 4 and 5, all reference and deformation points show similar characteristics of movement between consecutive epochs. Maximum movements were found at point 2 and point 4 and these movements were interpreted as deformation.

In Figure 6, geoid undulation changes at some of the viaduct points are seen. However, when Figures 3 and 4 are investigated, it is understood that these changes were caused by errors stemming from the GPS observations and do not give any information related to deformations of the points, because the leveling derived height differences between consecutive epochs do not show any height change for these points while the GPS derived height differences do.

As a result, it was seen that the leveling measurements provide a check for the GPS derived heights and for possible antenna height problems that occurred during the GPS sessions, and thus benefit the GPS measurements. This is a considerable contribution of the leveling measurements to the GPS measurements in deformation monitoring and analysis.

After the 1D deformation analysis, the 3D deformation analysis was accomplished as mentioned in the previous sections. The horizontal displacements found in the 3D analysis results can be seen in Figure 7.

5 Results and Conclusion

It is well known from numerous scientific researches that the weakest component in a GPS derived position is the height component, mainly because of the geometric structure of GPS.
Therefore, in determining vertical deformations, GPS derived heights need to be supported by precise leveling measurements in order to improve their accuracies.

Herein, the 1D and 3D deformations of a large engineering structure have been investigated and analyzed using GPS and leveling data separately and also in combination. In addition, an optimal algorithm for combining GPS derived and leveling derived heights, in order to improve the GPS quality in deformation investigations, was analyzed using the case study results.

When Figures 3, 4 and 5 are investigated, surprisingly the maximum height changes are seen at point 2 and point 4, even though they are pillars and had been assumed to be stable at the beginning of the study. According to the analysis results of the GPS observations, height changes for some deformation points on the viaduct were recognized. However, when the evaluations with the leveling and combined data are considered, it is understood that these changes, which seemed to be deformations at the deformation points, are not significant and are caused by the error sources in the GPS measurements.

In the second stage of the study, the 1D analysis results supplied input into the 3D analysis of the deformations when determining whether the network points are stable or not. Horizontal displacements were detected at points 2 and 4 in the result of the 3D analysis (see Figure 7), whereas points 1, 3 and 5 were stable. At first glance, these displacements at 2 and 4 were unexpected. However, after geological and geophysical investigations, the origin of these results was understood. The area is a marsh area and this characteristic might extend also underneath these two reference points. The uppermost soil layer in the region does not seem to be stable, and the foundations of reference points 2 and 4 are not founded as deep as the piers of the viaduct, so they are easily affected by the environmental conditions. Points 1, 3 and 5 also do not go as deep as the piers, but their foundations are not similar to those of points 2 and 4; they are steel marks on 3x3x3 meter concrete blocks. The variety of soil layers in the region, known from the geological investigations, might also play a role in this result.

On the other hand, it is possible to mention the correlation between the vertical movements of the two reference points, 2 and 4, and the wet/dry seasons, because the uplift and sinking movements of these reference points seem to be very synchronous with the seasonal changes in the amount of water.

The results of this study, gained from the measurements of the viaduct, are thought to be important remarks for deformation analysis studies using GPS measurements. As the first remark, the GPS measurement technique can be used for determining deformations with some special precautions, such as using forced centering mechanisms to avoid centering errors, using special equipment for precise antenna height readings, using special antenna types to avoid multipath effects, etc. However, even when these precautions are taken to provide better results in 1D and 3D deformation analysis, GPS measurements have to be supported with precise leveling measurements.

译文:利用GPS和水准测量数据分析桥梁的变形
S. Erol, B. Erol, T. Ayan
土耳其 伊斯坦布尔 伊斯坦布尔技术大学土木工程学院 大地测量与摄影测量系
摘要 本次研究的主要目的是利用GPS测量数据和水准测量数据以及它们的组合形式分析跨越湖泊的高架桥在一维(垂直方向)和三维空间内的变形。
Labview图形化编程语言中英文对照外文翻译文献
Labview图形化编程语⾔中英⽂对照外⽂翻译⽂献中英⽂资料外⽂翻译National Instruments LabVIEW: A Programming Environment for Laboratory Automation and Measurement .National Instruments LabVIEW is a graphical programming language that has its roots in automation control and data acquisition. Its graphical representation, similar to a process flow diagram, was created to provide an intuitive programming environment for scientists and engineers. The language has matured over the last 20 years to become a general purpose programming environment. LabVIEW has several key features which make it a good choice in an automation environment. These include simple network communication, turnkey implementation of common communication protocols (RS232, GPIB, etc.), powerful toolsets for process control and data fitting, fast and easy user interface construction, and an efficient code execution environment. We discuss the merits of the language and provide an example application suite written in-house which is used in integrating and controlling automation platforms.Keywords: NI LabVIEW; graphical programming; system integration; instrument control; component based architecture; robotics; automation; static scheduling; dynamic scheduling; databaseIntroductionCytokinetics is a biopharmaceutical company focused on the discovery of small molecule therapeutics that target the cytoskeleton. Since inception we have developed a robust technology infrastructure to support our drug discovery efforts. The infrastructure provides capacity to screen millions of compounds per year in tests ranging from multiprotein biochemical assays that mimic biological function to automated image-based cellular assays with phenotypic readouts. The requirements for processing these numbers and diversity of assays have mandated deployment of multiple integrated automation systems. For example, we have several platforms for biochemical screening, systems for live cell processing, automated microscopy systems, and an automated compound storage and retrieval system. Each in-house integrated system is designed around a robotic arm and contains an optimal set of plate-processing peripherals (such as pipetting devices, plate readers, and carousels) depending on its intended range of use. To create the most flexible, high performance, and cost-effective systems, we have taken the approach of building our own systems in-house. This has given us the ability to integrate the most appropriate hardware and software solutions regardless of whether they are purchased from a vendor or engineered de novo, and hence we can rapidly modify systems as assay requirements change.To maximize platform consistency and modularity, each of our 10 automated platforms is controlled by a common, distributed application suite that we developed using National Instruments (NI) LabVIEW. This application suite described in detail below, enables our end users to create and manage their own process models (assayscripts) in a common modeling environment, to use these process models on any automation system with the required devices, and allows easy and rapid device reconfiguration. The platform is supported by a central Oracle database and can run either statically or dynamically scheduled processes.NI LabVIEW BackgroundLabVIEW, which stands for Laboratory Virtual Instrumentation Engineering Workbench is a graphical programming language first released in 1986 by National Instruments (Austin, TX). 
LabVIEW implements a dataflow paradigm in which the code is not written, but rather drawn or represented graphically similar to a flowchart diagram Program execution follows connector wires linking processing nodes together. Each function or routine is stored as a virtual instrument (VI) having three main components: the front panel which is essentially a form containing inputs and controls and can be displayed at run time, a block diagram where the code is edited and represented graphically, and a connector pane which serves as an interface to the VI when it is imbedded as a sub-VI.The top panel (A) shows the front panel of the VI. Input data are passed through “Controls” which are shown to the left. Included here are number inputs, a file path box, and a general error propagation cluster. When the VI runs, the “Indicator”outputs on the right of the panel are populated with output data. In this example, data include numbers (both as scalar and array), a graph, and the output of the error cluster. In the bottom panel (B) the block diagram for the VI is shown. The outer case structure executes in the “No Error” case (VIs can make internal errors o r if called as a sub-VI the caller may propagate an error through the connector pane).Unlike most programming languages, LabVIEW compiles code as it is created thereby providing immediate syntactic and semantic feedback and reducing the time required for development and testing.2Writing code is as simple as dragging and droppingfunctions or VIs from a functions palette onto the block diagram within process structures (such as For Loops, or Case Structures) and wiring terminals (passing input values, or references). Unit testing is simplified because each function is separately encapsulated; input values can be set directly on the front panel without having to test the containing module or create a separate test harness. The functions that generate data take care of managing the storage for the data.NI LabVIEW supports multithreaded application design and executes code in an inherently parallel rather than sequential manner; as soon as a function or sub-VI receives all of its required inputs, it can begin execution. In Figure 1b, all the sub-VIs receive the array input simultaneously as soon as the For Loop is complete, and thus they execute in parallel. This is unique from a typical text-based environment where the control flows line by line within a function. When sequential execution is required, control flow can be enforced by use of structures such as Sequences, Events, or by chaining sub-VIs where output data from one VI is passed to the input of the next VI.Similar to most programming languages, LabVIEW supports all common data types such as integers, floats, strings, and clusters (structures) and can readily interface with external libraries, ActiveX components, and .NET framework. As shown in Figure 1b, each data type is graphically represented by wires of different colors and thickness. LabVIEW also supports common configuration management applications such as Visual SourceSafe making multideveloper projects reasonable to manage.Applications may be compiled as executables or as Dynamic Link Libraries (DLLs) that execute using a run-time engine similar to the Java Runtime Environment. The development environment provides a variety of debugging tools such as break-points, trace (trace), and single-step. Applications can be developed using a variety of design patterns such as Client-Server, Consumer-Producer, andState-Machine. 
There are also UML (Unified Modeling Language) modeling tools that allow automated generation of code from UML diagrams and state diagrams.Over the years, LabVIEW has matured into a general purpose programming language with a wider user base.NI LabVIEW as a Platform for Automation and InstrumentationOur experience creating benchtop instrumentation and integrated automation systems has validated our choice of LabVIEW as an appropriate tool. LabVIEW enables rapid development of functionally rich applications appropriate for both benchtop applications and larger integrated systems. On many occasions we have found that project requirements are initially ill defined or change as new measurements or new assays are developed.. There are several key features of the language that make it particularly useful in an automation environment for creating applications to control and integrate instrumentation, manage process flow, and enable data acquisition.Turnkey Measurement and Control FunctionLabVIEW was originally developed for scientists and engineers .The language includes a rich set of process control and data analysis functions as well as COM, .NET, and shared DLL support. Out of the box, it provides turnkey solutions to a variety of communication protocols including RS232, GPIB, and TCP/IP. Control structures such as timed While Loops allow synchronized and timed data acquisition from a variety of hardware interfaces such as PCI, USB, and PXI. DataSocket and VI ServerDeployment of an integrated system with multiple control computers requires the automation control application to communicate remotely with instrument drivers existing on remote computers. LabVIEW supports a distributed architecture by virtue of enabling seamless network communication through technologies such as VI Server and DSTP (data sockets transfer protocol). DSTP is an application layer protocol similar to http based on Transmission Control Protocol/Internet Protocol (TCP/IP). Data sockets allow easy transfer of data between remote computers with basic read and write functions. Through VI server technology, function calls can be made to VIs residing on remote computers as though they are residing on the local computer. Both Datasockets and VI server can be configured to control accesses privileges.Simple User Interface (UI) ImplementationIn addition to common interface controls such as text boxes, menu rings, and check-boxes, LabVIEW provides a rich set of UI controls (switches, LEDs, gauges, array controls, etc.) that are pertinent to laboratory equipment. These have their origins in LabVIEWs laboratory roots and help in development of interfaces which give scientists a clear understanding of a system's state. LabVIEW supports UI concepts including subpanels (similar to the Multiple Document Interface), splitter bars, and XControls (analogous to OCX controls).Multithreaded Programming EnvironmentThe inherent parallel environment of LabVIEW is extremely useful in the control of laboratory equipment. Functions can have multiple continuous While Loops where one loop is acquiring data rapidly and the other loop processes the data at a much slower rate. Implementing such a paradigm in other languages requires triggering an independent function thread for each process and developing logic to manage synchronization. Through timed While Loops, multiple independent While Loops can be easily synchronized to process at a desired period and phase relative to one another. 
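LabVIEW expresses this acquisition/processing pattern graphically with parallel While Loops; purely as an illustration of the same producer/consumer idea in a text-based language, a minimal Java sketch (all class and variable names are invented for the example) could look like this:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Analogue of the two-loop pattern described above: one loop acquires data
// quickly, a second loop processes it at a slower rate via a shared queue.
public class TwoLoopSketch {
    public static void main(String[] args) {
        BlockingQueue<Double> samples = new ArrayBlockingQueue<>(1024);

        Thread acquisition = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    samples.put(Math.random());        // stands in for a hardware read
                    TimeUnit.MILLISECONDS.sleep(10);   // fast acquisition period
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread processing = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    double value = samples.take();     // blocks until data is available
                    System.out.printf("processed %.3f (queued: %d)%n", value, samples.size());
                    TimeUnit.MILLISECONDS.sleep(100);  // slower processing period
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        acquisition.start();
        processing.start();
    }
}
```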
LabVIEW allows invoking multiple instances of the same function witheach maintaining its own data space. For instance, we could drag many instances of the Mean sub-VI onto the block diagramin Figure 1b and they would all run in parallel, independent of one another. To synchronize or enforce control flow within the dataflow environment, LabVIEW also provides functions such as queues, semaphores, and notification functions.NI LabVIEW Application Example: The Open System Control Architecture (OSCAR)OSCAR is a LabVIEW-based (v7.1) automation integration framework and task execution engine designed and implemented at Cytokinetics to support application development for systems requiring robotic task management. OSCAR is organized around a centralized Oracle database which stores all instrumentation configuration information used to logically group devices together to create integrated systems (Fig. 2). The database also maintains Process Model information from which tasks and parameters required to run a particular process on a system can be generated and stored to the database. When a job is started, task order and parameter data are polled by the Execution Engine which marshals tasks to each device and updates task status in the database in real time. Maintaining and persisting task information for each system has two clear benefits. It allows easy job recovery in the event of a system error, and it also provides a process audit trail that can be useful for quality management and for troubleshooting process errors or problems.Each OSCAR component is distributed across the company intranet and communicates with a central database. Collections of physical devices controlled through OSCAR Instrument packages (OIP) make up systems. Users interact with systems through one of the several applications built on OSCAR. Each application calls the RTM which marshals tasks from the database to each OIP. OSCAR has sets of tools for managing system configurations, creating Process Models, monitoring running processes, recovering error-state systems, and managing plate inventory in storage devices.OSCAR uses a loosely coupled distributed component architecture, enabled in large part by LabVIEWs DSTP and remote VI technologies that allow system control to be extended beyond the confines of the traditional central control CPU model. Any networked computer or device can be integrated and controlled in an OSCAR system regardless of its physical location. This removes the proximity constraints of traditional integrated systems and allows for the utilization of remote data crunchers, devices, or even systems. The messaging paradigm used shares many similarities with current Service Oriented Architectures or Enterprise Service Bus implementations without a lot of required programming overhead or middleware; a centralized server is not required to direct the XML packets across the network. An additional benefit to this loosely coupled architecture is the flexibility in front-end application design. OSCAR encapsulates and manages all functionality related to task execution and device control, which frees the developer to focus on the unique requirements of a given application. 
For example, an application being created for the purpose of compound storage and retrieval can be limited in scope to requirements such as inventory management and LIMS integration rather than device control, resource allocation, and task synchronization.The OSCAR integration framework consists of multiple components that enable device and system configuration, process modeling, process execution, and process monitoring. Below are descriptions of key components of the framework. Integration PlatformThe Oscar Instrument Package (OIP) is the low level control component responsible for communicating with individual devices. It can support any number of devices on a system (including multiple independent instances of the same type of device) and communicates to the Runtime Manager (RTM) via serialized XMLstrings over DSTP. This allows the device controller and RTM components to exist on separate networked computers if necessary. Additionally, the OIP controller communicates with a device instance via LabVIEW remote VI calls which provide a lower level of distribution and allow the device drivers to exist on a separate networked computer from the controller. At Cytokinetics, we currently support approximately 100 device instances of 30 device types which are distributed across 10 integrated systems.System ManagementAn OSCAR system is a named collection of device instances which is logically represented in the database. The interface for each device (commands and parameters) is stored in the database along with the configuration settings for each device instance (i.e., COM port, capacity). The System Manager component provides the functionality to easily manipulate this information (given appropriate permissions). When a physical device is moved from one system to another, or a processing bottleneck alleviated by addition of another similar device, system configuration information is changed without affecting the processes that may be run on the system.Process ModelingA process model is the logical progression of a sequence of tasks. For example, a biochemical assay might include the following steps (1) remove plate from incubator, (2) move plate to pipettor, (3) add reagent, (4) move plate to fluorescent reader, (5) read plate, and (6) move plate to waste. The Process Modeler component allows the end user to choose functions associated with devices and organize them into a sequence of logical tasks. The resulting process model is then scheduled via a static schedule optimization algorithm or saved for dynamic execution (Fig. 3). Aprocess model is not associated with a physical system, but rather a required collection of devices. This has two importantbenefits: (1) the scientist is free to experiment with virtual system configurations to optimize the design of a future system or the reconfiguration of an existing system, and (2) any existing process model can be executed on any system equipped with the appropriate resources.The top panel (A) shows the Process Schedule Modeler, an application that graphically displays statically scheduled processes. Each horizontal band represents a task group which is the collection of required tasks used by a process; tasks are color coded by device. The bottom panel (B) shows the UI from the Automated Imaging System application. The tree structure depicts the job hierarchy for an imaging run. Jobs (here AIS_Retrieval and AIS_Imaging) are composed of task groups. 
As the systems runs, the tasks in the task group are executed and their status is updated in the database.Process ExecutionProcess execution occurs by invoking the OSCAR RTM. The RTM is capable of running multiple differing processes on a system at the same time allowing multiple job types to be run in parallel. The RTM has an application programming interface (API) which allows external applications to invoke its functionality and consists of two main components, the Task Generator Module (TGM) and the Execution Engine. External applications invoke an instance of a Process Model through the TGM at which point a set of tasks and task parameters are populated in the OSCAR database. The Execution Engine continually monitors the database for valid tasks and if a valid task is found it is sent to the appropriate device via the OIP. The OSCAR system supports running these jobs in either a static or dynamic mode. For processes which must meet strict time constraints (often due to assay requirements), or require the availability of a given resource, a static schedule is calculated and stored for reuse.The system is capable of optimizing the schedule based on actual task operation times (stored in the database).Other types of unconstrained processes benefit more from a dynamic mode of operation where events trigger the progress of task execution as resources become available in real-time. When operating dynamically, intelligent queuing of tasks among multiple jobs allows optimal use of resources minimizing execution time while allowing for robust error handling.Process MonitoringAll systems and jobs can be monitored remotely by a distributed application known as the Process Monitor. This application allows multiple users to monitor active jobs across all systems for status and faults and provides email notification for fault situations.ConclusionCytokinetics has built and maintains an automation software infrastructure using NI LabVIEW. The language has proven to be a powerful tool to create both rapid prototype applications as well as an entire framework for system integration and process execution. LabVIEW's roots in measurement instrumentation and seamless network communication protocols have allowed systems to be deployed containing multiple control computers linked only via the network. The language continues to evolve and improve as a general purpose programming language and develop a broad user base.。
测量系统分析控制程序中英文版本
文件名称 / File Name: Measurement System Analysis Control Procedure 测量系统分析控制程序
文件编号 / File NO.: MP/Q 15-L
版次 / Edition: A/1
页次 / Page NO.: 1/6
1、Purpose/目的
It is to analyze and evaluate the variation of the measurement system, so as to determine whether the measurement system satisfies the specified requirements and to ensure the accuracy of the measurement data.
(Stability/稳定性: the variation obtained when the measurement system measures the same datum or a single characteristic of parts over a sustained period of time; that is, bias changing with time.)
对测量系统的变差进行分析评估,以确定测量系统是否满足规定的要求,确保测量数据的准确性。
2、Scope/适用范围
It is applied to all the measurement systems, which are used to verify whether products meet the requirements.
3.4品质部计量工程师负责组织,生产部、中央研究院、工程部负责新产品/发生设计更改/新购
软件工程毕业论文文献翻译中英文对照
学生毕业设计(论文)外文译文
学生姓名:        学号:
专业名称:软件工程
译文标题(中英文):Qt Creator白皮书(Qt Creator Whitepaper)
译文出处:Qt network
指导教师审阅签名:
外文译文正文:
Qt Creator白皮书
Qt Creator是一个完整的集成开发环境(IDE),用于开发基于Qt应用程序框架的应用。
Qt专为应用程序和用户界面设计,只需一次开发,即可跨多个桌面和移动操作系统部署。
本文介绍了Qt Creator,以及它在应用开发生命周期中为Qt开发人员提供的各项特性。
Qt Creator简介
Qt Creator的主要优点之一是,它允许开发团队在不同的开发平台(微软Windows、Mac OS X和Linux)上共享同一个项目,并使用共同的开发和调试工具。
Qt Creator的主要目标是满足Qt开发人员对简单、易用、高效、可扩展和开放的开发环境的需要,同时降低Qt新手的入门门槛。
Qt Creator的主要功能可以让开发者完成以下任务:
● 通过项目向导快速、轻松地开始Qt应用开发,并快速访问最近的项目和会话。
● 使用集成的用户界面编辑器Qt Designer,设计基于Qt部件的应用程序界面。
● 使用先进的C++代码编辑器开发应用,它提供强大的新功能,如代码补全、代码片段、代码重构,以及查看文件大纲(即文件的符号层次结构)。
● 构建、运行和部署面向多个桌面和移动平台的Qt项目,如微软Windows、Mac OS X、Linux、诺基亚的MeeGo和Maemo。
● 通过图形用户界面使用GNU和CDB调试器进行调试,调试器对Qt类结构有更好的识别能力。
● 使用代码分析工具,检查应用程序中的内存管理问题。
● 将应用程序部署到MeeGo、Symbian和Maemo等移动设备,并为这些设备创建应用程序安装包,以便发布到Ovi商店和其他渠道。
● 通过集成的上下文敏感的Qt帮助系统轻松获取信息。
大学毕业论文---软件专业外文文献中英文翻译
软件专业毕业论文外文文献中英文翻译

Object landscapes and lifetimes

Technically, OOP is just about abstract data typing, inheritance, and polymorphism, but other issues can be at least as important. The remainder of this section will cover these issues.

One of the most important factors is the way objects are created and destroyed. Where is the data for an object and how is the lifetime of the object controlled? There are different philosophies at work here. C++ takes the approach that control of efficiency is the most important issue, so it gives the programmer a choice. For maximum run-time speed, the storage and lifetime can be determined while the program is being written, by placing the objects on the stack (these are sometimes called automatic or scoped variables) or in the static storage area. This places a priority on the speed of storage allocation and release, and control of these can be very valuable in some situations. However, you sacrifice flexibility because you must know the exact quantity, lifetime, and type of objects while you're writing the program. If you are trying to solve a more general problem such as computer-aided design, warehouse management, or air-traffic control, this is too restrictive.

The second approach is to create objects dynamically in a pool of memory called the heap. In this approach, you don't know until run time how many objects you need, what their lifetime is, or what their exact type is. Those are determined at the spur of the moment while the program is running. If you need a new object, you simply make it on the heap at the point that you need it. Because the storage is managed dynamically, at run time, the amount of time required to allocate storage on the heap is significantly longer than the time to create storage on the stack. (Creating storage on the stack is often a single assembly instruction to move the stack pointer down, and another to move it back up.) The dynamic approach makes the generally logical assumption that objects tend to be complicated, so the extra overhead of finding storage and releasing that storage will not have an important impact on the creation of an object. In addition, the greater flexibility is essential to solve the general programming problem.

Java uses the second approach, exclusively. Every time you want to create an object, you use the new keyword to build a dynamic instance of that object.

There's another issue, however, and that's the lifetime of an object. With languages that allow objects to be created on the stack, the compiler determines how long the object lasts and can automatically destroy it. However, if you create it on the heap the compiler has no knowledge of its lifetime. In a language like C++, you must determine programmatically when to destroy the object, which can lead to memory leaks if you don't do it correctly (and this is a common problem in C++ programs). Java provides a feature called a garbage collector that automatically discovers when an object is no longer in use and destroys it. A garbage collector is much more convenient because it reduces the number of issues that you must track and the code you must write. More important, the garbage collector provides a much higher level of insurance against the insidious problem of memory leaks (which has brought many a C++ project to its knees).

The rest of this section looks at additional factors concerning object lifetimes and landscapes.
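As a small illustration of the point about new and the garbage collector (a sketch; the class name is invented for the example), objects in Java are always created on the heap and are reclaimed automatically once no references to them remain:

```java
// Minimal sketch: every Java object is created on the heap with `new`,
// and the garbage collector reclaims it once it is no longer reachable.
public class LifetimeSketch {
    static class Measurement {
        final double value;
        Measurement(double value) { this.value = value; }
    }

    public static void main(String[] args) {
        double sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            Measurement m = new Measurement(i * 0.001); // allocated on the heap
            sum += m.value;
            // no explicit delete: `m` becomes unreachable at the end of each
            // iteration and is eventually collected by the garbage collector
        }
        System.out.println("sum = " + sum);
    }
}
```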
1. The singly rooted hierarchy

One of the issues in OOP that has become especially prominent since the introduction of C++ is whether all classes should ultimately be inherited from a single base class. In Java (as with virtually all other OOP languages) the answer is "yes," and the name of this ultimate base class is simply Object. It turns out that the benefits of the singly rooted hierarchy are many.

All objects in a singly rooted hierarchy have an interface in common, so they are all ultimately the same type. The alternative (provided by C++) is that you don't know that everything is the same fundamental type. From a backward-compatibility standpoint this fits the model of C better and can be thought of as less restrictive, but when you want to do full-on object-oriented programming you must then build your own hierarchy to provide the same convenience that's built into other OOP languages. And in any new class library you acquire, some other incompatible interface will be used. It requires effort (and possibly multiple inheritance) to work the new interface into your design. Is the extra "flexibility" of C++ worth it? If you need it (if you have a large investment in C) it's quite valuable. If you're starting from scratch, other alternatives such as Java can often be more productive.

All objects in a singly rooted hierarchy (such as Java provides) can be guaranteed to have certain functionality. You know you can perform certain basic operations on every object in your system. A singly rooted hierarchy, along with creating all objects on the heap, greatly simplifies argument passing (one of the more complex topics in C++).

A singly rooted hierarchy makes it much easier to implement a garbage collector (which is conveniently built into Java). The necessary support can be installed in the base class, and the garbage collector can thus send the appropriate messages to every object in the system. Without a singly rooted hierarchy and a system to manipulate an object via a reference, it is difficult to implement a garbage collector.

Since run-time type information is guaranteed to be in all objects, you'll never end up with an object whose type you cannot determine. This is especially important with system level operations, such as exception handling, and to allow greater flexibility in programming.
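A small sketch of what "certain basic operations on every object" means in Java (the class is invented for the example): any class, even one that declares no parent, inherits Object's methods such as toString(), equals(), and getClass():

```java
// Sketch: every Java class implicitly extends java.lang.Object,
// so Object's operations are available on any instance.
public class SinglyRootedSketch {
    static class Sensor {               // implicitly "extends Object"
        final String id;
        Sensor(String id) { this.id = id; }
        @Override public String toString() { return "Sensor(" + id + ")"; }
    }

    public static void main(String[] args) {
        Object anything = new Sensor("S-1");                // upcast to the ultimate base class
        System.out.println(anything.toString());            // defined by Object, overridden here
        System.out.println(anything.equals(anything));      // also inherited from Object
        System.out.println(anything.getClass().getName());  // run-time type information
    }
}
```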
2. Collection libraries and support for easy collection use

Because a container is a tool that you'll use frequently, it makes sense to have a library of containers that are built in a reusable fashion, so you can take one off the shelf and plug it into your program. Java provides such a library, which should satisfy most needs.

Downcasting vs. templates/generics

To make these containers reusable, they hold the one universal type in Java that was previously mentioned: Object. The singly rooted hierarchy means that everything is an Object, so a container that holds Objects can hold anything. This makes containers easy to reuse.

To use such a container, you simply add object references to it, and later ask for them back. But, since the container holds only Objects, when you add your object reference into the container it is upcast to Object, thus losing its identity. When you fetch it back, you get an Object reference, and not a reference to the type that you put in. So how do you turn it back into something that has the useful interface of the object that you put into the container?

Here, the cast is used again, but this time you're not casting up the inheritance hierarchy to a more general type; you cast down the hierarchy to a more specific type. This manner of casting is called downcasting. With upcasting, you know, for example, that a Circle is a type of Shape so it's safe to upcast, but you don't know that an Object is necessarily a Circle or a Shape, so it's hardly safe to downcast unless you know that's what you're dealing with.

It's not completely dangerous, however, because if you downcast to the wrong thing you'll get a run-time error called an exception, which will be described shortly. When you fetch object references from a container, though, you must have some way to remember exactly what they are so you can perform a proper downcast.

Downcasting and the run-time checks require extra time for the running program, and extra effort from the programmer. Wouldn't it make sense to somehow create the container so that it knows the types that it holds, eliminating the need for the downcast and a possible mistake? The solution is parameterized types, which are classes that the compiler can automatically customize to work with particular types. For example, with a parameterized container, the compiler could customize that container so that it would accept only Shapes and fetch only Shapes.

Parameterized types are an important part of C++, partly because C++ has no singly rooted hierarchy. In C++, the keyword that implements parameterized types is "template." Java currently has no parameterized types since it is possible for it to get by, however awkwardly, using the singly rooted hierarchy. However, a current proposal for parameterized types uses a syntax that is strikingly similar to C++ templates.
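A short sketch of the difference just described, using Java collections (the text above predates Java generics; modern Java does provide parameterized types, shown in the second half of the example):

```java
import java.util.ArrayList;
import java.util.List;

public class ContainerSketch {
    static class Shape { void draw() { System.out.println("drawing a shape"); } }
    static class Circle extends Shape {
        @Override void draw() { System.out.println("drawing a circle"); }
    }

    public static void main(String[] args) {
        // Pre-generics style: the container holds Object, so a downcast is needed.
        List rawShapes = new ArrayList();          // raw type, as in early Java
        rawShapes.add(new Circle());               // upcast to Object on the way in
        Shape s = (Shape) rawShapes.get(0);        // explicit downcast on the way out
        s.draw();

        // Parameterized (generic) style: the compiler knows the element type.
        List<Shape> shapes = new ArrayList<>();
        shapes.add(new Circle());
        shapes.get(0).draw();                      // no cast, checked at compile time
    }
}
```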
毕业设计之文献翻译(计算机软件测试)[管理资料]
Software Testing: Black-Box Techniques
Smirnov Sergey

Abstract – Software systems play a key role in different parts of modern life. Software is used in every financial, business, educational etc. organization. Therefore, there is a demand for high quality software. It means software should be properly tested and verified before system-integration time. This work concentrated on the so-called black-box techniques for software testing. Several black-box methods were considered with their strengths and weaknesses. Also, the potential of automated black-box techniques for better performance in testing of reusable components was studied. Finally, a topic related to software security testing was discussed.

1. Introduction

Computer technologies play an important role in modern society. Computers and the software that drives them affect more people and more businesses than ever today. Therefore, there is pressure on software developers not only to build software systems quickly, but to focus on quality issues too. Low quality software that can cause loss of life or money is no longer acceptable. In order to achieve the production of high quality software, the whole process of developing and maintaining the software has to be changed and developers have to be correspondingly educated and trained. Testing takes an important part in any software development process (Fig. ). As a process by itself it is related to two other processes, verification and validation.

Validation is the process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements [8]. Verification is the process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase [8].

Software testing is a process, or several processes, designed to make sure computer code does what it was designed to do and that it does not do anything unexpected [2]. The software testers are responsible for designing tests that reveal defects and can be used to evaluate usability and reliability of the software. To achieve these goals testers must select a finite number of test cases [1]. There are two basic techniques that can be used to design test cases:
– Black-box (sometimes called functional or specification);
– White-box (sometimes called clear or glass box).

The white-box technique focuses on the inner structure of the software under test (SUT). To be able to design test cases using this approach a tester has to have knowledge of the software structure. The source code or suitable pseudo code must be available [1].

Figure : A Software Development Process

By using the black-box approach the software is viewed as a black box. The goal of a tester is to be completely unconcerned about the inner structure of the software and instead to concentrate on software behavior and functionality (Table 1).

Table 1: Two basic testing techniques

Why do we need black-box testing? First, this approach is useful for revealing requirements and specification defects. Another reason is the testing of reusable software components. Many companies use components from outside vendors that specialize in the development of specific types of software, so-called Commercial Off-the-Shelf Components (COTS). Using such components can save time and money. However, the components have to be evaluated before becoming a part of any developed system.
In most cases when a COTS component is purchased from a vendor, no source code is available, and even if there is some, it is very expensive to buy. Usually just an executable version of the component is at hand. In this case black-box testing can be very useful. The next sections of this work present the black-box methods and some issues related to automation of these methods and to software security testing.
2. Black-Box Software Testing Methods
Using the black-box approach we consider only inputs and outputs as a basis for designing test cases. However, we should keep in mind that, due to finite time and resources, an exhaustive test of all possible inputs is not possible. Therefore, the goal of a tester is to use the available resources to produce the test cases that find the maximum number of defects. There are several methods that can help to achieve this goal.
Random Testing
Each software system has an input domain from which input data is selected for testing. If inputs are randomly selected, this is called random testing. The advantage of the method is that it can save the time and effort that more detailed and thoughtful test input selection methods require. On the other hand, random test inputs in many cases cannot produce an effective set of test data [2].
Equivalence Class Partitioning
The Equivalence Class Partitioning (ECP) approach divides the input domain of the software to be tested into a finite number of partitions, or equivalence classes. The method can be used to partition the output domain as well, but this is not commonly done. The result of the partitioning allows a tester to select one member of each class and create test cases based on it. It is assumed that all other members of the same equivalence class are processed the same way by the software under test. Therefore, if one test case based on the chosen member detects a defect, all the other test cases based on that class would be expected to detect the same defect. And vice versa: if the test case did not detect a defect, we would expect that no other test case in the equivalence class would produce an error.
This approach has the following advantages [1]:
– It eliminates the need for exhaustive testing of the whole input/output domain, which is not possible;
– Following the approach, a tester selects a subset of test inputs with a high probability of detecting a defect.
Test case design by ECP has two steps:
1) Identifying the equivalence classes;
2) Defining the test cases.
We identify the equivalence classes by taking each input condition and partitioning it into two or more groups: valid equivalence classes, which include valid input to the software, and invalid equivalence classes, which represent all other possible states [2]. A small illustrative sketch follows.
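A minimal sketch, assuming a hypothetical input condition "age must be an integer between 1 and 120"; the condition, the acceptsAge function and the chosen representative values are invented for illustration and not taken from the paper. One representative member is selected from the valid class and from each invalid class, and each representative becomes a test case.

public class EquivalencePartitioningSketch {
    // Hypothetical function under test: accepts an age string in the range 1..120.
    static boolean acceptsAge(String input) {
        try {
            int age = Integer.parseInt(input.trim());
            return age >= 1 && age <= 120;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // One representative member is chosen from each equivalence class.
        String[][] cases = {
            {"35",  "true"},   // valid class: value inside the range
            {"0",   "false"},  // invalid class: below the range
            {"200", "false"},  // invalid class: above the range
            {"abc", "false"},  // invalid class: not a number
        };
        for (String[] c : cases) {
            boolean expected = Boolean.parseBoolean(c[1]);
            boolean actual = acceptsAge(c[0]);
            System.out.printf("input=%-4s expected=%-5s actual=%-5s %s%n",
                    c[0], expected, actual, expected == actual ? "PASS" : "FAIL");
        }
    }
}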
There is a set of rules that can be used to identify equivalence classes [2]:
– If an input condition specifies a range of values, identify one valid equivalence class within this range, and two invalid classes outside the range, on the left and right side respectively.
– If an input condition specifies a number of values, specify one valid equivalence class within the values, and two invalid equivalence classes outside that number.
– If an input condition specifies a set of input values and there is a belief that the software handles each value differently, identify a valid equivalence class for each value and one invalid equivalence class.
– If an input condition specifies a "must be" situation, identify one valid equivalence class and one invalid equivalence class.
However, there are no fast rules for the identification of equivalence classes. With experience a tester is able to select equivalence classes more effectively and with confidence. If there is a doubt that the software processes the members of an equivalence class identically, the equivalence class should be split into smaller classes.
The second step, defining the test cases, is as follows [2]:
1. Assign a unique number to each equivalence class;
2. Write a new test case trying to cover all valid equivalence classes;
3. Write a new test case for each invalid equivalence class.
Boundary Value Analysis
Equivalence Class Partitioning can be supplemented by another method called Boundary Value Analysis (BVA). A tester selects elements close to the edges of the input, so that the test cases cover both the upper and lower edges of an equivalence class [1]. The ability to create high-quality test cases with the use of Boundary Value Analysis depends greatly on the tester's experience, as in the case of the Equivalence Class Partitioning approach.
Cause-Effect Graphing
The major weakness of Equivalence Class Partitioning and Boundary Value Analysis is that these methods do not allow conditions to be combined. Furthermore, the number of possible combinations is usually very large. Therefore, there must be a systematic way of selecting a subset of input combinations.
Cause-Effect Graphing provides a systematic approach for selecting a set of test cases. The natural-language specification is translated into a formal language: a cause-effect graph. The graph is a digital-logic circuit, but no knowledge of electronics is necessary to build it; the tester only needs to understand Boolean logic. The following steps are used to produce test cases [2]:
– Divide the specification into workable parts. Large specifications make a cause-effect graph difficult to manage.
Figure: Simple Cause-Effect Graphs
– Identify the causes and effects in the specification. A cause is a distinct input condition or an equivalence class of input conditions. An effect is an output condition or a system transformation. The causes and effects are identified by reading the specification. Once identified, each cause and effect is assigned a unique number.
– From the cause and effect information, a Boolean cause-effect graph that links causes and effects together is created.
– Annotations with constraints are added that describe combinations of causes and/or effects which are impossible.
– The graph is converted to a decision table.
– The columns of the decision table are converted into test cases.
Simple examples of cause-effect graphs are shown in the figure above; a small decision-table sketch follows.
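A minimal sketch of the last two steps, using an invented specification; the causes, effects and Boolean relations below are hypothetical and not taken from the paper. Enumerating all combinations of the causes reproduces the decision table, and each printed row (a column of the table) becomes one test case.

public class CauseEffectDecisionTable {
    public static void main(String[] args) {
        // Hypothetical specification:
        //   Cause C1: the first character of the input is 'A' or 'B'
        //   Cause C2: the second character is a digit
        //   Effect E1: the file update is made        (E1 = C1 AND C2)
        //   Effect E2: an error message is issued     (E2 = NOT C1 OR NOT C2)
        boolean[] values = {true, false};
        System.out.println("C1     C2     | E1(update) E2(error)");
        for (boolean c1 : values) {
            for (boolean c2 : values) {
                boolean e1 = c1 && c2;
                boolean e2 = !c1 || !c2;
                System.out.printf("%-6b %-6b | %-10b %-9b%n", c1, c2, e1, e2);
            }
        }
        // Each row is turned into a concrete test case, e.g. C1=true, C2=false
        // corresponds to an input such as "A?" and the expected effect is an error.
    }
}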
A more detailed description of this method, with examples, can be found in [1] and [2].
Error Guessing
The design of test cases using the error guessing method is based on the tester's past experience and intuition. It is impossible to give a procedure for the error guessing approach, since it is a more intuitive and ad hoc process. The basic idea is to enumerate a list of possible errors and then write test cases based on this list.
State Transition Testing
State Transition Testing can be used for both object-oriented and procedural software development. The approach is based on the concepts of finite-state machines and states. It views the software under test in terms of its states, the transitions between states, and the inputs or events that trigger state changes.
A state is an internal configuration of a system. It is defined in terms of the values assumed at a particular time by the variables that characterize the system or component [1]. A finite-state machine is an abstract machine that can be represented by a state graph having a finite number of states and a finite number of transitions between states [1].
A State Transition Graph (STG) can be designed for the whole software system or for its specific modules. The STG consists of nodes (circles, ovals, rounded rectangles) that represent states, and arrows between nodes that indicate what input (event) will cause a transition between two linked states. The figure below shows a simple state transition graph [1].
Figure: A simple state transition graph
S1 and S2 are two states. The black dot is a pointer to the initial state from outside. The arrows represent the inputs/actions that cause the state transformations. It is useful to attach to the graph the system variables that are affected by state changes. The state transition graph can become very complex for large systems. One way to simplify it is to use a state table representation. A state table for the graph in the figure is shown in Table 2 [1]. The state table lists all inputs that cause state transitions; for each state and each input, the next state and the action taken are shown.
Table 2: A state table for the state transition graph
The STG should be prepared by developers as part of the requirements specification. Once the graph has been designed it must be reviewed. The review should ensure that:
– the proper number of states is represented;
– each state transition (input/output/action) is correct;
– equivalent states are identified;
– unreachable and dead states are identified.
Unreachable states are states that will never be reached with any input sequence and may indicate missing transitions. Dead states are states that, once entered, cannot be exited [1]. After the review the test cases should be planned. One practical approach is to test every possible state transition [4], as in the sketch below.
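A minimal sketch of transition testing over a hypothetical two-state machine: the state names S1 and S2 follow the figure, but the inputs, actions and the TinyMachine implementation are invented for illustration. Every entry of the state table is exercised and the observed next state is checked against the expected one.

import java.util.LinkedHashMap;
import java.util.Map;

class TinyMachine {                       // stand-in for the software under test
    private String state = "S1";
    String state() { return state; }
    void apply(String input) {
        if (state.equals("S1") && input.equals("go"))        state = "S2";
        else if (state.equals("S2") && input.equals("back")) state = "S1";
        // any other input leaves the state unchanged
    }
}

public class StateTransitionSketch {
    public static void main(String[] args) {
        // State table: "currentState input" -> expected next state.
        Map<String, String> table = new LinkedHashMap<>();
        table.put("S1 go",   "S2");
        table.put("S1 back", "S1");
        table.put("S2 go",   "S2");
        table.put("S2 back", "S1");

        // Test every possible state transition listed in the table.
        for (Map.Entry<String, String> row : table.entrySet()) {
            String[] parts = row.getKey().split(" ");
            TinyMachine m = new TinyMachine();
            if (parts[0].equals("S2")) m.apply("go");   // drive the machine into the start state
            m.apply(parts[1]);
            String expected = row.getValue();
            System.out.printf("%-8s expected=%s observed=%s %s%n",
                    row.getKey(), expected, m.state(),
                    expected.equals(m.state()) ? "PASS" : "FAIL");
        }
    }
}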
3. Automated Black-Box Testing
A few black-box methods were listed above. The problem with these methods is that the performance of testing often depends greatly on the experience and intuition of the tester. Therefore, the question is whether black-box testing can be automated to make testing more thorough and cost-effective. Furthermore, there is a need for black-box methods that can be used to test reusable software components before integration into a system under development. Reusable components can be independently developed or commercially purchased, and their quality can vary from one vendor to another.
A general strategy for automated black-box testing of software components was proposed in [5]. The strategy is based on a combination of three techniques: automatic generation of component test drivers, automatic generation of test data, and automatic or semi-automatic generation of wrappers serving the role of test oracles.
An approach that allows testers to take advantage of the combinatorial explosion of expected results was developed in [6]. It is possible to generate and check the correctness of a relatively small set of test cases by using software input/output relationships; the expected results can then be generated for a much larger combinatorial test data set, which allows fully automated execution.
In [3] Richard Torkar compared the main black-box methods in order to find their weaknesses and strengths. It was noted that methods such as Cause-Effect Graphing and Error Guessing are not suitable for automation. The difficulty in the case of Equivalence Class Partitioning would be to automate the partitioning of the input domain in a satisfactory way. Since the effectiveness of black-box techniques is closely connected to the experience of the tester, in our opinion they can be automated by using artificial intelligence methods such as neural networks and fuzzy logic. More information about research in this area can be found in [7].
4. Black-Box Testing and Software Security
At present there is pressure on software developers to produce high-quality software, and security aspects are highly related to software quality. Security testing should be integrated into the testing process, but in most cases this does not happen: usually the developers test the software only against functional requirements and do not consider security issues.
One way to check software for security vulnerabilities is to study known security problems in similar systems, generate test cases based on them, and then apply black-box techniques to run these test cases. The black-box methods play an important part in security testing. They allow the testers to look at the software under test from the side of attackers, who usually do not have any information about the attacked system and therefore treat it as a black box. Security testing is important for e-commerce software systems such as corporate web sites. Furthermore, since a buffer overflow is the result of badly constructed software, security testing can reveal such vulnerabilities, which is helpful for checking both local programs (games, calculators, office software, etc.) and remote software (e-mail servers, FTP, DNS and Internet web servers).
Conclusion
Software testing has become an essential part of the software development process. Well-designed test cases can significantly increase the number of found faults and errors. The black-box methods described above provide an effective way of testing with no knowledge of the inner structure of the software to be tested. Nevertheless, the quality of black-box testing depends in general on the experience and intuition of the tester, and it is therefore hard to automate this process. In spite of this, several attempts have been made to develop approaches for automated black-box testing. Black-box testing also helps developers and testers to check the software under test for security vulnerabilities. Security testing is a matter of importance for e-commerce applications, which are available on the Internet to a wide range of people, and for revealing buffer overflow vulnerabilities in different local and remote applications.
Surveying Foreign Literature Translation: Level Rods and Levels
Level Rods and Levels
There are many kinds of level rods available. Some are in one piece and others (for ease of transport) are either telescoping or hinged. Level rods are usually made of wood and are graduated from zero at the bottom. They may be either self-reading rods, which are read directly through the telescope, or target rods, where the rodman sets a sliding target on the rod and takes the reading directly. Most rods serve as either self-reading or target rods.
Among the several types of level rods available are the Philadelphia rod, the Chicago rod, and the Florida rod. The Philadelphia rod, the most common one, is made in two sections: it has a rear section that slides on the front section. For readings between 0 and 7 ft, the rear section is not extended; for readings between 7 and 13 ft, it is necessary to extend the rod. When the rod is extended, it is called a high rod. The Philadelphia rod is distinctly divided into feet, tenths, and hundredths by means of alternating black and white spaces painted on the rod. The Chicago rod is 12 ft long and is graduated in the same way as the Philadelphia rod, but it consists of three sliding sections. The Florida rod is 10 ft long and is graduated in white and red stripes, each stripe being 0.10 ft wide.
Also available, for ease of transportation, are tapes or ribbons of waterproofed fabric which are marked in the same way as a regular level rod and which can be attached to ordinary wood strips. Once a job is completed, the ribbon can be removed and rolled up, and the wood strip can be thrown away.
The instrumentman can clearly read these various level rods through his telescope for distances up to 200 or 300 ft, but for greater distances he must use a target. A target is a small red and white piece of metal attached to the rod. The target has a vernier that enables the rodman to take a reading to the nearest 0.001 ft. If the rodman is taking the readings with a target and the line of sight of the telescope is above the 7-ft mark, he obviously cannot take the reading directly in the normal fashion. Therefore, the back face of the rod is numbered downward from 7 to 13 ft. The target is set at a certain mark on the front face of the rod, and as the back section is pushed upward it runs under an index scale and a vernier which enable the rodman to take the reading on the front.
Before setting up the level, the instrumentman should give some thought to where he must stand in order to make his sights. In other words, he will consider how to place the tripod legs so that he can stand comfortably between them for the layout of the work that he has in mind. The tripod is desirably placed on solid ground where the instrument will not settle, as it most certainly will in muddy or swampy areas. It may be necessary to provide some special support for the instrument, such as stakes or a platform. The tripod legs should be well spread apart and adjusted so that the footplate under the leveling screws is approximately level. The instrumentman walks around the instrument and pushes each leg firmly into the ground. On hillsides it is usually convenient to place one leg uphill and two downhill.
After the instrument has been leveled as much as possible by adjusting the tripod legs, the telescope is turned over a pair of opposite leveling screws if a four-screw instrument is being used. Then the bubble is roughly centered by turning that pair of screws in opposite directions to each other. The bubble will move in the direction of the left thumb.
Next, the telescope is turned over the other pair of leveling screws and the bubble is again roughly centered. The telescope is turned back over the first pair and the bubble is again roughly centered, and so on. This process is repeated a few more times with increasing care until the bubble is centered with the telescope turned over either pair of screws. If the level is properly adjusted, the bubble should remain centered when the telescope is turned in any direction. It is to be expected that a slight maladjustment of the instrument will result in a slight movement of the bubble; however, the precision of the work should not be adversely affected if the bubble is centered each time a rod reading is taken.
The first step in leveling a three-screw instrument is to turn the telescope until the bubble tube is parallel to two of the screws. The bubble is centered by turning these two screws in opposite directions. Next, the telescope is turned so that the bubble tube is perpendicular to a line through those screws, and the bubble is centered by turning the third screw. These steps are repeated until the bubble stays centered when the telescope is turned back and forth.
Electronic Distance Measurement
A major advance in surveying in recent years has been the development of electronic distance-measuring instruments (EDMIs). These devices determine lengths based on phase changes that occur as electromagnetic energy of known wavelength travels from one end of a line to the other and returns.
The first EDM instrument was introduced in 1948 by the Swedish physicist Erik Bergstrand. His device, called the geodimeter (an acronym for geodetic distance meter), resulted from attempts to improve methods for measuring the velocity of light. The instrument transmitted visible light and was capable of accurately measuring distances up to about 25 mi (40 km) at night. In 1957 a second EDM apparatus, the tellurometer, designed by Dr. D. L. Wadley and introduced in South Africa, transmitted invisible microwaves and was capable of measuring distances up to 50 mi (80 km) or more, day or night.
The potential value of these early EDM models to the surveying profession was immediately recognized; however, they were expensive and not readily portable for field operations. Furthermore, measuring procedures were lengthy, and the mathematical reductions to obtain distances from observed values were difficult and time-consuming. In addition, the range of operation of the first geodimeter was limited in daytime use. Continued research and development have overcome all these deficiencies.
The chief advantages of electronic surveying are the speed and accuracy with which distances can be measured. If a line of sight is available, long or short lengths can be measured over bodies of water or terrain that is inaccessible for taping. With modern EDM equipment, distances are automatically displayed in digital form in feet or meters, and many instruments have built-in microcomputers that give results internally reduced to horizontal and vertical components. Their many significant advantages have revolutionized surveying procedures and gained worldwide acceptance.
The long-distance measurements possible with EDM equipment make the use of radios for communication an absolute necessity in modern practice.
One system for classifying EDMIs is by the wavelength of the transmitted electromagnetic energy; the following categories exist:
– Electro-optical instruments, which transmit either modulated laser or infrared light having wavelengths within or slightly beyond the visible region of the spectrum.
– Microwave equipment, which transmits microwaves with frequencies in the range of 3 to 35 GHz, corresponding to wavelengths of roughly 1.0 to 8.6 cm.
Another classification system for EDMIs is by operational range. It is rather subjective, but in general two divisions fit this system: short range and medium range. The short-range group includes those devices whose maximum measuring capability does not exceed about 5 km. Most equipment in this division is of the electro-optical type and uses infrared light. These instruments are small, portable, easy to operate, suitable for a wide variety of field surveying work, and used by many practitioners. Instruments in the medium-range group have measuring capabilities extending to about 100 km and are of either the electro-optical (using laser light) or the microwave type. Although frequently used in precise geodetic work, they are also suitable for land and engineering surveys. Longer-range devices are also available that can measure lines longer than 100 km, but they are not generally used in ordinary surveying work. Most operate by transmitting long radio waves, but some employ microwaves. They are used primarily in oceanographic and hydrographic surveying and in navigation.
In general, EDM equipment measures distances by comparing a line of unknown length to the known wavelength of modulated electromagnetic energy. This is similar to relating a needed distance to the calibrated length of a steel tape. Electromagnetic energy propagates through the atmosphere in accordance with the following equation:
V = f λ    (1)
where V is the velocity of electromagnetic energy, in meters per second; f is the modulated frequency of the energy, in hertz; and λ is the wavelength, in meters. With EDMIs the frequency can be precisely controlled, but the velocity varies with atmospheric temperature, pressure, and humidity. Thus the wavelength and frequency must vary in conformance with Eq. (1). For accurate electronic distance measurement, therefore, the atmosphere must be sampled and corrections made accordingly.
The generalized procedure for measuring a distance electronically is depicted in Fig. 8-1. An EDM device, centered by means of a plumb bob or optical plummet over station A, transmits a carrier signal of electromagnetic energy upon which a reference frequency has been superimposed or modulated. The signal is returned from station B to the receiver, so its travel path is double the slope distance AB. In Fig. 8-1 the modulated electromagnetic energy is represented by a series of sine waves having wavelength λ. Any position along a given wave can be specified by its phase angle, which is 0° at its beginning, 180° at the midpoint, and 360° at its end.
EDM devices used in surveying operate by measuring phase shift. In this procedure, the returned energy undergoes a complete 360° phase change for each even multiple of exactly one-half the wavelength separating the line's endpoints. If, therefore, the distance is precisely equal to a full multiple of the half-wavelength, the indicated phase change will be zero. In Fig. 8-1, for example, stations A and B are exactly eight half-wavelengths apart; hence, the phase change is zero.
When a line is not exactly an even multiple of the half-wavelength (the usual case), the fractional part is measured by the instrument as a nonzero phase angle, or phase change. If the precise length of a wave is known, the fractional part can be converted to distance. EDMIs directly resolve the fractional wavelength but do not count the full cycles undergone by the returned energy in traveling its double path. This ambiguity is resolved, however, by transmitting additional signals of lower frequency and longer wavelength.
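As a numeric illustration of the phase-shift relationship just described, the sketch below (all values invented) computes the wavelength from V = f λ and a slope distance from an assumed cycle count and measured phase change. The vacuum velocity of light is used for simplicity; in practice the velocity must be corrected for atmospheric temperature, pressure and humidity, and the integer cycle count comes from the additional lower-frequency signals.

public class PhaseShiftDistance {
    public static void main(String[] args) {
        // The double travel path satisfies 2 * L = (n + phi/360) * lambda,
        // so the one-way slope distance is L = (n + phi/360) * lambda / 2.
        double v = 299_792_458.0;       // propagation velocity in m/s (vacuum value, for illustration)
        double f = 14.985e6;            // assumed modulation frequency in Hz
        double lambda = v / f;          // wavelength from V = f * lambda (about 20 m)

        int n = 24;                     // assumed integer cycle count from the coarse measurement
        double phiDegrees = 115.2;      // assumed measured phase change in degrees

        double slopeDistance = (n + phiDegrees / 360.0) * lambda / 2.0;
        System.out.printf("lambda = %.3f m, slope distance AB = %.3f m%n", lambda, slopeDistance);
    }
}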
Complete Chinese-English Glossary of Measurement Tools
Theodolite : 经纬仪
Water level : 水位仪
Level ruler : 水平尺
Casing gradienter
Coating thickness measurer : 涂层测厚仪
Ultrasonic thickness measurer : 超声波测厚仪
Ultrasonic crack detector : 超声波裂纹测试仪
Digital thermometer : 数字温度计
Radiation thermometer : 辐射温度计
Gradient reader : 坡度读数器
Multimeter : 万用表
Megohmmeter : 兆欧表
Earthing resistance reader : 接地电阻读数表
Plug gauge : 圆柱塞规
Magnifying glass : 放大镜
Plummet : 铅锤
Profile projector : 投影仪
Pin gauge : 针规 (the difference from a plug gauge is not explained in the source)
Gauge block : 块规
Bore gauge : 百分表
Vernier caliper : 游标卡尺
Coordinate Measuring Machine (CMM) : 三次元
Pressure gauge : 压力计
Electroplating thickness tester : 电镀厚度测试仪
Twisting (torque) meter : 转(扭)力仪
Thread gauge : 螺纹规
Block gauge : 块规
Ring gauge : 环规
Torque meter : 力矩计
Plug gauge : 塞规
Height gauge : 高度仪
Feeler/clearance gauge : 塞尺/间隙规
Micrometer calipers : 千分卡尺
Go/no-go gauge : "过/不过"验规(通止规)
Vernier caliper : 游标卡尺
Digital caliper : 电子卡尺
Depth micrometer : 深度千分尺
Pin gauge : 销(针)规
Projector : 投影仪
Digital height gauge : 数字高度测量仪
Surface finish tester : 表面处理测试仪
Inside/outside micrometer : 内/外径千分尺
Rockwell/Vickers hardness tester (HRC/HV) : 洛(威)氏硬度仪
Thermometer : 温度计
Bore gauge : 孔规
Electronic/digital balance : 电子称
Coordinate measuring machine (CMM) : 三坐标测试仪
Multimeter : 万用表
Software Testing Chinese-English Foreign Literature Translation
STUDY PAPER ON TEST CASE GENERATION FOR GUI-BASED TESTING
ABSTRACT
With the advent of the WWW and the outburst in technology and software development, testing the software became a major concern. Due to the importance of the testing phase in the software development life cycle, testing has been divided into graphical user interface (GUI) based testing, logical testing, integration testing, etc. GUI testing has become very important, as the GUI provides a more sophisticated way to interact with the software. The complexity of testing GUIs has increased over time. Testing needs to be performed in a way that provides effectiveness, efficiency, an increased fault detection rate and good path coverage. To cover all use cases and to provide testing for all possible (success/failure) scenarios, the length of the test sequence is considered important. The intent of this paper is to study some techniques used for test case generation and the testing process for various GUI-based software applications.
KEYWORDS
GUI Testing, Model-Based Testing, Test Case, Automated Testing, Event Testing.
1. INTRODUCTION
A Graphical User Interface (GUI) is a program interface that takes advantage of the computer's graphics capabilities to make the program easier to use. A GUI provides the user with an immense way to interact with the software [1]. The most eminent and essential parts of the software being used today are graphical user interfaces [8], [9]. Even though GUIs provide the user with an easy way to use the software, they make the development process of the software tangled [2].
GUI testing is the process of testing the software's graphical user interface to safeguard that it meets its written specifications and to detect whether the application works functionally correctly. GUI testing involves performing some tasks and comparing the results with the expected output; this is performed using test cases. GUI testing can be performed either manually by humans or automatically by automated methods.
Manual testing is done by humans, such as testers or, in some cases, the developers themselves. It is often error prone, many test scenarios are likely to be left out, and it is very time consuming. Automated GUI testing means automating testing tasks that were previously done manually, using automated techniques and tools; it is more efficient, precise, reliable and cost effective.
A test case normally consists of an input, an output, the expected result and the actual result. More than one test case is required to test the full functionality of a GUI application. A collection of test cases is called a test suite, and a test suite contains detailed guidelines or objectives for each collection of test cases.
Model-Based Testing (MBT) is a quick and organized method which automates the testing process through automated test suite generation and execution techniques and tools [11]. Model-based testing uses directed graph models of the GUI called the event-interaction graph (EIG) [4] and the event semantic interaction graph (ESIG). The event-interaction graph is a refinement of the event-flow graph (EFG) [1]. The EIG contains the events that interact with the business logic of the GUI application.
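As an illustration of the graph models just mentioned, the sketch below shows one minimal way an event-interaction graph could be represented in code and walked to produce length-two event sequences (two-way interactions). The event names and edges are invented, and this is only a sketch under those assumptions, not the implementation used by any of the surveyed tools.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class EventInteractionGraphSketch {
    public static void main(String[] args) {
        // Hypothetical EIG: an edge (a, b) means event b can be executed after event a
        // and both interact with the business logic.
        Map<String, List<String>> eig = new LinkedHashMap<>();
        eig.put("Open",  List.of("Edit", "Close"));
        eig.put("Edit",  List.of("Save", "Close"));
        eig.put("Save",  List.of("Close"));
        eig.put("Close", List.of());

        // Two-way interaction coverage: every edge of the graph becomes one
        // length-two test sequence.
        List<String> seedSuite = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : eig.entrySet())
            for (String next : e.getValue())
                seedSuite.add(e.getKey() + " -> " + next);

        seedSuite.forEach(System.out::println);
    }
}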
Event Semantic Interaction (ESI) is used to identify sets of events that need to be tested together in multi-way interactions [3], and it is most useful when partitioning the events according to their functionality.
This paper is organized as follows: Section 2 provides some techniques and algorithms used to generate test cases; a method to repair infeasible test suites is described in Section 3; GUI testing of various types of software, or under different conditions, is elaborated in Section 4; Section 5 describes testing the GUI application while taking event context into consideration; and the last section concludes the paper.
2. TEST CASE GENERATION
2.1. Using GUI Run-Time State as Feedback
Xun Yuan and Atif M. Memon [3] used the GUI run-time state as feedback for test case generation; the feedback is obtained from the execution of a seed test suite on an Application Under Test (AUT). This feedback is used to generate additional test cases and to test interactions between GUI events in multiple ways. An Event Interaction Graph (EIG) is generated for the application to be tested, and seed test suites are generated for two-way interactions of GUI events. The test suites are then executed and the GUI's run-time state is recorded. This recorded run-time state is used to obtain the Event Semantic Interaction (ESI) relationships for the application, and these relationships are used to build the Event Semantic Interaction Graph (ESIG). Test cases are generated, and since ESIGs can manage test cases for more than two-way interactions, 2-, 3-, 4- and 5-way interactions are tested. The newly generated test cases are executed and additional faults are detected. These steps are shown in Figure 1. The fault detection effectiveness is higher than with two-way interactions alone because test cases are generated and executed for combinations of events in different execution orders.
There are also some disadvantages to this feedback mechanism. The method is designed with a focus on GUI applications; it would be different for applications that have intricate underlying business logic and a simple GUI. As multi-way interaction test cases are generated, a large number of test cases will be produced. Also, the feedback mechanism is not automated.
Figure 1. Test Case Generation Using GUI Runtime as Feedback
2.2. Using the Covering Array Technique
Xun Yuan et al. [4] proposed a new automated technique for test case generation using covering arrays (CA) for GUI testing. Usually 2-way coverage is used for testing, because as the number of events in a sequence increases the size of the test suite grows large, preventing the use of sequences longer than 3 or 4; but certain defects are not detected at this coverage strength. Using this technique, long test sequences are generated and systematically sampled at a particular coverage strength. By using covering arrays, t-way coverage strength is maintained, but test sequences of any length of at least t can be generated. A covering array, CA(N; t, k, v), is an N × k array on v symbols with the property that every N × t sub-array contains all ordered subsets of size t of the v symbols at least once.
As shown in Figure 2, the EIG model is created first; it is then partitioned into groups of interacting events, and constraints are identified and used to generate an abstract model for testing. Long test cases are generated using covering array sampling, and event sequences are generated and executed. If any event interaction is missed, test cases are regenerated and the steps repeated. The disadvantage is that event partitioning and constraint identification are done manually.
Figure 2. Test Generation Using Covering Array
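A small sketch of the pairwise (t = 2) idea behind covering arrays, using invented symbols: four rows are enough to cover every pair of values for every pair of three two-valued parameters, which is why covering arrays keep suites small while preserving t-way coverage. This is a brute-force coverage check for illustration, not a real covering-array generator.

import java.util.HashSet;
import java.util.Set;

public class PairwiseCoverageSketch {
    public static void main(String[] args) {
        // Three parameters with two values each; four rows instead of 2^3 = 8.
        String[][] rows = {
            {"A1", "B1", "C1"},
            {"A1", "B2", "C2"},
            {"A2", "B1", "C2"},
            {"A2", "B2", "C1"},
        };
        // Collect every (parameter pair, value pair) combination that appears.
        Set<String> covered = new HashSet<>();
        for (String[] row : rows)
            for (int i = 0; i < row.length; i++)
                for (int j = i + 1; j < row.length; j++)
                    covered.add(i + ":" + row[i] + "," + j + ":" + row[j]);
        // 3 parameter pairs * 2 * 2 value combinations = 12 pairs must be covered.
        System.out.println("covered pairs: " + covered.size() + " of 12");
    }
}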
2.3. Dynamic Adaptive Automated Test Generation
Xun Yuan et al. [5] suggested an algorithm to generate test suites with fewer infeasible test cases and higher event interaction coverage. Due to the dynamic, state-based nature of GUIs, it is necessary and important to generate test cases based on feedback from the execution of tests. The proposed framework uses techniques from combinatorial interaction testing to generate tests, and the basis for combinatorial interaction testing is a covering array. Initially smoke tests are generated and used as a seed to derive Event Semantic Interaction (ESI) relationships, from which the Event Semantic Interaction Graph is built. Iterative refinement is done through a genetic algorithm. An initial model of the GUI event interactions and an initial set of test sequences based on the model are generated. Then a batch of test cases is generated and executed, code coverage is determined, and unexecutable test cases are identified. Once the infeasible test cases are identified, they are removed, the model is updated, a new batch of test cases is generated, and the steps are repeated until all the uncovered ESI relationships are covered. This automated test case generation process is shown in Figure 3. The automated test generation also provides validation for GUIs. The disadvantages are that event contexts are not incorporated, and that coverage and test adequacy criteria are needed to check how these impact fault detection.
Figure 3. Automated Test Case Generation
3. REPAIRING TEST SUITES
Si Huang et al. [6] proposed a method to repair GUI test suites using a genetic algorithm. New, feasible test cases are generated, and the genetic algorithm is used to evolve test cases that provide additional test suite coverage by removing infeasible test cases and inserting new feasible ones. A framework is used to automatically repair infeasible test cases. A graph model such as an EFG, EIG or ESIG, together with the ripped GUI structure, is used as input. The main controller passes these inputs to the covering array generator along with the strength of testing. The covering array generator produces an initial set of event sequences; the covering array information is sent to the test case assembler, which assembles it into concrete test cases. These are passed back to the controller and the test suite repair phase begins. Feasible test cases are returned by the framework once the repair phase is complete. A genetic algorithm is used as the repair algorithm: an initial set of test cases is executed and, if there are no infeasible test cases, the process is done. If infeasible test cases are present, the repair phase begins. A certain number of iterations is set, based on an estimate of how large the repaired test suite will be allowed to grow, and for each iteration the genetic algorithm is executed. The algorithm adds the best test case to the final test suite, and stopping criteria are used to end the iterations.
The advantages are that it generates smaller test suites with better coverage of the longer test sequences and that it provides feasible test cases. But it is not scalable to larger applications, as the execution time is high. And since GUI ripping is used, programs that contain event dependencies may not be discovered.
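A deliberately simplified sketch of the generate / execute / remove-infeasible / regenerate loop that Sections 2.3 and 3 rely on; it is not the published genetic algorithm. Feasibility is decided here by an invented rule ("Close" must not precede "Save"), whereas in the real frameworks it comes from actually executing each test case on the application under test.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class IterativeRefinementSketch {
    // Invented feasibility rule: sequences where "Close" occurs before "Save" are infeasible.
    static boolean isFeasible(List<String> tc) {
        return !(tc.contains("Close") && tc.indexOf("Close") < tc.indexOf("Save"));
    }

    public static void main(String[] args) {
        List<List<String>> batch = new ArrayList<>(List.of(
            List.of("Open", "Edit", "Save", "Close"),
            List.of("Open", "Close", "Save", "Edit"),   // infeasible under the rule above
            List.of("Edit", "Save", "Open", "Close")));

        List<List<String>> finalSuite = new ArrayList<>();
        for (int iteration = 0; iteration < 3 && !batch.isEmpty(); iteration++) {
            List<List<String>> next = new ArrayList<>();
            for (List<String> tc : batch) {
                if (isFeasible(tc)) {
                    finalSuite.add(tc);                  // keep executable test cases
                } else {
                    List<String> repaired = new ArrayList<>(tc);
                    Collections.swap(repaired, tc.indexOf("Close"), tc.indexOf("Save"));
                    next.add(repaired);                  // candidate for the next batch
                }
            }
            batch = next;                                // regenerate and repeat
        }
        finalSuite.forEach(System.out::println);
    }
}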
4. GUI TESTING ON VARIOUS APPLICATIONS
4.1. Industrial Graphical User Interface Systems
Penelope Brooks et al. [7] developed GUI testing methods relevant to industry applications that improve the overall quality of GUI testing by characterizing GUI systems using data collected from detected defects, in order to assist testers and researchers in developing more effective test strategies. In this method, defects are classified based on Beizer's defect taxonomy. Eight levels of categories are present, each describing specific defects: functional defects, functionality as implemented, structural defects, data defects, implementation defects, integration defects, system defects and test defects. The categories can be modified and extended as needed. If a failure occurs, it is analyzed to determine which defect category it belongs to, and this classification is used to design a better test oracle to detect such failures; better test case generation algorithms and better fault seeding models may also be designed.
The Goal Question Metric (GQM) paradigm is used to analyze the test cases, defects and source metrics from the tester's or researcher's point of view in the context of industry-developed GUI software. The limitation is that the GUI systems are characterized based on system events only; user interactions are not included.
4.2. Community-Driven Open Source GUI Applications
Qing Xie and Atif M. Memon [8] presented a new approach for continuous integration testing of web-based, community-driven, GUI-based Open Source Software (OSS). Because many developers are involved in OSS and make changes to the code through the WWW, it is prone to more defects and the changes keep occurring. Therefore, three nested techniques, or three concentric loops, are used to automate model-based testing of evolving GUI-based OSS. Crash testing, the innermost technique, operates on each code check-in of the GUI software; it is executed frequently, with automated GUI testing intervention, and also runs quickly. It reports software crashes back to the developer who checked in the code. Smoke testing, the second technique, operates on each day's GUI build and performs functional reference testing of the newly integrated version of the GUI, using the previously tested version as a baseline. Comprehensive testing, the outermost technique, conducts detailed comprehensive GUI integration testing of a major GUI release and is executed after a major version of the GUI is available; problems are reported to all the developers who took part in the development of that particular version. Flaws that persist across multiple versions of GUI-based OSS are detected by this approach fully automatically, and it provides feedback. The limitation is that the interactions between the three loops are not defined.
4.3. Continuously Evolving GUI-Based Software Applications
Qing Xie and Atif M. Memon [9] developed a quality assurance mechanism to manage the quality of continuously evolving software by presenting a new type of GUI testing, called crash testing, to help rapidly test the GUI as it evolves. Two levels of crash testing are described. In immediate feedback-based crash testing, a developer indicates that a GUI bug was fixed in response to a previously reported crash; only the selected crash test cases are re-run and the developer is notified of the results within seconds. If any code changes occur, new crash test cases are generated and executed on the GUI. The test cases generated can be produced and executed quickly and cover all GUI functionalities.
Once the EIG is obtained, a boolean flag is associated with each edge in the graph. During crash testing, once test cases that cover a particular edge are generated, its flag is set. If any changes occur, the boolean flag for each edge is retained. Test cases are executed, and crashes during test execution are used to identify serious problems in the software. The crash testing process is shown in Figure 4. The effectiveness of a crash test is measured by the total number of test cases needed to detect the maximum number of faults; significantly, test suite size has no impact on the number of bugs revealed.
This crash testing technique is used to maintain the quality of the GUI application and also helps in rapidly testing the application. The drawbacks are that the technique is used only for testing GUI applications and cannot be used for web applications, and that fault injection or seeding, which is normally used to evaluate the efficiency of a method, is not applied here.
Figure 4. Crash Testing Process
4.4. Rapidly Evolving Software
Atif M. Memon et al. [10] made several contributions in the area of GUI smoke testing in terms of GUI smoke test suites, their size, their fault detection ability and the test oracle. The Daily Automated Regression Tester (DART) framework is used to automate GUI smoke testing. Developers work on the code during the day, and DART automatically launches the Application Under Test (AUT) at night, builds it and runs the GUI smoke tests; coverage and error reports are mailed to the developers. In DART the whole process is automated: the AUT's GUI structure is analyzed with a GUI ripper, and test cases, test oracles, test case execution and examination of the results are handled automatically. Fault seeding is used to evaluate the fault detection techniques used; an adequate number of faults of each fault type is seeded fairly.
The disadvantages are that some parts of the code are missed by the smoke tests, some of the bugs reported by DART are false positives, the overall effectiveness of DART depends on the GUI ripper's capabilities, it is not available for industry-based application testing, and faults that are not manifested on the GUI go undetected.
5. INCORPORATING EVENT CONTEXT
Xun Yuan et al. [1] developed a new criterion for GUI testing using a combinatorial interaction testing technique. The main motivation for using combinatorial interaction testing is to incorporate context; it also considers event combinations and sequence length and includes all possible event positions. Graph models are used, and a covering array is used to generate the test cases that are the basis for combinatorial interaction testing.
A tool called GUITAR (GUI Testing Framework) is used for testing; it provides functionality to generate test cases, execute test cases, verify correctness and obtain coverage reports. Initially, using the GUI ripper, a GUI application is converted into an event graph; the events are then grouped by functionality and constraints are identified. A covering array is generated and test sequences are produced. Test cases are generated and executed. Finally, coverage is computed and a test adequacy criterion is analyzed.
The advantages are that contexts are incorporated and that more faults are detected compared with the previous techniques. The disadvantages are that infeasible test cases leave some test cases unexecutable, and that grouping events and identifying constraints are not automated.
Figure 5. Testing Process
6. CONCLUSIONS
In this paper, some of the various test case generation methods, and the various types of GUI testing adapted to different GUI applications and techniques, were studied. Different approaches are used under different testing environments. This study helps in choosing a test case generation technique based on the requirements of the testing, and it also helps in choosing the type of GUI test to perform based on the application type, such as open source software, industrial software, or software into which changes are checked in rapidly and continuously.
REFERENCES
[1] Xun Yuan, Myra B. Cohen, and Atif M. Memon, (2010) "GUI Interaction Testing: Incorporating Event Context", IEEE Transactions on Software Engineering, vol. 99.
[2] A. M. Memon, M. E. Pollack, and M. L. Soffa, (2001) "Hierarchical GUI test case generation using automated planning", IEEE Transactions on Software Engineering, vol. 27, no. 2, pp. 144-155.
[3] X. Yuan and A. M. Memon, (2007) "Using GUI run-time state as feedback to generate test cases", in International Conference on Software Engineering (ICSE), pp. 396-405.
[4] X. Yuan, M. Cohen, and A. M. Memon, (2007) "Covering array sampling of input event sequences for automated GUI testing", in International Conference on Automated Software Engineering (ASE), pp. 405-408.
[5] X. Yuan, M. Cohen, and A. M. Memon, (2009) "Towards dynamic adaptive automated test generation for graphical user interfaces", in First International Workshop on TESTing Techniques & Experimentation Benchmarks for Event-Driven Software (TESTBEDS), pp. 1-4.
[6] Si Huang, Myra Cohen, and Atif M. Memon, (2010) "Repairing GUI Test Suites Using a Genetic Algorithm", in Proceedings of the 3rd IEEE International Conference on Software Testing, Verification and Validation (ICST).
[7] P. Brooks, B. Robinson, and A. M. Memon, (2009) "An initial characterization of industrial graphical user interface systems", in ICST 2009: Proceedings of the 2nd IEEE International Conference on Software Testing, Verification and Validation, Washington, DC, USA: IEEE Computer Society.
[8] Q. Xie and A. M. Memon, (2006) "Model-based testing of community driven open-source GUI applications", in International Conference on Software Maintenance (ICSM), pp. 145-154.
[9] Q. Xie and A. M. Memon, (2005) "Rapid 'crash testing' for continuously evolving GUI-based software applications", in International Conference on Software Maintenance (ICSM), pp. 473-482.
[10] A. M. Memon and Q. Xie, (2005) "Studying the fault-detection effectiveness of GUI test cases for rapidly evolving software", IEEE Transactions on Software Engineering, vol. 31, no. 10, pp. 884-896.
[11] U. Farooq, C. P. Lam, and H. Li, (2008) "Towards automated test sequence generation", in Australian Software Engineering Conference, pp. 441-450.
LabVIEW Graduation Thesis: Chinese-English Foreign Literature Translation
Virtual Instruments Based on Reconfigurable Logic
The emergence of virtual instruments is a revolution in the history of measuring-instrument development. A virtual instrument makes full use of the latest computer technology to implement and extend instrument functions: the computer screen can easily simulate the adjustment and control panels of most instruments, test results can be expressed and output in whatever form is required, and computer software carries out most of the signal analysis and processing and performs most of the control and measurement functions. Through application software, the user combines a general-purpose computer with functional hardware modules and operates the computer through a friendly interface, as if operating a single instrument of the user's own definition and design, to accomplish acquisition, analysis, judgment, control, display and data storage of the measured quantities.
Advantages of virtual instruments over traditional instruments:
(1) By incorporating the computer's powerful hardware resources, they break through the limitations of traditional instruments in data processing, display and storage, and greatly enhance the capabilities of traditional instruments.
(2) By exploiting the computer's rich software resources, part of the instrument hardware is realized in software, which saves material resources and increases system flexibility. Through software technology and the corresponding numerical algorithms, test data can be analysed and processed in various ways, directly and in real time; through graphical user interface technology, a truly friendly interface and genuine human-machine interaction are achieved.
(3) Both the hardware and the software of a virtual instrument are open, modular, reusable and interchangeable. Therefore, users can select products from different vendors according to their own needs, which makes instrument-system development more flexible and more efficient and shortens the time needed to assemble a system.
A traditional instrument, by contrast, is a specific system built on fixed hardware and software resources, which means that the system's functions and applications are defined by the manufacturer.
Measurement Systems Analysis Procedure (Chinese-English Version)
4.1 MSA: the abbreviation for Measurement Systems Analysis (测量系统分析).
4.2 Measurement system (测量系统): the collection of operations, procedures, gauges, equipment, software and operators used to assign a value to the characteristic being measured; the entire process used to obtain a measurement result.
Gauge (量具): any device used to obtain a measurement result; the term is generally used to refer specifically to devices used on the shop floor, including go/no-go gauges.
4.7 Stability (稳定性): the total variation in the measurements obtained when the measurement system measures a single characteristic of the same master (standard) or the same part over an extended period of time.
Linearity (线性): the difference in accuracy (the difference between the average of the measured values and the reference value) over the expected operating range of the measurement equipment.
Chinese-English Foreign Literature Translation (the document contains the English original and the Chinese translation)
A Systematic Review on Software Measurement Programs
Touseef Tahir, Ali Jafar
Department of Computer Science, COMSATS Institute of Information Technology, Lahore, Pakistan; Blekinge Institute of Technology, SE 371 79, Karlskrona, Sweden
Abstract
Most measurement programs fail to achieve their targeted goals. This paper presents the outcomes of a systematic review on software measurement programs. The aim of the study was to analyse applications, success factors, existing measurement models/frameworks and tools. 1579 research studies were reviewed initially, and on the basis of predefined criteria 28 studies were chosen for analysis. The selection of research studies was based on the structured procedure of a systematic review. The outcome of this study consists of observations and suggestions based on the analysis of the selected studies.
Keywords: Measurement Program; Software; Measurement Models; Measurement Framework
1. INTRODUCTION
Software measurement programs (MPs) help in both the management and the implementation of software processes at each level of the organization. In order to obtain accurate results, an MP manages the flow of data within the processes. Software products are becoming larger and more complex, and managing such software projects requires accurate and precise estimations that help to provide a quality product to the customer. There should be technological support and a well-defined, structured approach to gather and process data continuously throughout software development. This process is called the measurement process; it is used in MPs, which are essentially sets of procedures and guidelines for gathering, calculating and evaluating measures.
According to the literature, software MPs often fail after implementation in a software development process: 50-80% of MPs fail within a year, for different reasons. The most important reasons for the failure of MPs include the lack of appropriate knowledge needed to obtain the required measures and/or goals that are too abstract. The failure of a software MP depends on different factors related to the product, the process and the resources. Software MPs also often fail because they require expert judgment for selecting an appropriate number of measures in relation to the organizational goals.
There is a need to improve the measurement process when there is a difference between the expected outcome of the process and its actual performance. In recent years, different models and frameworks have been developed for measuring different attributes of the software process. Assessment of an MP can be done from different views, i.e. process, product, resource, value-based, context and social.
In recent years, MPs have supported a quantitative approach to development processes and have also been used to drive software process improvement. Software MPs give a competitive advantage over those who prefer traditional approaches. These programs have become an important part of the software development life cycle (SDLC), like other processes such as design, testing and implementation.
Measurement activities are carried out throughout the software life cycle of a project. Implementing an MP is a well-defined, structured approach to gathering and processing data continuously throughout the software development life cycle. The main purpose of software measures is to extract useful information from the raw data, and MPs are used to apply these software measures to management and technical aspects.
Software measures are used to identify best practices, e.g. for software process improvement, for estimating and planning projects effectively, and for managing budgets effectively, and they also support comparison of current practices and tools. Software MPs provide a basis for industry comparison and facilitate effective communication between developer and customer. MPs start with the definition of goals and their respective questions, which leads to the formulation of metrics. At the start, an organization needs to set proper objectives for what it is going to do, and then start measuring.
This paper presents a systematic review (SR) on MPs, their applications, measurement models/frameworks and tools. Section II presents the SR process definition and research questions. Section III presents the SR planning process. Section IV presents the selected primary studies. Section V presents the reporting process of the SR. Section VI presents analysis and discussion. Section VII presents the implementation of the SR analysis. Section VIII presents conclusions.
2. SYSTEMATIC REVIEW
The purpose of a systematic review is to provide a more structured way of assessing, identifying and interpreting research relevant to a specific research question. It has three phases, namely "planning the review", "conducting the review" and "reporting the review". In the planning phase, it is defined how the literature review will be conducted in a systematic manner, and a review protocol is developed which acts as a search guide during the systematic literature review. In the second step, the systematic literature review is conducted, which involves primary studies, quality assessment, data extraction and data synthesis. In the last step, the literature review is reported.
A systematic review is an iterative process rather than a sequential one, because it involves a number of iterations. The inclusion and exclusion criteria are one example: when the actual review is conducted, several primary studies are included and excluded.
A. Research Questions
The following research questions are answered by the systematic review:
(1) RQ_1a: How do organizations use software measurement programs?
(2) RQ_1b: What are the success factors in software measurement programs?
(3) RQ_2: What are the models/frameworks and tools developed for measurement programs?
3. PLANNING THE REVIEW
The review is planned according to published guidelines.
B. Review Protocol
The review protocol consists of the inclusion/exclusion criteria, the search keywords, the databases to be searched, a quality assessment checklist, data synthesis, a data extraction form and the research questions. The review protocol was developed to identify the current state of the art in MPs and goal definition from 01 January 1997 to 01 June 2011.
C. Search Strategy
Appropriate search keywords are very important for an effective search process. The keywords were identified by following published guidelines.
This was done in the following steps:
(1) Identification of search keywords by analysing the context, objectives and relevant areas of the research questions.
(2) The searched resources were analysed for further identification of keywords, including the keyword sections of the research resources.
(3) Identification of synonyms, alternatives and hypernyms for each keyword.
(4) Boolean OR was used to combine synonyms, alternatives and hypernyms.
(5) Boolean AND was used to combine the groups into a search string and make the search precise.
The resulting search string is given below:
(metric OR measure OR measurement) AND (program OR plan OR process) AND (success OR important OR successful OR success story OR good practices OR practices) AND (factor OR feature OR variable) AND (software OR software application OR software development life cycle OR software development process OR software system OR software industry) AND (models OR guidelines OR practices) AND (framework OR structure OR infrastructure) AND (tool OR instrument OR mechanism OR device).
D. Primary Search Process
The search process is divided into two steps: primary and secondary search. The primary search process consisted of searching online research databases, search engines, e-journals, conference proceedings and grey literature using the set of keywords in the resulting search string. In the first step 1579 articles were scanned and 69 articles were selected on the basis of title and abstract. In the second step the selected articles were reviewed completely, and the final set after this step consists of 28 articles.
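The five keyword-combination steps above can also be expressed programmatically. The sketch below is only an illustration of how such a Boolean query string is assembled (synonym groups joined with OR, groups joined with AND); the keyword groups are abbreviated, not the full sets used in the review.

import java.util.List;
import java.util.stream.Collectors;

public class SearchStringBuilder {
    // Join the synonyms of one keyword group with OR and wrap them in parentheses.
    static String orGroup(List<String> synonyms) {
        return "(" + String.join(" OR ", synonyms) + ")";
    }

    public static void main(String[] args) {
        List<List<String>> groups = List.of(
            List.of("metric", "measure", "measurement"),
            List.of("program", "plan", "process"),
            List.of("success", "successful", "good practices"),
            List.of("software", "software development process"),
            List.of("model", "framework", "tool"));

        // Combine the groups with AND to form the final search string.
        String query = groups.stream()
                             .map(SearchStringBuilder::orGroup)
                             .collect(Collectors.joining(" AND "));
        System.out.println(query);
    }
}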