Foreign Literature Translation: Value-Based Software Test Management
Software Testing Management System Template (English)
1. Introduction
Software testing is a crucial part of the software development process, ensuring the quality and reliability of the final product. An efficient and effective software testing management system is essential for the successful completion of any software project.
This document outlines the software testing management system that will be used for all software testing activities within the organization. It describes the processes, procedures, roles, and responsibilities that will be followed to ensure that all software testing is carried out in a systematic and organized manner.

2. Objectives
The objectives of the software testing management system are as follows:
- To ensure that software testing is carried out in a systematic and organized manner.
- To define the roles and responsibilities of the testing team.
- To establish processes and procedures for conducting all types of software testing activities.
- To ensure that the software testing activities are aligned with the overall software development process.
- To provide a framework for continuous improvement of the software testing process.

3. Scope
The software testing management system will cover all software testing activities within the organization. This includes, but is not limited to, the following:
- Functional testing
- Non-functional testing (performance, security, usability, etc.)
- Integration testing
- System testing
- User acceptance testing

4. Roles and Responsibilities
4.1 Test Manager
The test manager is responsible for overseeing all software testing activities. Their responsibilities include:
- Defining the overall testing strategy and approach for each project.
- Planning, directing, and coordinating all testing activities.
- Ensuring that the testing activities are aligned with the software development process.
- Monitoring and controlling the progress of the testing activities.
- Reporting on the status and results of the testing activities to project stakeholders.

4.2 Test Analyst/Test Engineer
The test analyst/test engineer is responsible for carrying out the testing activities. Their responsibilities include:
- Analyzing the requirements to identify the test scenarios and test cases.
- Developing test cases based on the requirements and test scenarios.
- Executing the test cases and recording the results.
- Analyzing and reporting any defects found during testing.

4.3 Test Coordinator
The test coordinator is responsible for coordinating the testing activities. Their responsibilities include:
- Scheduling and coordinating the activities of the test team.
- Establishing and maintaining the test environment.
- Managing the test data and test tools.

5. Testing Process
5.1 Test Planning
The test planning phase involves defining the testing strategy and approach for the project, as well as identifying the scope and objectives of the testing activities. The test manager is responsible for developing the test plan, which should include the following:
- Objectives and scope of the testing
- Testing strategy and approach
- Roles and responsibilities of the testing team
- Test schedule and resource requirements
- Test environment and test data requirements
- Risk and issue management plan
- Test reporting and documentation requirements
The test plan should be reviewed and approved by the project stakeholders before testing activities commence.

5.2 Test Design
The test design phase involves identifying the test scenarios and test cases based on the requirements.
The test analyst/test engineer is responsible for developing the test cases, which should be based on the following:
- Functional requirements
- Non-functional requirements
- Use cases and user stories
- Business processes
The test cases should be reviewed and approved by the test manager before testing activities commence.

5.3 Test Execution
The test execution phase involves executing the test cases and recording the results. The test analyst/test engineer is responsible for executing the test cases, which should include the following:
- Defining the test execution strategy and approach
- Executing the test cases based on the test schedule
- Recording the test results and any defects found during testing
- Analyzing the test results and providing feedback to the project stakeholders

5.4 Test Reporting
The test reporting phase involves reporting on the status and results of the testing activities. The test manager is responsible for preparing the test reports, which should include the following:
- Status of the testing activities
- Test coverage and test results
- Defect tracking and resolution
- Recommendations for improvement
The test reports should be reviewed and approved by the project stakeholders.

6. Testing Tools and Environment
The software testing management system will define the testing tools and environment that will be used for all testing activities. This may include the following:
- Test management tools
- Test automation tools
- Defect tracking tools
- Test environments (e.g., hardware, software, network)
The test coordinator is responsible for establishing and maintaining the test tools and environment.

7. Training and Development
The software testing management system will provide training and development opportunities for the testing team to improve their skills and knowledge in software testing. This may include the following:
- Training on testing methodologies and best practices
- Training on testing tools and technology
- Certification programs for testing professionals
The test manager is responsible for identifying the training needs of the testing team and providing appropriate training and development opportunities.

8. Quality Assurance
The software testing management system will define the quality assurance activities that will be carried out to ensure the quality of the testing process. This may include the following:
- Review and audit of the testing activities
- Process improvement initiatives
- Compliance with industry standards and best practices
The test manager is responsible for establishing and maintaining the quality assurance activities within the testing process.

9. Continuous Improvement
The software testing management system will provide a framework for continuous improvement of the testing process. This may include the following:
- Periodic review and analysis of the testing process
- Identification of improvement opportunities
- Implementation of improvement initiatives
The test manager is responsible for ensuring that continuous improvement activities are carried out within the testing process.

10. Conclusion
The software testing management system outlined in this document provides a systematic and organized framework for conducting all software testing activities within the organization. By following the processes, procedures, roles, and responsibilities described in this document, the testing team can ensure that software testing is carried out in an efficient and effective manner, leading to the successful completion of software projects.
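Sections 5.3 and 5.4 call for recording test execution results and summarizing them in test reports (coverage, results, defects). As a minimal illustrative sketch of that bookkeeping, with class and field names invented for the example rather than taken from this template, results can be captured as simple records and aggregated into the figures a report needs:

```java
import java.util.List;

public class TestReporting {
    enum Status { PASSED, FAILED, BLOCKED }

    // One executed test case: identifier, outcome, and any defect raised.
    record TestResult(String testCaseId, Status status, String defectId) {}

    // Aggregate execution results into the figures a test report typically needs.
    static String summarize(List<TestResult> results) {
        long passed = results.stream().filter(r -> r.status() == Status.PASSED).count();
        long failed = results.stream().filter(r -> r.status() == Status.FAILED).count();
        long defects = results.stream().filter(r -> r.defectId() != null).count();
        double passRate = results.isEmpty() ? 0 : 100.0 * passed / results.size();
        return String.format("Executed: %d, Passed: %d, Failed: %d, Defects: %d, Pass rate: %.1f%%",
                results.size(), passed, failed, defects, passRate);
    }

    public static void main(String[] args) {
        var results = List.of(
                new TestResult("TC-001", Status.PASSED, null),
                new TestResult("TC-002", Status.FAILED, "DEF-17"),
                new TestResult("TC-003", Status.PASSED, null));
        System.out.println(summarize(results));
    }
}
```

In practice a test management tool (section 6) would hold these records, but the same totals drive its status and coverage reports.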
Software Quality Assurance (Translated Foreign Literature)
This section paraphrases a translated foreign-language document on software quality assurance; its main content is as follows:
Overview
The document discusses the importance of software quality assurance and some simple strategies for improving software quality.
The Importance of Software Quality Assurance
Software quality assurance is essential for ensuring that software runs correctly and meets user requirements.
Poor software quality can lead to functional failures, data loss, or security vulnerabilities, causing users a negative experience and real losses.
Simple Strategies for Improving Software Quality
The document proposes several simple strategies for improving software quality, including:
1. Testing: discover errors and defects in the software through comprehensive testing (a minimal unit-test sketch in Java follows this list).
Testing includes unit testing, integration testing, and system testing.
2. Code review: review the software's source code to uncover potential problems and improve its readability.
3. Continuous integration: build, test, and release the software frequently through continuous integration, so that problems are found and fixed quickly.
4. User feedback: actively collect user feedback and suggestions to improve the software's functionality and performance.
5. Documentation: provide clear, thorough documentation to help users understand the software's features and how to use them.
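As promised in item 1, here is a minimal unit-test sketch using JUnit 5; the `Discount` class and its business rule are invented purely to show the shape of such a test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class Discount {
    // Hypothetical rule: 10% off orders of 100 or more; negative amounts are rejected.
    static double apply(double amount) {
        if (amount < 0) throw new IllegalArgumentException("amount must be non-negative");
        return amount >= 100 ? amount * 0.9 : amount;
    }
}

class DiscountTest {
    @Test
    void ordersUnderThresholdAreUnchanged() {
        assertEquals(99.0, Discount.apply(99.0));
    }

    @Test
    void ordersAtThresholdGetTenPercentOff() {
        assertEquals(90.0, Discount.apply(100.0));
    }

    @Test
    void negativeAmountsAreRejected() {
        assertThrows(IllegalArgumentException.class, () -> Discount.apply(-1.0));
    }
}
```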
Conclusion
Software quality assurance is a key means of ensuring software quality and user satisfaction.
Adopting simple strategies such as comprehensive testing, code review, and continuous integration can improve software quality and reduce the occurrence of potential problems.
In addition, actively collecting user feedback and providing thorough documentation are also key factors in improving software quality.
Note: the content of this document is for reference only; parts of it cannot be verified or have unclear sources, so it should be used with caution.
Foreign Literature Translation: Cost Estimation of Software Testing Outsourcing Projects
文献信息文献标题:Considerations for Cost Estimation of Software Testing Outsourcing Projects(软件测试外包项目成本估算的思考)文献作者及出处:Ismail F F, Razali R, Mansor Z.Considerations for cost estimation of software testing outsourcing projects[J]. International Journal on Advanced Science, Engineering and Information Technology, 2019, 9(1): 142-152.字数统计:英文7816单词,42973字符;中文12514汉字外文文献Considerations for Cost Estimation of Software TestingOutsourcing ProjectsAbstract— Software testing outsourcing appears to be the best alternative to acquire better software quality with competent ratification by extrinsic parties who have the capability to do it. Through the effort, organizations are peeking to promising benefits constitute in it such as current testing technology, experts, an abridgment of the project’s duration and more concentration on the main organisation’s activity. Along with these benefits, one important reason that encourages the decision is optimization of cost expenditure, which the strategy is perceived as a good move for a competitive organization. However, implementing such preference eventually results in a different outcome. Organizations have to bear the higher cost and incur losses of cost deviation from the expected estimation. The conflicting between cost and benefits raises an important concern of striving better cost estimation for such projects. This paper aims to address this interest by analyzing the existing literature in order to identify the contributing factors towards better cost estimation for software testing outsourcing project-context. The analysis is done using the content analysis method. The results could be divided into two categories; which are the cost items andcontributing factors. Cost items consist of direct cost and indirect cost, which refers to the expenses for the project. While the contributing factors consist of people and environment, which are needed to produce accurate cost estimation. The findings provide an insight to excogitate attentively the essentials in the endeavor of improving the exactitude of cost estimation for software testing outsourcing project.Keywords—software testing project; outsourcing project; cost estimation; project management; software cost management.I.INTRODUCTIONThe interest in software testing outsourcing implementation has recently become a trend. The activity which is defined as the delegation of testing part to external parties who provide the agreed services typically aims for cost savings advantage while achieving a quality software product within the estimated schedule.There is a severe demand of ensuring software testing to be executed properly, as the testing process has a significant value in determining the software quality, mainly when the software is widely used in critical business and real-time applications. The execution of testing becomes complicated, requiring more pre-requisites of quality confirmation such as current testing technology and testing experts’ opinions. For these reasons, the process seems eligible to be conducted in an outsource setting.The increasing dependency on the quality of the testing process in an outsourcing environment calls for attention to the aspect of cost estimation. In a highly competitive economy situation, the cost is one of the important considerations for an organization when software is outsourced. It plays a vital role in balancing the equation with the projects’ resources, schedule, and budget especially within the distributed environment. 
Proper cost estimation is useful in making strategic decisions and planning for the organization.However, while estimation is an important activity, it is also considered to be a challenging one. Organizations often encounter insufficient data in doing the estimation, causing them to leave significant major features out of the baseline estimates. Previous cost estimation guides also do not discuss the costs according tothe context of software testing outsourcing. Many of the work appear independently either in cost estimation of testing part or outsourcing projects. The non-context guidance resulted in the involving parties to not well acquainted on the precaution of the subject matter. This consequently leads to inaccurate estimates of projects costs resulting in significant losses such as lower expenses compared to the estimation, which actually could be saved, or unexpected higher outlays. Based on these issues, this paper intends to highlight the essential elements to be considered for cost estimation according to the context of software testing outsourcing projects. While there are some inescapable uncertainties, targeting an improved and definite cost estimation is positively achievable through the effort of recognizing those overlooked elements.This paper is organised as follows: Section 2 discusses the related work of software cost estimation. Section 3 presents the methodology used for this paper. Section 4 discusses the findings, and section 5 concludes the work and suggests some future work in the area.II.MATERIAL AND METHODFor the Materials section, this paper discussed the literature background of cost estimation issues based on two perspectives; the first is software testing and the second is globally distributed software projects. For the Method section, the later part explained the methodology used in the research.A.Cost Estimation of Software Testing ProjectPrevious research on software cost estimation of software testing projects has proposed a variety of cost models, techniques, management strategies and best practices for improving estimations. The estimation of software testing has often been regarded as a part of the software development process, and only a few types of research treat testing as a separate entity. Since the nature of testing is distinctly different from other processes in software development, the estimation should be made specifically according to the testing context.The test case complexity is one of the important elements that need to beincluded in the estimation. The test case contains valuable test information, and it will be used throughout the project. The complexity of test cases can significantly influence the time spent on testing activities which can affect the cost required. Besides that, the type of tests also needs to be considered in the cost estimation. Different test types require different necessities depending on the tests specific objectives. Hence, the cost charged against the test type is different according to their needs.The frequency of changes committed against software also influences the cost estimation. As defects can be present from various sources such as requirement, design, code, documents or bad fix, the effort in handling those defects is different depending on the necessity of the changes. More cost is expected to occur if the tests contain a high level of change frequency.In conducting the testing execution or managing the project, the team size that is involved in those activities plays a part in the estimation. 
The team size reflects the number of people which makes up the team. The team size should be pertinent according to the capacity of work in order to create effective work results. The quality of the testers is also important in making up the team size. Their performance and skill level must be compatible with the work intricacy to compose an effective team size.Apart from the team size, the level of expertise of the project’s personnel needs to be considered. More cost needs to be allocated to hire personnel that has more expertise, especially if their skills, knowledge or consultation is highly required for the success of the project.Besides that, the turnover rate aspect is also an important element in cost estimation. The higher rate of people leaving the project will cause lower productivity of the employees eventually adding the cost of getting a new workforce and the additional time to train them.In achieving a quality test, the estimation has to consider the cost of additional work hours to be spent on rework in the testing execution. Predicting the amount of rework is necessary and is expected in testing routine to achieve good qualitysoftware especially in a specified time which could be very costly if the test is overdue. Besides, the cost of the compatibility for the software to be tested in the outsourcing environment needs to be taken into account. Example of the cost of compatibility that may need to be covered is the tools unavailability in the market or the technological changes which require modification to be done towards the testing setup. This is important as the absence of predetermined stipulation may lead to cost estimation deviation.The documentation is important as it serves as the basis of the testing process. In order to produce accurate information of estimation effort, the referred documentation must be as comprehensive as possible, legible and stable. Examples of the referred documentation are the Project Plan, Test Plan and estimation guideline. A comprehensive document may need to include full and detailed information regarding the previous projects and testing execution. Documents with a high level of legibility provide clearer information and description towards the tasks. Besides that, stable documentation is necessary as unchanged data provide more reliable estimation information. Also, the estimation method is one of the important factors that need to be considered. In order to estimate the required effort about a testing perspective which is essential to confirm the software quality, the estimation method identified needs to be repeatable.Previous studies also indicated that attributes of the people relating to the estimation need to be taken into consideration. The experience of the tester who is doing the tasks is important. The experience can be delineated in two perspectives which are to have domain application knowledge and testing knowledge. For domain application knowledge, the experience is defined by the number of similar projects that the tester has worked on before. As for testing knowledge, it involves the ability to handling the software’s language, tools, technology or equipment used. The testers should have experience in handling the task and technology associated with the software testing. Testers who are relatively new to the technology or any related tools used in the project typically require more time to grasp or use the technology. 
This, unfortunately, would affect the productivity in conducting the given tasks.The attitude of testing personnel involved in the process also influences the estimation. The testers need to have a commitment to their testing responsibility and perceive the testing process as an imperative matter to produce better software products. By commitment, testers need to practice testing ethics properly. The practice of ethics is important because if it is not taken seriously, more bugs and defects would be created ultimately deferring the test process.The studies mentioned above have highlighted elements that are considered imperative in the cost estimation, which are taken from the testing perspective. In that matter, consideration should also be made from outsourcing viewpoint in order to meet the research context.B.Cost Estimation of Software Outsourcing ProjectThis section considers the cost estimation for general software outsourcing projects. Even though the elements are not collected from cost estimation for software testing outsourcing studies directly, the elements still need to be included as they also conduce to the cost for such projects.The cost associated with distributed projects is highly influenced by the constraints of the geographical location of the dispersed teams. As such, different geographical locations would involve different time zones. Participating teams might have to encounter different official work day that overlaps with the official holiday of other teams.One of the common issues of outsourcing project is the cultural value and the language of the teams. The diversity of culture and the huge gap of the language used would influence the accuracy of the cost estimation being made as the matter would interfere the effort efficiency of the involving parties. Different cultural values by the involving parties have a significant impact on their practice in managing the project. As for language, the use of good language is important as it is a practical need for writing and communication affairs. Previous studies show that lower distinction of culture and higher language proficiency would evoke a better working environment and increase productivity. The political stability of the country must also be taken into account. This issue is viewed based on the political circumstances of the vendor'scountry. The instability of politic could affect the project duration thus intruding the cost that has been set up.As the distributed teams commonly are dispersed among each other, there are costs that incurred in order to connect the involving parties and manage the project. For instance, in order to facilitate the information exchange process and interaction between the teams involved, the cost for communication tools such as email, instant messaging, telephone, and video conferencing need to be included. Simultaneously, there are also costs to set up the servers and online storage which are used for storing and retaining all information throughout the project. Besides that, some costs may need to be invested for project management tools in order to manage the project in distributed settings and test tools to manage the testing activities in particular. 
The project may also need to consider the costs for other facilities such as the storage room and office space which are commonly required to store physical materials generated throughout the project.One of the best practices for testing outsourcing projects is to send representatives to the vendor’s organization to monitor the work progress. Due to the distance, the need to allocate the representative might require extra costs for the travel necessity. The need to visit frequently or for a longer time of period results in the production of different travel costs such as the accommodation or lodging expenses. Since the travel includes some geographical distance, related costs for the journey such as transportation and immigration arrangement should be considered in the estimation since these activities have an impact on the cost.Besides that, training program also may put an extra cost into the cost allocation, in which, training session might be required to transfer the knowledge between the teams involved, or for the employee's skill development purposes.The distributed locations in outsourcing setting require specified time to be allocated in order to conduct the knowledge transfer process between involving parties. Longer time spent in the process can lead to significant cost overruns. Therefore, projects that have a higher level of task inter dependency may require more time to be conducted. The level of task capability of the involving parties also mayaffect the time taken to transfer the knowledge among them. Teams with a lower level of capacity regarding the task may require more time for knowledge transfer. Thus, more cost is expected from tasks of such nature. Besides that, the type of team coordination also indirectly influences the cost. It can be categorized into two types of coordination which are organic and mechanistic. The team that practices the organic or less formal type of coordination appears to be more effective and is expected to require less cost compared to the mechanistic type of coordination.Previous studies also indicate that there are many software cost estimation methods available for distributed projects. For example, the algorithmic method, estimating by analogy and expert judgment method. Since the cost estimation would involve outsourcing element, the estimation method needs to fulfill the distributed characteristics. Thus, the estimation method is identified to be adjustable so that it can suit according to outsourcing context.The estimator influences the estimation accuracy. The person should have experience and must be realistic. Experience is emphasized through the use of estimation tools. An experienced estimator is more competent in customizing the value and inputs for estimation. Besides that, the estimator must also be realistic in handling the information. Realistic refers to the state of being sensitive to the estimation details. For example, the estimation should be made by acknowledging the capability of the person doing the tasks and engaging them in the process of doing the estimation instead of estimating only from estimator’s view.In general, previous studies have outlined some elements that need to be contemplated in the attempt to acquire definite cost estimation for testing outsourcing project. Those elements, however, are confined in separate cost estimation references which are either software testing or outsourcing. 
The elements thus need to be organized according to its similarities of meaning which eventually will result in several apparent categories holistically.C.Research MethodologyThe methodology used in this research was a qualitative method. The method was chosen as it allows researchers to acquire the data in detail and gains a deeperunderstanding of the subject matter. One of the components in the qualitative method is conducting the literature review. The analysis was conducted on previous related work, concerning two perspectives of cost estimation. The first is cost estimation of software testing projects and the second is cost estimation of outsourcing software projects. In conducting the literature reviews, this study aimed to answer the following research questions:•What are the cost items for software testing outsourcing projects?•What are the contributing factors for cost estimation of software testing outsourcing projects?The reviews were made on articles published within eleven years back, which started in 2006. The year is justified as relevant for the research as it correlates with the beginning of rising interests in the effect of software globalization industry.The articles were searched through the following online database; IEEE Explore, ACM Digital Library, Science Direct, Scopus, Springer Link, Google Scholar and ISI Web of Science Proceedings, which covered both journal and proceeding articles. The keywords used in the search were ((“cost estimation”) AND (“software test” OR “test” OR “testing”)) and ((“cost estimation”) AND (“outsource” OR “client vendors”)). The inclusion criteria used in selecting the related studies were:•Articles that were published from the year 2006 until 2018.•Articles that described the elements that contribute to cost estimation for software testing and outsourcing projects.•Articles are written in English.Searching using the snowball technique was also applied. The technique was used to find more relevant articles based on the referrals that already met the study criteria. The elimination was done due to the irrelevancy of the contents in certain articles to the subjects discussed in this study.After applying the inclusion and exclusion criteria, 36 published articles were selected. In order to answer the research questions, this study employed content analysis to gather the data. Content analysis is one of the qualitative research methods to analyze the written, verbal or communication messages. The method recognizessimilar attributes or common categories in the data. The collected data were categorized as manifest and latent contents. Manifest content is the literal meaning of the message conveyed while latent content is the hidden or implicit meaning of the message. In the direction of identifying both manifest and latent contents, deductive and inductive approaches were adopted in this study. It was carried out continuously throughout the study until the saturation period was reached and significant content categories emerged.III.RESULTS AND DISCUSSIONBased on the analysis, the results can be divided into two distinct categories. The first consists of the cost items that need to be included in the estimation which is Direct Cost and Indirect Cost. The cost items highlight the necessary costs that actually exist throughout the process which is often overlooked, thus leading to cost deviation. The second category is the contributing factors for the cost estimation, consist of the People and Environment. 
The contributing factors contain elements that are needed in order to produce more accurate cost estimation for software testing outsourcing projects. Table 1 summarizes the categories and its reference’s sources. Fig. 1 presents the findings in a diagram form and the results of data analysis are presented in the following paragraphsTABLE I COST ITEMS AND FACTORS FOR COST ESTIMATION OF SOFTWARE TESTINGOUTSOURCING PROJECTFig. 1 Considerations for Cost Estimation of Software Testing Outsourcing ProjectA.Direct CostDirect Cost refers to the cost required for the materials and the people who are involved in the project. There are four cost items identified in Direct Cost, which are Testing, Infrastructure, Travelling and Human Resource. They are explained in the following sections.1)TestingThe Testing cost refers to the cost that needs to be included to calculate the effort required for the testing process of the software. The cost is calculated based on theestimated time taken to complete the task and the personnel’s wage rate charged per hour. There are three identified cost items which are Test case complexity, Test type, and Test change frequency.By the time the software is ready to be tested, the test case will be the source which testers would mainly refer to throughout the phase. A test case consists of the input values, the preconditions and post conditions, and the expected result for the test. In order to execute a test case, testers need to prepare the data input and fulfill the test’s precondition. Testers also need to make sure that the test meets its post condition’s criteria and should produce the expected result. Since each test case has its specific conditions that need to be met and have its level of complexities, the amount of work hour required to settle each test case is different. The estimation needs to consider the test case complexity as it will influence the time taken for its completion as well as the cost required to execute the test.The effort estimation is also related to the test type. Test type refers to the group of test activities aimed at testing component or system which focuses on a specific test objective. Depending on its objectives, the necessity of the test is specific and distinct from one another. Thus, the required time, effort and cost required to conduct each test type is also different accordingly.As testers need to test, frequent changes from developers or clients may cause delay to the project. Generally, a test that has a high level of change frequency will incur more cost. The changes would influence the testing effort which then would affect the project schedule and cost that has been set. Hence, test change frequency needs to be evaluated to predict the necessary work hour and its reasonable cost. For example, the estimator could refer to the previous similar projects history to identify and predict the change pattern of a particular test.2)InfrastructureThe infrastructure refers to the entire collection of the associated equipment which is used to support testing outsourcing projects. As testing outsourcing requires the sharing of information and resources between the client and vendor, it must be supported by well-established infrastructure. This includes the Communication tools,Project management tools, Repository, Test tools, and Facilities. 
The estimation made should include the cost of establishing or preparing the infrastructure as it also contributes to the cost involved.Communication tools aim to provide a medium for information exchange and facilitate the interaction between the involving parties. As testing outsourcing projects involve geographical distance, it is crucial to ensure smooth knowledge transfer, and the right information is conveyed. Communication tools such as telephone, instant messaging, emails, groupware, and video conferencing for web meetings are necessary to carry out such activities. Thus, the estimation should consider the cost to acquire the communication tools such as the software licenses, and its associated costs such as phone call bills and internet bills. Aside from that, in order to establish the connection, the network needs to be set up. As there is a variety of information shared between the involving parties, the network needs to be protected. Therefore, the estimation needs to consider the cost to be invested for network establishment and its security measures such as firewall and virtual private network setup.Repository tools refer to the digital online storage used to store the information consumed and generated during the execution of the project. The repository is crucial for archiving purposes and handling the information consumed. Servers, data center management, multiple delivery centers, and online storage are the examples of knowledge repository that is involved in managing project across various geographical locations. Example of the information stored and shared in the repository are testing artifacts such as the test cases, test plan and project documentation; project plans and design diagrams. As the testing outsourcing project would involve teams in separate locations, the cost to setup global repository needs to be considered in the estimation.The estimation also needs to include the cost of the project management tools. As the management for software testing project in particular is relatively difficult to manage, it is even more complicated to handle with teams that are distributed. The project management tools aid the planning and controlling of the testing activities and mainly handle the outsourcing operating tasks. Examples of the administration tasksthat need to be handled by the management tools are the planning and support, scheduling, resource allocation, cost control, project scheduling, and activities reminders. Consequently, some cost needs to be allocated for management tools in order to conduct the project activities in the outsource setting.The cost of test tools covers for all the necessary equipment needed for the test execution process — for instance, the test execution tools and hardware tools. Examples of test execution tools are memory debugging tool, memory leaks detection tool, and bug tracking. The estimation needs to consider the cost for the right testing tools which can aid the process such as the cost for license fees or the cost for a software upgrade that is necessary throughout the project duration.The hardware tool refers to the equipment needed to support or complement the environment in order to carry out the test execution. Examples of the hardware tools are a laptop, tablet, and mobile of various screen size. The necessities of the hardware tools are specific according to the particular testing requirement. 
Hence, the appropriate cost estimation needs to be made for the required hardware tools.The facilities refer to the provided physical infrastructure that will be used by the client throughout the project duration. Examples of the facilities are office space, physical storage, and testing laboratory. For instance, in monitoring the project, the representative might need to use the facilities for a particular duration of time, depending on the project needs. Hence, the cost estimation needs to consider the leasing expenses for the office space to be used by the representative about the specified time. The testing outsourcing projects also typically share valuable physical materials such as documents and hard drives which need to be kept appropriately. In order to ensure the information is secure, physical storage to store those materials need to be provided. Therefore, the estimation has to consider the related cost in providing such needs. Besides that, if the software needs to be tested under a specific environment, the cost for the vendor’s testing laboratory might also need to be counted. The testing laboratory is commonly equipped with better test equipment and the target system to be used. For example, the servers, network setup and related peripherals are included which is more suitable for particular testing objectives.。
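The paper treats the direct testing cost as estimated effort (hours driven by test case complexity, test type, and change frequency) multiplied by an hourly wage rate, plus infrastructure, travelling, and training items. The sketch below turns that arithmetic into code; the complexity weights, rates, and cost figures are invented placeholders, not numbers from the paper:

```java
import java.util.Map;

public class TestingCostEstimate {
    // Hypothetical effort per test case, in hours, by complexity class.
    static final Map<String, Double> HOURS_BY_COMPLEXITY =
            Map.of("low", 0.5, "medium", 1.5, "high", 4.0);

    static double testingEffortHours(Map<String, Integer> testCaseCounts, double changeFrequencyFactor) {
        double base = testCaseCounts.entrySet().stream()
                .mapToDouble(e -> HOURS_BY_COMPLEXITY.get(e.getKey()) * e.getValue())
                .sum();
        // Frequent requirement or code changes imply rework, inflating the base effort.
        return base * changeFrequencyFactor;
    }

    public static void main(String[] args) {
        double hourlyRate = 40.0;                       // assumed wage rate per hour
        double effort = testingEffortHours(Map.of("low", 120, "medium", 60, "high", 15), 1.2);
        double directTestingCost = effort * hourlyRate;
        double infrastructureCost = 8_000;              // tools, repository, facilities (assumed figure)
        double travelAndTraining = 5_000;               // representative visits, training (assumed figure)
        System.out.printf("Effort: %.0f h, testing cost: %.0f, total direct cost: %.0f%n",
                effort, directTestingCost, directTestingCost + infrastructureCost + travelAndTraining);
    }
}
```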
Software Engineering Graduation Thesis: Foreign Literature Translation (Chinese-English)
Student Graduation Project (Thesis) Foreign Literature Translation
Student name:  Student ID:  Major: Software Engineering
Translation title (Chinese/English): Qt Creator Whitepaper
Source of translation: Qt network
Supervisor's review signature:
Translated text:
Qt Creator Whitepaper
Qt Creator is a complete integrated development environment (IDE) for creating applications with the Qt application framework.
Qt is designed for developing applications and user interfaces once and deploying them across multiple desktop and mobile operating systems.
This paper introduces Qt Creator and presents the features it offers Qt developers throughout the application development lifecycle.
Introduction to Qt Creator
One of Qt Creator's main advantages is that it allows a development team to share a project across different development platforms (Microsoft Windows, Mac OS X, and Linux) with a common set of development and debugging tools.
Qt Creator's main goal is to meet the needs of Qt developers who are looking for simplicity, ease of use, productivity, extensibility, and openness, while lowering the barrier to entry for newcomers to Qt.
Qt Creator's key features let developers accomplish the following tasks:
- Get started with Qt application development quickly and easily, using project wizards and fast access to recent projects and sessions.
- Design the user interfaces of Qt widget-based applications with the integrated UI editor, Qt Designer.
- Develop applications with the advanced C++ code editor, which provides powerful features such as code completion, code snippets, refactoring, and viewing a file's outline (i.e., its symbol hierarchy).
- Build, run, and deploy Qt projects that target multiple desktop and mobile platforms, such as Microsoft Windows, Mac OS X, Linux, Nokia MeeGo, and Maemo.
- Debug with the GNU and CDB debuggers, using a graphical user interface enhanced with awareness of Qt class structures.
- Use code analysis tools to check your application for memory-management problems.
- Deploy applications to MeeGo mobile devices, and create application installation packages for Symbian and Maemo devices that can be published in the Ovi Store and other channels.
- Easily access information with the integrated, context-sensitive Qt help system.
[Computer Science Literature Translation] Software Testing
附录2 英文文献及其翻译原文:Software testingThe chapter is about establishment of the system defects objectives according to test programs. You will understand the followings: testing techniques that are geared to discover program faults, guidelines for interface testing, specific approaches to object-oriented testing, and the principles of CASE tool support for testing. Topics covered include Defect testing, Integration testing, Object-oriented testing and Testing workbenches.The goal of defect testing is to discover defects in programs. A successful defect test is a test which causes a program to behave in an anomalous way. Tests show the presence not the absence of defects. Only exhaustive testing can show a program is free from defects. However, exhaustive testing is impossible.Tests should exercise a system's capabilities rather than its components. Testing old capabilities is more important than testing new capabilities. Testing typical situations is more important than boundary value cases.An approach to te sting where the program is considered as a ‘black-box’. The program test cases are based on the system specification. Test planning can begin early in the software process. Inputs causing anomalous behaviour. Outputs which reveal the presence of defects.Equivalence partitioning. Input data and output results often fall into different classes where all members of a class are related. Each of these classes is an equivalence partition where the program behaves in an equivalent way for each class member. Test cases should be chosen from each partition.Structural testing. Sometime it is called white-box testing. Derivation of test cases according to program structure. Knowledge of the program is used to identify additional test cases. Objective is to exercise all program statements, not all path combinations.Path testing. The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once. The starting point for path testing is a program flow graph that shows nodes representing program decisions and arcs representing the flow of control.Statements with conditions are therefore nodes in the flow graph. Describes the program control flow. Each branch is shown as a separate path and loops are shown by arrows looping back to the loop condition node. Used as a basis for computing the cyclomatic complexity.Cyclomatic complexity = Number of edges -Number of nodes +2The number of tests to test all control statements equals the cyclomatic complexity.Cyclomatic complexity equals number ofconditions in a program. Useful if used with care. Does not imply adequacy of testing. Although all paths are executed, all combinations of paths are not executed.Cyclomatic complexity: Test cases should be derived so that all of these paths are executed.A dynamic program analyser may be used to check that paths have been executed.Integration testing.Tests complete systems or subsystems composed of integrated components. Integration testing should be black-box testing with tests derived from the specification.Main difficulty is localising errors. Incremental integration testing reduces this problem. Tetsing approaches. Architectural validation. Top-down integration testing is better at discovering errors in the system architecture.System demonstration.Top-down integration testing allows a limited demonstration at an early stage in the development. Test observation: Problems with both approaches. Extra code may be required to observe tests. 
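The path-testing passage above gives the formula cyclomatic complexity = edges - nodes + 2 and states that the test set should execute each path through the program at least once. A small illustrative sketch (the `Grader.classify` method is invented for the example): it contains two simple decisions, so three test cases cover its independent paths, and the helper method encodes the flow-graph formula itself.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Grader {
    // Two simple decisions, so the flow graph has three independent paths.
    static String classify(int score) {
        if (score >= 90) return "distinction";
        if (score >= 60) return "pass";
        return "fail";
    }
}

class GraderPathTest {
    // Cyclomatic complexity computed from a flow graph: V(G) = edges - nodes + 2.
    static int cyclomaticComplexity(int edges, int nodes) {
        return edges - nodes + 2;
    }

    // One test per independent path through classify().
    @Test void distinctionPath() { assertEquals("distinction", Grader.classify(95)); }
    @Test void passPath()        { assertEquals("pass",        Grader.classify(70)); }
    @Test void failPath()        { assertEquals("fail",        Grader.classify(40)); }
}
```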
Integration testing takes place when modules or sub-systems are integrated to create larger systems. Its objectives are to detect faults due to interface errors or invalid assumptions about interfaces. It is particularly important for object-oriented development, as objects are defined by their interfaces.

Interface errors take several forms. Interface misuse: a calling component calls another component and makes an error in its use of the interface, e.g. passing parameters in the wrong order. Interface misunderstanding: a calling component embeds assumptions about the behaviour of the called component which are incorrect. Timing errors: the called and the calling components operate at different speeds and out-of-date information is accessed.

Interface testing guidelines: design tests so that parameters to a called procedure are at the extreme ends of their ranges. Always test pointer parameters with null pointers. Design tests which cause the component to fail. Use stress testing in message-passing systems. In shared-memory systems, vary the order in which components are activated.

Stress testing exercises the system beyond its maximum design load. Stressing the system often causes defects to come to light and reveals its failure behaviour: systems should not fail catastrophically, so stress testing checks for unacceptable loss of service or data. It is particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.

In object-oriented testing, the components to be tested are object classes that are instantiated as objects. They are of larger grain than individual functions, so approaches to white-box testing have to be extended, and there is no obvious 'top' to the system for top-down integration and testing. The testing levels are: testing operations associated with objects, testing object classes, testing clusters of cooperating objects, and testing the complete object-oriented system.

Object class testing: complete test coverage of a class involves testing all operations associated with an object, setting and interrogating all object attributes, and exercising the object in all possible states. Inheritance makes it more difficult to design object class tests, as the information to be tested is not localised. For a weather station object interface, test cases are needed for all operations, and a state model can be used to identify state transitions for testing.

Object integration: levels of integration are less distinct in object-oriented systems. Cluster testing is concerned with integrating and testing clusters of cooperating objects; clusters are identified using knowledge of the operation of objects and the system features that are implemented by these clusters. Use-case or scenario testing is based on user interactions with the system and has the advantage that it tests system features as experienced by users. Thread testing tests the system's response to events as processing threads through the system. Object interaction testing tests sequences of object interactions that stop when an object operation does not call on services from another object.

Scenario-based testing: identify scenarios from use cases and supplement these with interaction diagrams that show the objects involved in the scenario. Consider the scenario in the weather station system where a report is generated: input of a report request with an associated acknowledgement and a final output of a report. This can be tested by creating raw data and ensuring that it is summarised properly; the same raw data can be used to test the WeatherData object.

Testing workbenches: testing is an expensive process phase, and testing workbenches provide a range of tools to reduce the time required and total testing costs.
Most testing workbenches are open systems because testing needs are organisation-specific, and they are difficult to integrate with closed design and analysis workbenches.

Testing workbench adaptation: scripts may be developed for user interface simulators, and patterns for test data generators. Test outputs may have to be prepared manually for comparison, and special-purpose file comparators may be developed.

Key points: test the parts of a system which are commonly used rather than those which are rarely executed. Equivalence partitions are sets of test cases where the program should behave in an equivalent way. Black-box testing is based on the system specification. Structural testing identifies test cases which cause all paths through the program to be executed. Test coverage measures ensure that all statements have been executed at least once. Interface defects arise because of specification misreading, misunderstanding, errors, or invalid timing assumptions. To test object classes, test all operations, attributes, and states. Integrate object-oriented systems around clusters of objects.

Translation: Software Testing. The goal of this chapter is to introduce techniques for discovering defects in programs through testing.
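The workbench-adaptation note above mentions that special-purpose file comparators may be developed to compare expected and actual test outputs. A minimal sketch of such a comparator follows; the file names and the line-by-line comparison policy are assumptions made for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class OutputFileComparator {
    // Report the first line at which the actual test output differs from the expected output.
    static int firstMismatch(Path expected, Path actual) throws IOException {
        List<String> exp = Files.readAllLines(expected);
        List<String> act = Files.readAllLines(actual);
        int n = Math.min(exp.size(), act.size());
        for (int i = 0; i < n; i++) {
            if (!exp.get(i).equals(act.get(i))) return i + 1;   // 1-based line number
        }
        return exp.size() == act.size() ? -1 : n + 1;           // -1 means the files match
    }

    public static void main(String[] args) throws IOException {
        int line = firstMismatch(Path.of("expected_report.txt"), Path.of("actual_report.txt"));
        System.out.println(line == -1 ? "Outputs match" : "First difference at line " + line);
    }
}
```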
Foreign Literature on Software Testing
Software testing is a crucial part of the software development process, and foreign literature on software testing offers a wealth of research and practice.
Below, several relevant foreign works are introduced from different angles to give a fuller picture of recent developments in software testing.
1. "Software Testing Techniques" by Boris Beizer: this classic work describes the various techniques and methods of software testing in detail, including black-box testing, white-box testing, and model-based testing.
It provides much practical guidance and many case studies, and explores both the theory and the practice of software testing in depth.
2. "Testing Computer Software" by Cem Kaner, Jack Falk, and Hung Q. Nguyen: this book covers the fundamentals and common techniques of software testing, including writing test plans, designing test cases, and managing defects.
It emphasizes whole-process test management and quality assurance, and is a good introductory guide for beginners in software testing.
3. "The Art of Software Testing" by Glenford J. Myers, Corey Sandler, and Tom Badgett: this book explores the art of software testing from both theoretical and practical perspectives.
It introduces the basic principles and strategies of testing, as well as how to design effective test cases and evaluate test coverage.
The book is very helpful for improving a tester's thinking and skills.
4. "Foundations of Software Testing" by Aditya P. Mathur: this book systematically introduces the basic concepts, techniques, and methods of software testing.
It covers every stage of the testing process, including requirements analysis, test design, execution, and evaluation.
The book also provides numerous examples and exercises to help readers understand and apply the principles and techniques of software testing.
5. "Software Testing: Principles and Practices" by Srinivasan Desikan and Gopalaswamy Ramesh: this book introduces the principles, practices, and tools of software testing.
English Essay on Software Testing Techniques
Software testing is an essential part of the software development process. It helps to identify and fix bugs and errors in the software, ensuring that it functions as intended.

There are various techniques and methods used in software testing, such as unit testing, integration testing, system testing, and acceptance testing. Each of these techniques has its own unique approach and purpose in ensuring the quality of the software.

One common technique in software testing is black box testing, where the tester examines the functionality of the software without knowing its internal code. This allows for a more user-focused approach to testing, as it simulates how an end user would interact with the software.

On the other hand, white box testing involves examining the internal code and structure of the software to identify any errors or vulnerabilities. This technique is more focused on the technical aspects of the software and is often used to ensure its security and stability.

In addition to these techniques, there are also various tools and frameworks available for software testing, such as Selenium, JUnit, and TestNG. These tools help to automate the testing process and make it more efficient and reliable.

Overall, software testing is a crucial step in the software development lifecycle, and it requires a combination of techniques, methods, and tools to ensure the quality and reliability of the software.
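Since the essay names Selenium, JUnit, and TestNG as automation tools, the following is a hedged black-box UI test sketch using Selenium WebDriver with JUnit 5. It assumes a locally installed Chrome browser and driver, and the page URL and element name are invented, so treat it as an outline rather than a ready-to-run test:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class SearchPageUiTest {
    private WebDriver driver;

    @BeforeEach
    void openBrowser() {
        driver = new ChromeDriver();          // assumes Chrome and its driver are installed locally
    }

    @Test
    void searchFieldAcceptsInputAndSubmits() {
        driver.get("https://example.com/search");                        // hypothetical page under test
        driver.findElement(By.name("q")).sendKeys("software testing");   // hypothetical input field
        driver.findElement(By.name("q")).submit();
        assertTrue(driver.getTitle().toLowerCase().contains("search"));  // black-box check on the result page
    }

    @AfterEach
    void closeBrowser() {
        driver.quit();
    }
}
```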
Foreign Literature Translation for Software Testing: Research on GUI Automation Testing
Appendix 1: Translated Text. Research on GUI Automation Testing
Abstract: This paper points out the shortcomings of the record-and-playback techniques currently used in automated testing. To address the difficulty of maintaining and extending test code for constantly changing graphical user interfaces, it adopts an object-based capture technique and designs the GUIATF testing framework, built on the Windows message mechanism, achieving highly flexible and easily extensible GUI automation testing.
Keywords: software testing; regression testing; automation
0. Introduction
Testing is an activity aimed at evaluating the attributes or capabilities of a program or system and determining whether it meets its required results.
Throughout the software development process, from requirements analysis to system design to coding, problems of one kind or another will appear.
When it comes to guaranteeing software quality, software testing becomes the key technique.
Software testing involves a large and somewhat repetitive workload, especially in the regression testing carried out late in the test cycle (regression tests are run when the software undergoes evolutionary or corrective changes), where previously found problems must be verified as resolved in the new version; most of this testing work is repetitive.
Automating software testing allows large numbers of tests to be executed repeatedly under program control, which not only saves a great deal of labor but also improves testing efficiency and guarantees testing quality.
1. Shortcomings of Record-and-Playback Techniques
At present, various recording techniques are applied to GUI automation testing. Over the software development cycle, the system is continually updated and maintained; to guarantee test quality, the test code must adapt well to the constantly changing system. In other words, tests also need maintenance.
Recorded test scripts are produced against a specific interface and sequence of operations. Once the interface against which a script runs changes, execution breaks; even a mere change in the position of the target control or in the screen resolution can cause GUI automation tests to fail. Consequently, the maintenance cost of automation based on record-and-playback is very high.
In addition, because a recorded script is fixed, it runs exactly according to the recorded steps and has no flexibility.
2. Proposal of an Automation Testing Framework
A question receiving much attention in current software testing is how to implement GUI automation testing efficiently while keeping the test code highly flexible.
This paper proposes GUIATF (Graphics User Interface Automation Testing Framework), a GUI automation testing framework based on object capture technology, which allows testers to create test code conveniently and maintain it flexibly.
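To illustrate why object-based capture is easier to maintain than coordinate-based record-and-playback, here is a deliberately simplified sketch in Java (GUIATF itself sits on the Windows message mechanism; the interface, locator strings, and scenario below are hypothetical). Test steps address controls by logical name, so when the interface layout or resolution changes, only the object map is updated, not the test steps:

```java
import java.util.Map;

// Hypothetical abstraction: tests address controls by logical name, not by screen coordinates.
interface GuiDriver {
    void click(String locator);
    void type(String locator, String text);
    String readText(String locator);
}

public class LoginScenario {
    // The object map binds logical names to concrete locators (e.g., window class / control ID).
    // When the UI layout or resolution changes, only this map needs updating, not the test steps.
    static final Map<String, String> OBJECT_MAP = Map.of(
            "userName",  "win:LoginDialog/edit:1001",
            "password",  "win:LoginDialog/edit:1002",
            "okButton",  "win:LoginDialog/button:OK",
            "statusBar", "win:MainWindow/static:Status");

    static void run(GuiDriver gui) {
        gui.type(OBJECT_MAP.get("userName"), "tester");
        gui.type(OBJECT_MAP.get("password"), "secret");
        gui.click(OBJECT_MAP.get("okButton"));
        System.out.println("Status: " + gui.readText(OBJECT_MAP.get("statusBar")));
    }

    public static void main(String[] args) {
        // Minimal fake driver so the sketch runs without a real GUI; a GUIATF-style implementation
        // would instead resolve locators through Windows messages (e.g., FindWindow/SendMessage).
        run(new GuiDriver() {
            public void click(String locator)            { System.out.println("click " + locator); }
            public void type(String locator, String t)   { System.out.println("type '" + t + "' into " + locator); }
            public String readText(String locator)       { return "login ok"; }
        });
    }
}
```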
[Computer Science Literature Translation] Performance Testing Methods
届毕业设计(论文)英文参考文献英文文献1:Database Security文献出处,年,Vol.卷(期) Network Security Volume: 2003, Issue: 6, June, 2003, pp. 11-12作者: Paul Morrison英文文献2:APPROACHES TO PERFORMANCE TESTING文献出处,年,Vol.卷(期)Approaches to Performance Testing Vol.18, No.3, pp.312-319,2000作者: Matt Maccaux学生院系专业名称学生班级学生学号学生姓名学生层次APPROACHES TO PERFORMANCE TESTINGby Matt Maccaux09/12/2005AbstractThere are many different ways to go about performance testing enterprise applications, some of them more difficult than others. The type of performance testing you will do depends on what type of results you want to achieve. For example, for repeatability, benchmark testing is the best methodology. However, to test the upper limits of the system from the perspective of concurrent user load, capacity planning tests should be used. This article discusses the differences and examines various ways to go about setting up and running these performance tests.IntroductionPerformance testing a J2EE application can be a daunting and seemingly confusing task if you don't approach it with the proper plan in place. As with any software development process, you must gather requirements, understand the business needs, and lay out a formal schedule well in advance of the actual testing. The requirements for the performance testing should be driven by the needs of the business and should be explained with a set of use cases. These can be based on historical data (say, what the load pattern was on the server for a week) or on approximations based on anticipated usage. Once you have an understanding of what you need to test, you need to look at how you want to test your application.Early on in the development cycle, benchmark tests should be used to determine if any performance regressions are in the application. Benchmark tests are great for gathering repeatable results in a relatively short period of time. The best way to benchmark is to change one and only one parameter between tests. For example, if you want to see if increasing the JVM memory has any impact on the performance of your application, increment the JVM memory in stages (for example, going from 1024 MB to 1224 MB, then to 1524 MB, and finally to 2024 MB) and stop at each stage to gather the results and environment data, record this information, and then move on to the next test. This way you'll have a clear trail to follow when you are analyzing the results of the tests. In the next section, I discuss what a benchmark test looks like and the best parameters for running these tests.Later on in the development cycle, after the bugs have been worked out of the application and it has reached a stable point, you can run more complex types of tests to determine how the system will perform under different load patterns. These types of tests are called capacity planning, soak tests, and peak-rest tests, and are designed to test "real-world"-type scenarios by testing the reliability, robustness, and scalability of the application. The descriptions I use below should be taken in the abstract sense because every application's usage pattern will be different. For example, capacity-planning tests are generally used with slow ramp-ups (defined below), but if your application sees quick bursts of trafficduring a period of the day, then certainly modify your test to reflect this. Keep in mind, though, that as you change variables in the test (such as the period of ramp-up that I talk about here or the "think-time" of the users) the outcome of the test will vary. 
It is always a good idea to run a series of baseline tests first to establish a known, controlled environment to compare your changes with later.BenchmarkingThe key to benchmark testing is to have consistently reproducible results. Results that are reproducible allow you to do two things: reduce the number of times you have to rerun those tests; and gain confidence in the product you are testing and the numbers you produce. The performance-testing tool you use can have a great impact on your test results. Assuming two of the metrics you are benchmarking are the response time of the server and the throughput of the server, these are affected by how much load is put onto the server. The amount of load that is put onto the server can come from two different areas: the number of connections (or virtual users) that are hitting the server simultaneously; and the amount of think-time each virtual user has between requests to the server. Obviously, the more users hitting the server, the more load will be generated. Also, the shorter the think-time between requests from each user, the greater the load will be on the server. Combine those two attributes in various ways to come up with different levels of server load. Keep in mind that as you put more load on the server, the throughput will climb, to a point.Figure 1. The throughput of the system in pages per second as load increases over timeNote that the throughput increases at a constant rate and then at some point levels off.At some point, the execute queue starts growing because all the threads on the server will be in use. The incoming requests, instead of being processed immediately, will be put into a queue and processed when threads become available.Figure 2. The execute queue length of the system as load increases over timeNote that the queue length is zero for a period of time, but then starts to grow at a constant rate. This is because there is a steady increase in load on the system, and although initially the system had enough free threads to cope with the additional load, eventually it became overwhelmed and had to start queuing them up.When the system reaches the point of saturation, the throughput of the server plateaus, and you have reached the maximum for the system given those conditions. However, as server load continues to grow, the response time of the system also grows even as the throughput plateaus.Figure 3. The response times of two transactions on the system as load increases over timeNote that at the same time as the execute queue (above) starts to grow, the response time also starts to grow at an increased rate. This is because the requests cannot be served immediately.To have truly reproducible results, the system should be put under a high load with no variability. To accomplish this, the virtual users hitting the server should have 0 seconds of think-time between requests. This is because the server is immediately put under load and will start building an execute queue. If the number of requests (and virtual users) is kept consistent, the results of the benchmarking should be highly accurate and very reproducible.One question you should raise is, "How do you measure the results?" An average should be taken of the response time and throughput for a given test. The only way to accurately get these numbers though is to load all the users at once, and then run them for a predetermined amount of time. This is called a "flat" run.Figure 4. This is what a flat run looks like. 
All the users are loaded simultaneously.The opposite is known as a "ramp-up" run.Figure 5. This is what a ramp-up run looks like. The users are added at a constant rate (x number per second) throughout the duration of the test.The users in a ramp-up run are staggered (adding a few new users every x seconds). The ramp-up run does not allow for accurate and reproducible averages because the load on the system is constantly changing as the users are being added a few at a time. Therefore, the flat run is ideal for getting benchmark numbers.This is not to discount the value in running ramp-up-style tests. In fact, ramp-up tests are valuable for finding the ballpark in which you think you later want to run flat runs. The beauty of a ramp-up test is that you can see how the measurements change as the load on the system changes. Then you can pick the range you later want to run with flat tests.The problem with flat runs is that the system will experience "wave" effects.Figure 6. The throughput of the system in pages per second as measured during a flat runNote the appearance of waves over time. The throughput is not smooth but rather resembles a wave pattern.This is visible from all aspects of the system including the CPU utilization.Figure 7. The CPU utilization of the system over time, as measured during a flat runNote the appearance of waves over a period of time. The CPU utilization is not smooth but rather has very sharp peaks that resemble the throughput graph's waves.Additionally, the execute queue experiences this unstable load, and therefore you see the queue growing and shrinking as the load on the system increases and decreases over time.Figure 8. The execute queue of the system over time as measured during a flat runNote the appearance of waves over time. The execute queue exactly mimics the CPU utilization graph above.Finally, the response time of the transactions on the system will also resemble this wave pattern.Figure 9. The response time of a transaction on the system over time as measured during a flat runNote the appearance of waves over time. The transaction response time lines up with the above graphs, but the effect is diminished over time.This occurs when all the users are doing approximately the same thing at the same time during the test. This will produce very unreliable and inaccurate results, so something must be done to counteract this. There are two ways to gain accurate measurements from these types of results. If the test is allowed to run for a very long duration (sometimes several hours, depending on how long one user iteration takes) eventually a natural sort of randomness will set in and the throughput of the server will "flatten out." Alternatively, measurements can be taken only between two of the breaks in the waves. The drawback of this method is that the duration you are capturing data from is going to be short.Capacity PlanningFor capacity-planning-type tests, your goal is to show how far a given application can scale under a specific set of circumstances. Reproducibility is not as important here as in benchmark testing because there will often be a randomness factor in the testing. This is introduced to try to simulate a more customer-like or real-world application with a real user load. Often the specific goal is to find out how many concurrent users the system can support below a certain server response time. 
As an example, the question you may ask is, "How many servers do I need to support 8,000 concurrent users with aresponse time of 5 seconds or less?" To answer this question, you'll need more information about the system.To attempt to determine the capacity of the system, several factors must be taken into consideration. Often the total number of users on the system is thrown around (in the hundreds of thousands), but in reality, this number doesn't mean a whole lot. What you really need to know is how many of those users will be hitting the server concurrently. The next thing you need to know is what the think-time or time between requests for each user will be. This is critical because the lower the think-time, the fewer concurrent users the system will be able to support. For example, a system that has users with a1-second think-time will probably be able to support only a few hundred concurrently. However, a system with a think-time of 30 seconds will be able to support tens of thousands (given that the hardware and application are the same). In the real world, it is often difficult to determine exactly what the think-time of the users is. It is also important to note that in the real world users won't be clicking at exactly that interval every time they send a request.This is where randomization comes into play. If you know your average user has a think-time of 5 seconds give or take 20 percent, then when you design your load test, ensure that there is 5 seconds +/- 20 percent between every click. Additionally, the notion of "pacing" can be used to introduce more randomness into your load scenario. It works like this: After a virtual user has completed one full set of requests, that user pauses for either a set period of time or a small, randomized period of time (say, 2 seconds +/- 25 percent), and then continues on with the next full set of requests. Combining these two methods of randomization into the test run should provide more of a real-world-like scenario.Now comes the part where you actually run your capacity planning test. The next question is, "How do I load the users to simulate the load?" The best way to do this is to try to emulate how users hit the server during peak hours. Does that user load happen gradually over a period of time? If so, a ramp-up-style load should be used, where x number of users are added ever y seconds. Or, do all the users hit the system in a very short period of time all at once? If that is the case, a flat run should be used, where all the users are simultaneously loaded onto the server. These different styles will produce different results that are not comparable. For instance, if a ramp-up run is done and you find out that the system can support 5,000 users with a response time of 4 seconds or less, and then you follow that test with a flat run with 5,000 users, you'll probably find that the average response time of the system with 5,000 users is higher than 4 seconds. This is an inherent inaccuracy in ramp-up runs that prevents them from pinpointing the exact number of concurrent users a system can support. For a portal application, for example, this inaccuracy is amplified as the size of the portal grows and as the size of the cluster is increased.This is not to say that ramp-up tests should not be used. Ramp-up runs are great if the load on the system is slowly increased over a long period of time. This is because the system will be able to continually adjust over time. 
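As a concrete illustration of the think-time and pacing randomization just described (about 5 seconds ±20 percent between clicks, and a roughly 2-second ±25 percent pause between full iterations), the following sketch computes those randomized delays; the base values and method names are assumptions made for the example.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of randomized think-time and pacing for a single virtual user.
public class RandomizedDelays {

    // Returns a delay of base +/- variation, e.g. 5000 ms +/- 20%.
    static long randomizedMillis(long baseMillis, double variation) {
        double factor = 1.0 + ThreadLocalRandom.current().nextDouble(-variation, variation);
        return Math.round(baseMillis * factor);
    }

    public static void main(String[] args) throws InterruptedException {
        for (int iteration = 0; iteration < 3; iteration++) {
            for (int click = 0; click < 5; click++) {
                // send one request here (omitted), then think for ~5 s +/- 20%
                Thread.sleep(randomizedMillis(5_000, 0.20));
            }
            // pacing: pause ~2 s +/- 25% before the next full set of requests
            Thread.sleep(randomizedMillis(2_000, 0.25));
        }
    }
}
```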
If a fast ramp-up is used, the system will lag and artificially report a lower response time than what would be seen if a similar number of users were being loaded during a flat run.So, what is the best way to determine capacity? Taking the best of both load types and running a series of tests will yield the best results. For example, using a ramp-up run to determine the range of users that the system can support should be used first. Then, once that range has been determined, doing a series of flat runs at various concurrent user loads within that range can be used to more accurately determine the capacity of the system.Soak TestsA soak test is a straightforward type of performance test. Soak tests are long-duration tests with a static number of concurrent users that test the overall robustness of the system. These tests will show any performance degradations over time via memory leaks, increased garbage collection (GC), or other problems in the system. The longer the test, the more confidence in the system you will have. It is a good idea to run this test twice—once with a fairly moderate user load (but below capacity so that there is no execute queue) and once with a high user load (so that there is a positive execute queue).These tests should be run for several days to really get a good idea of the long-term health of the application. Make sure that the application being tested is as close to real world as possible with a realistic user scenario (how the virtual users navigate through the application) testing all the features of the application. Ensure that all the necessary monitoring tools are running so problems will be accurately detected and tracked down later.Peak-Rest TestsPeak-rest tests are a hybrid of the capacity-planning ramp-up-style tests and soak tests. The goal here is to determine how well the system recovers from a high load (such as one during peak hours of the system), goes back to near idle, and then goes back up to peak load and back down again.The best way to implement this test is to do a series of quick ramp-up tests followed by a plateau (determined by the business requirements), and then a dropping off of the load. A pause in the system should then be used, followed by another quick ramp-up; then you repeat the process. A couple things can be determined from this: Does the system recover on the second "peak" and each subsequent peak to the same level (or greater) than the first peak? And does the system show any signs of memory or GC degradation over the course of the test? The longer this test is run (repeating the peak/idle cycle over and over), the better idea you'll have of what the long-term health of the system looks like.ConclusionThis article has described several approaches to performance testing. Depending on the business requirements, development cycle, and lifecycle of the application, some tests will be better suited than others for a given organization. In all cases though, you should ask some fundamental questions before going down one path or another. 
The answers to these questions will then determine how best to test the application. These questions are:
∙ How repeatable do the results need to be?
∙ How many times do you want to run and rerun these tests?
∙ What stage of the development cycle are you in?
∙ What are your business requirements?
∙ What are your user requirements?
∙ How long do you expect the live production system to stay up between maintenance downtimes?
∙ What is the expected user load during an average business day?
By answering these questions and then seeing how the answers fit into the above performance test types, you should be able to come up with a solid plan for testing the overall performance of your application.
Additional Reading
∙ WebLogic Server Performance and Tuning - WebLogic Server product documentation
∙ WebLogic Server performance tools and information - WebLogic Server product documentation
∙ The Grinder: Load Testing for Everyone by Philip Aston (dev2dev, November 2002)
∙ Performance Tuning Guide - WebLogic Portal product documentation
∙ dev2dev WebLogic Server Product Center
Approaches to performance testing: For enterprise applications there are many ways to carry out performance testing, some of which are more difficult to apply than others.
Database Application Testing — Chinese-English Parallel Foreign Literature Translation
中英文翻译Database Application Testing1.IntroductionDatabases play a pivotal role in almost every organization in today’s information-based society. Commercial Database managementsystems(DBMSs) provide organizations with efficient access to huge amounts of data without affecting the integrity of data and relieving the user of the any need to understand the low-level implementation details. Over the years tremendous efforts have been devoted to ensuring use of efficient and integrity protecting data structures and algorithms by DBMSs. However, little has been done to develop systematic techniques for ensuring correctness of applications using these DBMSs. Many testing techniques have been developed to help ensure that behaviour of a program is in accordance with the specifications. However, these techniques mostly target programs written in traditional imperative languages and can’t be of much help when it comes to database applications. Like any other program, database application program can be viewed as an attempt to implement a function. Considered this way, both the input and output spaces of this function will include database state apart from the explicit input and output parameters of the application. This affects substantially the way a test case is defined, generated and executed to check correctness of application. Hence there is a need for new approaches specifically oriented towards testing database applications.Testing database application programs involves the following phases :•Extraction of information from database schema•Generation of test data and Populating test database•Generation of test cases as input to the application program•Validation of database state and output after executionUsing live data has several limitations. It may not reflect sufficiently wide variety of possible situations and even if it does, it might be difficult to find them in a large database. Secondly, privacy or security constraints might prevent the user from seeing sensitive data. Hence, various methods for generating synthetic test data have been proposed. When generating data and populating the test database, its important to generate valid and interesting data e.g. it would be advisable to select data so as to include situations which the tester believes are likely to occur or will expose faults in application. The technique used for test data generation will determine the extent of coverage of test database. Selecting a good initial database state so as to include a wide variety of scenarios resembling real data for the particular application is very beneficial. Since database state plays an important role in determining the output, it has to be checked after each execution that only the specified modifications and none others have occurred.2.AGENDA - tool set for testing DB applicationsAGENDA is a tool set has been designed. AGENDA takes as input the application database schema, application source code and files containing sample values which contain suggested values for the attributes provided by the user. The user interactively selects test heuristics and provides information about expected behaviour of test cases. Using this information AGENDA, populates the database, generates inputs to the application, executes the application on those inputs and checks some aspects of correctness of the resulting database state and application output.13. 
Input Generator :It generates the input data to be supplied to the application by using information derived from Agenda parser and State generator in addition to the information gained by parsing the SQL statements in the application program and information useful for checking test results. Information derived from parsing the source code may be useful in suggesting inputs that tester should supply to the application. The input generator thus generates test inputs by instantiating the input parameters with actual parameters.4. State Validator :The validator monitors the change in application DB state during execution of a test. It automatically logs the changes in the application tables andsemi-automatically checks the state change.5. Output Validator :It captures the application’s outputs and checks them against the query preconditions and post conditions that have been generated by the tool or supplied by the tester.6. Design and Implementation6.1 Parsing toolThe Agenda Parsing tool is based on PostgresSQL parser. PostgresSQL parser creates an Abstract Syntax Tree containing relevant information about tables, attributes and constraints from a given schema. However, this information is spread out at different locations in the tree. In addition, it is possible to have different tree structures having the same underlying information about the tables, because of use of different SQL DDL syntactic constructs expressing the same information. Consequently, the exact location of relevant information depends on the exact syntax of schema definition. Some of the information from DBMS’s internal catalog tables is needed by other components of AGENDA. Allowing them to directly query these tables would have introduced interdependency between AGENDA components which is not desirable. Hence all the information that needs to be processed is stored in Agenda DB which is made available to other components. This decoupling of PostgresSQL from rest of the components allows AGENDA to be ported to different DBMS just by changing the Parser. The Parser extracts information about integrity constraints such as uniqueness constraints, referential constraints and not NULL constraints from schema. It also extracts limited information from semantic constraints, particularly boundary values. This is very useful in automatic data-partitioning and input generation. Next, the Agenda Parser parses the sample-value files containing user-supplied data and stores the sample values, their data groups and associated attributes in the Agenda DB. Attributes involved in composite constraints are marked so that they can be correctly handled by input generator.7 Extensions to AGENDA7.1 Testing Web DB applicationsWith the tremendous growth of World Wide Web, many new web-based services that are driven by data stored in databases are gaining importance. Examples include E-commerce applications such as online stores, and business-to-business support products. Some of these are of critical importance and hence it is essential to ensure their correct functioning. Most web DB applications consist of threes layers - at the base is DBMS and a database, at the top is client web browser and in between lies the application logic - usually developed with a server-side scripting language or Java extended with library that can interface with DBMSs, and can decode and produce HTML pages displayed in the client browser. 
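Stepping back to the parsing and input-generation steps described above, the sketch below illustrates the general idea of deriving candidate test values from schema metadata via JDBC; it is only an illustration of the concept, not AGENDA's actual algorithm, and the connection URL, credentials, and table name are placeholders.

```java
import java.sql.*;
import java.util.*;

// Illustration only: propose candidate test values from schema metadata.
// Reads each column's type and nullability via JDBC and suggests boundary-style values.
public class SchemaDrivenData {
    public static void main(String[] args) throws SQLException {
        // placeholder connection URL, credentials, and table name
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "user", "pw")) {
            DatabaseMetaData meta = con.getMetaData();
            try (ResultSet cols = meta.getColumns(null, null, "accounts", null)) {
                while (cols.next()) {
                    String name = cols.getString("COLUMN_NAME");
                    int type = cols.getInt("DATA_TYPE");
                    boolean nullable = cols.getInt("NULLABLE") == DatabaseMetaData.columnNullable;
                    System.out.println(name + " -> " + candidates(type, nullable));
                }
            }
        }
    }

    // Propose simple valid/invalid candidates per column type.
    static List<String> candidates(int sqlType, boolean nullable) {
        List<String> values = new ArrayList<>();
        if (sqlType == Types.INTEGER || sqlType == Types.NUMERIC || sqlType == Types.DECIMAL) {
            values.addAll(List.of("0", "1", "-1", String.valueOf(Integer.MAX_VALUE)));
        } else if (sqlType == Types.VARCHAR || sqlType == Types.CHAR) {
            values.addAll(List.of("''", "'a'", "'some sample text'"));
        } else {
            values.add("<type-specific sample>");
        }
        if (nullable) values.add("NULL");   // NULL is a candidate only where the schema allows it
        return values;
    }
}
```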
For a web application a test case is considered as a sequence of pages to be visited along with the input values to be provided to the pages containing forms. The white box approach involves following steps:1. Information extraction from application source :Useful information such as URL links(which includes all other URLs that can be reached from the current page) and parameter information(name-value pairs that are passed to the Servlet) for each URL is extracted from application source. URLs are partitioned into two categories depending on their content - static and data-based(dynamic) page.2. Web application graph generation and path selection :Based on the information extracted earlier, an application graph, where each node represents a URL and edges represent URL links, is generated and then simplified according to URL link types. There is an edge from URL A to URL B if URL A produces a link to URL B in the HTML page it generates. Paths through the graph represent natural sequences of execution of URLs as a user navigates through the web application. Hence, some of these paths are selected as test cases to represent possible scenarios of use of the application.3. Input Generation :For each path selected, AGENDA is used to generate inputs for each URL. The path along with inputs constitute a test case. An XML file is generated corresponding to each such test case.4. Test Execution :The XML file is parsed using XML parser to extract URL information and the test case is executed automatically using open source Jakarta Http Client integrated with AGENDA. After execution of each update or insertion, AGENDA checks the new database states. Output pages are checked by manual inspection or other tools. The tool in its current form is targeted to the Java Servlet model, using JDBC for database access, and makes some assumptions about programming style. However, the basic technique can be applied to more general servlet styles and other web application languages.8.Regression Tests For Database ApplicationsAny application is constantly going through the process of evolution such as its components getting replaced with more powerful components, various optimizations being incorporated and so on. Whenever such modification is introduced in an application, it is important to check for the integrity of the application and that is the purpose of regression tests. There are various tools built for automating the regression testing procedure, most popular being JUnit framework developed for carrying out regression tests for Java applications. Database applications which are composed of many layers and stacked in various layers are, in particular, subject to constant change for instancere-engineering of business processes, authorization rules being changed etc. Changing database applications is very costly and involves great deal of manual work since there aren’t any tools available that can automatically carrying out regression tests on them.9.Conclusions and Future WorkThe AGENDA tool set was designed and implemented in response to a lack of specific work targeted at testing database application. Prior to AGENDA, various approaches had been proposed and implemented for tackling the issues involved in database testing individually. However, no single tool had been designed to tackle all the issues together by integrating the strategies to handle different issues. 
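Returning to the web-application test cases described above (a path of URLs plus the inputs supplied to them), a minimal sketch of executing such a path with the standard Java HTTP client might look as follows; the URLs are placeholders and this is not the driver used by the tool itself.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

// Sketch: execute one web test case, i.e. a sequence of URLs visited in order.
public class WebPathTestCase {
    public static void main(String[] args) throws Exception {
        // placeholder path through a hypothetical servlet-based application
        List<String> path = List.of(
                "http://localhost:8080/app/login?user=alice&pw=secret",
                "http://localhost:8080/app/search?item=book",
                "http://localhost:8080/app/checkout?item=book&qty=1");

        HttpClient client = HttpClient.newHttpClient();
        for (String url : path) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(url + " -> HTTP " + response.statusCode());
            // after an insert or update, the database state would be checked here
        }
    }
}
```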
AGENDA handles a variety of issues such as test data generation, populating the test database, generating interesting test cases and handling integrity constraints of the application database such as not-Null, uniqueness etc, checking the database state after every modification,executing the test case and validating the output. Besides, later extensions to AGENDA have enhanced its ability improving the state checking and input generation mechanism and enabling the tool to test transactions. AGENDA has also served as an aid in testing web-based database applications. However, there are a lot of issues still to be dealt with and many limitations to be addressed. AGENDA uses semi-automatic technique for generating test data. e.g. For attributes having numeric/real value the sample-value file is generated automatically (Section 2.2.1). However, attributes of string type are not handled. Increasing the extent of automation, extracting more information from the embedded SQL statements in the application program source are some of the important tasks that need attention. The tool for testing web-based applications has a lot of limitations as of now. It can currently handle only applications implemented Java Servlets and HTML pages. Further, it assumes that the application source follows certain programming style. These issues are being addressed to. There is also work going on to extend the tool to handle issues like sessions, cookies etc and test web application security. Regression testing is a well-studied technique in Software engineering, however issues specifically related to database applications haven’t received the deserved attention. The whole topic of testing database applications is still in its infancy. No rigorous methodologies have been devised yet and there are several open issues such as the automatic generation and evolution of test runs, the generation of test databases, and the development of platform independent tools. All these challenges are currently being tackled and efforts are on to make the process of testing database applications efficient.数据库应用程序的测试1.引言数据库中的几乎每一个组织在当今信息化社会中发挥了举足轻重的作用。
5. Foreign Literature Translation (with original text): Industrial cluster, Regional brand
Foreign Literature Translation (with original text). Translated Text 1: The Competitive Advantage of Industrial Clusters — The Case of the Dalian Software Park in China. Weilin Zhao, Chihiro Watanabe, Charla Griffy-Brown [J]. Marketing Science, 2009(2): 123-125.
Abstract: With the original intention of promoting industrial development, this paper examines the competitive advantage of software parks in China. Industrial clusters are deeply embedded in local institutional systems and therefore possess a distinctive competitive advantage. Based on Porter's "diamond" model and the results of a SWOT analysis, the case of the Dalian Software Park in China is analyzed qualitatively. An industrial cluster consists of a set of companies concentrated in a specific geographic area; it is rooted in a local institutional system of local government, industry, and academia, from which it obtains substantial resources and thereby gains a competitive advantage for industrial economic development. To successfully navigate the shift of China's economic paradigm from mass production to new product development, it is essential to continuously strengthen the competitive advantage of industrial clusters and to promote industrial and regional economic development.
Keywords: competitive advantage; industrial cluster; local institutional system; Dalian Software Park; China; science and technology park; innovation; regional development
Industrial clusters
The industrial cluster is a frontier concept of economic development that Porter [1] popularized. As a recognized expert in global economic strategy, he pointed out the role of industrial clusters in promoting regional economic development. He wrote that the concept of clusters, "or companies, suppliers, and institutions associated with an industry that appear in a particular geographic location, has become a new element in how companies and governments think about and assess local competitive advantage and formulate public policy." However, he has never given a precise definition of the industrial cluster. More recently, progress has been made in the literature reviewed by Doeringer and Terkla [2] and Levy [3], which identifies industrial clusters as geographic concentrations of industries that gain advantage. "Geographic concentration" defines a key and distinctive fundamental property of industrial clusters. A cluster is formed by the agglomeration of many companies in a particular region; they typically share common markets, suppliers, trading partners, educational institutions, and other intangibles such as knowledge and information, and likewise they face similar opportunities and threats. Around the world, industrial clusters have developed along many different paths. For example, Silicon Valley in California and Route 128 in Massachusetts are both well-known industrial clusters: the former is famous for microelectronics, biotechnology, and its venture capital market, while the latter is renowned worldwide for software, computers, and communications hardware [4].
Graduation Project Literature Translation (Computer Software Testing) [Management Materials]
Software Testing: Black-Box TechniquesSmirnov SergeyAbstract – Software systems play a key role in different parts of modern life. Software is used in every financial, business, educational etc. organization. Therefore, there is a demand for high quality software. It means software should be proper tested and verified before system-integration time. This work concentrated on so-called black-box technique for software testing. The several black-box methods were considered with their strengths and weaknesses. Also, the potential of automated black-box techniques for better performance in testing of reusable components was studied. Finally, the topic related to software security testing was discussed.1. IntroductionComputer technologies plays an important role in the modern society. Computers and Software that drives them affect more people and more businesses than ever today. Therefore, there is a pressure for software developers not only to build software systems quickly, but to focus on quality issues too. Low quality software that can cause loss of life or money is no longer acceptable. In order to achieve a production of highquality software the whole process of developing and maintaining of the software has to be changed and developers have to be correspondingly educated and trained. Testing takes an important part in any software development process (Fig. ). As a process by itself it is related to two other processes verification and validation.Validation is a process of evaluation a software system or component during or, at the end of, the development cycle in order to determine whether it satisfies specified requirements [8]. Verification is the process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase [8].Software testing is a process or several processes designed to make sure computer code does what it was designed to do and that it does not do anything unexpected [2].The software testers are responsible to design tests that reveal defects and can be used to evaluate usability and reliability of the software performance. To achieve these goals testers must select a finite number of test cases [1]. There are two basic techniques that can be used to design test cases:–Black-box (sometimes called functional or specification);–– White-box (sometimes called clear or glass box). The white-box technique focuses on the inner structure of the software under test (SUT). To be able to design test cases using this approach a tester has to have a knowledge of the software structure.The source code or suitable pseudo code must be available [1].Figure : A Software Development ProcessBy using the black-box approach the software is viewed as a black box. The goal of a tester is to be completely unconcerned about inner structure of the software. Instead, concentrate on software behavior and functionality (Table 1).Table 1: Two basic testing techniquesWhy do we need black-box testing? First, this approach is useful for revealing requirements and specification defects. Another reason is a testing of reusable software components. Many companies use components from an outside vendors that specialize in the development of specific types of software, so-called Commercial Offthe- Shell Components (COTS).Using such components can save time and money. However, the components 1 have to be evaluated before becoming a part of any developed system. 
In most cases when a COTS component is purchased from a vendor, no source code is available and even if there is some, it is very expensive to buy. Usually just an executable version of the component is in the hands. In this case black-box testing might be very useful.Next sections of the work present the black-box methods and some issues related to an automation of the methods and software security testing.2. Black-Box Software Testing MethodsBy using black-box approach we are considering only inputs and outputs as a basis for designing test cases. However, we should keep in mind that due to finite time and resources an exhaustive test of all possible inputs is not possible. Therefore, it is a goal of a tester by using available resources to produce the test cases that give a maximum number of found defects. There are several methods that can help to achieve the above mentioned goal.Random TestingEach software system has an input domain from which input data is selected for testing. If inputs are randomly selected this is called random testing. The advantage of the method is that it can save time and effort that more detailed and thoughtful test input selection methods require. On the other hand, random test inputs in many cases can not produce effective set of test data [2].Equivalence Class PartitioningAn Equivalence Class Partitioning (ECP) approach divides the input domain of a software to be tested into the finite number of partitions or eqivalence classes. This method can be used to partition the output domain as well, but it is not commonly used.The result of the partitioning allows a tester to select one member of each class and based on it create test cases. It is assumed that all other members of the same equivalence class are processed the same way by the software under test. Therefore, if one test case based on chosen member detects a defect, all the other test cases based on that class would be expected to detect the same defect. And vice versa, if the test case did not detect a defect, we would expect that no other test cases in the equivalence class would produce an error.This approach has the following advantages [1]:–Elimination of the needs for exhaustive testing through the whole input/output domain, that is not possible;––Following the approach a tester selects a subset of test inputs with a high probability of detecting the defects.A test case design by ECP has two steps:1) Identifying the equivalence classes;2) Defining the test cases.We identify the equivalence classes by taking each input condition and partitioning it into two or more groups: valid equivalence classes, that include valid input to the software, and invalid equivalence classes, that represent all other possible states [2]. 
There are a set of rules that can be used to identify equivalence classes [2]:–If an input condition specifies a range of values, identify one valid equivalence classwithin this range, and two invalid classes out of range on the left and right side respectively.––If an input condition specify a number of values, specify one valid equivalence class within the values, and two invalid equivalence classes out of the number.––If an input condition specify a set of input values and there is a believe that the software handles each value differently, identify a valid equivalence class for each and one invalid equivalence class.––If an input condition specify a …must be“ situation, identify one valid equivalence class and one invalid equivalence class.However, there are no fast rules for identification of equivalence classes. With experience a tester is able to select equivalence class more effectively and with confindence.If there is a doubt, that the software does not process the members of the equivalence class identically, the equivalence class should be split into smaller classes. The second step of defining the test cases is as following [2]:1. Assign an unique number to each equivalence class;2. Write a new test case trying to cover all valid equivalence class;3. Write a new test case for each invalid equivalence class.Boundary Value AnalisisThe Equivalence Class Partitioning can be supplemented by another method called Boundary Value Analysis (BV A). A tester selects elements close to the edges of the input, so that the test case covers both upper and lower edges of an equivalence class [1].The ability of creating a high-quality test cases with the use of the Boundary Value Analysis depends greatly on the tester's experience as in case of the Equivalence Class Partitioning approach.Cause-Effect GraphingThe major weakness of Equivalence Class Partitioning and Boundary Value Analysis is that the methods do not allow to combine conditions. Furthermore, the number of possible combination is usually very large. Therefore, there must be a systematic way of selectiong a subset of input combinations.Cause-Effect Graphing provides a systematic approach for selecting a set of test cases. The natural-language specification is translated into a formal language – a cause-effect graph. The graph is a digital-logic circuit, but in order to build a graph no knowledge of electronics is necessary. The tester should understand only the boolean logic. The following steps are used to produce test cases [2]:–Divide the specification into workable parts. Large specifications make a cause-effectgraph difficult to manage.Figure : Simple Cause-Effect Graphs–Identify the causes and effects in the specification. A cause is a distinct input condition or an equivalence class of input conditions. An effect is an output condition or a system transformation. The causes and effects are identified by reading the specification.Once identified, each cause and effect is assigned an unique number.–From cause and effect information a boolean causeeffect graph that links causes and effects together is created.–Annotations with constraints are added, that describe combinations of causes and/or effects which are impossible.–The graph is converted to a decision table.–The colomns of the decision table are converted into test cases.The simple examples of cause-effects graphs are shown in Figure . 
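Before moving on, a small sketch can make the equivalence-class and boundary-value ideas above concrete. Suppose, as a hypothetical requirement, that an input field must accept an integer between 1 and 100 inclusive:

```java
import java.util.List;

// Equivalence classes and boundary values for a field that must hold 1..100.
public class RangeFieldTestValues {
    // One valid class (1..100) and two invalid classes (<1 and >100).
    static final List<Integer> VALID_CLASS_REPRESENTATIVE = List.of(50);
    static final List<Integer> INVALID_BELOW_RANGE        = List.of(-5);
    static final List<Integer> INVALID_ABOVE_RANGE        = List.of(250);

    // Boundary value analysis: values at and just beyond both edges.
    static final List<Integer> BOUNDARY_VALUES = List.of(0, 1, 2, 99, 100, 101);

    static boolean accepts(int value) {        // the condition under test
        return value >= 1 && value <= 100;
    }

    public static void main(String[] args) {
        for (int v : BOUNDARY_VALUES) {
            System.out.println(v + " accepted? " + accepts(v));
        }
    }
}
```

One representative per class is enough: if that test case detects a defect, the other members of the same class are expected to detect it too, and vice versa.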
The more detailed description with examples of this method can be found in [1] and [2].Error GuessingDesign of test cases using error guessing method is based on the tester's past experience and intuition. It is impossible to give a procedure for an error guessing approach since it is more intuitive and ad hoc process. The basic idea behind is to enumerate a list of possible errors and then write test cases based on this list.State Transition Testing StateTransition Testing can be used for both objectoriented and procedural software development. The approach is based on the concept of finite-state machine and states. It views the softwareunder test in term of its states, transitions between states, and the inputs or events that trigger state changes.A state is an internal configuration of a system. It is defined in terms of the values assumed at a particular time for the variables that characterize the system or component [1].A finite-state machine is an abstract machine that can be represented by a state graph having a finite number of states and a finite number of transitions between states [1].A State Transition Graph (STG) can be designed for the whole software system or for its specific modules. The STG graph consists of nodes (circles, ovals, rounded rectangles) that represent states and arrows between nodes that indicate what input (event) will cause a transition between two linked states. The Figure shows a simple state transition graph [1].Figure : A simple state transition graphS1 and S2 are two states. The black dot is a pointer to an initial state from outside. The arrows represent inputs/actions that cause the state transformations. It is useful to attach to the graph the system variables that are effected by state changes. The state transition graph can become very complex for large systems. One way to simplify it to use a state table representation. A state table for the graph in Figure is shown in Table 2 [1]. The State Table lists all inputs that cause the state transitions. For each state and each input the next state and action taken are shown.Table 2: A state table for the state transition graph in Fig.The STG should be prepared by developers as a part of the requirements specification. Once the graph was designed it must be reviewed. The review should ensure that –the proper number of states is represented;–each state transition (input/output/action) is correct;–equivalent states are identified;–unreachable and dead states are identified.Unreachable states are states that will never be reached with any input sequence and may indicate missing transitions. Dead states are states that once entered can not be exited [1]. After the review the test cases should be planed. One practical approach is to test every possible state transition [4].3. Automated Black-Box TestingA few black-box methods were listed above. The problem with those methods is that often the performance of testing depends greatly on experience and intuition of the tester. Therefore, there is a question if black-box testing can be automated to make testing more thorough and cost-effective.Furthermore, there is need in black-box methods, that can be used for testing reusable software components before integration into a system under development. The reusable components can be independently developed or commercially purchased. The quality of these components can vary from one vendor to another.The general strategy for automated black-box testing of software components was proposed in [5] . 
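Before describing that strategy further, here is a minimal sketch of the state-transition approach above: a two-state machine encoded as a transition table, with every defined transition exercised once. The states and events are invented for the example.

```java
import java.util.Map;

// Sketch: exercise every transition of a small finite-state machine.
public class StateTransitionTest {
    enum State { LOGGED_OUT, LOGGED_IN }
    enum Event { LOGIN, LOGOUT }

    // Transition table: (state, event) -> next state.
    static final Map<State, Map<Event, State>> TABLE = Map.of(
            State.LOGGED_OUT, Map.of(Event.LOGIN,  State.LOGGED_IN),
            State.LOGGED_IN,  Map.of(Event.LOGOUT, State.LOGGED_OUT));

    static State fire(State current, Event e) {
        State next = TABLE.getOrDefault(current, Map.of()).get(e);
        return next == null ? current : next;   // undefined transitions keep the current state
    }

    public static void main(String[] args) {
        // test every defined transition once and check the resulting state
        assert fire(State.LOGGED_OUT, Event.LOGIN)  == State.LOGGED_IN;
        assert fire(State.LOGGED_IN,  Event.LOGOUT) == State.LOGGED_OUT;
        // an input that should cause no transition
        assert fire(State.LOGGED_OUT, Event.LOGOUT) == State.LOGGED_OUT;
        System.out.println("all transitions behaved as expected");
    }
}
```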
The strategy is based on combination of three techniques: automatic generation of component test drivers, automatic generation of test data, and automatic or semi-automatic generation of wrappers serving the role of test oracles.An approach that allows testers to take advantage of the combinatorial explosion of expected results was developed in [6]. There is a possibility to generate and check the correctness of a relatively small sets of test cases by using software Input/Output relationships. Then the expected results can be generated for the much larger combinatorial test data set. It allows a fully automated execution.In [3] Richard Torkar made a comparison of the main black-box methods in order to find their weaknesses and strengths. It was mentioned that the methods such as Cause-Effect Graphing and Error-Guessing are not suitable for automation. The difficulty in case of Equivalence Class Partitioning would be to automate the partitioning of the input domain in a satisfactory way.Since the effectiveness of black-box techniques is close connected to experience of the tester, in our opinion they can be automated by using artificial intellegence methods such as neural networks and fuzzy logic. More information about research in this area can be found in [7]. 4. Black Box Testing and Software SecurityAt the present there is a pressure on software developers to produce high quality software. Thesecurity aspects are highly related to a software quality. Security testing should be integrated in the testing process, but in reality it is not true in most cases. Usually the developers test the software just for functional requirements and do not consider security issues.One way to check software for secure vulnerabilites is to study known security problems in similar systems and generate test cases based on it. Then applying black-box techniques to run these test cases. The black-box methods play an important part in securtity testing. They allow the testers to look at the software under test from the side of attackers, which usually do not have any information about attacked system and therefore consider it as a black-box. Security testing is important for e-commerce software systems such as corporate web-sites. Furthermore, since buffer overflow is a result of bad constructed software programs, security testing can reveal such vulnerabilities, what is helpful for checking both local programs such as games, calculators, office software etc. and remote software such as e-mail servers, FTP, DNS and Internet web servers.ConclusionSoftware testing became an essential part of the software development process. The well designed test cases can significantly increase the quantity of found faults and errors.The mentioned above black-box methods provide an effective way of testing with no knowledge of inside structure of the software to be tested. Nevertheless, the quality of the black-box testing depends in general on the experience and intuition of the tester. Therefore, it is hard to automate this process. In spite of this fact, there were made a several attempts to develop approaches for automated black-box testing.The black-box testing helps the developers and testers to check software under test for secure vulnerabilities. The secure testing is a matter of importance for e-commerce applications, that are available in the Internet for a wide range of people, and for revealing buffer overflow vulnerabilities in different local and remote applications.软件测试:黑盒技术Smirnov Sergey摘要:在现代社会中,软件系统占了一个重要的位子。
Sample Software Testing Strategy
Software testing is an essential part of the software development process, as it helps ensure the quality and reliability of the final product.
One commonly used software testing strategy is the black-box testing strategy. This strategy focuses on testing the functionality of the software without looking at its internal code.
Another popular software testing strategy is the white-box testing strategy. This strategy involves testing the software by examining its internal code and logic.
In addition to black-box and white-box testing, there are other testing strategies such as unit testing, integration testing, system testing, and acceptance testing.
Software Testing — Chinese-English Parallel Foreign Literature Translation
STUDY PAPER ON TEST CASE GENERATION FOR GUI BASED TESTINGABSTRACTWith the advent of WWW and outburst in technology and software development, testing the softwarebecame a major concern. Due to the importance of the testing phase in a software development lifecycle,testing has been divided into graphical user interface (GUI) based testing, logical testing, integrationtesting, etc.GUI Testing has become very important as it provides more sophisticated way to interact withthe software. The complexity of testing GUI increased over time. The testing needs to be performed in away that it provides effectiveness, efficiency, increased fault detection rate and good path coverage. Tocover all use cases and to provide testing for all possible (success/failure) scenarios the length of the testsequence is considered important. Intent of this paper is to study some techniques used for test casegeneration and process for various GUI based software applications.KEYWORDSGUI Testing, Model-Based Testing, Test Case, Automated Testing, Event Testing.1. INTRODUCTIONGraphical User Interface (GUI) is a program interface that takes advantage of the computer'sgraphics capabilities to make the program easier to use. Graphical User Interface (GUI) providesuser an immense way to interact with the software [1]. The most eminent and essential parts ofthe software that is being used today are Graphical User Interfaces (GUIs) [8], [9]. Even thoughGUIs provides user an easy way to use the software, they make the development process of the software tangled [2].Graphical user interface (GUI) testing is the process of testing software's graphical user interfaceto safeguard it meets its written specifications and to detect if application is working functionally correct. GUI testing involves performing some tasks and comparing the result with the expected output. This is performed using test cases. GUI Testing can be performed either manually byhumans or automatically by automated methods.Manual testing is done by humans such as testers or developers itself in some cases and it is oftenerror prone and there are chances of most of the test scenarios left out. It is very time consumingalso. Automated GUI Testing includes automating testing tasks that have been done manually before, using automated techniques and tools. Automated GUI testing is more, efficient, precise, reliable and cost effective.A test case normally consists of an input, output, expected result and the actual result. More thanone test case is required to test the full functionality of the GUI application. A collection of testcases are called test suite. A test suite contains detailed guidelines or objectives for eachcollection of test cases.Model Based Testing (MBT) is a quick and organized method which automates the testing process through automated test suite generation and execution techniques and tools [11]. Model based testing uses the directed graph model of the GUI called event-interaction graph (EIG) [4] and event semantic interaction graph (ESIG). Event interaction graph is a refinement of event flow graph (EFG) [1]. EIG contains events that interact with the business logic of the GUI application. 
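The event-flow idea above can be illustrated with a toy example: nodes are GUI events, edges record which event may follow which, and candidate test cases are paths of a chosen length through the graph. The events and edges below are invented, and this is not the algorithm of any specific tool.

```java
import java.util.*;

// Sketch: enumerate event sequences of a fixed length from a small event-flow graph.
public class EventSequenceGenerator {
    // event -> events that may follow it (toy GUI: a File menu)
    static final Map<String, List<String>> FOLLOWS = Map.of(
            "OpenFileMenu", List.of("ClickNew", "ClickSave", "ClickExit"),
            "ClickNew",     List.of("OpenFileMenu"),
            "ClickSave",    List.of("OpenFileMenu"),
            "ClickExit",    List.of());

    static void extend(List<String> prefix, int length, List<List<String>> out) {
        if (prefix.size() == length) { out.add(new ArrayList<>(prefix)); return; }
        String last = prefix.get(prefix.size() - 1);
        for (String next : FOLLOWS.getOrDefault(last, List.of())) {
            prefix.add(next);
            extend(prefix, length, out);
            prefix.remove(prefix.size() - 1);
        }
    }

    public static void main(String[] args) {
        List<List<String>> sequences = new ArrayList<>();
        extend(new ArrayList<>(List.of("OpenFileMenu")), 3, sequences);  // 3-event test cases
        sequences.forEach(System.out::println);
    }
}
```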
Event Semantic Interaction (ESI) is used to identify set of events that need to be tested together in multi-way interactions [3] and it is more useful when partitioning the events according to its functionality.This paper is organized as follow: Section 2 provides some techniques, algorithms used to generate test cases, a method to repair the infeasible test suites are described in section 3, GUI testing on various types of softwares or under different conditions are elaborated in section 4, section 5 describes about testing the GUI application by taking event context into consideration and last section concludes the paper.2. TEST CASE GENERATION2.1. Using GUI Run-Time State as FeedbackXun Yuan and Atif M Memon [3], used GUI run time state as feedback for test case generation and the feedback is obtained from the execution of a seed test suite on an Application Under Test (AUT).This feedback is used to generate additional test cases and test interactions between GUI events in multiple ways. An Event Interaction Graph (EIG) is generated for the application to be tested and seed test suites are generated for two-way interactions of GUI events. Then the test suites are executed and the GUI’s run time state is recorded. This recorded GUI run time state is used to obtain Event Semantic Interaction(ESI) relationship for the application and these ESI are used to obtain the Event Semantic Interaction Graph(ESIG).The test cases are generated and ESIGs is capable of managing test cases for more than two-way interactions and hence forth 2-, 3-,4-,5- way interactions are tested. The newly generated test cases are tested and additional faults are detected. These steps are shown in Figure 1. The fault detection effectiveness is high than the two way interactions and it is because, test cases are generated and executed for combination of events in different execution orders.There also some disadvantages in this feedback mechanism. This method is designed focusing on GUI applications. It will be different for applications that have intricate underlying business logic and a simple GUI. As multi-way interactions test cases are generated, large number of test cases will be generated. This feedback mechanism is not automated.Figure 1. Test Case Generation Using GUI Runtime as Feedback2.2. Using Covering Array TechniqueXun Yuan et al [4], proposed a new automated technique for test case generation using covering arrays (CA) for GUI testing. Usually 2-way covering are used for testing. Because as number of events in a sequence increases, the size of test suite grows large, preventing from using sequences longer than 3 or 4. But certain defects are not detected using this coverage strength. Using this technique long test sequences are generated and it is systematically sampled at particular coverage strength. By using covering arrays t-way coverage strength is being maintained, but any length test sequences can be generated of at least t. A covering array, CA(N; t, k, v), is an N × k array on v symbols with the property that every N × t sub-array contains all ordered subsets of size t of the v symbols at least once.As shown in Figure 2, Initially EIG model is created which is then partitioned into groups of interacting events and then constraints are identified and used to generate abstract model for testing. Long test cases are generated using covering array sampling. Event sequences are generated and executed. 
If any event interaction is missed, then regenerate test cases and repeat the steps.The disadvantages are event partition and identifying constraints are done manually.Figure 2. Test Generation Using Covering Array2.3. Dynamic Adaptive Automated test GenerationXun Yuan et al [5], suggested an algorithm to generate test suites with fewer infeasible test cases and higher event interaction coverage. Due to dynamic state based nature of GUIs, it is necessary and important to generate test cases based on the feedback from the execution of tests. The proposed framework uses techniques from combinatorial interaction testing to generate tests and basis for combinatorial interaction testing is a covering array. Initially smoke tests are generated and this is used as a seed to generate Event Semantic Interaction (ESI) relationships. Event Semantic Interaction Graph is generated from ESI. Iterative refinement is done through genetic algorithm. An initial model of the GUI event interactions and an initial set of test sequences based on the model are generated. Then a batch of test cases are generated and executed. Code coverage is determined and unexecutable test cases are identified. Once the infeasible test cases are identified, it is removed and the model is updated and new batch of test cases are generated and the steps are followed till all the uncovered ESI relationships are covered. These automated test case generation process is shown in Figure 3. This automated test generation also provides validation for GUIs.The disadvantages are event contexts are not incorporated and need coverage and test adequacy criteria to check how these impacts fault detection.Figure 3. Automated Test Case Generation3. REPAIRING TEST SUITESSi Huang et al [6], proposed a method to repair GUI test suites using Genetic algorithm. New test cases are generated that are feasible and Genetic algorithm is used to develop test cases that provide additional test suite coverage by removing infeasible test cases and inserting new feasible test cases. A framework is used to automatically repair infeasible test cases. A graph model such as EFG, EIG, ESIG and the ripped GUI structure are used as input. The main controller passesgenerator along with the strength of testing. This covering array generator generates an initial set of event sequences. The covering array information is send to test case assembler and it assembles this into concrete test cases. These are passed back to the controller and test suite repair phase begins. Feasible test cases are returned by the framework once the repair phase is complete. Genetic algorithm is used as a repair algorithm. An initial set of test cases are executed and if there is no infeasible test cases, it exits and is done. If infeasible test cases are present, it then begins the repair phase. A certain number of iterations are set based on an estimate of how large the repaired test suite will be allowed to grow and for each iteration the genetic algorithm is executed. The algorithm adds best test case to the final test suites. Stopping criteria’s are used to stop the iterations.The advantages are it generates smaller test suites with better coverage on the longer test sequences. It provides feasible test cases. But it is not scalable for larger applications as execution time is high. As GUI ripping is used, the programs that contain event dependencies may not be discovered.4. GUI TESTING ON VARIOUS APPLICATIONS4.1. 
Industrial Graphical User Interface SystemsPenelope Brooks et al [7], developed GUI testing methods that are relevant to industry applications that improve the overall quality of GUI testing by characterizing GUI systems using data collected from defects detected to assist testers and researchers in developing more effective test strategies. In this method, defects are classified based on beizer’s defect taxonomy. Eight levels of categories are present each describing specific defects such as functional defects, functionality as implemented, structural defects, data defects, implementation defects, integration defects, system defects and test defects. The categories can be modified and added according to the need. If any failures occur, it is analyzed under which defect category it comes and this classification is used to design better test oracle to detect such failures, better test case algorithm may be designed and better fault seeding models may be designed.Goal Question Metric (GQM) Paradigm is used. It is used to analyze the test cases, defects and source metrics from the tester / researcher point of view in the context of industry-developed GUI software. The limitations are, the GUI systems are characterized based on system events only. User Interactions are not included.4.2. Community-Driven Open Source GUI ApplicationsQing Xie and Atif M. Memon [8], presented a new approach for continuous integration testing of web-based community-driven GUI-based Open Source Software(OSS).As in OSS many developers are involved and make changes to the code through WWW, it is prone to more defects and the changes keep on occurring. Therefore three nested techniques or three concentric loops are used to automate model-based testing of evolving GUI-based OSS. Crash testing is the innermost technique operates on each code check-in of the GUI software and it is executed frequently with an automated GUI testing intervention and performs quickly also. It reports the software crashes back to the developer who checked in the code. Smoke testing is the second technique operates on each day's GUI build and performs functional reference testing of the newly integrated version of the GUI, using the previously tested version as a baseline. Comprehensive Testing is the outermost third technique conducts detailed comprehensive GUI integration testing of a major GUI release and it is executed after a major version of GUI is available. Problems are reported to all the developers who are part of the development of the particular version.flaws that persist across multiple versions GUI-based OSS are detected by this approach fully automatically. It provides feedback. The limitation is that the interactions between the three loops are not defined.4.3. Continuously Evolving GUI-Based Software ApplicationsQing Xie and Atif M. Memon [9], developed a quality assurance mechanism to manage the quality of continuously evolving software by Presenting a new type of GUI testing, called crash testing to help rapidly test the GUI as it evolves. Two levels of crash testing is being described: immediate feedback-based crash testing in which a developer indicates that a GUI bug was fixed in response to a previously reported crash; only the select crash test cases are re run and the developer is notified of the results in a matter of seconds. If any code changes occur, new crash test cases are generated and executed on the GUI. Test cases are generated that can be generated and executed quickly and cover all GUI functionalities. 
Once EIG is obtained, a boolean flag is associated with each edge in the graph. During crash testing, once test cases that cover that particular edge are generated, then the flag is set. If any changes occur, boolean flag for each edge is retained. Test cases are executed and crashes during test execution are used to identify serious problems in the software. The crash testing process is shown in Figure 4. The effectiveness of crash test is known by the total number of test cases used to detect maximum faults. Significantly, test suite size has no impact on number of bugs revealed.This crash testing technique is used to maintain the quality of the GUI application and it also helps in rapidly testing the application. The drawbacks are, this technique is used for only testing GUI application and cannot used in web applications, Fault injection or seeding technique, which is used to evaluate the efficiency of the method used is not applied here.Figure 4. Crash Testing Process4.4. Rapidly Evolving SoftwareAtif M. Memon et al [10], made several contributions in the area of GUI smoke testing in terms of GUI smoke test suites, their size, fault detection ability and test oracle. Daily Automated Regression Tester (DART) framework is used to automate GUI smoke testing. Developers work on the code during day time and DART automatically launches the Application Under Test (AUT) during night time, builds it and runs GUI smoke tests. Coverage and error report are mailed to developer. In DART all the process such as Analyzing the AUT’s GUI structure using GUI ripper, Test case generation, Test oracle generation, Test case executor, Examining theand test oracles are generated. Fault seeding is used to evaluate fault detection techniques used. An adequate number of faults of each fault type are seeded fairly.The disadvantages are Some part of code are missed by smoke tests, Some of the bugs reported by DART are false positive, Overall effectiveness of DART depends on GUI ripper capabilities, Not available for industry based application testing, Faults that are not manifested on the GUI will go undetected5. INCORPORATING EVENT CONTEXTXun Yuan et al [1], developed a new criterion for GUI testing. They used a combinatorial interaction testing technique. The main motivation of using combinatorial interaction is to incorporate context and it also considers event combinations, sequence length and include all possible event. Graph models are used and covering array is used to generate test cases which are the basis for combinatorial interaction testing.A tool called GUITAR (GUI Testing Framework) is used for testing and this provides functionalities like generate test cases, execute test cases, verify correctness and obtain coverage reports. Initially using GUI ripper, a GUI application is converted into event graph and then the events are grouped depending on functionality and constraints are identified. Covering array is generated and test sequences are produced. Test cases are generated and executed. Finally coverage is computed and a test adequacy criterion is analyzed.The advantages are: contexts are incorporated, detects more faults when compared to the previous techniques used. The disadvantages are infeasible test cases make some test cases unexecutable, grouping events and identifying constraints are not automated.Figure 5. Testing Process6. 
CONCLUSIONSIn this paper, some of the various test case generation methods and various types of GUI testing adapted for different GUI applications and techniques are studied. Different approaches are being used under various testing environment. This study helps to choose the test case generation technique based on the requirements of the testing and it also helps to choose the type of GUI test to perform based on the application type such as open source software, industrial software and the software in which changes are checked in rapidly and continuously.REFERENCES[1][2]Xun Yuan, Myra B. Cohen, Atif M. Memon, (2010) “GUI Interaction Testing: Incorporating Event Context”, IEEE Transactions on Software Engineering, vol. 99.A. M. Memon, M. E. Pollack, and M. L. Soffa, (2001) “Hierarchical GUI test case generation using automated planning”, IEEE Transactions on Software Engineering, Vol. 27, no. 2, pp. 144-155.X. Yuan and A. M. M emon, (2007) “Using GUI run-time state as feedback to generate test cases”, in International Conference on Software Engineering (ICSE), pp. 396-405.X. Yuan, M. Cohen, and A. M. Memon, (2007) “Covering array sampling of input event sequences for automated GUI testing”, in International Conference on Automated Software Engineering (ASE), pp. 405-408.X. Yuan, M. Cohen, and A. M. Memon, (2009) “Towards dynamic adaptive automated test generation for graphical user interfaces”, in First International Workshop on TESTing Techniques & Experimentation Benchmarks for Event-Driven Software (TESTBEDS), pp. 1-4.Si Huang, Myra Cohen, and Atif M. Memon, (2010) “Repairing GUI Test Suites Using a Genetic Algorithm, “in Proceedings of the 3rd IEEE InternationalConference on Software Testing Verification and Validation (ICST).P. Brooks, B. Robinson, and A. M. Memon, (2009) “An initial characterization of industrial graphical user interface systems”, in ICST 2009: Proceedings of the 2nd IEEE International Conference on Software Testing, Verification and Validation, Washington, DC, USA: IEEE Computer Society.Q. Xie, and A.M. Memon (2006) “Model-based testing of community driven open-source GUI applications”, in International Conference on Software Maintenance (ICSM), pp. 145-154.Q. Xie and A. M. Memon, (2005) “Rapid “crash testing” for continuously evolving GUI- based software applications”, in International Conference on Software Maintenance (ICSM), pp. 473-482.A. M. Memon and Q. Xie, (2005) “Studying the fault-detection effectiveness of GUI test cases for rapidly evolving software”, IEEE Transactions on Software Engineering, vol. 31, no. 10, pp. 884-896.U. Farooq, C. P. Lam, and H. Li, (2008) “Towards automated test sequence generation”, in Australian Software Engineering Conference, pp. 441-450.[3][4][5][6][7][8][9][10][11]研究基于GUI测试生成的测试用例摘要随着 WWW的出现和信息技术与软件开发的发展,软件测试成为一个主要问题。
LabVIEW Graduation Thesis: Chinese-English Foreign Literature Translation

Virtual Instruments Based on Reconfigurable Logic
The emergence of virtual instruments is a revolution in the history of measurement instrumentation.
It makes full use of the latest computer technology to implement and extend instrument functionality: the computer screen can easily emulate the adjustment and control panels of most instruments, measurement results can be presented and output in whatever form is required, and most signal analysis and processing, as well as most control and measurement functions, are carried out in computer software. Through application software, the user combines a general-purpose computer with functional hardware modules and operates the computer through a friendly interface, as if operating a single instrument of the user's own definition and design, to perform acquisition, analysis, judgment, control, display, and storage of the measured data.

Advantages of virtual instruments over traditional instruments:
(1) They draw on the computer's powerful hardware resources, breaking through the limitations of traditional instruments in data processing, display, and storage, and greatly enhancing the capabilities of traditional instruments.
(2) They exploit the computer's rich software resources to realize part of the instrument hardware in software, saving material resources and increasing system flexibility. Through software techniques and the corresponding numerical algorithms, test data can be analyzed and processed directly and in real time, and graphical user interface technology makes the interface genuinely friendly and interactive.
(3) Both the hardware and the software of a virtual instrument are open, modular, reusable, and interchangeable. Users can therefore select products from different vendors according to their own needs, making the development of instrument systems more flexible and efficient and shortening system integration time.

A traditional instrument, by contrast, is a specific system built on fixed hardware and software resources, so its functionality and application programs are defined by the manufacturer.
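As a rough illustration of the "analysis in software" idea behind virtual instruments, the sketch below acquires a block of samples (simulated here in place of a real data-acquisition driver) and performs the spectrum analysis that a dedicated hardware instrument would otherwise provide. The sampling rate, the signal, and the NumPy-based processing are assumptions made for the example, not part of the translated text.

```python
# Minimal virtual-instrument sketch: acquisition stub plus software analysis.

import numpy as np

SAMPLE_RATE = 10_000  # Hz, assumed acquisition rate

def acquire(n_samples=1024):
    """Stand-in for a DAQ driver call: a 1 kHz tone plus noise."""
    t = np.arange(n_samples) / SAMPLE_RATE
    return np.sin(2 * np.pi * 1000 * t) + 0.1 * np.random.randn(n_samples)

def dominant_frequency(samples):
    """Software signal processing: locate the strongest spectral component."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

if __name__ == "__main__":
    print(f"Dominant frequency: {dominant_frequency(acquire()):.1f} Hz")
```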
Software Testing [Meaning and Methods] [Chinese-English Parallel Translation]
Software Testing

1. Objective
The objective of this document is to describe software testing for use with any software development project. The purpose is to provide a baseline for software testing activities. A standardized testing process is required because it improves the effectiveness and efficiency of testing for the project. It does so in several ways:
∙ Defines the testing process
∙ Makes the testing process repeatable
∙ Ensures high-risk components of the system are tested
∙ Lessens the effects of individual differences (tester background and skill set) on testing
∙ Adds "intelligence" to testing
∙ Provides metrics for managing and controlling testing
∙ Provides metrics for assessing and improving testing
∙ Provides a basis for test automation
∙ Produces specific testing deliverables

2. What is Software Testing?
The goal of the testing activity is to find as many errors as possible before the user of the software finds them. We can use testing to determine whether a program component meets its requirements.
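A minimal sketch of using a test to determine whether a program component meets its requirement is shown below. The discount() function and the "10% off orders of 100 or more" rule are hypothetical and serve only to illustrate the idea.

```python
# Requirement: orders of 100 or more receive a 10% discount (hypothetical).

def discount(order_total):
    """Component under test."""
    return order_total * 0.9 if order_total >= 100 else order_total

def test_discount_meets_requirement():
    assert discount(99) == 99     # below threshold: no discount
    assert discount(100) == 90    # at threshold: 10% off
    assert discount(250) == 225   # above threshold: 10% off

if __name__ == "__main__":
    test_discount_meets_requirement()
    print("requirement checks passed")
```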
Foreign Literature Translation --- Software Testing Strategies
Appendix: English Literature

SOFTWARE TESTING STRATEGIES
A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software. Just as important, a software testing strategy provides a road map for the software developer, the quality assurance organization, and the customer: a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time, and resources will be required. Therefore, any testing strategy must incorporate test planning, test case design, test execution, and resultant data collection.

1. INTEGRATION TESTING
A neophyte in the software world might ask a seemingly legitimate question once all modules have been unit tested: "If they all work individually, why do you doubt that they'll work when we put them together?" The problem, of course, is "putting them together", that is, interfacing. Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; subfunctions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; global data structures can present problems; sadly, the list goes on and on.

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build a program structure that has been dictated by design.

There is often a tendency to attempt non-incremental integration; that is, to construct the program using a "big bang" approach. All modules are combined in advance and the entire program is tested as a whole. And chaos usually results! A set of errors is encountered. Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.

Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small segments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied. In the sections that follow, a number of different incremental integration strategies are discussed.

1.1 Top-Down Integration
Top-down integration is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

Depth-first integration would integrate all modules on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, modules M1, M2, and M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then the central and right-hand control paths are built. Breadth-first integration incorporates all modules directly subordinate at each level, moving across the structure horizontally. From the figure, modules M2, M3, and M4 would be integrated first.
The next control level, M5, M6, and so on, follows.

The integration process is performed in a series of steps:
(1) The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.
(2) Depending on the integration approach selected (i.e., depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.
(3) Tests are conducted as each module is integrated.
(4) On completion of each set of tests, another stub is replaced with the real module.
(5) Regression testing may be conducted to ensure that new errors have not been introduced.
The process continues from step 2 until the program structure is built.

The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated. For example, consider a classic transaction structure in which a complex series of interactive inputs is requested, acquired, and validated via an incoming path. The incoming path may be integrated in a top-down manner. All input processing (for subsequent transaction dispatching) may be demonstrated before other elements of the structure have been integrated. Early demonstration of functional capability is a confidence builder for both the developer and the customer.

The top-down strategy sounds relatively uncomplicated, but in practice logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure. The tester is left with three choices: (1) delay many tests until stubs are replaced with actual modules, (2) develop stubs that perform limited functions that simulate the actual module, or (3) integrate the software from the bottom of the hierarchy upward.

The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between specific tests and incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become more and more complex. The third approach is called bottom-up testing.
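The following sketch illustrates the second choice: a stub that simulates a missing lower-level module so the upper-level control module can be tested top-down. The module names and canned data are invented for illustration.

```python
# Top-down integration with a stub standing in for an unintegrated module.

def lookup_rate_stub(currency):
    """Stub for the real rate-lookup module; returns canned data only."""
    return {"EUR": 1.0, "USD": 1.1}.get(currency, 1.0)

def convert(amount, currency, lookup_rate=lookup_rate_stub):
    """Upper-level control module under test; the dependency is injectable."""
    return round(amount * lookup_rate(currency), 2)

def test_convert_with_stub():
    assert convert(10, "USD") == 11.0   # driver test exercised against the stub
    assert convert(10, "EUR") == 10.0

if __name__ == "__main__":
    test_convert_with_stub()
    print("top-down step passed; the stub can now be replaced by the real module")
```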
1.2 Bottom-Up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., modules at the lowest level in the program structure). Because modules are integrated from the bottom up, processing required for modules subordinate to a given level is always available and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:
(1) Low-level modules are combined into clusters (sometimes called builds) that perform a specific software subfunction.
(2) A driver (a control program for testing) is written to coordinate test case input and output.
(3) The cluster is tested.
(4) Drivers are removed and clusters are combined moving upward in the program structure.

Modules are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block). Modules in clusters 1 and 2 are subordinate to M1. Drivers D1 and D2 are removed, and the clusters are interfaced directly to M1. Similarly, driver D3 for cluster 3 is removed prior to integration with module M2. Both M1 and M2 will ultimately be integrated with M3, and so forth. As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of program structure are integrated top-down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.

1.3 Regression Testing
Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In this context, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.

In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed. Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.

Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture-playback tools. Capture-playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison. The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow quite large. Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function once a change has occurred.
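A simple sketch of assembling a regression suite from the three classes described above is given below; the test catalogue and the mapping from tests to functions are hypothetical.

```python
# Select a regression subset: representative tests plus tests touching
# functions affected by (or belonging to) the changed components.

TESTS = {
    "test_login":         {"functions": {"auth"},            "representative": True},
    "test_checkout":      {"functions": {"billing", "cart"}, "representative": True},
    "test_cart_totals":   {"functions": {"cart"},            "representative": False},
    "test_report_export": {"functions": {"reporting"},       "representative": False},
}

def select_regression_suite(changed_functions):
    suite = []
    for name, info in TESTS.items():
        if info["representative"] or info["functions"] & changed_functions:
            suite.append(name)
    return suite

if __name__ == "__main__":
    print(select_regression_suite({"cart"}))
    # ['test_login', 'test_checkout', 'test_cart_totals']
```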
1.4 Comments on Integration Testing
There has been much discussion of the relative advantages and disadvantages of top-down versus bottom-up integration testing. In general, the advantages of one strategy tend to result in disadvantages for the other strategy. The major disadvantage of the top-down approach is the need for stubs and the attendant testing difficulties that can be associated with them. Problems associated with stubs may be offset by the advantage of testing major control functions early. The major disadvantage of bottom-up integration is that "the program as an entity does not exist until the last module is added". This drawback is tempered by easier test case design and a lack of stubs.

Selection of an integration strategy depends upon software characteristics and, sometimes, project schedule. In general, a combined approach (sometimes called sandwich testing) that uses a top-down strategy for upper levels of the program structure, coupled with a bottom-up strategy for subordinate levels, may be the best compromise.

As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics: (1) addresses several software requirements; (2) has a high level of control (resides relatively high in the program structure); (3) is complex or error-prone (cyclomatic complexity may be used as an indicator); or (4) has definite performance requirements. Critical modules should be tested as early as possible. In addition, regression tests should focus on critical module function.

2. SYSTEM TESTING
2.1 Recovery Testing
Many computer-based systems must recover from faults and resume processing within a prespecified time. In some cases, a system must be fault tolerant; that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur.

Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits.

2.2 Security Testing
Any computer-based system that manages sensitive information or causes actions that can improperly harm (or benefit) individuals is a target for improper or illegal penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport; disgruntled employees who attempt to penetrate for revenge; and dishonest individuals who attempt to penetrate for illicit personal gain.

Security testing attempts to verify that protection mechanisms built into a system will in fact protect it from improper penetration. To quote Beizer: "The system's security must, of course, be tested for invulnerability from frontal attack—but must also be tested for invulnerability from flank or rear attack." During security testing, the tester plays the role of the individual who desires to penetrate the system. Anything goes! The tester may attempt to acquire passwords through external clerical means; may attack the system with custom software designed to break down any defenses that have been constructed; may overwhelm the system with errors, hoping to penetrate during recovery; may browse through insecure data, hoping to find the key to system entry; and so on. Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make penetration cost greater than the value of the information that will be obtained.

2.3 Stress Testing
During earlier software testing steps, white-box techniques resulted in thorough evaluation of normal program functions and performance. Stress tests are designed to confront programs with abnormal situations. In essence, the tester who performs stress testing asks: "How high can we crank this up before it fails?" Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example, (1) special tests may be designed that generate 10 interrupts per second, when one or two is the average rate; (2) input data rates may be increased by an order of magnitude to determine how input functions will respond; (3) test cases that require maximum memory or other resources may be executed; (4) test cases that may cause thrashing in a virtual operating system may be designed; or (5) test cases that may cause excessive hunting for disk-resident data may be created. Essentially, the tester attempts to break the program.
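As a small illustration of stress testing, the sketch below floods a bounded queue far beyond its capacity and checks that the overload is reported rather than silently absorbed. The component and the load figures are assumptions chosen for the example.

```python
# Stress test sketch: demand resources in abnormal volume and verify
# that the component fails visibly and predictably under overload.

from queue import Queue, Full

def stress_bounded_queue(capacity=1_000, attempts=100_000):
    q = Queue(maxsize=capacity)
    accepted = rejected = 0
    for i in range(attempts):          # far beyond normal load
        try:
            q.put_nowait(i)
            accepted += 1
        except Full:                   # overload must be reported, not hidden
            rejected += 1
    assert accepted == capacity
    return accepted, rejected

if __name__ == "__main__":
    accepted, rejected = stress_bounded_queue()
    print(f"accepted={accepted}, rejected={rejected}")
```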
A variation of stress testing is a technique called sensitivity testing. In some situations (the most common occur in mathematical algorithms), a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or profound performance degradation. This situation is analogous to a singularity in a mathematical function. Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing.

2.4 Performance Testing
For real-time and embedded systems, software that provides required functions but does not conform to performance requirements is unacceptable. Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.
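A unit-level performance check in the spirit described above can be as simple as timing an operation against a required budget; the 50 ms budget and the sorting workload below are illustrative assumptions.

```python
# Minimal performance test: measure elapsed time and compare to a budget.

import time

def operation():
    return sorted(range(100_000), reverse=True)   # stand-in workload

def measure():
    start = time.perf_counter()
    operation()
    return time.perf_counter() - start

def test_meets_time_budget(budget_seconds=0.05, runs=5):
    best = min(measure() for _ in range(runs))    # best of several runs
    assert best <= budget_seconds, f"{best:.3f}s exceeds {budget_seconds}s budget"

if __name__ == "__main__":
    test_meets_time_budget()
    print("performance requirement met")
```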
Graduation Project (Thesis) Foreign Literature Translation
Title of source: Value-Based Management of Software Testing
Source: Institute of Software Technology & Interactive System
Attachments: 1. Translated text  2. Original text (translation of about 3,000 Chinese characters)

Value-Based Management of Software Testing
Rudolf Ramler, Stefan Biffl, and Paul Grünbacher

Abstract: Studies show that testing has become a very important part of the software development process, accounting for 30 to 50 percent of total software development costs. However, testing is usually not organized to maximize business value, nor is it aligned with the project's mission. Path testing, branch testing, instruction testing, mutation testing, scenario testing, and requirements testing treat all aspects of the software as equally important, whereas in practice 80% of the value often comes from 20% of the software. To obtain the maximum return on investment from software testing, test management needs to maximize its value contribution. In this chapter we motivate the need for value-based testing, describe practices that support value-based test management, outline a framework for value-based test management, and illustrate the framework with an example.
Keywords: value-based software testing, value-based testing, cost of testing, benefits of testing, test management

11.1 Introduction
Testing is the most important and most widely used method of software quality assurance. Verification and validation aim to comprehensively analyze and test software to ensure that it performs its functions correctly and to assure software quality and reliability. In IEEE 610.12 (1990), testing is defined as an activity in which a system or component is executed under specified conditions, the results are observed and recorded, and an evaluation is made of the system or component. Testing is widely used in practice and plays an important role in the quality assurance strategies of many organizations. Software affects the daily lives of millions of people and carries out demanding tasks, so testing will be all the more important in the near future. Studies show that testing typically consumes 30% to 50% of software development costs; for safety-critical systems even higher percentages are not unusual. A challenge of software testing is therefore to find more effective ways of testing efficiently.

The value of software test management lies in the effort to reduce testing costs while meeting the demands placed on testing. Value-oriented test management can also provide good guidance toward project objectives and business value. In Chapter 1, Boehm presents an example of potential test cost savings. The example shows that, by focusing on 7% of the customer billing types, 50% of the benefits of testing the software can be achieved. Although one hundred percent testing is an impractical goal, there is still considerable room for improvement and savings toward the expected value by adjusting the testing approach.
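The following back-of-the-envelope sketch illustrates the idea behind the example above: when a small share of the features carries most of the business value, ranking test targets by value per unit of cost concentrates the benefit in the first part of the budget. The feature values and test costs are invented numbers, not data from the chapter.

```python
# Greedy value-per-cost test planning under a fixed budget (illustrative).

features = [  # (name, business value %, cost to test)
    ("billing-type-A", 50, 7),
    ("billing-type-B", 20, 10),
    ("billing-type-C", 15, 18),
    ("billing-type-D", 10, 25),
    ("billing-type-E", 5, 40),
]

def value_realized(budget):
    """Test the highest value-per-cost features first until the budget is spent."""
    plan = sorted(features, key=lambda f: f[1] / f[2], reverse=True)
    total_value = spent = 0
    for name, value, cost in plan:
        if spent + cost > budget:
            break
        spent += cost
        total_value += value
    return total_value, spent

if __name__ == "__main__":
    for budget in (7, 35, 100):
        value, spent = value_realized(budget)
        print(f"budget={budget:3}: value={value}% of total, cost spent={spent}")
```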
The motivation for value-based software engineering is that much of current software engineering practice and research treats every requirement, test case, object, and defect as equally important. This is clearly true of testing, which contributes to product value only indirectly. The separation of development and testing makes the problem even more pronounced. Testing is often treated as a purely technical issue, leaving the close relationship between testing and business decisions unlinked.

This chapter motivates the need for value-based test management, explains its underlying elements, discusses existing practices that support value-based testing, and outlines a framework for value-based test management. The remainder of the chapter is structured as follows: Section 11.2 discusses the value contribution of test management; Section 11.3 discusses existing practices that support value-based test management; Section 11.4 describes the value-based test management framework with an example. The chapter closes with directions for further research.

11.2 Taking a Value-Based Perspective on Testing
The objective of value-based verification and validation is defined as ensuring that the implemented software satisfies its intended value objectives. If we look at testing from a value perspective, where does its contribution lie? Fundamentally, two dimensions can be considered: the internal dimension covers the costs and benefits of testing, while the external dimension emphasizes the threats to and opportunities of the future system. The key to value-based testing is to combine these two dimensions, that is, to align the internal testing process with customer and market needs.

To combine the internal and external dimensions, focusing only on the technical aspects of testing is clearly inappropriate. Instead, test management needs to take a global view. Figure 41 describes the dependencies between the internal and external dimensions of test management. The internal dimension corresponds to what the test manager controls within the project; it covers the costs of software testing practice as well as the short-term and long-term benefits of testing. The external dimension comprises stakeholders and parameters beyond the test manager's control. Value-based software test management enables testing to satisfy stakeholder-oriented value propositions and focuses the whole team on the most valuable testing targets.

The primary question of the external view of software testing is: "How can we ensure the value objectives of the software system?" The goal is to reconcile value propositions so that testing concentrates on the valuable parts of the software, the most important qualities, and the timely treatment of project risks. Answering this question involves market opportunities, the project's value propositions, and costs and benefits. Refer to Chapter 1 for a detailed introduction to opportunities and risks, and to Chapter 7 for the elicitation and reconciliation of value propositions.

The internal view builds on the stakeholder value propositions, with the test budget representing the approximate level of effort available in the project. The main question here is how to treat testing as an investment activity, in order to test efficiently and quickly and to cope with reductions in the development budget. Appropriate internal and external communication and coordination ensure that testing satisfies the stakeholders' value interests.

Value Contribution of Testing
Compared with other development activities such as coding and user interface design, testing does not create immediate product value. Testing does, however, inform and support the value-generating tasks and activities of the software development process. The key to understanding the value contribution of testing is its contribution effect: the contribution of testing establishes the relationship between testing and the value of the final product. The most direct clients are the developers and project managers who work closely with the testing team. In value-based software engineering, the central parties in the testing process are the customers and users (see Chapter 7). Customers and users set the context and scope within which testing pursues its value objectives.

Clients of Testing
Developers, project managers, quality managers, customers, analysts, end users, and maintenance staff all benefit from the analysis of the software system and rely on feedback to detect problems, reduce uncertainty, and make the decisions that move the product forward. The following examples show the feedback that different groups require from testing:

∙ Customers and users learn to what extent the agreed requirements are satisfied and to what extent the software meets its value propositions. Testing also provides visibility into and insight about project progress; the results of testing show which test cases have passed. When acceptance tests are impractical or fail to reveal problems that only appear in the real environment, alpha and beta testing provide a more solid basis for accepting the results.

∙ Marketing and product managers obtain information relevant to release planning, pricing, promotion, and distribution. A gap between the actual quality of the product and the quality expected by customers and users easily leads to misunderstandings and wrong assumptions that diminish or prevent the realization of real value. To successfully meet these expectations and satisfy individual or organizational objectives, product design must be aligned with customer needs in terms of both functionality and quality.

∙ For project managers, testing supports risk management and progress estimation. The emphasis is on identifying and eliminating risks that can destroy value or inhibit its achievement. Early detection of severe defects that would substantially reduce project performance is a primary goal. Testing reduces uncertainty and helps project managers make better, more informed decisions about defect removal, system stabilization, and releasing product updates.

∙ Quality managers are interested in the identification of problems and in the trends of particular problems over time. Test results support project assessment, the assurance of quality strategies, and process improvement. Rosenberg discusses how testing contributes to quality assurance and shows how reported problems are verified and corrected in order to improve the process. Developers and users learn the current status of the relevant problems, and data is provided for measuring and predicting software quality and reliability.

∙ Developers need feedback to verify that the implementation is complete, conforms to standards, and satisfies the quality requirements. To support stabilization, testing provides detailed information about defects and hints about why tests fail. In addition, testing provides feedback for fixing and improving the product; for example, after changes have been made, tests must check whether the changes have altered or degraded previously working functionality.

∙ For requirements engineers, testing is valuable for validating and verifying requirements. Gause and Weinberg point out that one of the most effective ways of testing requirements is with test cases very much like those used to test a complete system. Deriving black-box tests from the requirements helps to ensure their completeness, accuracy, clarity, and conciseness. Testing can thus improve the requirements and move development in a test-driven direction.
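As a small illustration of deriving black-box tests directly from a requirement, the sketch below turns a hypothetical "passwords must be 8 to 64 characters" requirement into boundary-value test cases and runs them against an equally hypothetical implementation.

```python
# Requirement-derived boundary-value tests (requirement and component invented).

def derive_boundary_tests(minimum, maximum):
    """Classic boundary values for a 'length between min and max' requirement."""
    return [
        (minimum - 1, False),  # just below the lower bound: must be rejected
        (minimum,     True),   # lower bound: must be accepted
        (maximum,     True),   # upper bound: must be accepted
        (maximum + 1, False),  # just above the upper bound: must be rejected
    ]

def is_valid_password(password):
    """Hypothetical component implementing the requirement."""
    return 8 <= len(password) <= 64

if __name__ == "__main__":
    for length, expected in derive_boundary_tests(8, 64):
        assert is_valid_password("x" * length) is expected, f"length {length}"
    print("requirement-derived boundary tests passed")
```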
In short, testing increases benefits by reducing planning uncertainty and risk, supporting the relevant decisions, and keeping unnecessary expenditure to a minimum (the internal dimension). Even more importantly, it helps to realize the expected value propositions (the external dimension). These benefits do not come for free; the cost of testing is usually significant. Testing can be understood as buying information, and as an investment activity that reduces cost risks and uncertainty. Decisions about testing have to weigh this investment against its costs and benefits. The two questions that follow are therefore: What are the costs of testing? And what are the benefits of testing for value-generating activities?

Value-Based Management of Software Testing
Rudolf Ramler, Stefan Biffl and Paul Grünbacher

Abstract: Testing is one of the most resource-intensive activities in software development and consumes between 30 and 50% of total development costs according to many studies. Testing is, however, often not organized to maximize business value and is not aligned with a project's mission. Path, branch, instruction, mutation, scenario, or requirement testing usually treat all aspects of software as equally important, while in practice 80% of the value often comes from 20% of the software. In order to maximize the return on investment gained from software testing, the management of testing needs to maximize its value contribution. In this chapter we motivate the need for value-based testing, describe practices supporting the management of value-based testing, outline a framework for value-based test management, and illustrate the framework with an example.
Keywords: Value-based software engineering, value-based testing, cost of testing, benefits of testing, test management.

11.1 Introduction
Testing is one of the most important and most widely used approaches for validation and verification (V&V). V&V aims at comprehensively analyzing and testing software to determine that it performs the intended functions correctly, to ensure that it performs no unintended functions, and to measure its quality and reliability (Wallace and Fujii, 1989). According to IEEE 610.12 (1990), testing is defined as "an activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component." Testing is widely used in practice and plays a central role in the quality assurance strategies of many organizations. As software pervades more and more critical tasks and affects everyday life, security, and the well-being of millions of people (Ferscha and Mattern, 2004), the importance of testing will increase in the future. Studies show that testing already consumes between 30 and 50% of software development costs (Beizer, 1990). Even higher percentages are not uncommon for safety-critical systems. Finding more efficient ways to perform effective testing is therefore a key challenge in testing (Harrold, 2000).

Managing software testing based on value considerations promises to tackle increasing testing costs and required effort. Value-based test management could also provide guidance to better align testing investments with project objectives and business value. In Chapter 1, Boehm presents an impressive example of potential test cost savings (on project level as well as on global scale) by focusing testing on the most valuable aspects. The example illustrates that with an investment-oriented focus, testing 7% of the customer billing types (1 in 15) achieved 50% of the benefits of testing the software. Completely testing the system requires a constantly increasing effort and, due to decreasing marginal benefits, results in a negative return on investment.
Although a "100% tested" status is not a practical goal, there is still room for a considerable amount of improvement and savings by better adjusting testing to its value contribution.

The motivation for value-based software engineering comes from the fact that "much of current software engineering practice and research is done in a value-neutral setting, in which every requirement, use case, object, and defect is treated as equally important" (Boehm, 2003). This is especially true for testing, where its indirect contribution to product value leads to a value-neutral perception of testing. The common separation of concerns between development and testing exacerbates the problem. Testing is often reduced to a purely technical issue, leaving the close relationship between testing and business decisions unlinked and the potential value contribution of testing unexploited.

The objectives of this chapter are to motivate the need for value-based management of testing, to explain its underlying elements, to discuss existing practices that support value-based testing, and to outline a general framework for value-based test management. The remainder of this chapter is thus structured as follows. In Section 11.2 we discuss test management under the light of its value contribution. In Section 11.3 we describe existing practices that support value-based testing. Section 11.4 depicts a value-based test management framework using an example for illustration. An outlook on further research directions closes the chapter.

11.2 Taking a Value-Based Perspective on Testing
The objectives of value-based verification and validation are defined as "ensuring that a software solution satisfies its value objectives" and "organizing V&V tasks to operate as an investment activity" (Boehm and Huang, 2003). What are the contributions of testing if we look at it from a value-based perspective? Fundamentally, we can consider two dimensions: The internal dimension of testing covers costs and benefits of testing. The external dimension emphasizes the opportunities and risks of the future system that have to be addressed. The key challenge in value-based testing is to integrate these two dimensions, i.e., align the internal test process with the value objectives coming from the customers and the market.

It becomes clear that a pure focus on the technical aspects of testing (e.g., the testing methods and tools) is inappropriate to align the internal and external dimensions. Instead, test management activities need to adopt a value-based perspective.

Figure 41 illustrates the external and internal dimensions of test management and their interdependencies. The internal dimension is similar to the scope of control of the test manager in the project. This dimension addresses costs from software testing practice as well as short-term and long-term benefits of testing. The external dimension considers stakeholders and parameters outside the scope of control of the test manager. Value-based test management organizes testing to satisfy value propositions of the stakeholders and to focus the team on the most worthwhile testing targets.

The key question coming from the external view of software testing is: "How can we ensure the value objectives of the software system?" The goal is to reconcile stakeholder value propositions by focusing testing efforts on the most worthwhile parts of the software, the most important quality characteristics, and the most urgent symptoms of risks that threaten the value contribution of the project. Answering this question involves market opportunities and threats, project-specific customer value propositions, as well as costs and benefits. Please refer to Chapter 1 for details about opportunities and risks and to Chapter 7 for elicitation and reconciliation of stakeholder value propositions.

The internal view builds on the stakeholder value propositions and the test budget that represents the possible level of testing effort in a project. The key question in this view is: "How can we organize testing as an investment activity?" The goal is to achieve effective and efficient testing considering changes in development and budget reductions. Internal project stakeholders consider how plans for software development and associated testing activities can contribute to stakeholder value propositions by supplying system functionality and performance, but also by limiting the impact of project-relevant risks. Appropriate communication is necessary to balance the external and internal dimensions of testing to assure the consistency of testing objectives with stakeholder value propositions.

Value Contribution of Testing
Compared to other development activities such as coding or user interface design, testing does not create immediate product value. Instead, testing informs and supports other value-generating tasks in software development. A key to understanding the value contribution of testing is the contribution chain of testing (see the benefits realization approach described in Chapter 1). The contribution chain establishes the relation of testing to the final product that ultimately creates value for the stakeholders. Usually, the contribution chain of testing is complex and involves several different "clients," who benefit from testing.

Direct clients of testing are developers and project managers, who directly interact with the testing team (representing the internal dimension). However, in the spirit of value-based software engineering important parties for testing are customers and users (representing the external view). Customers and users are the source of value objectives (see Chapter 7), which set the context and scope of testing. Within this context testing informs developers and project managers to what extent value objectives are met and where improvement is required.

Clients of Testing
Developers, project managers, quality managers, customers, analysts, end users, or maintenance staff benefit from a thorough analysis of the software system and rely on feedback for detecting problems, reducing uncertainty, making decisions, or improving products and processes. The following examples show the kind of feedback from testing required by different groups:

∙ Customers and users get information as to what extent mutually agreed requirements are satisfied and to what extent the software meets their value propositions. Testing also provides visibility and insights about project progress. Passed tests reduce the odds of misbehavior, and acceptance decisions are thus frequently based on the results of tests. When acceptance tests are impractical or fail to reveal hidden problems that become visible only in real-world conditions, alpha and beta testing provide a more solid foundation for acceptance decisions.

∙ Marketing and product managers require information from testing for planning releases, pricing, promotion, and distribution. A gap between the actual quality and the quality expected by customers and users most certainly leads to misleading expectations and wrong assumptions that diminish or prevent value realization (Boehm, 2000b). In order to successfully manage these expectations and to satisfy individual and organizational objectives, reconciling customer needs with product design has to consider quality in addition to functionality.

∙ For project managers testing supports risk management and progress estimation. The focus is on identifying and eliminating risks that are potential value breakers and inhibit value achievements. Early detection of severe defects that significantly reduce project performance is a major objective. Ideally, testing reduces uncertainty and helps project managers to take better, more informed decisions, e.g., for defect removal, system stabilization, and release decisions.

∙ Quality managers are interested in the identification of problems and in particular problem trends. Results from testing are the input for the assessment of development performance and provide the basis for quality assurance strategies and process improvement. Rosenberg (2003) discusses how testing contributes to quality assurance and shows that problems need to be documented, corrected, and can then be used for process improvement; after assessing problem reports for their validity, corrective actions are implemented in accordance with customer-approved solutions; developers and users are informed about the problem status; and data for measuring and predicting software quality and reliability is provided.

∙ Developers require feedback from testing to gain confidence that the implementation is complete and correct, conforming to standards, and satisfying quality requirements. For stabilization, testing provides details about defects and their estimated severity, information for reproducing defects, and support for revealing the cause of the failures. Besides, testing provides feedback for improvement and learning from defects. For example, throughout maintenance a detailed and reproducible description of problems contributes to the efficient implementation of changes and regression tests ensuring that these changes do not break existing functionality.

∙ For requirements engineers, testing is valuable to validate and verify requirements. Gause and Weinberg (1989) point out that "… one of the most effective ways of testing requirements is with test cases very much like those for testing a complete system." Deriving black-box tests from requirements helps to assure their completeness, accuracy, clarity, and conciseness early on. Tests thus enhance requirements and enable development in a test-driven manner.

To summarize, testing helps to realize benefits by reducing planning uncertainty, mitigating risks, making more informed decisions, controlling efforts, and minimizing downstream costs (the internal dimension). More importantly, it helps to realize the expected stakeholder value propositions (the external dimension). These benefits, however, do not come for free, and the costs of testing are often significant. Testing can be perceived as buying information and can be considered as an investment activity as it reduces the costs of risks, uncertainties, and the reward of taking risks. Making sound decisions about the investment in testing requires understanding their implications on both costs and benefits. The underlying questions therefore are: What are the costs of testing, and what are the benefits of testing for value-generating activities?