Model-based testing through a GUI


Software Testing Terminology (Chinese-English)
data corruption: 数据污染
data definition C-use pair: 数据定义C-use使用对
data definition P-use coverage: 数据定义P-use覆盖
data definition P-use pair: 数据定义P-use使用对
data definition: 数据定义
data definition-use coverage: 数据定义使用覆盖
data definition-use pair: 数据定义使用对
data definition-use testing: 数据定义使用测试
Check In: 检入
Check Out: 检出
Closeout: 收尾
code audit: 代码审计
Code coverage: 代码覆盖
Code Inspection: 代码检视
Core team: 核心小组
corrective maintenance: 故障检修
correctness: 正确性
coverage: 覆盖率
coverage item: 覆盖项
crash: 崩溃
Beta testing: β测试
Black Box Testing: 黑盒测试
Blocking bug: 阻碍性错误
Bottom-up testing: 自底向上测试
boundary value coverage: 边界值覆盖
boundary value testing: 边界值测试
Bug bash: 错误大扫除
bug fix: 错误修正
Bug report: 错误报告

RAD6

[Slide: RAD6 adoption stack. ISV & Customer Tools and IBM MW & Server Toolkits build on WebSphere Studio, which is based on the WebSphere Studio Workbench. Based on Java, the integrated development environment supports development and testing of all J2EE components and improves programming productivity. The V6.0 Application Server is a separate install (local or remote).]
Annotation-based Programming
Web Services
• Build and consume Web Services – EJB and Java bean JSR 101/109 Web Services
• Support for setting conformance levels – WS-I SSBP 1.0 (Simple SOAP Basic Profile 1.0) and WS-I AP 1.0 (Attachments Profile 1.0)
• Generate JAX-RPC handlers with wizard – skeleton handler generated and deployment descriptors updated
• Secure Web Service requests/responses with WS-Security – security values specified in J2EE deployment descriptor editors
• Integrated SOAP Monitor available for viewing Web Service traffic – integrated into the creation wizard
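To make the handler-generation feature above concrete, here is a minimal sketch of a JAX-RPC (JSR 101/109) handler skeleton of the kind such a wizard produces. It assumes the J2EE 1.4 javax.xml.rpc API is on the classpath; the class name and pass-through behavior are illustrative assumptions, not actual tool output.

```java
import javax.xml.namespace.QName;
import javax.xml.rpc.handler.GenericHandler;
import javax.xml.rpc.handler.MessageContext;

// Hypothetical handler class; a generated skeleton would also be registered
// in the web service's deployment descriptor.
public class LoggingHandler extends GenericHandler {

    // SOAP headers this handler processes; empty for a pass-through handler.
    public QName[] getHeaders() {
        return new QName[0];
    }

    // Called for each request before it reaches the endpoint.
    public boolean handleRequest(MessageContext context) {
        // Inspect or modify the SOAP message here (e.g. add security tokens).
        return true; // continue with the rest of the handler chain
    }

    // Called for each response on its way back to the client.
    public boolean handleResponse(MessageContext context) {
        return true;
    }
}
```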

Software Verification Terminology (Chinese-English)

Acceptance testing | 验收测试Acceptance Testing|可接受性测试Accessibility test | 软体适用性测试actual outcome|实际结果Ad hoc testing | 随机测试Algorithm analysis | 算法分析algorithm|算法Alpha testing | α测试analysis|分析anomaly|异常application software|应用软件Application under test (AUT) | 所测试的应用程序Architecture | 构架Artifact | 工件ASQ|自动化软件质量(Automated Software Quality)Assertion checking | 断言检查Association | 关联Audit | 审计audit trail|审计跟踪Automated Testing|自动化测试Backus-Naur Form|BNF范式baseline|基线Basic Block|基本块basis test set|基本测试集Behaviour | 行为Bench test | 基准测试benchmark|标杆/指标/基准Best practise | 最佳实践Beta testing | β测试Black Box Testing|黑盒测试Blocking bug | 阻碍性错误Bottom-up testing | 自底向上测试boundary value coverage|边界值覆盖boundary value testing|边界值测试Boundary values | 边界值Boundry Value Analysis|边界值分析branch condition combination coverage|分支条件组合覆盖branch condition combination testing|分支条件组合测试branch condition coverage|分支条件覆盖branch condition testing|分支条件测试branch condition|分支条件Branch coverage | 分支覆盖branch outcome|分支结果branch point|分支点branch testing|分支测试branch|分支Breadth Testing|广度测试Brute force testing| 强力测试Buddy test | 合伙测试Buffer | 缓冲Bug | 错误Bug bash | 错误大扫除bug fix | 错误修正Bug report | 错误报告Bug tracking system| 错误跟踪系统bug|缺陷Build | 工作版本(内部小版本)Build Verfication tests(BVTs)| 版本验证测试Build-in | 内置Capability Maturity Model (CMM)| 能力成熟度模型Capability Maturity Model Integration (CMMI)| 能力成熟度模型整合capture/playback tool|捕获/回放工具Capture/Replay Tool|捕获/回放工具CASE|计算机辅助软件工程(computer aided software engineering)CAST|计算机辅助测试cause-effect graph|因果图certification |证明change control|变更控制Change Management |变更管理Change Request |变更请求Character Set | 字符集Check In |检入Check Out |检出Closeout | 收尾code audit |代码审计Code coverage | 代码覆盖Code Inspection|代码检视Code page | 代码页Code rule | 编码规范Code sytle | 编码风格Code Walkthrough|代码走读code-based testing|基于代码的测试coding standards|编程规范Common sense | 常识Compatibility Testing|兼容性测试complete path testing |完全路径测试completeness|完整性complexity |复杂性Component testing | 组件测试Component|组件computation data use|计算数据使用computer system security|计算机系统安全性Concurrency user | 并发用户Condition coverage | 条件覆盖condition coverage|条件覆盖condition outcome|条件结果condition|条件configuration control|配置控制Configuration item | 配置项configuration management|配置管理Configuration testing | 配置测试conformance criterion| 一致性标准Conformance Testing| 一致性测试consistency | 一致性consistency checker| 一致性检查器Control flow graph | 控制流程图control flow graph|控制流图control flow|控制流conversion testing|转换测试Core team | 核心小组corrective maintenance|故障检修correctness |正确性coverage |覆盖率coverage item|覆盖项crash|崩溃criticality analysis|关键性分析criticality|关键性CRM(change request management)| 变更需求管理Customer-focused mindset | 客户为中心的理念体系Cyclomatic complexity | 圈复杂度data corruption|数据污染data definition C-use pair|数据定义C-use使用对data definition P-use coverage|数据定义P-use覆盖data definition P-use pair|数据定义P-use使用对data definition|数据定义data definition-use coverage|数据定义使用覆盖data definition-use pair |数据定义使用对data definition-use testing|数据定义使用测试data dictionary|数据字典Data Flow Analysis | 数据流分析data flow analysis|数据流分析data flow coverage|数据流覆盖data flow diagram|数据流图data flow testing|数据流测试data integrity|数据完整性data use|数据使用data validation|数据确认dead code|死代码Debug | 调试Debugging|调试Decision condition|判定条件Decision coverage | 判定覆盖decision coverage|判定覆盖decision outcome|判定结果decision table|判定表decision|判定Defect | 缺陷defect density | 缺陷密度Defect Tracking |缺陷跟踪Deployment | 部署Depth Testing|深度测试design for sustainability |可延续性的设计design of experiments|实验设计design-based testing|基于设计的测试Desk checking | 桌前检查desk checking|桌面检查Determine Usage Model | 确定应用模型Determine Potential Risks | 确定潜在风险diagnostic|诊断DIF(decimation in frequency) | 
按频率抽取dirty testing|肮脏测试disaster recovery|灾难恢复DIT (decimation in time)| 按时间抽取documentation testing |文档测试domain testing|域测试domain|域DTP DETAIL TEST PLAN详细确认测试计划Dynamic analysis | 动态分析dynamic analysis|动态分析Dynamic Testing|动态测试embedded software|嵌入式软件emulator|仿真End-to-End testing|端到端测试Enhanced Request |增强请求entity relationship diagram|实体关系图Encryption Source Code Base| 加密算法源代码库Entry criteria | 准入条件entry point |入口点Envisioning Phase | 构想阶段Equivalence class | 等价类Equivalence Class|等价类equivalence partition coverage|等价划分覆盖Equivalence partition testing | 等价划分测试equivalence partition testing|参考等价划分测试equivalence partition testing|等价划分测试Equivalence Partitioning|等价划分Error | 错误Error guessing | 错误猜测error seeding|错误播种/错误插值error|错误Event-driven | 事件驱动Exception handlers | 异常处理器exception|异常/例外executable statement|可执行语句Exhaustive Testing|穷尽测试exit point|出口点expected outcome|期望结果Exploratory testing | 探索性测试Failure | 失效Fault | 故障fault|故障feasible path|可达路径feature testing|特性测试Field testing | 现场测试FMEA|失效模型效果分析(Failure Modes and Effects Analysis)FMECA|失效模型效果关键性分析(Failure Modes and Effects Criticality Analysis)Framework | 框架FTA|故障树分析(Fault Tree Analysis)functional decomposition|功能分解Functional Specification |功能规格说明书Functional testing | 功能测试Functional Testing|功能测试G11N(Globalization) | 全球化Gap analysis | 差距分析Garbage characters | 乱码字符glass box testing|玻璃盒测试Glass-box testing | 白箱测试或白盒测试Glossary | 术语表GUI(Graphical User Interface)| 图形用户界面Hard-coding | 硬编码Hotfix | 热补丁I18N(Internationalization)| 国际化Identify Exploratory Tests –识别探索性测试IEEE|美国电子与电器工程师学会(Institute of Electrical and Electronic Engineers)Incident 事故Incremental testing | 渐增测试incremental testing|渐增测试infeasible path|不可达路径input domain|输入域Inspection | 审查inspection|检视installability testing|可安装性测试Installing testing | 安装测试instrumentation|插装instrumenter|插装器Integration |集成Integration testing | 集成测试interface | 接口interface analysis|接口分析interface testing|接口测试interface|接口invalid inputs|无效输入isolation testing|孤立测试Issue | 问题Iteration | 迭代Iterative development| 迭代开发job control language|工作控制语言Job|工作Key concepts | 关键概念Key Process Area | 关键过程区域Keyword driven testing | 关键字驱动测试Kick-off meeting | 动会议L10N(Localization) | 本地化Lag time | 延迟时间LCSAJ|线性代码顺序和跳转(Linear Code Sequence And Jump)LCSAJ coverage|LCSAJ覆盖LCSAJ testing|LCSAJ测试Lead time | 前置时间Load testing | 负载测试Load Testing|负载测试Localizability testing| 本地化能力测试Localization testing | 本地化测试logic analysis|逻辑分析logic-coverage testing|逻辑覆盖测试Maintainability | 可维护性maintainability testing|可维护性测试Maintenance | 维护Master project schedule |总体项目方案Measurement | 度量Memory leak | 内存泄漏Migration testing | 迁移测试Milestone | 里程碑Mock up | 模型,原型modified condition/decision coverage|修改条件/判定覆盖modified condition/decision testing |修改条件/判定测试modular decomposition|参考模块分解Module testing | 模块测试Monkey testing | 跳跃式测试Monkey Testing|跳跃式测试mouse over|鼠标在对象之上mouse leave|鼠标离开对象MTBF|平均失效间隔实际(mean time between failures)MTP MAIN TEST PLAN主确认计划MTTF|平均失效时间(mean time to failure)MTTR|平均修复时间(mean time to repair)multiple condition coverage|多条件覆盖mutation analysis|变体分析N/A(Not applicable) | 不适用的Negative Testing | 逆向测试, 反向测试, 负面测试negative testing|参考负面测试Negative Testing|逆向测试/反向测试/负面测试off by one|缓冲溢出错误non-functional requirements testing|非功能需求测试nominal load|额定负载N-switch coverage|N切换覆盖N-switch testing|N切换测试N-transitions|N转换Off-the-shelf software | 套装软件operational testing|可操作性测试output domain|输出域paper audit|书面审计Pair Programming | 成对编程partition testing|分类测试Path coverage | 路径覆盖path coverage|路径覆盖path sensitizing|路径敏感性path testing|路径测试path|路径Peer review | 同行评审Performance | 性能Performance indicator| 性能(绩效)指标Performance 
testing | 性能测试Pilot | 试验Pilot testing | 引导测试Portability | 可移植性portability testing|可移植性测试Positive testing | 正向测试Postcondition | 后置条件Precondition | 前提条件precondition|预置条件predicate data use|谓词数据使用predicate|谓词Priority | 优先权program instrumenter|程序插装progressive testing|递进测试Prototype | 原型Pseudo code | 伪代码pseudo-localization testing|伪本地化测试pseudo-random|伪随机QC|质量控制(quality control)Quality assurance(QA)| 质量保证Quality Control(QC) | 质量控制Race Condition|竞争状态Rational Unified Process(以下简称RUP)|瑞理统一工艺recovery testing|恢复性测试Refactoring | 重构regression analysis and testing|回归分析和测试Regression testing | 回归测试Release | 发布Release note | 版本说明release|发布Reliability | 可靠性reliability assessment|可靠性评价reliability|可靠性Requirements management tool| 需求管理工具Requirements-based testing | 基于需求的测试Return of Investment(ROI)| 投资回报率review|评审Risk assessment | 风险评估risk|风险Robustness | 强健性Root Cause Analysis(RCA)| 根本原因分析safety critical|严格的安全性safety|(生命)安全性Sanity Testing|理智测试Schema Repository | 模式库Screen shot | 抓屏、截图SDP|软件开发计划(software development plan)Security testing | 安全性测试security testing|安全性测试security.|(信息)安全性serviceability testing|可服务性测试Severity | 严重性Shipment | 发布simple subpath|简单子路径Simulation | 模拟Simulator | 模拟器SLA(Service level agreement)| 服务级别协议SLA|服务级别协议(service level agreement)Smoke testing | 冒烟测试Software development plan(SDP)| 软件开发计划Software development process| 软件开发过程software development process|软件开发过程software diversity|软件多样性software element|软件元素software engineering environment|软件工程环境software engineering|软件工程Software life cycle | 软件生命周期source code|源代码source statement|源语句Specification | 规格说明书specified input|指定的输入spiral model |螺旋模型SQAP SOFTWARE QUALITY ASSURENCE PLAN 软件质量保证计划SQL|结构化查询语句(structured query language)Staged Delivery|分布交付方法state diagram|状态图state transition testing |状态转换测试state transition|状态转换state|状态Statement coverage | 语句覆盖statement testing|语句测试statement|语句Static Analysis|静态分析Static Analyzer|静态分析器Static Testing|静态测试statistical testing|统计测试Stepwise refinement | 逐步优化storage testing|存储测试Stress Testing | 压力测试structural coverage|结构化覆盖structural test case design|结构化测试用例设计structural testing|结构化测试structured basis testing|结构化的基础测试structured design|结构化设计structured programming|结构化编程structured walkthrough|结构化走读stub|桩sub-area|子域Summary| 总结SVVP SOFTWARE Vevification&Validation PLAN| 软件验证和确认计划symbolic evaluation|符号评价symbolic execution|参考符号执行symbolic execution|符号执行symbolic trace|符号轨迹Synchronization | 同步Syntax testing | 语法分析system analysis|系统分析System design | 系统设计system integration|系统集成System Testing | 系统测试TC TEST CASE 测试用例TCS TEST CASE SPECIFICATION 测试用例规格说明TDS TEST DESIGN SPECIFICATION 测试设计规格说明书technical requirements testing|技术需求测试Test | 测试test automation|测试自动化Test case | 测试用例test case design technique|测试用例设计技术test case suite|测试用例套test comparator|测试比较器test completion criterion|测试完成标准test coverage|测试覆盖Test design | 测试设计Test driver | 测试驱动test environment|测试环境test execution technique|测试执行技术test execution|测试执行test generator|测试生成器test harness|测试用具Test infrastructure | 测试基础建设test log|测试日志test measurement technique|测试度量技术Test Metrics |测试度量test procedure|测试规程test records|测试记录test report|测试报告Test scenario | 测试场景Test Script|测试脚本Test Specification|测试规格Test strategy | 测试策略test suite|测试套Test target | 测试目标Test ware | 测试工具Testability | 可测试性testability|可测试性Testing bed | 测试平台Testing coverage | 测试覆盖Testing environment | 测试环境Testing item | 测试项Testing plan | 测试计划Testing procedure | 测试过程Thread testing | 线程测试time sharing|时间共享time-boxed | 固定时间TIR test incident report 测试事故报告ToolTip|控件提示或说明top-down testing|自顶向下测试TPS TEST PEOCESS SPECIFICATION 
测试步骤规格说明Traceability | 可跟踪性traceability analysis|跟踪性分析traceability matrix|跟踪矩阵Trade-off | 平衡transaction|事务/处理transaction volume|交易量transform analysis|事务分析trojan horse|特洛伊木马truth table|真值表TST TEST SUMMARY REPORT 测试总结报告Tune System | 调试系统TW TEST WARE |测试件Unit Testing |单元测试Usability Testing|可用性测试Usage scenario | 使用场景User acceptance Test | 用户验收测试User database |用户数据库User interface(UI) | 用户界面User profile | 用户信息User scenario | 用户场景V&V (Verification & Validation) | 验证&确认validation |确认verification |验证version |版本Virtual user | 虚拟用户volume testing|容量测试VSS(visual source safe) |VTP Verification TEST PLAN验证测试计划VTR Verification TEST REPORT验证测试报告Walkthrough | 走读Waterfall model | 瀑布模型Web testing | 网站测试White box testing | 白盒测试Work breakdown structure (WBS) | 任务分解结构Zero bug bounce (ZBB) | 零错误反弹。

Common English Abbreviations in Software Testing

软件测试常用英语词汇静态测试:Non-Execution-Based Testing或Static testing 代码走查:Walkthrough代码审查:Code Inspection技术评审:Review动态测试:Execution-Based Testing白盒测试:White-Box Testing黑盒测试:Black-Box Testing灰盒测试:Gray-Box Testing软件质量保证SQA:Software Quality Assurance软件开发生命周期:Software Development Life Cycle冒烟测试:Smoke Test回归测试:Regression Test功能测试:Function Testing性能测试:Performance Testing压力测试:Stress Testing负载测试:Volume Testing易用性测试:Usability Testing安装测试:Installation Testing界面测试:UI Testing配置测试:Configuration Testing文档测试:Documentation Testing兼容性测试:Compatibility Testing安全性测试:Security Testing恢复测试:Recovery Testing单元测试:Unit Test集成测试:Integration Test系统测试:System Test验收测试:Acceptance Test测试计划应包括:测试对象:The Test Objectives测试范围: The Test Scope测试策略: The Test Strategy测试方法: The Test Approach,测试过程: The test procedures,测试环境: The Test Environment,测试完成标准:The test Completion criteria测试用例:The Test Cases测试进度表:The Test Schedules风险:Risks接口:Interface最终用户:The End User正式的测试环境:Formal Test Environment确认需求:Verifying The Requirements有分歧的需求:Ambiguous Requirements运行和维护:Operation and Maintenance.可复用性:Reusability可靠性: Reliability/Availability电机电子工程师协会IEEE:The Institute of Electrical and Electronics Engineers) 正确性:Correctness实用性:Utility健壮性:Robustness可靠性:Reliability软件需求规格说明书:SRS (software requirement specification )概要设计:HLD (high level design )详细设计:LLD (low level design )统一开发流程:RUP (rational unified process )集成产品开发:IPD (integrated product development )能力成熟模型:CMM (capability maturity model )能力成熟模型集成:CMMI (capability maturity model integration )戴明环:PDCA (plan do check act )软件工程过程组:SEPG (software engineering process group )集成测试:IT (integration testing )系统测试:ST (system testing )关键过程域:KPA (key process area )同行评审:PR (peer review )用户验收测试:UAT (user acceptance testing )验证和确认:V&V (verification & validation )控制变更委员会:CCB (change control board )图形用户界面:GUI (graphic user interface )配置管理员:CMO (configuration management officer )平均失效间隔时间:(MTBF mean time between failures )平均修复时间:MTTR (mean time to restoration )平均失效时间:MTTF (mean time to failure )工作任务书:SOW (statement of work )α测试:alpha testingβ测试:beta testing适应性:Adaptability可用性:Availability功能规格说明书:Functional Specification软件开发中常见英文缩写和各类软件开发文档的英文缩写:英文简写文档名称MRD market requirement document (市场需求文档)PRD product requirement document (产品需求文档)SOW 工作任务说明书PHB Process Handbook (项目过程手册)EST Estimation Sheet (估计记录)PPL Project Plan (项目计划)CMP Software Management Plan( 配置管理计划)QAP Software Quality Assurance Plan (软件质量保证计划)RMP Software Risk Management Plan (软件风险管理计划)TST Test Strategy(测试策略)WBS Work Breakdown Structure (工作分解结构)BRS Business Requirement Specification(业务需求说明书)SRS Software Requirement Specification(软件需求说明书)STP System Testing plan (系统测试计划)STC System Testing Cases (系统测试用例)HLD High Level Design (概要设计说明书)ITP Integration Testing plan (集成测试计划)ITC Integration Testing Cases (集成测试用例)LLD Low Level Design (详细设计说明书)UTP Unit Testing Plan ( 单元测试计划)UTC Unit Testing Cases (单元测试用例)UTR Unit Testing Report (单元测试报告)ITR Integration Testing Report (集成测试报告)STR System Testing Report (系统测试报告)RTM Requirements Traceability Matrix (需求跟踪矩阵)CSA Configuration Status Accounting (配置状态发布)CRF Change Request Form (变更申请表)WSR Weekly Status Report (项目周报)QSR Quality Weekly Status Report (质量工作周报)QAR Quality Audit Report(质量检查报告)QCL Quality Check List(质量检查表)PAR Phase Assessment Report (阶段评估报告)CLR Closure Report (项目总结报告)RFF Review Finding Form (评审发现表)MOM Minutes of Meeting (会议纪要)MTX Metrics Sheet (度量表)CCF ConsistanceCheckForm(一致性检查表)BAF Baseline Audit Form(基线审计表)PTF Program Trace Form(问题跟踪表)领测国际科技(北京)有限公司领测软件测试网 /软件测试中英文对照术语表A• Abstract test case (High level test case) :概要测试用例• 
Acceptance:验收• Acceptance criteria:验收标准• Acceptance testing:验收测试• Accessibility testing:易用性测试• Accuracy:精确性• Actual outcome (actual result) :实际输出/实际结果• Ad hoc review (informal review) :非正式评审• Ad hoc testing:随机测试• Adaptability:自适应性• Agile testing:敏捷测试• Algorithm test (branch testing) :分支测试• Alpha testing:alpha 测试• Analyzability:易分析性• Analyzer:分析员• Anomaly:异常• Arc testing:分支测试• Attractiveness:吸引力• Audit:审计• Audit trail:审计跟踪• Automated testware:自动测试组件• Availability:可用性B• Back-to-back testing:对比测试• Baseline:基线• Basic block:基本块• Basis test set:基本测试集• Bebugging:错误撒播• Behavior:行为• Benchmark test:基准测试• Bespoke software:定制的软件• Best practice:最佳实践• Beta testing:Beta 测试领测国际科技(北京)有限公司领测软件测试网 /• Big-bang testing:集成测试• Black-box technique:黑盒技术• Black-box testing:黑盒测试• Black-box test design technique:黑盒测试设计技术• Blocked test case:被阻塞的测试用例• Bottom-up testing:自底向上测试• Boundary value:边界值• Boundary value analysis:边界值分析• Boundary value coverage:边界值覆盖率• Boundary value testing:边界值测试• Branch:分支• Branch condition:分支条件• Branch condition combination coverage:分支条件组合覆盖率• Branch condition combination testing:分支条件组合测试• Branch condition coverage:分支条件覆盖率• Branch coverage:分支覆盖率• Branch testing:分支测试• Bug:缺陷• Business process-based testing:基于商业流程的测试C• Capability Maturity Model (CMM) :能力成熟度模型• Capability Maturity Model Integration (CMMI) :集成能力成熟度模型• Capture/playback tool:捕获/回放工具• Capture/replay tool:捕获/重放工具• CASE (Computer Aided Software Engineering) :电脑辅助软件工程• CAST (Computer Aided Software Testing) :电脑辅助软件测试• Cause-effect graph:因果图• Cause-effect graphing:因果图技术• Cause-effect analysis:因果分析• Cause-effect decision table:因果判定表• Certification:认证• Changeability:可变性• Change control:变更控制• Change control board:变更控制委员会• Checker:检查人员• Chow's coverage metrics (N-switch coverage) :N 切换覆盖率• Classification tree method:分类树方法• Code analyzer:代码分析器• Code coverage:代码覆盖率领测国际科技(北京)有限公司领测软件测试网 /• Code-based testing:基于代码的测试• Co-existence:共存性• Commercial off-the-shelf software:商用离岸软件• Comparator:比较器• Compatibility testing:兼容性测试• Compiler:编译器• Complete testing:完全测试/穷尽测试• Completion criteria:完成标准• Complexity:复杂性• Compliance:一致性• Compliance testing:一致性测试• Component:组件• Component integration testing:组件集成测试• Component specification:组件规格说明• Component testing:组件测试• Compound condition:组合条件• Concrete test case (low level test case) :详细测试用例• Concurrency testing:并发测试• Condition:条件表达式• Condition combination coverage:条件组合覆盖率• Condition coverage:条件覆盖率• Condition determination coverage:条件判定覆盖率• Condition determination testing:条件判定测试• Condition testing:条件测试• Condition outcome:条件结果• Confidence test (smoke test) :信心测试(冒烟测试)• Configuration:配置• Configuration auditing:配置审核• Configuration control:配置控制• Configuration control board (CCB) :配置控制委员会• Configuration identification:配置标识• Configuration item:配置项• Configuration management:配置管理• Configuration testing:配置测试• Confirmation testing:确认测试• Conformance testing:一致性测试• Consistency:一致性• Control flow:控制流• Control flow graph:控制流图• Control flow path:控制流路径• Conversion testing:转换测试• COTS (Commercial Off-The-Shelf software) :商业离岸软件• Coverage:覆盖率• Coverage analysis:覆盖率分析领测国际科技(北京)有限公司领测软件测试网 /• Coverage item:覆盖项• Coverage tool:覆盖率工具• Custom software:定制软件• Cyclomatic complexity:圈复杂度• Cyclomatic number:圈数D• Daily build:每日构建• Data definition:数据定义• Data driven testing:数据驱动测试• Data flow:数据流• Data flow analysis:数据流分析• Data flow coverage:数据流覆盖率• Data flow test:数据流测试• Data integrity testing:数据完整性测试• Database integrity testing:数据库完整性测试• Dead code:无效代码• Debugger:调试器• Debugging:调试• Debugging tool:调试工具• Decision:判定• Decision condition 
coverage:判定条件覆盖率• Decision condition testing:判定条件测试• Decision coverage:判定覆盖率• Decision table:判定表• Decision table testing:判定表测试• Decision testing:判定测试技术• Decision outcome:判定结果• Defect:缺陷• Defect density:缺陷密度• Defect Detection Percentage (DDP) :缺陷发现率• Defect management:缺陷管理• Defect management tool:缺陷管理工具• Defect masking:缺陷屏蔽• Defect report:缺陷报告• Defect tracking tool:缺陷跟踪工具• Definition-use pair:定义-使用对• Deliverable:交付物• Design-based testing:基于设计的测试• Desk checking:桌面检查领测国际科技(北京)有限公司领测软件测试网 /• Development testing:开发测试• Deviation:偏差• Deviation report:偏差报告• Dirty testing:负面测试• Documentation testing:文档测试• Domain:域• Driver:驱动程序• Dynamic analysis:动态分析• Dynamic analysis tool:动态分析工具• Dynamic comparison:动态比较• Dynamic testing:动态测试E• Efficiency:效率• Efficiency testing:效率测试• Elementary comparison testing:基本组合测试• Emulator:仿真器、仿真程序• Entry criteria:入口标准• Entry point:入口点• Equivalence class:等价类• Equivalence partition:等价区间• Equivalence partition coverage:等价区间覆盖率• Equivalence partitioning:等价划分技术• Error:错误• Error guessing:错误猜测技术• Error seeding:错误撒播• Error tolerance:错误容限• Evaluation:评估• Exception handling:异常处理• Executable statement:可执行的语句• Exercised:可执行的• Exhaustive testing:穷尽测试• Exit criteria:出口标准• Exit point:出口点• Expected outcome:预期结果• Expected result:预期结果• Exploratory testing:探测测试领测国际科技(北京)有限公司领测软件测试网 /F• Fail:失败• Failure:失败• Failure mode:失败模式• Failure Mode and Effect Analysis (FMEA) :失败模式和影响分析• Failure rate:失败频率• Fault:缺陷• Fault density:缺陷密度• Fault Detection Percentage (FDP) :缺陷发现率• Fault masking:缺陷屏蔽• Fault tolerance:缺陷容限• Fault tree analysis:缺陷树分析• Feature:特征• Field testing:现场测试• Finite state machine:有限状态机• Finite state testing:有限状态测试• Formal review:正式评审• Frozen test basis:测试基线• Function Point Analysis (FPA) :功能点分析• Functional integration:功能集成• Functional requirement:功能需求• Functional test design technique:功能测试设计技术• Functional testing:功能测试• Functionality:功能性• Functionality testing:功能性测试G• glass box testing:白盒测试H• Heuristic evaluation:启发式评估• High level test case:概要测试用例• Horizontal traceability:水平跟踪领测国际科技(北京)有限公司领测软件测试网 /I• Impact analysis:影响分析• Incremental development model:增量开发模型• Incremental testing:增量测试• Incident:事件• Incident management:事件管理• Incident management tool:事件管理工具• Incident report:事件报告• Independence:独立• Infeasible path:不可行路径• Informal review:非正式评审• Input:输入• Input domain:输入范围• Input value:输入值• Inspection:审查• Inspection leader:审查组织者• Inspector:审查人员• Installability:可安装性• Installability testing:可安装性测试• Installation guide:安装指南• Installation wizard:安装向导• Instrumentation:插装• Instrumenter:插装工具• Intake test:入口测试• Integration:集成• Integration testing:集成测试• Integration testing in the large:大范围集成测试• Integration testing in the small:小范围集成测试• Interface testing:接口测试• Interoperability:互通性• Interoperability testing:互通性测试• Invalid testing:无效性测试• Isolation testing:隔离测试• Item transmittal report:版本发布报告• Iterative development model:迭代开发模型K• Key performance indicator:关键绩效指标领测国际科技(北京)有限公司领测软件测试网 /• Keyword driven testing:关键字驱动测试L• Learnability:易学性• Level test plan:等级测试计划• Link testing:组件集成测试• Load testing:负载测试• Logic-coverage testing:逻辑覆盖测试• Logic-driven testing:逻辑驱动测试• Logical test case:逻辑测试用例• Low level test case:详细测试用例M• Maintenance:维护• Maintenance testing:维护测试• Maintainability:可维护性• Maintainability testing:可维护性测试• Management review:管理评审• Master test plan:综合测试计划• Maturity:成熟度• Measure:度量• Measurement:度量• Measurement scale:度量粒度• Memory leak:内存泄漏• Metric:度量• Migration testing:移植测试• Milestone:里程碑• Mistake:错误• Moderator:仲裁员• Modified condition decision coverage:改进的条件判定覆盖率• Modified condition decision testing:改进的条件判定测试• 
Modified multiple condition coverage:改进的多重条件判定覆盖率• Modified multiple condition testing:改进的多重条件判定测试• Module:模块• Module testing:模块测试• Monitor:监视器• Multiple condition:多重条件• Multiple condition coverage:多重条件覆盖率领测国际科技(北京)有限公司领测软件测试网 /• Multiple condition testing:多重条件测试• Mutation analysis:变化分析• Mutation testing:变化测试N• N-switch coverage:N 切换覆盖率• N-switch testing:N 切换测试• Negative testing:负面测试• Non-conformity:不一致• Non-functional requirement:非功能需求• Non-functional testing:非功能测试• Non-functional test design techniques:非功能测试设计技术O• Off-the-shelf software:离岸软件• Operability:可操作性• Operational environment:操作环境• Operational profile testing:运行剖面测试• Operational testing:操作测试• Oracle:标准• Outcome:输出/结果• Output:输出• Output domain:输出范围• Output value:输出值P• Pair programming:结队编程• Pair testing:结队测试• Partition testing:分割测试• Pass:通过• Pass/fail criteria:通过/失败标准• Path:路径• Path coverage:路径覆盖• Path sensitizing:路径敏感性• Path testing:路径测试领测国际科技(北京)有限公司领测软件测试网 / • Peer review:同行评审• Performance:性能• Performance indicator:绩效指标• Performance testing:性能测试• Performance testing tool:性能测试工具• Phase test plan:阶段测试计划• Portability:可移植性• Portability testing:移植性测试• Postcondition:结果条件• Post-execution comparison:运行后比较• Precondition:初始条件• Predicted outcome:预期结果• Pretest:预测试• Priority:优先级• Probe effect:检测成本• Problem:问题• Problem management:问题管理• Problem report:问题报告• Process:流程• Process cycle test:处理周期测试• Product risk:产品风险• Project:项目• Project risk:项目风险• Program instrumenter:编程工具• Program testing:程序测试• Project test plan:项目测试计划• Pseudo-random:伪随机Q• Quality:质量• Quality assurance:质量保证• Quality attribute:质量属性• Quality characteristic:质量特征• Quality management:质量管理领测国际科技(北京)有限公司领测软件测试网 /R• Random testing:随机测试• Recorder:记录员• Record/playback tool:记录/回放工具• Recoverability:可复原性• Recoverability testing:可复原性测试• Recovery testing:可复原性测试• Regression testing:回归测试• Regulation testing:一致性测试• Release note:版本说明• Reliability:可靠性• Reliability testing:可靠性测试• Replaceability:可替换性• Requirement:需求• Requirements-based testing:基于需求的测试• Requirements management tool:需求管理工具• Requirements phase:需求阶段• Resource utilization:资源利用• Resource utilization testing:资源利用测试• Result:结果• Resumption criteria:继续测试标准• Re-testing:再测试• Review:评审• Reviewer:评审人员• Review tool:评审工具• Risk:风险• Risk analysis:风险分析• Risk-based testing:基于风险的测试• Risk control:风险控制• Risk identification:风险识别• Risk management:风险管理• Risk mitigation:风险消减• Robustness:健壮性• Robustness testing:健壮性测试• Root cause:根本原因S• Safety:安全领测国际科技(北京)有限公司领测软件测试网 /• Safety testing:安全性测试• Sanity test:健全测试• Scalability:可测量性• Scalability testing:可测量性测试• Scenario testing:情景测试• Scribe:记录员• Scripting language:脚本语言• Security:安全性• Security testing:安全性测试• Serviceability testing:可维护性测试• Severity:严重性• Simulation:仿真• Simulator:仿真程序、仿真器• Site acceptance testing:定点验收测试• Smoke test:冒烟测试• Software:软件• Software feature:软件功能• Software quality:软件质量• Software quality characteristic:软件质量特征• Software test incident:软件测试事件• Software test incident report:软件测试事件报告• Software Usability Measurement Inventory (SUMI) :软件可用性调查问卷• Source statement:源语句• Specification:规格说明• Specification-based testing:基于规格说明的测试• Specification-based test design technique:基于规格说明的测试设计技术• Specified input:特定输入• Stability:稳定性• Standard software:标准软件• Standards testing:标准测试• State diagram:状态图• State table:状态表• State transition:状态迁移• State transition testing:状态迁移测试• Statement:语句• Statement coverage:语句覆盖• Statement testing:语句测试• Static analysis:静态分析• Static analysis tool:静态分析工具• Static analyzer:静态分析工具• Static code analysis:静态代码分析• Static code analyzer:静态代码分析工具• Static testing:静态测试• Statistical 
testing:统计测试领测国际科技(北京)有限公司领测软件测试网 /• Status accounting:状态统计• Storage:资源利用• Storage testing:资源利用测试• Stress testing:压力测试• Structure-based techniques:基于结构的技术• Structural coverage:结构覆盖• Structural test design technique:结构测试设计技术• Structural testing:基于结构的测试• Structured walkthrough:面向结构的走查• Stub: 桩• Subpath: 子路径• Suitability: 符合性• Suspension criteria: 暂停标准• Syntax testing: 语法测试• System:系统• System integration testing:系统集成测试• System testing:系统测试T• Technical review:技术评审• Test:测试• Test approach:测试方法• Test automation:测试自动化• Test basis:测试基础• Test bed:测试环境• Test case:测试用例• Test case design technique:测试用例设计技术• Test case specification:测试用例规格说明• Test case suite:测试用例套• Test charter:测试宪章• Test closure:测试结束• Test comparator:测试比较工具• Test comparison:测试比较• Test completion criteria:测试比较标准• Test condition:测试条件• Test control:测试控制• Test coverage:测试覆盖率• Test cycle:测试周期• Test data:测试数据• Test data preparation tool:测试数据准备工具领测国际科技(北京)有限公司领测软件测试网 / • Test design:测试设计• Test design specification:测试设计规格说明• Test design technique:测试设计技术• Test design tool: 测试设计工具• Test driver: 测试驱动程序• Test driven development: 测试驱动开发• Test environment: 测试环境• Test evaluation report: 测试评估报告• Test execution: 测试执行• Test execution automation: 测试执行自动化• Test execution phase: 测试执行阶段• Test execution schedule: 测试执行进度表• Test execution technique: 测试执行技术• Test execution tool: 测试执行工具• Test fail: 测试失败• Test generator: 测试生成工具• Test leader:测试负责人• Test harness:测试组件• Test incident:测试事件• Test incident report:测试事件报告• Test infrastructure:测试基础组织• Test input:测试输入• Test item:测试项• Test item transmittal report:测试项移交报告• Test level:测试等级• Test log:测试日志• Test logging:测试记录• Test manager:测试经理• Test management:测试管理• Test management tool:测试管理工具• Test Maturity Model (TMM) :测试成熟度模型• Test monitoring:测试跟踪• Test object:测试对象• Test objective:测试目的• Test oracle:测试标准• Test outcome:测试结果• Test pass:测试通过• Test performance indicator:测试绩效指标• Test phase:测试阶段• Test plan:测试计划• Test planning:测试计划• Test policy:测试方针• Test Point Analysis (TPA) :测试点分析• Test procedure:测试过程领测国际科技(北京)有限公司领测软件测试网 /• Test procedure specification:测试过程规格说明• Test process:测试流程• Test Process Improvement (TPI) :测试流程改进• Test record:测试记录• Test recording:测试记录• Test reproduceability:测试可重现性• Test report:测试报告• Test requirement:测试需求• Test run:测试运行• Test run log:测试运行日志• Test result:测试结果• Test scenario:测试场景• Test script:测试脚本• Test set:测试集• Test situation:测试条件• Test specification:测试规格说明• Test specification technique:测试规格说明技术• Test stage:测试阶段• Test strategy:测试策略• Test suite:测试套• Test summary report:测试总结报告• Test target:测试目标• Test tool:测试工具• Test type:测试类型• Testability:可测试性• Testability review:可测试性评审• Testable requirements:需求可测试性• Tester:测试人员• Testing:测试• Testware:测试组件• Thread testing:组件集成测试• Time behavior:性能• Top-down testing:自顶向下的测试• Traceability:可跟踪性U• Understandability:易懂性• Unit:单元• unit testing:单元测试• Unreachable code:执行不到的代码领测国际科技(北京)有限公司领测软件测试网 /• Usability:易用性• Usability testing:易用性测试• Use case:用户用例• Use case testing:用户用例测试• User acceptance testing:用户验收测试• User scenario testing:用户场景测试• User test:用户测试V• V -model:V 模式• Validation:确认• Variable:变量• Verification:验证• Vertical traceability:垂直可跟踪性• Version control:版本控制• Volume testing:容量测试W• Walkthrough:走查• White-box test design technique:白盒测试设计技术• White-box testing:白盒测试• Wide Band Delphi:Delphi 估计方法。

Common Testing Terms (Chinese-English, with Explanations)

Acceptance Testing (可接受性测试): confirmatory testing, generally performed by users or customers, to determine whether a product is acceptable.

Actual outcome (实际结果): the result actually produced by the object under test under specific conditions.

Ad Hoc Testing (随机测试): the tester exercises the system's functions at random, attempting to make the system fail.

Algorithm (算法): (1) a well-defined, finite set of rules for solving a problem in a finite number of steps; (2) any sequence of operations that performs a specific task.

Algorithm analysis (算法分析): a software verification and validation task that ensures the selected algorithm is correct, appropriate, and stable, and that it meets all accuracy, size, and timing requirements.

Alpha Testing (Alpha测试): early testing of a product carried out by selected users, generally performed in a controlled environment.

Analysis (分析): (1) decomposition into atomic parts or basic principles in order to determine the properties of the whole; (2) a reasoning process showing that a particular result follows from the assumed premises; (3) the methodical study of a problem in which the problem is broken down into small related units for further detailed study.

Anomaly (异常): any result observed in documentation or software operation that deviates from expectations.

Application software (应用软件): software that satisfies a specific need.

Architecture (构架): the organizational structure of a system or component.

ASQ (自动化软件质量, Automated Software Quality): the use of software tools to improve software quality.

Assertion (断言): a logical expression specifying a state that a program must already be in, or a condition that a set of program variables must satisfy at a particular point during program execution.

Assertion checking (断言检查): checking of assertions embedded in the program by the user.
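As a tiny illustration of the two assertion entries above, the following sketch uses Java's built-in assert statement (checked at runtime when the JVM is started with -ea); the method and its message are illustrative assumptions.

```java
// A developer-embedded assertion: the condition must hold at this point,
// otherwise an AssertionError is raised when assertion checking is enabled.
public class AssertionDemo {
    static double averageSpeed(double distanceKm, double hours) {
        assert hours > 0 : "elapsed time must be positive";
        return distanceKm / hours;
    }

    public static void main(String[] args) {
        System.out.println(averageSpeed(120.0, 1.5)); // passes the assertion
    }
}
```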

Audit (审计): an independent examination of one or more work products to evaluate compliance with specifications, standards, contracts, or other criteria.

Audit trail (审计跟踪): a chronological record of system audit activities.

Automated Testing (自动化测试): testing performed with automated test tools, generally without human intervention; most commonly used in GUI and performance testing.

Reference: "Automated Model-Based Testing of Community-Driven Open-Source GUI Applications"

Automated Model-Based Testing of Community-Driven Open-Source GUI Applications
Zheng-Wen Shen, 2006/07/12

Reference
• Qing Xie and Atif Memon, "Automated Model-Based Testing of Community-Driven Open-Source GUI Applications," 22nd International Conference on Software Maintenance (ICSM 2006), Philadelphia, PA, USA, Sep. 25-27, 2006.

Outline
• 1. Introduction
• 2. Testing Loops
• 3. Overview of GUI Model
• 4. Experiment
• 5. Conclusions

1. Introduction
• Open-source software (OSS) development on the world-wide web (WWW):
– communities of programmers distributed world-wide
– unprecedented code churn rates
• Problems with OSS:
– little direct inter-developer communication (CVS commit log messages, bug reports, change requests, and comments)
– developers work on loosely coupled parts of the application code, so a local change may inadvertently break other parts of the overall software code.

2. Testing Loops
• Capture/replay (CR) tool test cases are fragile:
– the input event sequence can no longer execute on the GUI
– the expected output stored with the test case becomes obsolete.
• Model-based techniques:
– generate and maintain test cases automatically during OSS evolution
– employ GUI models to generate test cases (continuous GUI testing).

3. Overview of GUI Model
• An event-flow graph (EFG) model represents all possible event sequences that may be executed on a GUI, i.e. all executable paths of the software. Example: Microsoft Word has 4,210 events in total; 80 events open menus, 346 open windows, 196 close windows, and the remaining 3,588 interact with the underlying code.
• Event-interaction graph (EIG): tests interactions between loosely coupled parts of an OSS. System-interaction events = non-structural events + close-window events. Test cases consist of event-flow paths that start and end with system-interaction events, without any intermediate system-interaction events.
• Test case generation:
1. Test cases are short, so they generate and execute very quickly.
2. They consist only of system-interaction events.
3. The expected state is stored only for system-interaction events.
4. All system-interaction events are executed, so most of the GUI's functionality is covered.
5. Each test case is independent, and the suite can be distributed.
• Test oracle creation:
– Oracles for crash tests: crashes during test execution may be used to identify serious problems in the software (a minimal sketch of this crash-oracle idea follows the conclusions below).
– Oracles for smoke tests: the software does what it was doing before the modifications (reference testing).
– Oracles for comprehensive testing: specification-based approach (precondition + effect).

4. Experiment
• Do popular web-based, community-driven, GUI-based OSS have problems that can be detected by our automated techniques?
• Do these problems persist across multiple versions of the OSS?
• Procedure: execute the fully automatic crash-testing process on the applications and report problems (a crash = an uncaught exception); determine how long these problems have been in the application code. The overall process executed without any human intervention in 5-8 hours.
• Subject applications:
– FreeMind, a premier mind-mapping software: 0.0.2, 0.1.0, 0.4, 0.7.1, 0.8.0RC5, 0.8.0
– Gantt Project, a project scheduling application: 1.6, 1.9.11, 1.10.3, 1.11, 1.11.1, 2.pre1
– JMSN, a pure-Java Microsoft MSN Messenger clone: 0.9a, 0.9.2, 0.9.7, 0.9.8b7, 0.9.9b1
– Crossword Sage, a tool for creating (and solving) professional-looking crosswords with powerful word-suggestion capabilities: 0.1, 0.2, 0.3.0, 0.3.1, 0.3.2, and 0.3.5

FreeMind bugs
1. NullPointerException when trying to open a non-existent file (0.0.2, 0.1.0)
2. FileNotFoundException when trying to save a file with a very long file name (0.0.2, 0.1.0, 0.4)
3. NullPointerException when clicking on some buttons on the main toolbar when no file is open (0.1.0)
4. NullPointerException when clicking on some menu items if no file is open (0.1.0, 0.4, 0.7.1, 0.8.0RC5)
5. NullPointerException when trying to save a "blank" file (0.1.0)
6. NullPointerException when adding a new node after toggling a folded node (0.4)
7. FileNotFoundException when trying to import a nonexistent file (0.4, 0.7.1, 0.8.0RC5, 0.8.0)
8. FileNotFoundException when trying to export a file with a very long file name (0.7.1, 0.8.0RC5, 0.8.0)
9. NullPointerException when trying to split a node in the "Edit a long node" window (0.7.1, 0.8.0RC5, 0.8.0)
10. NumberFormatException when setting non-numeric input while expecting a number in the "preferences setting" window (0.8.0RC5, 0.8.0)

Gantt Project bugs
1. NumberFormatException when setting non-numeric inputs while expecting a number in the "New task" window (1.6)
2. FileNotFoundException when trying to open a nonexistent file (1.6)
3. FileNotFoundException when trying to save a file with a very long file name (1.6, 1.9.11, 1.10.3, 1.11, 1.11.1, 2.pre1)
4. NullPointerException after confirming any preferences setting (1.9.11)
5. NullPointerException when trying to save the content to a server (1.9.11)
6. NullPointerException when trying to import a nonexistent file (1.9.11, 1.10.3, 1.11, 1.11.1, 2.pre1)
7. InterruptedException when trying to open a new window (1.10.3)
8. Runtime error when trying to send e-mail (1.11, 1.11.1, 2.pre1)

JMSN bugs
1. InvocationTargetException when trying to refresh the buddy list (0.9a, 0.9.2)
2. FileNotFoundException when trying to submit a bug/request report because the submission page doesn't exist (0.9a, 0.9.2, 0.9.5, 0.9.7, 0.9.8b7, 0.9.9b2)
3. NullPointerException when trying to check the validity of the login data (0.9.7, 0.9.8b7, 0.9.9b2)
4. SocketException and NullPointerException when stopping a socket that has been started (0.9.8b7, 0.9.9b2)

Crossword Sage bugs
• NullPointerException in Crossword Builder when trying to delete a word (0.3.0, 0.3.1)
• NullPointerException in Crossword Builder when trying to suggest a new word (0.3.0, 0.3.1, 0.3.2, 0.3.5)
• NullPointerException in Crossword Builder when trying to write a clue for a word (0.3.0, 0.3.1, 0.3.2, 0.3.5)
• NullPointerException when loading a new crossword file (0.3.5)
• NullPointerException when splitting a word (0.3.5)
• NullPointerException when publishing the crossword (0.3.5)

4.1 Results
• Some of the same bugs existed across applications because the applications shared open-source GUI components (the FileSave component), so such components should sanitize their inputs.
• Many bugs are persistent across versions.
• There are fewer bugs in the first version than in later versions, which is consistent with our experience.
• Reasons for the crashes:
1. invalid text input (validity, size)
2. a widget enabled when it should be disabled
3. an object declared but not initialized
4. obsolete external resources

5. Conclusions
• Recognizing the nature of the WWW enables the separation of GUI testing steps by level of automation, feedback, and resource utilization.
• Demonstration that resources may be better utilized by defining a concentric loop-based GUI testing approach.
• Demonstration that popular GUI-based OSS developed on the WWW have flaws that can be detected by our fully automated approach.
• Future work: a more detailed study of the overall benefits of this technique; extended subject applications (testing TerpOffice incrementally); web applications that have complex back-ends; the interaction between the three test loops (whether one loop can benefit from the execution of the inner loops; studying the need for additional loops).
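The crash-test loop summarized above lends itself to a compact illustration. Below is a minimal, self-contained Java sketch of the idea: generate short two-event test cases over system-interaction events and treat any uncaught exception as a crash report. The Event interface and the length-two sequences are simplifying assumptions for illustration; this is not the authors' actual tool or its API.

```java
import java.util.ArrayList;
import java.util.List;

public class CrashTestSketch {

    // Toy abstraction of a GUI event that can be fired on the running application.
    interface Event {
        String name();
        void execute() throws Exception;
    }

    // Generate all length-2 sequences of system-interaction events, the shortest
    // test cases suggested by the event-interaction graph model.
    static List<Event[]> twoEventSequences(List<Event> systemInteractionEvents) {
        List<Event[]> sequences = new ArrayList<>();
        for (Event first : systemInteractionEvents) {
            for (Event second : systemInteractionEvents) {
                sequences.add(new Event[] { first, second });
            }
        }
        return sequences;
    }

    // Crash oracle: a test case fails if executing its events raises an
    // uncaught exception; the offending sequence is reported.
    static List<String> runCrashTests(List<Event[]> sequences) {
        List<String> crashes = new ArrayList<>();
        for (Event[] sequence : sequences) {
            try {
                for (Event event : sequence) {
                    event.execute();
                }
            } catch (Exception crash) {
                crashes.add(sequence[0].name() + " -> " + sequence[1].name()
                        + " : " + crash);
            }
        }
        return crashes;
    }
}
```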


Encounter True-Time ATPG

Encounter True-Time ATPGPart of the Encounter Test family, Encounter True-Time ATPG offers robust automated test patterngeneration (ATPG) engines, proven to generate the highest quality tests for all standard design-for-test (DFT) methods, styles, and flows. It supports not only industry-standard stuck-at and transition fault models, but also raises the bar on fault detection by providing defect-based, user-definable modeling capability with its patented pattern fault technology.Pattern fault technology is what enables the Encounter “gate-exhaustive” coverage(GEC) methodology, proven to be two-to-four times more efficient at detecting gate intrinsic faults than any other static methodologies available on the market (e.g. SSF, N-Detect).For delay test, True-Time ATPGincludes a dynamic timing engine and uses either circuit timing information or constraints to automaticallygenerate transition-based fault tests and faster-than-at-speed tests for identifying very deep sub-micron design-process feature defects (e.g. certain small delay defects).Figure 1: Encounter True-Time ATPG provides a timing-based ATPG engine driven by SDF or SDC informationOn-product clock generation (OPCG) produces and applies patterns to effectively capture this class of faults while minimizing false failures. Use of SDF or SDC information ensures the creation of a highly accurate timing-based pattern set.True-Time ATPG optimizes test coverage through a combination of topological random resistant fault analysis (RRFA) and deterministic fault analysis (DFA)with automated test point insertion—far superior to traditional test coverage algorithms. RRFA is used for early optimi-zation of test coverage, pattern density, and runtime performance. DFA is applied downstream for more detailed circuit-level fault analysis when the highest quality goals must be met.To reduce scan test time while maintaining the highest test coverage, True-Time technology provides intelligent ATPG with on-chip compression (XOR- or MISR-based). It is also power-aware and uses patented technologies to significantly reduce and manage power consumption during manufacturing test.True-Time ATPG also offers a customizable environment to suityour project development needs.The GUI provides highly interactive capabilities for coverage analysis and debug; it includes a powerful sequence analyzer that boosts productivity. 
Encounter True-Time ATPG is available in two offerings: Basic and Advanced.Benefits• Ensures high quality of shipped silicon with production-proven 2-4x reduction in test escapes• Provides superior partial scan coverage with proprietary pattern fault modeling and sequential ATPG algorithms• Optimizes test coverage with RRFA and DFA test point insertion methodology • Boosts productivity by integrating with Encounter RTL Compiler• Delivers superior runtime throughput with high-performance model build and fault simulation engines as well as distributed ATPG • Lowers cost of test with patterncompaction and compressiontechniques that maintain fullscan coverage• Balances tester costs with diagnosticsmethodologies by offering flexiblecompression architectures with fullX masking capabilities (includingOPMISR+ and XOR-based solutions)• Supports low pin-count testingvia JTAG control of MBIST andhigh-compression ratio technology• Supports reduced pin-count testing forI/O test• Interfaces with Encounter Power Systemfor accurate power calculation andpattern IR drop analysis• Reduces circuit and switching activityduring manufacturing test to managepower consumption• Reduces false failures due tovoltage drop• Provides a GUI with powerfulinteractive analysis capabilitiesincluding a schematic viewer andsequence analyzerEncounter TestPart of the Encounter digital design andimplementation platform, the EncounterTest product family delivers an advancedsilicon verification and yield learningsystem. Encounter Test comprises threeproduct technologies:• Encounter DFT Architect: ensuresease of use, productivity, and predict-ability in generating ATPG-readynetlists containing DFT structures, fromthe most basic to the most complex;available as an add-on option toEncounter RTL Compiler• Encounter True-Time ATPG: ensuresthe fewest test escapes and the highestquality shipped silicon at the lowestdevelopment and production costs• Encounter Diagnostics: delivers themost accurate volume and precisiondiagnostics capabilities to accelerateyield ramp and optimize device andfault modelingEncounter Test also offers a flexible APIusing the PERL language to retrieve designdata from its pervasive database. 
Thisunique capability allows you to customizeSoC Test Infrastructure• Maximize productivity• Maximize predictabilityTest Pattern Generation• Maximize product quality• Minimize test costsDiagnostic• Maximize yeld and ramp• Maximize silicon bring-upEncounter DFT Architect• Full-chip test infrastructure• Scan compression(XOR and MISR), BIST,IEEE1500, 1149.1/6• ATPG-aware insertionverification• Power-aware DFT and ATPGEncounter True-Time ATPG• Stuck-at, at-speed, andfaster-than-at-speed testing• Design timing drivestest timing• High-quality ATPGEncounter Diagnostics• Volume mode finds criticalyield limiters• Precision mode locatesroot cause• Unsurpassed silicon bring-upprecisionSiliconFigure 2: Encounter Test offers a complete RTL-to-silicon verification flow and methodologies that enable the highest quality IC devices at the lowest costreporting, trace connections in the design, and obtain information that might be helpful for debugging design issues or diagnostics.FeaturesTrue-Time ATPG BasicTrue-Time ATPG Basic contains thestuck-at ATPG engine, which supports:• High correlation test coverage, easeof use, and productivity through integration with the Encounter RTL Compiler synthesis environment• Full scan, partial scan, and sequential ATPG for edge-triggered andLSSD designs• Stuck-at, IDDQ, and I/O parametric fault models• Core-based testing, test data migration, and test reuse• Special support for custom designs such as data pipelines, scan control pipelines, and safe-scan• Test pattern volume optimization using RRFA-based test point insertion• Test coverage optimization usingDFA-based test point insertion• Pre-defined (default) and user-defined defect-based fault modeling andgate-exhaustive coverage based on pattern fault technology• Powerful GUI with interactive analysis capabilitiesPattern fault capability enables defect-based testing with a patented technology for accurately modeling the behavior of nanometer defects, such as bridges and opens for ATPG and diagnostics, and for specifying the complete test of a circuit. The ATPG engine, in turn, uses this definition wherever the circuit is instan-tiated within a design. By default, pattern faults are used to increase coverage of XOR, LATCH, FLOP, TSD, and MUX primi-tives. They can also be used to model unique library cells and transition and delay-type defects.True-Time ATPG AdvancedTrue-Time ATPG Advanced offers thesame capabilities as the Basic configu-ration, plus delay test ATPG functionality.It uses post-layout timing data from theSDF file to calculate the path delay of allpaths in the design, including distributiontrees of test clocks and controls. Usingthis information, you can decide on thebest cycle time(s) to test for in a givenclock domain.True-Time ATPG Advanced is capableof generating tests at multiple testfrequencies to detect potential early yieldfailures and certain small delay defects.You can specify your own cycle time orlet True-Time ATPG calculate one basedon path lengths. It avoids generating testsalong paths that exceed tester cycle timeand/or mask transitions along paths thatexceed tester cycle time. True-Time ATPGgenerates small delay defect patternsbased on longest path analysis to ensurepattern efficiency.A unique feature of the Advancedoffering is its ability to generate faster-than-at-speed tests to detect small delaydefects that would otherwise fail duringsystem test or result in early field failures.True-Time ATPG Advanced also usestester-specific constraint informationduring test pattern generation. 
Thecombination of actual post-layout timingand tester constraint information withTrue-Time ATPG Advanced algorithmsensures that the test patterns will work“first pass” on the tester.The test coverage optimizationmethodology is expanded beyond RRFAand DFA-based test point insertion(TPI) technology. The combinationof both topological and circuit-levelfault analysis with automated TPIprovides the most advanced capabilityfor ensuring the highest possible testcoverage while controlling the numberof inserted test points. DFA-based TPIBridge TestingFigure 3: Pattern faults model any type ofbridge behavior; net pair lists automaticallycreate bridging fault models; ATPG anddiagnostics use the models to detect andisolate bridgesFigure 4: Power-aware ATPG for scan and capture modes prevents voltage-drop–induced failures in test modeCadence is transforming the global electronics industry through a vision called EDA360.With an application-driven approach to design, our software, hardware, IP, and services helpcustomers realize silicon, SoCs, and complete systems efficiently and profitably. © 2012 Cadence Design Systems, Inc. All rights reserved. Cadence, the Cadence logo, Conformal, Encounter, and VoltageStorm are registered trademarks of Cadence Design Systems, Inc. All other s are properties of their respective holders.has links to Encounter Conformal ® Equivalence Checker to ensure the most efficient, logically equivalent netlist modifications with maximum controllability and observability.The ATPG engine works with multiple compression architectures to generate tests that cut costs by reducing scan test time and data volume. Actual compression ratios are driven by the compression architecture as well asdesign characteristics (e.g. available pins, block-level structures). Users can achieve compression ratios exceeding 100x.Flexible compression options allow you to select a multiple input signature register (MISR) architecture with the highest compression ratio, or an exclusive-or (XOR)–based architecture that enables a highly efficientcombinational compression ratio and a one-pass diagnostics methodology. Both architectures support a broadcast type or XOR-based decompressor.On-product MISR plus (OPMISR+) uses a MISR-based output compression, which eliminates the need to check the response at each cycle. XOR-based compression uses an XOR-tree–based output compression to enable a one-pass flow through diagnostics.Additionally, intelligent ATPG algorithms minimize full-scan correlation issues and reduce power consumption, deliv-ering demonstrated results of >99.5 stuck-at test coverage with >100x test time reduction. Optional X-state masking capability is available on a per-chain/ per-cycle basis. Masking is usuallyrequired when using delay test because delay ATPG may generate unknown states in the circuit.Using the Common Power Format (CPF), True-Time ATPG Advanced automatically generates test modes to enable individual power domains to be tested independently or in small groups. This, along with automaticrecognition and testing of power-specific structures (level shifters, isolation logic, state retention registers) ensures the highest quality for low-power devices.Power-aware ATPG uses industry-leading techniques to manage and significantly reduce power consumption due to scan and capture cycles during manufacturing test. The benefit is reduced risk of false failures due to voltage drop and fewer reliability issues due to excessive power consumption. 
True-Time ATPG Advanced uses algorithms that limit switching during scan testing to further reduce power consumption.Encounter Test offers a flexible API using the PERL language to retrievedesign data from its pervasive database. This unique capability allows users to customize reporting, trace connections in the design, and obtain information that might be helpful for debugging design issues or diagnostics.Platforms• Sun Solaris (64-bit)• HP-UX (64-bit)• Linux (32-bit, 64-bit)• IBM AIX (64-bit)Cadence Services and Support• Cadence application engineers can answer your technical questions by telephone, email, or Internet—they can also provide technical assistance and custom training • Cadence certified instructors teach more than 70 courses and bring their real-world experience into the classroom • More than 25 Internet Learning Series (iLS) online courses allow you the flexibility of training at your own computer via the Internet • Cadence Online Support gives you24x7 online access to a knowledgebase of the latest solutions, technicaldocumentation, software downloads, and more。
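Since the datasheet above relies on MISR-based output compression (OPMISR+), here is a minimal Java sketch of a multiple-input signature register: on each scan cycle the chain outputs are XORed into the stages of a shifting feedback register, and only the final signature is compared on the tester. The register width and feedback taps are toy assumptions for illustration, not Cadence's implementation.

```java
public class Misr {
    private static final int WIDTH = 16;                 // signature width (assumption)
    // Illustrative feedback taps for the shift register (not a specific standard polynomial).
    private static final long TAPS = (1L << 15) | (1L << 4) | (1L << 2) | (1L << 1);
    private long state = 0L;                             // current signature

    // Compact one scan-out cycle: 'outputs' holds one bit per scan chain.
    public void clock(boolean[] outputs) {
        long feedback = ((state >> (WIDTH - 1)) & 1L) == 1L ? TAPS : 0L;
        state = ((state << 1) & ((1L << WIDTH) - 1)) ^ feedback;
        for (int i = 0; i < outputs.length && i < WIDTH; i++) {
            if (outputs[i]) {
                state ^= (1L << i);                      // XOR each chain output into a stage
            }
        }
    }

    // After all cycles, only this signature is checked against the expected value,
    // so per-cycle response comparison on the tester is unnecessary.
    public long signature() {
        return state;
    }
}
```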

Systematic Model-Based Testing of Embedded Automotive Software

Electronic Notes in Theoretical Computer Science 111 (2005) 13–26 /locate/entcsSystematic Model-Based Testing of Embedded Automotive SoftwareMirko Conrad, Ines Fey1 ,2DaimlerChrysler AG Alt-Moabit 96a 10559 Berlin / GermanySadegh Sadeghipour3IT Power Consultants Gustav-Meyer-Allee 25 13355 Berlin / GermanyAbstract The software embedded in automotive control systems increasingly determines the functionality and properties of present-day motor vehicles. The development and test process of the systems and the embedded software becomes the limiting factor. While these challenges, on the development side, are met by employing model-based specification, design, and implementation techniques, satisfactory solutions on the testing side are slow in arriving. With regard to the systematic test design and the description of test scenarios especially, there is a lot of room for improvement. This paper introduces the model-based black-box testing (M B 3 T ) approach in order to effectively minimize these deficits by creating a systematic procedure for the design of test scenarios for embedded automotive software and its integration in the model-based development process. According to the M B 3 T approach, logical test scenarios are first defined based on the textual requirements specification of the embedded software. These test scenarios are specified at a high level of abstraction and do not contain any implementation details of the test object. Due to their close link to the requirements it is easy to check which requirements are covered by which test scenario. Subsequently, the requirement-based logical tests are refined to executable model-based test scenarios. Finally, the approach helps to check, whether or not the logical test scenarios are fully covered by the executable test scenarios. The M B 3 T approach has recently been successfully employed in a number of automotive embedded software development projects at DaimlerChrysler. Keywords: model-based development, systematic test, embedded automotive systems, model-based test1571-0661/$ – see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.entcs.2004.12.00514M. Conrad et al. / Electronic Notes in Theoretical Computer Science 111 (2005) 13–261IntroductionA large part of today’s innovation in the automotive industry is achieved by extending the functionality of vehicle software [13]. The software’s increase in scope and the increase in complexity connected with it, call for new ways of dealing with the development and the testing of embedded software. On the development side, these challenges have been met, since the mid 1990s, by a paradigm shift in the automotive software development. This leads to the traditional, document-based software development in the vehicle subsystems engine / powertrain, chassis and body / comfort being increasingly displaced by a model-based development [2,15,16,10,17].traditional software developmentrequirementsspecificationdesignimplementationtestingmodel-based software developmentrequirementsmodellingtestingFigure 1. Traditional vs. model-based software developmentAnalogous to traditional development, the model-based development process starts with a requirements phase, in which the requirements of the functionality to be realized are being specified textually by using tools such as DOORS [7]. 
Following that, this innovative development approach is characterized by the integrated deployment of executable models for specification, design and implementation, using commercial modeling and simulation environments such as Matlab / Simulink / Stateflow [12] or ASCET-SD [1]. These tools use block diagrams and extended state machines as modeling notations. Very early in this development procedure an executable model of the control software (functional model) is developed, which can be simulated as well as tested. This executable model is used throughout the downstream development process and forms the ’blueprint’ for the automatic or manual coding of the embedded software. In practice, this development is reflected in an evolution of the functional model from an early logical model to an implementation model and its transformation into C code (model evolution). As compared to traditional software development, where phases are clearly separate, modelbased development shows the phases specification, design and implementation1 The work described was partially performed as part of the IMMOS project funded by the German Federal Ministry of Education and Research, Project Ref. 01ISC31D. 2 Email: mirko.conrad | ines fey@ 3 Email: sadegh@itpower.deM. Conrad et al. / Electronic Notes in Theoretical Computer Science 111 (2005) 13–2615to have grown together much more strongly (Figure 1). The seamless utilization of models facilitates a highly consistent and efficient development. Also within the framework of model-based development it is essential to subject the software being developed to an appropriate combination of verification and validation measures in order to detect errors and produce confidence in the correct functioning of the software. In the industrial field, dynamic testing forms the focal point of analytical quality assurance. Since the executable model could be exploited as an additional, comprehensive source of information for testing, new possibilities and synergy potentials for the test process arise in the context of model-based development. Considering the question of efficiency, one should make the most of these possibilities. In the automotive industry, such a test process, being closely connected with the model-based development, including a combination of different test methods which complement each other, and thereby utilizing the executable model as a rich source of information for the test, is called a model-based test. So, the automotive view on model-based testing (cf. [8]) is a rather process-oriented one [11,14]. In particular, no additional models are being created for test purposes, but the functional models already existent within the model-based development are used for the test. Satisfactory solutions for model-based testing are slow in arriving. With regard to the systematic design of test scenarios especially, there is a lot of room for improvement. In order to enable a systematic model-based test, a test design from different perspectives, and with a subsequent consistency check is presented in this paper. The following description of the model-based black-box testing (MB 3 T ) approach explicates this basic concept.2Model-based Black-box TestingIn order to define tests capable of detecting errors and producing confidence in the software, test scenarios should be designed which are systematically based on the software requirements. 
In the case of the test scenarios being directed only towards the technical realization of the test object, there would be the danger of neglecting and not adequately testing the original requirements made on the test object. However, requirement-based, i.e. logical, test scenarios are abstract and not executable. This means that they cannot be used directly for test execution. Therefore, additional executable test scenarios are needed, which can stimulate the interface of the respective test object. Finally, it should be checked, whether or not the logical test scenarios are fully covered by the executable ones. These demands led us to define the model-based black-box testing approach, which has recently been successfully16M. Conrad et al. / Electronic Notes in Theoretical Computer Science 111 (2005) 13–26employed in a number of automotive embedded software development projects at DaimlerChrysler. The MB 3 T approach makes it possible to define test scenarios for software developed in a model-based way from two different perspectives and to create consistency between both (Figure 2):•Requirement-based test design: Early in development, logical test scenarios are defined, based on the textual requirements specification of the embedded software. The requirement-based test scenarios created in this way are specified at a high level of abstraction and do not contain any implementation details for the test object. Due to their close link to the requirements it is easy to check which requirements are covered by which test. Model-based test design: Once the functional model is available, executable test scenarios are derived from it with the help of the classification-tree method for embedded systems [4,5,11]. These are tailored to the functional model’s interface, or rather, the software derived from it, and therefore also lend themselves to test execution. Consistency check: By means of a set of checking rules, the consistency between logical, requirement-based and executable model-based test scenarios can be checked and thus guaranteed.••The three above-mentioned steps will be described in more detail in the following sections and illustrated using a chassis system [3] as an example.consistency checkrequirement-based test designmodel-based test designtest objectFigure 2. M B 3 T Testing automotive software from two different perspectives2.1 Requirement-based Test Design In the automotive field, requirements on embedded software are usually described textually. Example 2.1 For an antilock braking system (ABS) a high-level requirement (HLR) of this kind could read as follows:M. Conrad et al. / Electronic Notes in Theoretical Computer Science 111 (2005) 13–26•17HLR-1: The ABS system should guarantee near-optimum braking performance, irrespective of vehicle speed when braking. HLR-1.1: The ABS system should control the vehicle speed in the interval between vmin= 2 km/h and vmax= 250km/h.•For the design of requirement-based tests we utilize the classification-tree method (CTM) [9]. According to this black-box testing method, the input domain of a test object is analyzed on the basis of its functional specification with respect to various aspects regarded as relevant for the test. For each aspect, disjoint and complete classifications are formed. Classes resulting from these classifications may be further classified iteratively. The stepwise partition of the input domain by means of classifications is represented graphically as a tree. 
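As an illustration of the partitioning just described, the sketch below encodes the ABS classifications of Figure 3 as a small Python structure, counts the possible class combinations, and picks one logical scenario. The dictionary encoding and the concrete class choices for the selected scenario are assumptions made for this sketch, not part of the original paper.

```python
from itertools import product

# Requirement-based classification tree for the ABS example (Figure 3):
# each test aspect (classification) is partitioned into disjoint classes.
abs_classification_tree = {
    "ground": ["dry", "wet", "snow", "ice"],
    "road":   ["straight", "curved"],
    "brake":  ["not activated", "weak braking", "strong braking"],
    "speed":  ["v < vmin", "low", "middle", "high", "v > vmax"],
}

# The full combination table would contain every combination of leaf classes.
all_combinations = list(product(*abs_classification_tree.values()))
print(len(all_combinations))   # 4 * 2 * 3 * 5 = 120 possible combinations

# A logical test scenario selects one class per relevant classification,
# e.g. "Strong braking on ice"; the class choices here are illustrative.
strong_braking_on_ice = {
    "ground": "ice",
    "road": "straight",
    "brake": "strong braking",
    "speed": "high",
}
```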
Subsequently, test scenarios are formed by combining classes of different classifications. This is done by using the tree as the head of a combination table in which test scenarios are specified.ABSenvironmentdrivervehicle dynamicsspeedclassificationclassdrygroundiceroadbrakestraightcurvedwetsnownot activatedstrong brakingv < Vminlowmiddle_ v > Vmaxhighweak braking1: Strong braking on ice 2: Braking on wet road 3: Braking on a curveFigure 3. Requirement-based classification-tree with logical test scenarios for ABSExample 2.2 HLR-1 suggests considering different speeds at the moment of braking, while HLR-1.1 restricts the speed interval to be considered. Consequently, the vehicle speed is a significant test aspect. Further, HLR-1 requires a near-optimum braking performance to be achieved by the ABS system (expected test object behavior). Therefore, one also has to regard aspects impacting the braking behavior of the vehicle. These are the brake pressure initiated by the driver, the ground friction and the kind of road (straight or curved). All the test aspects mentioned are illustrated by the classification-tree in Figure 3. Due to the abstractness and understandability of tests on this level we specify the classifications in a qualitative way, e.g. we use ’low’, ’middle’ and ’high’ to describe the speed classes, rather than specific values or value intervals. Note that the both classes ’v < vmin ’ and ’v > vmax ’ have been specified in order to have a complete classification for speed. They are not used in the test scenarios for HLR-1 (they may be used for testing a different requirement, for instance).18M. Conrad et al. / Electronic Notes in Theoretical Computer Science 111 (2005) 13–26The maximum number of test scenarios which could be formed from a classification-tree is the number of combinations of leaf classes. Since this number is generally too high (in the case of the classification-tree in Figure 3 this number is 120), a strategy for selecting a subset of whole possible tests is necessary. Such a strategy is related to the question of test depth determination. The determination of test depth depends on the criticality of the test object, the project guidelines, the testing standards to be met and, last but not least, the decision of the testing engineer. For our example we require to cover all the classes of the main test aspect, i.e. vehicle speed. Consequently, as shown in Figure 3, three test scenarios are selected, each of them covering a relevant speed class.env_curve env_mue vehicle_steeringAngle abs_brakePressure vehicle_accPedal vehice_brakePedal vehicle_gearvehicle_with_absvehicle_with_absInputsenv_curveenv_muevehicle_steeringAnglevehicle_acceleratorvehicle_brakevehicle_gear0]0.02,inf[]0,0.1]]0.6,1][-540,-180[0]180,540]0]0,0.5] ]0.5,1[10]0,0.5] ]0.5,1[1-1012345]0,0.02]]0.1,0.3] ]0.3,0.6][-180,0] ]0,180]Figure 4. Model interface and model-based classification-tree of ABS systemThe insights obtained during the requirement-based test design serve as the starting point for model-based testing. The classifications of the requirementbased tree determine the test aspects which have to be interpreted as model inputs. The classes indicate how the domains of the inputs considered should be partitioned. Requirement-based test scenarios describe the situations which are to be covered by model-based test scenarios.M. Conrad et al. 
/ Electronic Notes in Theoretical Computer Science 111 (2005) 13–26192.2 Model-based Test Design Model-based test design is performed using an extended version of the classification-tree method, namely classification-tree method for embedded software (CTM/ES) [4,6,11]. The CTM/ES is a model-oriented black-box test design technique which allows a comprehensive graphical description of timedependent test scenarios by means of abstract signal waveforms that are defined stepwise for each model input. The classifications of the model-based tree are the input variables of the functional model under test (Figure 4).Example 2.3 Figure 4 shows the interface of the functional Simulink model as well as the model-based classification-tree for the antilock braking system ABS. The functional model’s input signals constitute the test aspects for the model-based test design. In contrast to the requirement-based classificationtree (Figure 3) the classifications on this level are specified using specific values or intervals of values. Figure 5 shows the model-based test scenarios for ABS. The three test scenarios specified in the combination table correspond to the logical test scenarios in Figure 3. Each test scenario consists of a number of steps, while each step is associated with a time stamp. Changing the values of a model input along the test steps is shown by marking different classes of the classification corresponding to that input. Continuous changes are defined by transitions (connecting lines) between marks in subsequent steps.Its classes are obtained by dividing the range of each variable into a set of complete and non-overlapping intervals. Furthermore, the test scenarios describe the course of these inputs over time (Figure 5). Later in the testing process, the resulting graphical test descriptions are automatically transformed into sequences of input values over time. These sequences can be deployed in different test environments, namely in modelin-the-loop (MiL), software-in-the-loop (SiL) and hardware-in-the-loop (HiL), in order to test different evolution stages of the test object. Of course, the test scenarios must be customized to the interfaces of the different test environments. This is, however, a straightforward task. In this way, the model is used as a comprehensive source of information for the definition of tests in all test phases.20M. Conrad et al. / Electronic Notes in Theoretical Computer Science 111 (2005) 13–26vehicle_with_absInputsenv_curveenv_muevehicle_steeringAnglevehicle_acceleratorvehicle_brakevehicle_gear0]0.02,inf[]0,0.1]]0.1,0.3]]0.6,1][-540,-180[[-180,0]0]180,540]0]0,0.5]]0.5,1[10]0,0.5]]0.5,1[1-1012345]0,0.02]]0.3,0.6]]0,180]1: Strong braking on ice 1.1: start 1.2: driving away 1.3: accelerating 1.4: braking 1.5: end 2: Braking on wet road 2.1: start 2.2: driving away 2.3: accelerating 2.4: braking 2.5: no braking 2.6: braking 2.7: end 3: Braking on a curve 3.1: start 3.2: driving away 3.3: accelerating 3.4: steering 3.5: braking 3.6: steering 3.7: strong braking 3.8: endTime [sec] 0 0.1 3 5 10 Time [sec] 0 0.1 3 5 7 8 10 Time [sec] 0 0.1 3 5 7 10 12 13Figure 5. Model-based classification-tree and test scenarios for ABS system2.3 Consistency Check between Requirement-based and Model-based TestsMB 3 T also provides the test engineer with a set of checking rules which make it possible to examine the consistency between the abstract, requirementbased tests and the implementation-oriented, model-based tests, defined in different phases of the development process [3]. 
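Before turning to the consistency checks, here is a rough sketch of the transformation described in Section 2.2 above: a stepwise, time-stamped test scenario (in the spirit of Figure 5) expanded into a sequence of input values over time. The signal names follow the model interface of Figure 4, but the concrete values, the sampling rate and the scenario itself are illustrative assumptions, and only a subset of the model inputs is shown.

```python
# One model-based test scenario: each step fixes a value for the shown model
# inputs at a given time stamp (cf. "Strong braking on ice" in Figure 5).
steps = [
    (0.0,  {"env_mue": 0.05, "vehicle_accelerator": 0.0, "vehicle_brake": 0.0, "vehicle_gear": 0}),
    (0.1,  {"env_mue": 0.05, "vehicle_accelerator": 0.6, "vehicle_brake": 0.0, "vehicle_gear": 1}),
    (3.0,  {"env_mue": 0.05, "vehicle_accelerator": 1.0, "vehicle_brake": 0.0, "vehicle_gear": 4}),
    (5.0,  {"env_mue": 0.05, "vehicle_accelerator": 0.0, "vehicle_brake": 1.0, "vehicle_gear": 4}),
    (10.0, {"env_mue": 0.05, "vehicle_accelerator": 0.0, "vehicle_brake": 0.0, "vehicle_gear": 0}),
]

def sample_scenario(steps, dt=0.1):
    """Expand the stepwise description into per-sample input vectors."""
    samples = []
    end_time = steps[-1][0]
    t, i = 0.0, 0
    while t <= end_time:
        # advance to the step that is active at time t
        while i + 1 < len(steps) and steps[i + 1][0] <= t:
            i += 1
        samples.append((round(t, 3), dict(steps[i][1])))
        t += dt
    return samples

for t, inputs in sample_scenario(steps)[:3]:
    print(t, inputs)
```

Such a sampled sequence is what would be fed, after adaptation to the respective interface, into a MiL, SiL or HiL test environment.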
The consistency checks in their entirety ensure the necessary degree of thoroughness and completeness for the model-based testing process. These checking rules comprise checks on the classification-trees as well as on the test scenarios (Figure 6). In detail, the consistency check is to be carried out in the following stages: (i) Compare both classification-trees. (ii) Analyze whether the requirement-based test scenarios are covered by the model-based test scenarios.M. Conrad et al. / Electronic Notes in Theoretical Computer Science 111 (2005) 13–2621model- based test scenariosvehicle_with_absInputsrequirement based test scenariosABSenv_curveenv_muevehicle_steeringAnglevehicle_acceleratorvehicle_brakevehicle_gear vehicle_gear0 ]0.02,inf[ ]0,0.02]]0,0.1] ]0.1,0.3]]0.6,1] ]0.3,0.6][-540,-180[ ]180,540] [-180,0] ]0,180] 00]0,0.5]]0.5,1[10]0,0.5]]0.5,1[1 1-1 -10 01 12 23 34 45 5environmentdrivervehicle dynamicsspeedgroundroadbrakedrywetsnowicestraightcurvenot strong activated braking weak brakingv < Vminlowv > Vmaxhigh middleconsistency check1: Strong braking on ice 2: Braking on wet road 3: Braking on a curve1: Strong braking on ice 1.1: start 1.2: driving away 1.3: accelerating 1.4: braking 1.5: end 2: Braking on wet road 2.1: start 2.2: driving away 2.3: accelerating 2.4: braking 2.5: no braking 2.6: braking 2.7: end 3: Braking on a curve 3.1: start 3.2: driving away 3.3: accelerating 3.4: steering 3.5: braking 3.6: steering 3.8: endTime [sec] Time [sec] 0 0 0.1 0.1 3 3 5 5 10 10 Time [sec] 0 0 0.1 0.1 3 3 5 5 7 7 8 8 10 10 Time [sec] Time [sec] 0 0 0.1 0.1 3 3 5 5 7 7 10 10 12 12 13 13relations between classes relations between classes relations betweenclassifications relations betweenclassificationsground ground brake brake ........... ........... .......... .......... env_mue env_mue vehicle_brake vehicle_brake .......... .......... .......... .......... 1:1 1:1 1:1 1:1 .... .... ... ...Figure 6. Consistency check between requirement-based and model-based test scenarios2.3.1 Comparison of Classification-Trees The first stage of the consistency check is the comparison of the requirementbased classification-tree (R-CT) with the model-based classification-tree (MCT). The first step here is to assign the R-CT classifications (i.e. requirement aspects regarded as relevant to the test) to the respective M-CT classifications (i.e. model inputs). These have to be in a 1:1 or 1:n relation to each other. If there is no valid relation for an R-CT classification to the M-CT classifications, this signifies that the test aspect corresponding to the model input represented by that classification was not considered when the R-CT was generated or that it is not relevant to the test after all. In the case of irrelevance, such model inputs may be set to default constant values. At the second step in the comparison of the classification-trees, the classes of the related classifications are compared. The possible relations between classes within a 1:1 classification relation are 1:1, 1:n, n:1 and m:n. The relation between classes within a 1:n classification relation is a bit more complex: For an R-CT class the following cases for the related M-CT classes are possible: - one class from each n related classification, - a set of classes from each n related classification. 
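A minimal sketch of the first checking stage (comparing the two classification-trees) is shown below. The relation tables mirror the ground/env_mue, road/env_curve and speed examples discussed in the surrounding text; the data structures and the helper function are assumptions made for illustration and are not the authors' tooling.

```python
# Stage (i): relate R-CT classifications (test aspects) to M-CT classifications
# (model inputs).  1:1 and 1:n classification relations are allowed.
classification_relations = {
    "ground": ["env_mue"],                                              # 1:1
    "road":   ["env_curve"],                                            # 1:1
    "speed":  ["vehicle_accelerator", "vehicle_brake", "vehicle_gear"], # 1:n
}

# Relations between classes of related classifications: a class of the R-CT is
# mapped to one or more value intervals (or combinations of them) of the M-CT.
class_relations = {
    ("ground", "snow"): {"env_mue": ["]0.1,0.3]"]},
    ("road", "curved"): {"env_curve": ["]0,0.02]", "]0.02,inf["]},
    ("speed", "high"):  {"vehicle_accelerator": ["]0.5,1[", "1"],
                         "vehicle_brake": ["0"],
                         "vehicle_gear": ["4", "5"]},
}

def uncovered_inputs(classification_relations, model_inputs):
    """Model inputs not reached by any R-CT aspect: either a missing test
    aspect or an input that may be fixed to a default constant value."""
    related = {m for targets in classification_relations.values() for m in targets}
    return sorted(set(model_inputs) - related)

print(uncovered_inputs(classification_relations,
                       ["env_curve", "env_mue", "vehicle_steeringAngle",
                        "vehicle_accelerator", "vehicle_brake", "vehicle_gear"]))
# -> ['vehicle_steeringAngle']
```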
Example 2.4 As examples of 1:1 relations between classifications the relation between ’ground’ in the R-CT and ’env mue’ in the M-CT as well as the relation between ’road’ in the R-CT and ’env curve’ in the M-CT of the ABS system can be mentioned (see Figure 3 and Figure 4). The above-mentioned22M. Conrad et al. / Electronic Notes in Theoretical Computer Science 111 (2005) 13–26classifications represent the road friction and road curve in two different abstraction levels respectively. Figure 7 illustrates these classification relations and the corresponding 1:1 and 1:n relations between their classes.R-CT1:1 relationM-CTR-CT1:1 relationM-CTgroundenv_mueroadenv_curve1:1 relationsnow]0.1,0.3]curved1:n relation]0,0.02]]0.02,inf[Figure 7. 1:1 classification relation as well as 1:1 and 1:n class relations between R-CT and M-CTExample 2.5 An example of a 1:n relation between classifications in both trees of the ABS system is the relation between ’speed’ in the R-CT and the classifications ’vehicle accelerator”, ’vehicle-brake’ and ’vehicle gear’ in the MCT, since the vehicle speed is determined by the combination of values of the inputs mentioned. Of course in a more complex environment model further aspects such as the road slope or the wind speed would also play a role in determining the vehicle speed and should be considered while defining the MCT classifications related to ’speed”. Figure 8 illustrates the above-mentioned classification relation and the relation between the ’high’ speed class and the possible sets of the corresponding classes of the ’vehicle accelerator”, ’vehiclebrake’ and ’vehicle gear’ classifications. The depicted class relation makes it possible to, for example, relate the combination of the values• • •vehicle accelerator from the interval ]0.5, 1[ vehicle brake = 0 vehicle gear = 4to the high speed. Note that this relation indicates the possibility of reaching a high speed if the corresponding classes have been marked in the M-CT (necessary condition). The question of whether the high speed is actually reached, depends on the dynamic aspect of the test scenario, i.e. the initial test step and the duration of the test step or test steps running with the above values (sufficient condition).R-CT M-CTFigure8.1:n classification relation between R-CT and M-CT2.3.2Comparison of Test ScenariosThe second stage of the consistency check is the comparison of the requirement-based test scenarios with the model-based test scenarios.An R-CT test sce-nario is covered by an M-CT test scenario,if the combination coverage criteria for mapping of R-CT test scenarios onto M-CT test scenarios,as shown in Ta-ble1,are met.The combination coverage criteria have to be checked for all classes marked in the R-CT test scenario.As examples for combination cov-erage criteria consider the following cases of the R-CT and M-CT of the ABS system,shown in Figures3and Figure5.Example2.6’ground’in the R-CT and’env mue’in the M-CT are1:1re-lated classifications.The classes of these classifications,’snow’and]0.1,0.3], are also1:1related.According to thefirst row of Table1the use of]0.1,0.3] in the test steps of the third M-CT test scenario(Braking on a curve)fulfills the coverage criterion concerning the class’snow’.’road’in the R-CT and’env curve’in the M-CT are1:1related classifications. 
The classes of these classifications, 'curved' and ]0,0.02], ]0.02,inf[, are 1:n related. According to the second row of Table 1, the use of ]0,0.02] in two test steps of the third M-CT test scenario (Braking on a curve) fulfills the coverage criterion concerning the class 'curve'. 'speed' in the R-CT and {'vehicle accelerator', 'vehicle brake', 'vehicle gear'} in the M-CT are 1:n related classifications. Figure 8 shows a relation between the class 'high' and the corresponding sets of classes of the M-CT. According to the last row of Table 1, the use of 1 for 'vehicle accelerator', 0 for 'vehicle brake' and 4 for 'vehicle gear' in the third step of the first M-CT test scenario (Strong braking on ice) fulfills the coverage criterion concerning the class 'high'.

Table 1. Combination coverage criteria for mapping of R-CT test scenarios onto M-CT test scenarios (a and ai are classes of the R-CT, b and bi are classes of the M-CT):

1:1 classification relation
- 1:1 class relation (a ↔ b): class b is used in at least one test step of a test scenario.
- 1:n class relation (a ↔ b1 + ··· + bn): one of the classes b1, ···, bn is used in at least one test step of a test scenario.
- n:1 class relation (a1 + ··· + an ↔ b): class b is used in at least one test step of a test scenario.
- m:n class relation (a1 + ··· + an ↔ b1 + ··· + bn): one of the classes b1, ···, bn is used in at least one test step of a test scenario.

1:n classification relation
- one class from each of the n related classifications (a1 + ··· + an ↔ b1 + ··· + bn): the combination of classes b1, ···, bn is used in at least one test step of a test scenario.
- a set of classes from each of the n related classifications (a ↔ (b11 + ··· + b1n) × ··· × (bk1 + ··· + bkn)): a combination of classes b1i, ···, bkj is used in at least one test step of a test scenario.

Accomplishing the combination coverage check, as shown in the above three examples, for all classes marked in the R-CT test scenarios shall provide the guarantee for the consistency between the M-CT and R-CT test scenarios of the ABS system.

3 Summary

MB3T enhances the model-based development of embedded automotive software with a systematic approach for deriving tests from two different perspectives. By using the executable model as a rich source of information for the testing process, the MB3T approach provides a solution, which has been proven in practice, to the challenges of model-based testing in the automotive industry.

Requirement-based test design with the aid of the classification-tree method makes it possible to systematically create logical test scenarios early in the development process and also assures the requirements coverage necessary. As a rule, this normally results in flexible m:n relations between the requirements and the logical test scenarios thus created. This represents an improvement in comparison to the rigid 1:1 relation which is currently much in use.

Model-based test design with the classification-tree method for embedded systems guarantees the systematic derivation of time-variant test scenarios for the executable artifacts of model-based development (e.g. the logical model, implementation model, C code). As this method is based on the data-oriented partitioning of the input domain into equivalence classes, the data range coverage of the test scenarios can easily be achieved.

A methodology for analyzing the relations between requirement-based and model-based test scenarios, building on the graphical means of description used by the classification-tree method, reduces the gap between these test scenarios. It is much easier to understand which requirements are covered by which executable tests by using the intermediate stage of requirement-based tests. The comparatively higher effort, resulting from the tests being designed from two different angles, leads to a better and more systematic way of generating tests which, in turn, ensure a test coverage with regard to two complementary coverage criteria. Thus, the MB3T approach should be employed especially in the case of software developed in a model-based way which has high safety and reliability requirements.

References

[1] ASCET-SD, ETAS GmbH, /products/ascet sd/.
[2] Broy, M., "Automotive Software Engineering", Proc. 25th Intl. Conference on Software Engineering (ICSE03), 2003, Portland, Oregon, pp. 719-720.
[3] Burckhardt, M., "Fahrwerktechnik: Radschlupfregelsysteme", Vogel Verlag, Würzburg, Germany, 1993.
[4] Conrad, M., Dörr, H., Fey, I., Yap, A., "Model-based Generation and Structured Representation of Test Scenarios", Workshop on Software-Embedded Systems Testing (WSEST), Gaithersburg, USA, Nov. 1999.
[5] Conrad, M., Pohlheim, H., Fey, I., Sadeghipour, S., "Modell-basierter Black-box-Test für Fahrfunktionen", Techn. Report RIC/S-2002-001, DaimlerChrysler AG, Research Information and Communication, Berlin, 2002.

End-to-End Testing


Concurrency testing (Figure 5) examines what happens when multiple users access the same piece of application code, the same module, or the same database record at the same time. It identifies and measures the level of locking and deadlocking in the system, and the use of single-threaded code and locking semaphores. Technically, concurrency testing can be classified as a kind of functional testing, but it is often used together with scalability/load testing because it needs multiple users or virtual users to drive the system.
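A minimal sketch of the idea, assuming a Python test harness: several virtual users update the same record concurrently, and the harness records how often a user found the record locked. The account-record example, the thread count and the timing are invented for illustration.

```python
import threading, time, random

record = {"balance": 0}
lock = threading.Lock()          # protects the shared record
wait_events = 0                  # how often a virtual user found the record locked
wait_lock = threading.Lock()

def virtual_user(updates=100):
    global wait_events
    for _ in range(updates):
        # try a non-blocking acquire first so lock contention can be observed
        if not lock.acquire(blocking=False):
            with wait_lock:
                wait_events += 1
            lock.acquire()       # now wait for the lock
        try:
            value = record["balance"]
            time.sleep(random.uniform(0, 0.001))   # simulate some work
            record["balance"] = value + 1
        finally:
            lock.release()

threads = [threading.Thread(target=virtual_user) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final balance:", record["balance"])   # 10 users * 100 updates = 1000
print("lock contention events:", wait_events)
```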

Clearly, a new and effective quality-hardening strategy is necessary for successful software development and deployment. The most effective strategy combines testing of the individual components in the environment with testing of the environment as a whole. In such a strategy, both component-level and system-level testing must include functional testing to ensure data integrity, as well as scalability and performance testing to ensure acceptable response times under different system loads. When evaluating performance and scalability, these parallel analysis patterns help you discover the strengths and weaknesses of the system architecture and determine which components must be examined when solving performance- and scalability-related problems. A similar functional testing strategy (i.e., complete data integrity verification) is becoming increasingly critical, because data may now be derived from distributed data sources. By evaluating data integrity inside and outside each component (including any functional data transformations during processing), testers can locate every potential error and make system integration and defect isolation part of the standard development process. End-to-end architecture testing (End to End Architecture Testing) refers to the concept of testing every access point in the computing environment and integrating functional testing and performance testing into both component-level and system-level testing (see Figure 2).
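A tiny sketch of what a combined functional/performance check at a single access point could look like; the access point, the validation rule and the response-time threshold below are hypothetical.

```python
import time

def check_access_point(name, call, validate, max_seconds):
    """Call one access point, verify the returned data and the response time."""
    start = time.perf_counter()
    result = call()
    elapsed = time.perf_counter() - start
    ok_data = validate(result)
    ok_time = elapsed <= max_seconds
    print(f"{name}: data {'OK' if ok_data else 'FAIL'}, "
          f"{elapsed * 1000:.1f} ms ({'OK' if ok_time else 'TOO SLOW'})")
    return ok_data and ok_time

# Hypothetical access point: a lookup whose result must preserve data integrity
orders = {42: {"id": 42, "total": 99.5}}
check_access_point(
    "order lookup",
    call=lambda: orders[42],
    validate=lambda r: r["id"] == 42 and abs(r["total"] - 99.5) < 1e-9,
    max_seconds=0.2,
)
```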
Figure 1: A typical multi-tier architecture today
Figure 2: End-to-end architecture testing includes functional and performance testing at all access points
1.1 End-to-End Testing
End-to-end testing is similar to system testing: it is the "macro" end of the test scale, covering all testing of the complete application environment in a situation that mimics real-world use, for example interacting with a database, communicating over a network, or talking to external hardware, applications, or other appropriate systems. End-to-end architecture testing includes functional and performance testing at all access points. It is essentially a "gray-box" test, a testing approach that combines the strengths of white-box and black-box testing.

An Event-flow Model of GUI-Based Applications for Testing


An Event-flow Model ofGUI-Based Applications for TestingAtif M.MemonDepartment of Computer ScienceUniversity of Maryland,College Park,MD20742e-mail:atif@AbstractGraphical user interfaces(GUIs)are by far the most popular means used to interact with today’s software.The functional correctness of a GUI is required to ensure thesafety,robustness and usability of an entire software system.GUI testing techniquesused in practice are resource intensive;model-based automated techniques are rarelyemployed.A key reason for reluctance in adoption of model-based solutions proposedby researchers is their limited applicability;moreover,the models are expensive tocreate.Over the past few years,we have been developing different models for variousaspects of GUI testing.This paper consolidates all the models into one scalable event-flow model and outlines algorithms to semi-automatically reverse engineer the modelfrom an implementation.Earlier work on model-based test case generation,test oraclecreation,coverage evaluation,and regression testing is recast in terms of this modelby defining event-space exploration strategies(ESES)and creating an end-to-end GUItesting process.Three such ESES are described:for checking the event-flow model,test case generation,and test oracle creation.Two demonstrational scenarios show theapplication of the model and the three ESES for experimentation and application inGUI testing.Keywords:event-driven software,graphical user interfaces,event-flow model,event-flow graph,integration tree,test oracles,test case generation,model checking.1IntroductionGraphical user interfaces(GUIs)are becoming ubiquitous as a means of interacting with today’s software.Recognizing the importance of GUIs,software developers are dedicating an increasingly large portion of software code to implementing GUIs–up to60%of the total software code[1–5].Although the use of GUIs continues to grow,GUI testing for functional correctness has remained,until recently,a neglected research area[6].Adequately testing a GUI is required to help ensure the safety,robustness and usability of an entire software system[7].Current GUI testing techniques used in practice(discussed in detail in Section2)require a substantial amount of manual effort on the part of the test designer.Some models and techniques have been developed to address automation of specific aspects of the GUI testing process(e.g.,test-case generation[8–13],test-oracle creation[14],and regression testing [15,16]).The author’s earlier work has used goal-directed search for GUI test-case generation [10,11],function composition for automated test oracle creation[14],metrics from graph theory to define test coverage criteria for GUIs[17],graph-traversal to obtain smoke test cases for GUIs that are used to stabilize daily software builds[12,13],and graph matching algorithms to repair previously unusable GUI test cases for regression testing[16].However, because the models were developed to address specific(often one)GUI testing problem(s), they have narrow focus;some of these models are expensive to create and have limited applicability;therefore they are unable to address the full scope of the problem of GUI testing.This paper consolidates all the author’s above existing GUI models into one general event-flow model that represents events and event interactions.In much the same way as a control-flow model represents all possible execution paths in a program[18],and a data-flow model represents all possible definitions and uses of a memory location[19],the 
event-flow model represents all possible sequences of events that can be executed on the GUI.More specifically,a GUI is decomposed into a hierarchy of modal dialogs(discussed in Section3); this hierarchy is represented as an integration tree;each modal dialog is represented as an event-flow graph that shows all possible event execution paths in the dialog;individual events are represented using their preconditions and effects.An overview of the event-flow model with associated algorithms to semi-automatically reverse engineer the model from an executing GUI software is presented.Because the event-flow model is not tied to a specific aspect of the GUI testing process,it may be used it to perform a wide variety of testing tasks by defining specialized model-based techniques called event-space exploration strategies(ESES).These ESES use the event-flow model in a number of ways to develop an end-to-end GUI testing process.All the previously published techniques are recast as ESES.This paper describes three ESES that address three complex problems faced during model-based testing,namely checking the model,test-case generation,and test oracle creation.Two demonstrational scenarios show the application of this model for research and prac-tice in automated testing.More specifically,these scenarios show that(1)once the event-flow model is created,it can be used to generate a large number of GUI test cases with very little cost and effort,(2)automated tools,built around the event-flow model,greatly simplify both model building as well as many GUI testing tasks,and(3)the model and tools enable large experiments in GUI testing.The contributions of this paper include:•The consolidation of several existing GUI testing models into one event-flow model and its separation from specific testing tasks.•The enhancement of the general event-flow model with customized event-space ex-ploration strategies to perform specific GUI testing tasks,leading to a more general solution to the GUI testing problem.•The employment of three ESES(for model checking,test-case generation,test oracle creation)to develop an end-to-end GUI testing process and the demonstration of this process on non-trivial GUI-based applications via a scenario.•Demonstration that the tools developed around the event-flow model are useful for experimentation in GUI testing.Structure of the paper:The next section provides an assessment of models,techniques, and tools currently used for GUI testing.Section3describes the event-flow model.Section4 describes algorithms to create the event-flow model.Section5outlines some ways of using the model via the development of three ESES for checking the model,test case generation, and test oracle creation.Section6describes two scenarios that demonstrate the usefulness of the event-flow model.Finally,Section7concludes with a summary of its limitations and on-going work.2Assessment of Current GUI Models,Techniques and ToolsThere have been a few attempts to develop models to automate some aspects of GUI testing. 
The most popular amongst them are state-machine models that have been proposed to generate test cases for GUIs[20–23].The key idea of using these models is that a test designer represents a GUI’s behavior as a state machine;each input event may trigger an abstract state transition in the machine.A path,i.e.,sequence of edges followed during transitions,in the state machine represents a test case.The state-machine’s abstract states may be used to verify the GUI’s concrete state during test case execution.Shehady et al.[24] argue that the commonly usedfinite-state machines have scaling problems for large GUIs; they have developed variations called variablefinite state machines that reduce the number of abstract states by adding variables to the model.In practice,these models are created manually.Also,if these models are used for verification,via an automated test oracle, effective mappings between the machine’s abstract states and the GUI’s concrete state also need to be developed manually.White et al.present a different state-machine model for GUI test case generation that partitions the state space into different machines based on user tasks[8,9].The test de-signer/expert identifies a responsibility,i.e.,a user task that can be performed with the GUI. For each responsibility,the test designer then identifies a machine model called the“com-plete interaction sequence”(CIS).The CIS is then used to generate focused test cases.This approach is successful in partitioning the GUI into manageable functional units that can be tested separately.The definition of responsibilities and the subsequent CIS is relatively straightforward;the large manual effort is in designing the FSM model for testing,especially when code is not available.Models of user behavior have also been used to generate test cases,specifically to mimic novice users’inputs[25].This approach relies on an expert tofirst manually generate a sequence of GUI events for a given user task.A genetic algorithm technique is then used to modify and lengthen the sequence,thereby mimicking a novice user[26,27].The underlyingassumption is that novice users often take indirect paths through GUIs,whereas expert users take shorter,more direct paths.This technique requires a substantial amount of manual effort,and cannot be used to generate other types of test cases.All the above techniques have been developed for test-case generation,side-stepping com-plex issues of test oracle creation,test coverage,and regression testing.The only exception is White[15]who proposes a Latin square method to reduce the size of a GUI regression test suite.The underlying assumption is that it is enough to check pairwise interactions between components of the GUI.The technique requires that each menu item appears in at least one test case.Redundant test cases,i.e.,those that do not use any new menu items, are subsequently removed,resulting in a smaller regression test suite.The technique needs to be extended to GUI items other than menus.In another research reported by White et al.[28],the CIS is extend to develop a selective regression approach based on identifying the changed and affected objects and CISs.This technique is aided by memory diagnostic tools such as Memory Doctor and WinGauge to reveal complex problems.Since the above models were developed with one aspect of GUI testing in mind(most for test-case generation),they suffer from their lack of applicability to other aspects of GUI testing.Moreover,all the above techniques require substantial manual 
effort.Consequently, none of these techniques are in common use in industry.Current GUI testing techniques used in practice are incomplete,ad-hoc,and largely man-ual.The most popular tools used to test GUIs are capture/replay tools such as WinRunner1 that provide very little automation[29],especially for creating test cases and test oracles,and to evaluate test coverage.A tester uses these tools in two phases:a capture and then a replay phase.During the capture phase,a tester manually interacts with the GUI being tested and performs events.The tool records the interactions and a part of the GUI’s response/state as specified by the tester.The recorded test cases can be replayed automatically on(a modified version of)the software using the replay part of the tool.The previously captured state can (with some limitations)be used to check the GUI’s execution for correctness.As can be imagined,these tools require a significant amount of manual effort.Testers who employ these tools typically come up with a small number of test cases[10].Moreover,as the GUI evolves,many test cases become obsolete and should be removed from the test suite[16].Other tools used for GUI testing require programming each GUI test case.These tools include extensions of JUnit such as JFCUnit,Abbot,Pounder,and Jemmy Module2.Not only must the tester manually program the event interactions to test,the tester must also specify expected GUI behavior.All the above models and techniques have been leveraged to develop a new consolidated model that is general,in that it can be used throughout the GUI testing process.Developed around the event-flow model are automated tools that can be used by a tester to semi-automatically create the model and then perform GUI testing.The event-flow model and the tools are described in subsequent sections.3Event-flow ModelThe eventflow model contains two parts.Thefirst part encodes each event in terms of preconditions,i.e.,the state in which the event may be executed,and effects,i.e.,the changes to the state after the event has executed.The second part represents all possible sequences of events that can be executed on the GUI as a set of directed graphs.Because details of each part of the representation have been presented in earlier reported work,a brief overview is given here.Both these parts play important,often complementary roles for various GUI testing tasks.For example,the preconditions/effects have been used for goal-directed test-case gen-eration[11],test oracle creation[14],and checking the model;the second part has been used for graph-traversal based test-case generation[30],test coverage evaluation[17],ge-netic algorithm based test-case generation,and Bayesian network based test-case generation. 
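A compact sketch of the first part of the model as it might look in code: the GUI state as a set of objects with properties, and events as state transducers described by preconditions and effects. The widget names, the properties and the dictionary encoding are illustrative assumptions and deliberately simpler than the paper's operator representation.

```python
# GUI state: every object (widget/container) with its current property values.
state = {
    "MainWindow": {"visible": True, "enabled": True},
    "Edit":       {"type": "menu", "enabled": True, "open": False},
    "Paste":      {"type": "menu-item", "enabled": False},
}

# Each event as a state transducer described by preconditions (when it may
# execute) and effects (how it changes the state).  Purely illustrative.
operators = {
    "click-on-Edit": {
        "pre":    [("Edit", "enabled", True)],
        "effect": [("Edit", "open", True), ("Paste", "enabled", True)],
    },
    "click-on-Paste": {
        "pre":    [("Paste", "enabled", True)],
        "effect": [("Edit", "open", False)],
    },
}

def applicable(op, state):
    return all(state[obj][prop] == val for obj, prop, val in op["pre"])

def execute(op, state):
    new_state = {obj: dict(props) for obj, props in state.items()}
    for obj, prop, val in op["effect"]:
        new_state[obj][prop] = val
    return new_state

print(applicable(operators["click-on-Paste"], state))   # False: Paste disabled
state = execute(operators["click-on-Edit"], state)
print(applicable(operators["click-on-Paste"], state))   # True after opening Edit
```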
Together,the two parts are used to represent user profiles[31]and check for run-time con-sistency of the GUI.This paper combines these two parts of the event-flow model to detect two,complementary,classes of faults:(Type1)the GUI software does not perform as doc-umented in the specifications and/or design and(Type2)unacceptable software behavior for which there may be no explicit specifications or design(e.g.,software crashes,screen “freezing”).3.1What is a GUI?To provide focus,this paper will discuss the“core”event-flow model that is sufficient to rep-resent an important class of GUIs.The important characteristics of GUIs in this class include their graphical orientation,event-driven input,hierarchical structure of menus and windows, the objects(widgets,windows,frames)they contain,and the properties(attributes)of those objects.Formally,the class of GUIs of interest may be defined as follows:•Definition:A G ser I3.2Modeling EventsAn important part of the event-flow model is the behavior of each event.Each event is represented in terms of how it modifies the state of a GUI when it is executed.The state of the GUI is the collective state of each of its widgets(e.g.,buttons,menus)and containers (e.g.,frames,windows)that contain other widgets(all widgets and containers will be referred to as GUI objects).Each GUI object is represented by a set of properties of those objects (background-color,font,caption,etc.).The set of objects and their properties is used to create a model of the state of the GUI.The state of a GUI at a particular time t is the set P of all the properties of all the objects O that the GUI contains.A complete description of the state would contain information about the types of all the objects currently extant (not all possible objects)in the GUI,as well as all of the property values(not all possible property values)of each of those objects.Events performed on the GUI change its state and hence are modeled as state transducers. The events E={e1,e2,...,e n}associated with a GUI are functions from one state of the GUI to another state of the GUI.Since events may be performed on different types of objects,in different contexts,yielding different behavior,they are parameterized with objects and property values.It is of course infeasible to give exhaustive specifications of the state mapping for each event:in principle,as there is no limit to the number of objects a GUI can contain at any point in time,there can be infinitely many states of the GUI.3Hence,GUI events are represented using operators,which specify their preconditions and effects.This representation for encoding operators is used in the AI planning literature[32–34].This representation has been adopted for GUI testing because of its power to express complex actions,and the ability to reuse a large library of tools(e.g.,parsers,checkers)that are available from the AI planning community.Finally,this representation allows the use of AI planning for test-case generation[10]and,as will be seen later in the demonstrational scenarios,for checking the model.3.3Modeling Event InteractionsThe goal of the event-flow model is to represent all possible event interactions in the GUI. 
Such a representation of the event interaction space will enable the customization of various ESES that traverse parts of this space for automated testing.A graph model of the GUI is constructed–each vertex represents an event(e.g.,click-on-Edit,click-on-Paste).4An edge from vertex x to vertex y shows that an event y can be performed immediately after event x(i.e.,y follows x).This graph is analogous to a control-flow graph in which vertices represent program statements(in some cases basic blocks)and edges represent possible execution ordering between the statements.Also note that a state machine model that is equivalent to this graph can be constructed–the state would capture the possible events that can be executed on the GUI at any instant;transitions cause state changes whenever the number and type of available events changes.For a pathological GUI that has no restrictions on event ordering and no windows/menus,such a graph would be fully connected.In practice,however,GUIs are hierarchical,and this hierarchy may be exploited to identify groups of GUI events that may be modeled in isolation.One hierarchy of the GUI and the one used in this research is obtained by examining the structure of modal windows5in the GUI.A modal window is a GUI window that,once invoked,monopolizes the GUI interaction, restricting the focus of the user to a specific range of events within the window,until the window is explicitly terminated.Other windows in the GUI that do not restrict the user’s focus are called modeless windows;they merely expand the set of GUI events available to the user.At all times during interaction with the GUI,the user interacts with events within a modal dialog.This modal dialog consists of a modal window X and a set of modeless windows that have been invoked,either directly or indirectly from X.The modal dialog remains in place until X is explicitly terminated.By definition,a GUI user cannot interleave events of one modal dialog with events of other modal dialogs;the user must either explicitly terminate the currently active modal dialog or invoke another modal dialog to execute events in different dialogs.This property of modal dialogs enables the decomposition of a GUI into parts–each part can be tested separately.Event interactions within a modal dialog may be represented as an event-flow graph.An event-flow graph of a modal dialog represents all possible event sequences that can be executed in the dialog.Once all the modal dialogs of the GUI have been represented as event-flow graphs,the remaining step is to identify interactions between modal dialogs.A structure called an integration tree is constructed to identify interactions (invocations)between modal dialogs.Modal dialog MD x invokes modal dialog MD y if MD x contains an event e x that invokes MD y.The integration tree shows the invokes relationship among all the modal dialogs in a GUI.This decomposition of the GUI makes the overall testing process intuitive for the test designer since the test designer can focus on a specific part of the GUI.Moreover,it simplifies the design of algorithms and makes the overall testing process more efficient.4Obtaining the Event-flow ModelThis section describes techniques to construct the event-flow model for a given GUI.It contains three parts:(1)algorithms to create event-flow graphs and an integration tree,(2) a technique to partially create operators from the event-flow graphs,(3)outline of a reverse engineering process that is used to obtain the event-flow model 
semi-automatically.4.1Construction of Event-flow Graphs and Integration TreeThe construction of event-flow graphs and integration tree is based on the identification of modal dialogs,and hence the identification of modal and modeless windows and their invokes relationships.A classification of GUI events is used to identify modal and mode-less windows.Restricted-focus events open modal windows.Unrestricted-focus events open modeless windows.Termination events close modal windows.Menu-open events are usedALGORITHM:GetFollows(v:Vertex or Event){1 IF EventType(v)=menu-open2 IF v∈B of the modal dialog that contains v3 return(MenuChoices(v)∪{v})∪B)4 ELSE5 return(MenuChoices(v)∪{v}∪follows(parent(v)));6 IF EventType(v)=system-interaction7 return(B);8 IF EventType(v)=termination9 return(B of Invoking modal dialog);10 IF EventType(v)=unrestricted-focus11 return(B∪B of Invoked modal dialog);12 IF EventType(v)=restricted-focus13 return(B of Invoked modal dialog);14 }Figure1:Computing follows(v)for a Vertex v.to open menus.System-interaction events are not used to manipulate the structure of the GUI;rather,they are used to interact with the underlying software to perform some action.The classification of events is used by an algorithm to construct event-flow graphs for a GUI.The algorithm computes the set of follows for each event.These sets are then used to create the edges of the event-flow graph.The set of follows(v)can be determined using the algorithm in Figure1for each vertex v.The recursive algorithm contains a switch structure that assigns follows(v)according to the type of each event.If the type of the event v is a menu-open event(line2)and v∈B (events that are available when a modal dialog is invoked),then the user may either perform v again,its sub-menu choices,or any event in B(line4).However,if v∈B then the user may either perform all sub-menu choices of v,v itself,or all events in follows(parent(v)) (line6);parent(v)is defined as any event that makes v available.If v is a system-interaction event,then after performing v,the GUI reverts back to the events in B(line 8).If v is a termination event,i.e.,an event that terminates a modal dialog,then follows(v) consists of all the top-level events of the invoking modal dialog(line10).If the event type of v is an unrestricted-focus event,then the available events are all top-level events of the invoked modal dialog available as well as all events of the invoking modal dialog(line12). 
Lastly,if v is a restricted-focus event,then only the events of the invoked modal dialog are available.It is relatively straightforward to obtain the integration tree from the computation of follows.Modifying Lines13..14of the algorithm shown in Figure1,one can keep track of the modal dialogs invoked.Once all the modal dialogs in the GUI have been identified, the integration tree may be constructed by adding,for each restricted-focus event e x,the element(MD x,MD y)to a new structure B where MD x is the modal dialog that contains e x and MD y is the modal dialog that it invokes.4.2OperatorsSome of the information encoded in the event-flow graphs and integration tree can be repre-sented as preconditions and effects.This information is retrieved and used to partly create the operators automatically.More specifically,templates are automatically generated for each operator and structural properties of the GUI are automaticallyfilled-in.The test designer has to hand-code the other non-structural properties of the GUI.In practice,the test designer may use an iterative process to code the operators.Such a process would involve coding a set of operators for a group of related events(i.e.,those that could be used together to perform a task)followed by model checking,a well-known quality assurance activity commonly used to check the correctness of such models.As correctness problems(called errors)are discovered during model checking,the operators arefixed and rechecked.A similar process is used in the demonstrational scenarios in Section6.4.3Reverse EngineeringDeveloping the event-flow model can be tedious and error-prone.Therefore a tool has been developed called the“GUI Ripper”to automatically obtain event-flow graphs and the integration tree.A detailed discussion of the tool is beyond the scope of this paper; the interested reader is referred to previously published work[35]for details.In short, the GUI Ripper combines reverse engineering techniques with the algorithms presented in previous sections to automatically construct the event-flow graphs and integration tree. 
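The construction described in Section 4.1 (classify events, compute follows, derive the event-flow-graph edges and the integration tree) can be sketched roughly as follows. This is a deliberately simplified re-implementation: it collapses several cases of the algorithm in Figure 1 (for example the parent(v) handling and unrestricted-focus events), and the tiny Main/Print GUI and its field names are invented for illustration.

```python
modal_dialogs = {
    "Main":  {"top_level": ["File", "Edit", "About"],   # B: events available initially
              "events": {"File":     {"type": "menu-open", "children": ["Print...", "Exit"]},
                         "Edit":     {"type": "menu-open", "children": ["Copy"]},
                         "About":    {"type": "system-interaction"},
                         "Copy":     {"type": "system-interaction"},
                         "Print...": {"type": "restricted-focus", "invokes": "Print"},
                         "Exit":     {"type": "termination"}}},
    "Print": {"top_level": ["OK", "Cancel"],
              "events": {"OK":     {"type": "termination"},
                         "Cancel": {"type": "termination"}}},
}

def follows(dialog, event):
    info = modal_dialogs[dialog]["events"][event]
    B = set(modal_dialogs[dialog]["top_level"])
    kind = info["type"]
    if kind == "menu-open":
        return set(info["children"]) | {event} | B
    if kind == "system-interaction":
        return B
    if kind == "restricted-focus":               # opens a modal window
        return set(modal_dialogs[info["invokes"]]["top_level"])
    if kind == "termination":                    # closes the dialog; simplified:
        return set(modal_dialogs["Main"]["top_level"])  # return to Main's events
    return B

# Event-flow-graph edges (within and across dialogs) and the integration tree.
edges = {(d, e, f) for d in modal_dialogs
         for e in modal_dialogs[d]["events"]
         for f in follows(d, e)}
integration_tree = [(d, info["invokes"])
                    for d in modal_dialogs
                    for info in modal_dialogs[d]["events"].values()
                    if info["type"] == "restricted-focus"]
print(len(edges), integration_tree)              # integration tree: [('Main', 'Print')]
```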
During“GUI Ripping”,the GUI application is executed automatically;the application’s windows are opened in a depth-first manner.The GUI Ripper extracts all the widgets and their properties from the GUI.During the reverse engineering process,in addition to widget properties,additional key attributes of each widget are recovered(e.g.,whether it is enabled, it opens a modal/modeless window,it opens a menu,it closes a window,it is a button,it is an editable text-field).These attributes are used to construct the event-flow graphs and integration tree.As can be imagined,the GUI Ripper is not perfect,i.e.,parts of the retrieved informa-tion may be incomplete/mon examples include(1)missing windows in certain cases,e.g.,if the button that opens that window is disabled during GUI Ripping,(2)failure to recognize that a button closes a window,and(3)incorrectly identifying a modal window as a modeless window or vise versa.The specific problems that are encountered depend on the platform used to implement the GUI.For example,for GUIs implemented using Java Swing,the ripper is unable to retrieve the contents of the“Print”dialog;in MS Windows, the ripper is unable to correctly identify modal/modeless windows.Recognizing that such problems may occur during reverse engineering,tools have been developed that may be used to manually“edit”the event-flow graphs and integration tree andfix these problems.For example,in the demonstrational scenarios,once the“Print”dialog was missed during the reverse engineering process,the test designer manually interacted with the subject appli-cation,and opened the Print dialog;one of the tools then reverse-engineered the dialog, created its event-flow graph automatically and added it to the integration tree.A test designer also does not have to code each operator from scratch because the reverse engineering technique creates operator templates andfills in those preconditions and effectsthat describe the structure of the GUI.Such preconditions and effects are automatically derived during the reverse engineering process in a matter of seconds.There are no errors in these templates because the structure has already been manually examined and corrected in the event-flow graphs and integration trees.The number of operators is the same as the number of events in the GUI,as there is exactly one operator per executable event.5Using the Event-flow Model viaEvent Space Exploration StrategiesThe definition of the event-flow model,i.e.,in terms of all possible event interactions and preconditions/effects of each event,allows the use of the model for numerous applications via the definition of simple ESES.In this section,three of these simple ESES are described, focusing on how they have been used for automated model-based testing.These ESES will be used in the scenarios in Section6.5.1Goal-directed Search for Model CheckingModel checking involves checking whether a model(i.e.,the operators in this case)allows certain invalid states to be reached from a known valid state.For example,the current spec-ifications for TerpSpreadSheet(a spreadsheet application used in the scenarios in Section6) do not allow non-contigous areas of the spreadsheet to be selected.This“invalid state”(i.e.,non-contigous areas selected)is described to a model checker to determine whether the model allows this state to be reached from a known valid state.If it does,then the model is incorrect;the model checker gives a counter-example(sequence of GUI events)that shows how the invalid state was reached.If the invalid 
state is not reachable,the model is correct;the model checker fails tofind a counter-example.A variety of model checkers may be used[36].For model checking,previous work on using AI planning for GUI testing[11]was lever-aged.Because of this previous work,the operators were described in the PDDL language that is used by AI planners.The planner is now used as a model checker.Planning is a goal-directed search technique used tofind sequences of actions to accomplish a specific task. For the purpose of model checking,given a task(encoded as a pair of inital and goal states) and a set of actions(encoded as a set of operators),the planner returns a sequence of in-stantiated actions that,when executed,will transform the initial state to the goal state.6If no such sequence exists then the operators cannot be used for the task and thus the planner returns“no plan”.In this way,the planner has been used as a model checker,i.e.,the goal state is a description of the invalid state.If the planner is successful infinding a valid plan to reach the invalid state,the plan itself becomes the counter example.。
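As a rough stand-in for the planner used here as a model checker, a breadth-first search over the operators can already find such a counter-example on a toy model. The state encoding, the operators and the "non-contiguous selection" goal below are invented for illustration; they are not the paper's PDDL model of TerpSpreadSheet.

```python
from collections import deque

initial = frozenset()                         # nothing selected
operators = {
    # name: (precondition, effect); effects return the new selection
    "select-A1":    (lambda s: True,      lambda s: {"A1"}),        # selecting replaces the selection
    "extend-to-A2": (lambda s: "A1" in s, lambda s: s | {"A2"}),    # contiguous extension
    "select-C5":    (lambda s: True,      lambda s: s | {"C5"}),    # modelling slip: extends instead of replacing
}

def invalid(selection):                       # the forbidden state: non-contiguous selection
    return "A1" in selection and "C5" in selection

def find_counter_example(initial, operators, invalid, max_depth=5):
    frontier, seen = deque([(initial, [])]), {initial}
    while frontier:
        state, path = frontier.popleft()
        if invalid(state):
            return path                       # event sequence reaching the invalid state
        if len(path) >= max_depth:
            continue
        for name, (pre, eff) in operators.items():
            if pre(state):
                nxt = frozenset(eff(state))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None                               # no counter-example found: model looks correct

print(find_counter_example(initial, operators, invalid))
# -> ['select-A1', 'select-C5']  (the event sequence exposing the modelling error)
```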

An Introduction to Model-Based Testing (MBT)


Bad programmers worry about the code. Good programmers worry about data structures and their relationships.

Elisabeth Hendrickson, a well-known expert in the Agile Testing field, has also discussed the importance of data from a testing perspective. She lists common test design techniques:

- Equivalence classes
- Boundary values
- Data type attacks
- CRUD
- Different configurations
- Count (user count, resource count)

and then sums them up: "It's all about the variables." So we can see that building a data model of the system is very helpful for both development and testing.
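A small sketch of how "it's all about the variables" can be turned into test data: each variable is given either a numeric range (for boundary values) or a set of equivalence classes, and concrete test values are derived from that. The variable names and ranges are invented for illustration.

```python
def boundary_values(low, high):
    """Classic boundary-value selection for a closed numeric range."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

variables = {
    "user_count":  {"range": (1, 100)},                               # boundary values
    "file_system": {"classes": ["FAT32", "NTFS", "ext4"]},            # equivalence classes
    "operation":   {"classes": ["create", "read", "update", "delete"]},  # CRUD
}

def test_values(spec):
    if "range" in spec:
        return boundary_values(*spec["range"])
    return spec["classes"]

for name, spec in variables.items():
    print(name, "->", test_values(spec))
# user_count -> [0, 1, 2, 99, 100, 101]
```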

Let us use disk-related testing, which is common in the computer industry, to illustrate how a data model can be used to model the test requirements. We then translate it into a format the framework can recognize, described with some very simple Python syntax.
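The original framework format is not reproduced here, so the following is only a hypothetical example of such a data model in plain Python, together with the exhaustive combinations a framework could generate from it; the parameter names and values are assumptions.

```python
from itertools import product

# A hypothetical data model for disk-related testing.
disk_model = {
    "interface":   ["SATA", "NVMe", "USB"],
    "capacity_gb": [64, 512, 4096],
    "file_system": ["NTFS", "ext4", "exFAT"],
    "fill_level":  ["empty", "half", "full"],
}

def generate_cases(model):
    names = list(model)
    return [dict(zip(names, combo)) for combo in product(*model.values())]

cases = generate_cases(disk_model)
print(len(cases))    # 3 * 3 * 3 * 3 = 81 exhaustive combinations
print(cases[0])
```

In practice a framework would prune this exhaustive set, for example with pairwise selection or constraints, before turning it into automated cases.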

The framework can then generate automated test cases from it. Another common model is the state machine model, which mainly models user behavior: for example, the workflow model of a bug tracking system, or the call model of the SIP protocol. For SIP call-related testing, we again first describe the model in a format the framework can recognize. We can see that there are many possible test paths, so how can we reach our coverage goal at the minimum testing cost? If we define the coverage goal as covering all transitions between states at minimum cost, then what we face is really a mathematical problem: on a directed graph, how do we find the shortest path that covers every edge?

This is in fact a famous problem in graph theory: a postman sets out from the post office to deliver mail, is required to pass along every street in his district at least once, and then returns to the post office. Under these conditions, how should the shortest route be chosen? The problem was first studied, and an algorithm given, by the Chinese mathematician Guan Meigu (Mei-Ko Kwan) in 1960, hence its name, the Chinese postman problem.

Using the postman algorithm, we find that a total of 28 steps are needed to cover all the transitions in the graph. We can then also turn the result into executable Robot Framework cases.
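The route-inspection idea can be approximated with a few lines of Python. The sketch below builds an invented bug-workflow state machine and walks it with a greedy heuristic until every transition has been covered; it uses the networkx library and is not Guan's optimal algorithm, so the resulting tour may be longer than the true minimum.

```python
import networkx as nx

# A bug-workflow state machine (invented for illustration); the edges are the
# transitions that the test tour must exercise at least once.
g = nx.DiGraph([
    ("New", "Open"), ("Open", "Fixed"), ("Fixed", "Verified"),
    ("Verified", "Closed"), ("Fixed", "Reopened"), ("Reopened", "Fixed"),
    ("Open", "Rejected"), ("Rejected", "Closed"), ("Closed", "New"),
])

def greedy_edge_cover_tour(g, start):
    """Walk the graph until every transition has been covered at least once.
    A greedy heuristic, not the optimal Chinese-postman route."""
    uncovered = set(g.edges())
    tour, node = [start], start
    while uncovered:
        # prefer an uncovered outgoing edge, otherwise move toward the
        # closest node that still has uncovered outgoing edges
        nxt = next((v for v in g.successors(node) if (node, v) in uncovered), None)
        if nxt is None:
            targets = {u for (u, _) in uncovered}
            path = min((nx.shortest_path(g, node, t) for t in targets
                        if nx.has_path(g, node, t)), key=len)
            for v in path[1:]:
                uncovered.discard((node, v))
                tour.append(v)
                node = v
            continue
        uncovered.discard((node, nxt))
        tour.append(nxt)
        node = nxt
    return tour

tour = greedy_edge_cover_tour(g, "New")
print(len(tour) - 1, "steps:", " -> ".join(tour))
```

Each node-to-node step of the tour can then be emitted as a keyword call, which is how such a walk would typically be turned into an executable Robot Framework case.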

: 性能Performance indicator: 性能(绩效)指标Performance testing : 性能测试Pilot : 试验Pilot testing : 引导测试Portability : 可移植性portability testing:可移植性测试Positive testing : 正向测试Postcondition : 后置条件Precondition : 前提条件precondition:预置条件predicate data use:谓词数据使用predicate:谓词Priority : 优先权program instrumenter:程序插装progressive testing:递进测试Prototype : 原型Pseudo code : 伪代码pseudo-localization testing:伪本地化测试pseudo-random:伪随机QC:质量控制(quality control)Quality assurance(QA): 质量保证Quality Control(QC) : 质量控制Race Condition:竞争状态Rational Unified Process(以下简称RUP):瑞理统一工艺Recovery testing : 恢复测试recovery testing:恢复性测试Refactoring : 重构regression analysis and testing:回归分析和测试Regression testing : 回归测试Release : 发布Release note : 版本说明release:发布Reliability : 可靠性reliability assessment:可靠性评价reliability:可靠性Requirements management tool: 需求管理工具Requirements-based testing : 基于需求的测试Return of Investment(ROI): 投资回报率review:评审Risk assessment : 风险评估risk:风险Robustness : 强健性Root Cause Analysis(RCA): 根本原因分析safety critical:严格的安全性safety:(生命)安全性Sanity testing : 健全测试Sanity Testing:理智测试Schema Repository : 模式库Screen shot : 抓屏、截图SDP:软件开发计划(software development plan)Security testing : 安全性测试security testing:安全性测试security.:(信息)安全性serviceability testing:可服务性测试Severity : 严重性Shipment : 发布simple subpath:简单子路径Simulation : 模拟Simulator : 模拟器SLA(Service level agreement): 服务级别协议SLA:服务级别协议(service level agreement)Smoke testing : 冒烟测试Software development plan(SDP): 软件开发计划Software development process: 软件开发过程software development process:软件开发过程software diversity:软件多样性software element:软件元素software engineering environment:软件工程环境software engineering:软件工程Software life cycle : 软件生命周期source code:源代码source statement:源语句Specification : 规格说明书specified input:指定的输入spiral model :螺旋模型SQAP SOFTWARE QUALITY ASSURENCE PLAN 软件质量保证计划SQL:结构化查询语句(structured query language)Staged Delivery:分布交付方法state diagram:状态图state transition testing :状态转换测试state transition:状态转换state:状态Statement coverage : 语句覆盖statement testing:语句测试statement:语句Static Analysis:静态分析Static Analyzer:静态分析器Static Testing:静态测试statistical testing:统计测试Stepwise refinement : 逐步优化storage testing:存储测试Stress Testing : 压力测试structural coverage:结构化覆盖structural test case design:结构化测试用例设计structural testing:结构化测试structured basis testing:结构化的基础测试structured design:结构化设计structured programming:结构化编程structured walkthrough:结构化走读stub:桩sub-area:子域Summary:总结SVVP SOFTWARE Vevification&Validation PLAN:软件验证和确认计划symbolic evaluation:符号评价symbolic execution:参考符号执行symbolic execution:符号执行symbolic trace:符号轨迹Synchronization : 同步Syntax testing : 语法分析system analysis:系统分析System design : 系统设计system integration:系统集成System Testing : 系统测试TC TEST CASE 测试用例TCS TEST CASE SPECIFICATION 测试用例规格说明TDS TEST DESIGN SPECIFICATION 测试设计规格说明书technical requirements testing:技术需求测试Test : 测试test automation:测试自动化Test case : 测试用例test case design technique:测试用例设计技术test case suite:测试用例套test comparator:测试比较器test completion criterion:测试完成标准test coverage:测试覆盖Test design : 测试设计Test driver : 测试驱动test environment:测试环境test execution technique:测试执行技术test execution:测试执行test generator:测试生成器test harness:测试用具Test infrastructure : 测试基础建设test log:测试日志test measurement technique:测试度量技术Test Metrics :测试度量test procedure:测试规程test records:测试记录test report:测试报告Test scenario : 测试场景Test Script.:测试脚本Test Specification:测试规格Test strategy : 测试策略test suite:测试套Test target : 测试目标Test ware : 测试工具Testability : 可测试性testability:可测试性Testing bed : 测试平台Testing coverage : 测试覆盖Testing environment : 测试环境Testing item : 测试项Testing plan : 测试计划Testing procedure : 测试过程Thread testing : 线程测试time sharing:时间共享time-boxed : 固定时间TIR test incident report 
测试事故报告ToolTip:控件提示或说明top-down testing:自顶向下测试TPS TEST PEOCESS SPECIFICATION 测试步骤规格说明Traceability : 可跟踪性traceability analysis:跟踪性分析traceability matrix:跟踪矩阵Trade-off : 平衡transaction:事务/处理transaction volume:交易量transform. analysis:事务分析trojan horse:特洛伊木马truth table:真值表TST TEST SUMMARY REPORT 测试总结报告Tune System : 调试系统TW TEST WARE :测试件Unit Testing :单元测试Usability Testing:可用性测试Usage scenario : 使用场景User acceptance Test : 用户验收测试User database :用户数据库User interface(UI) : 用户界面User profile : 用户信息User scenario : 用户场景V&V (Verification & Validation) : 验证&确认validation :确认verification :验证version :版本Virtual user : 虚拟用户volume testing:容量测试VSS(visual source safe):VTP Verification TEST PLAN验证测试计划VTR Verification TEST REPORT验证测试报告Walkthrough : 走读Waterfall model : 瀑布模型Web testing : 网站测试White box testing : 白盒测试Work breakdown structure (WBS) : 任务分解结构Zero bug bounce (ZBB) : 零错误反弹。

软件测试英语术语缩写

软件测试常用英语词汇静态测试:Non-Execution-Based Testing或Static testing 代码走查:Walkthrough代码审查:Code Inspection技术评审:Review动态测试:Execution-Based Testing白盒测试:White-Box Testing黑盒测试:Black-Box Testing灰盒测试:Gray-Box Testing软件质量保证SQA:Software Quality Assurance软件开发生命周期:Software Development Life Cycle冒烟测试:Smoke Test回归测试:Regression Test功能测试:Function Testing性能测试:Performance Testing压力测试:Stress Testing负载测试:Volume Testing易用性测试:Usability Testing安装测试:Installation Testing界面测试:UI Testing配置测试:Configuration Testing文档测试:Documentation Testing兼容性测试:Compatibility Testing安全性测试:Security Testing恢复测试:Recovery Testing单元测试:Unit Test集成测试:Integration Test系统测试:System Test验收测试:Acceptance Test测试计划应包括:测试对象:The Test Objectives测试范围: The Test Scope测试策略: The Test Strategy测试方法: The Test Approach,测试过程: The test procedures,测试环境: The Test Environment,测试完成标准:The test Completion criteria测试用例:The Test Cases测试进度表:The Test Schedules风险:Risks接口:Interface最终用户:The End User正式的测试环境:Formal Test Environment确认需求:Verifying The Requirements有分歧的需求:Ambiguous Requirements运行和维护:Operation and Maintenance.可复用性:Reusability可靠性: Reliability/Availability电机电子工程师协会IEEE:The Institute of Electrical and Electronics Engineers) 正确性:Correctness实用性:Utility健壮性:Robustness可靠性:Reliability软件需求规格说明书:SRS (software requirement specification )概要设计:HLD (high level design )详细设计:LLD (low level design )统一开发流程:RUP (rational unified process )集成产品开发:IPD (integrated product development )能力成熟模型:CMM (capability maturity model )能力成熟模型集成:CMMI (capability maturity model integration )戴明环:PDCA (plan do check act )软件工程过程组:SEPG (software engineering process group )集成测试:IT (integration testing )系统测试:ST (system testing )关键过程域:KPA (key process area )同行评审:PR (peer review )用户验收测试:UAT (user acceptance testing )验证和确认:V&V (verification & validation )控制变更委员会:CCB (change control board )图形用户界面:GUI (graphic user interface )配置管理员:CMO (configuration management officer )平均失效间隔时间:(MTBF mean time between failures )平均修复时间:MTTR (mean time to restoration )平均失效时间:MTTF (mean time to failure )工作任务书:SOW (statement of work )α测试:alpha testingβ测试:beta testing适应性:Adaptability可用性:Availability功能规格说明书:Functional Specification软件开发中常见英文缩写和各类软件开发文档的英文缩写:英文简写文档名称MRD market requirement document (市场需求文档)PRD product requirement document (产品需求文档)SOW 工作任务说明书PHB Process Handbook (项目过程手册)EST Estimation Sheet (估计记录)PPL Project Plan (项目计划)CMP Software Management Plan( 配置管理计划)QAP Software Quality Assurance Plan (软件质量保证计划)RMP Software Risk Management Plan (软件风险管理计划)TST Test Strategy(测试策略)WBS Work Breakdown Structure (工作分解结构)BRS Business Requirement Specification(业务需求说明书)SRS Software Requirement Specification(软件需求说明书)STP System Testing plan (系统测试计划)STC System Testing Cases (系统测试用例)HLD High Level Design (概要设计说明书)ITP Integration Testing plan (集成测试计划)ITC Integration Testing Cases (集成测试用例)LLD Low Level Design (详细设计说明书)UTP Unit Testing Plan ( 单元测试计划)UTC Unit Testing Cases (单元测试用例)UTR Unit Testing Report (单元测试报告)ITR Integration Testing Report (集成测试报告)STR System Testing Report (系统测试报告)RTM Requirements Traceability Matrix (需求跟踪矩阵)CSA Configuration Status Accounting (配置状态发布)CRF Change Request Form (变更申请表)WSR Weekly Status Report (项目周报)QSR Quality Weekly Status Report (质量工作周报)QAR Quality Audit Report(质量检查报告)QCL Quality Check List(质量检查表)PAR Phase Assessment Report (阶段评估报告)CLR Closure Report (项目总结报告)RFF Review Finding Form (评审发现表)MOM Minutes of Meeting (会议纪要)MTX Metrics Sheet (度量表)CCF ConsistanceCheckForm(一致性检查表)BAF Baseline Audit Form(基线审计表)PTF Program Trace Form(问题跟踪表)领测国际科技(北京)有限公司领测软件测试网软件测试中英文对照术语表A• Abstract test case (High level test case) :概要测试用例• 
Acceptance:验收• Acceptance criteria:验收标准• Acceptance testing:验收测试• Accessibility testing:易用性测试• Accuracy:精确性• Actual outcome (actual result) :实际输出/实际结果• Ad hoc review (informal review) :非正式评审• Ad hoc testing:随机测试• Adaptability:自适应性• Agile testing:敏捷测试• Algorithm test (branch testing) :分支测试• Alpha testing:alpha 测试• Analyzability:易分析性• Analyzer:分析员• Anomaly:异常• Arc testing:分支测试• Attractiveness:吸引力• Audit:审计• Audit trail:审计跟踪• Automated testware:自动测试组件• Availability:可用性B• Back-to-back testing:对比测试• Baseline:基线• Basic block:基本块• Basis test set:基本测试集• Bebugging:错误撒播• Behavior:行为• Benchmark test:基准测试• Bespoke software:定制的软件• Best practice:最佳实践• Beta testing:Beta 测试领测国际科技(北京)有限公司领测软件测试网 Big-bang testing:集成测试• Black-box technique:黑盒技术• Black-box testing:黑盒测试• Black-box test design technique:黑盒测试设计技术• Blocked test case:被阻塞的测试用例• Bottom-up testing:自底向上测试• Boundary value:边界值• Boundary value analysis:边界值分析• Boundary value coverage:边界值覆盖率• Boundary value testing:边界值测试• Branch:分支• Branch condition:分支条件• Branch condition combination coverage:分支条件组合覆盖率• Branch condition combination testing:分支条件组合测试• Branch condition coverage:分支条件覆盖率• Branch coverage:分支覆盖率• Branch testing:分支测试• Bug:缺陷• Business process-based testing:基于商业流程的测试C• Capability Maturity Model (CMM) :能力成熟度模型• Capability Maturity Model Integration (CMMI) :集成能力成熟度模型• Capture/playback tool:捕获/回放工具• Capture/replay tool:捕获/重放工具• CASE (Computer Aided Software Engineering) :电脑辅助软件工程• CAST (Computer Aided Software Testing) :电脑辅助软件测试• Cause-effect graph:因果图• Cause-effect graphing:因果图技术• Cause-effect analysis:因果分析• Cause-effect decision table:因果判定表• Certification:认证• Changeability:可变性• Change control:变更控制• Change control board:变更控制委员会• Checker:检查人员• Chow's coverage metrics (N-switch coverage) :N 切换覆盖率• Classification tree method:分类树方法• Code analyzer:代码分析器• Code coverage:代码覆盖率领测国际科技(北京)有限公司领测软件测试网 Code-based testing:基于代码的测试• Co-existence:共存性• Commercial off-the-shelf software:商用离岸软件• Comparator:比较器• Compatibility testing:兼容性测试• Compiler:编译器• Complete testing:完全测试/穷尽测试• Completion criteria:完成标准• Complexity:复杂性• Compliance:一致性• Compliance testing:一致性测试• Component:组件• Component integration testing:组件集成测试• Component specification:组件规格说明• Component testing:组件测试• Compound condition:组合条件• Concrete test case (low level test case) :详细测试用例• Concurrency testing:并发测试• Condition:条件表达式• Condition combination coverage:条件组合覆盖率• Condition coverage:条件覆盖率• Condition determination coverage:条件判定覆盖率• Condition determination testing:条件判定测试• Condition testing:条件测试• Condition outcome:条件结果• Confidence test (smoke test) :信心测试(冒烟测试)• Configuration:配置• Configuration auditing:配置审核• Configuration control:配置控制• Configuration control board (CCB) :配置控制委员会• Configuration identification:配置标识• Configuration item:配置项• Configuration management:配置管理• Configuration testing:配置测试• Confirmation testing:确认测试• Conformance testing:一致性测试• Consistency:一致性• Control flow:控制流• Control flow graph:控制流图• Control flow path:控制流路径• Conversion testing:转换测试• COTS (Commercial Off-The-Shelf software) :商业离岸软件• Coverage:覆盖率• Coverage analysis:覆盖率分析领测国际科技(北京)有限公司领测软件测试网 Coverage item:覆盖项• Coverage tool:覆盖率工具• Custom software:定制软件• Cyclomatic complexity:圈复杂度• Cyclomatic number:圈数D• Daily build:每日构建• Data definition:数据定义• Data driven testing:数据驱动测试• Data flow:数据流• Data flow analysis:数据流分析• Data flow coverage:数据流覆盖率• Data flow test:数据流测试• Data integrity testing:数据完整性测试• Database integrity testing:数据库完整性测试• Dead code:无效代码• Debugger:调试器• Debugging:调试• Debugging tool:调试工具• Decision:判定• Decision condition coverage:判定条件覆盖率• 
Decision condition testing:判定条件测试• Decision coverage:判定覆盖率• Decision table:判定表• Decision table testing:判定表测试• Decision testing:判定测试技术• Decision outcome:判定结果• Defect:缺陷• Defect density:缺陷密度• Defect Detection Percentage (DDP) :缺陷发现率• Defect management:缺陷管理• Defect management tool:缺陷管理工具• Defect masking:缺陷屏蔽• Defect report:缺陷报告• Defect tracking tool:缺陷跟踪工具• Definition-use pair:定义-使用对• Deliverable:交付物• Design-based testing:基于设计的测试• Desk checking:桌面检查领测国际科技(北京)有限公司领测软件测试网 Development testing:开发测试• Deviation:偏差• Deviation report:偏差报告• Dirty testing:负面测试• Documentation testing:文档测试• Domain:域• Driver:驱动程序• Dynamic analysis:动态分析• Dynamic analysis tool:动态分析工具• Dynamic comparison:动态比较• Dynamic testing:动态测试E• Efficiency:效率• Efficiency testing:效率测试• Elementary comparison testing:基本组合测试• Emulator:仿真器、仿真程序• Entry criteria:入口标准• Entry point:入口点• Equivalence class:等价类• Equivalence partition:等价区间• Equivalence partition coverage:等价区间覆盖率• Equivalence partitioning:等价划分技术• Error:错误• Error guessing:错误猜测技术• Error seeding:错误撒播• Error tolerance:错误容限• Evaluation:评估• Exception handling:异常处理• Executable statement:可执行的语句• Exercised:可执行的• Exhaustive testing:穷尽测试• Exit criteria:出口标准• Exit point:出口点• Expected outcome:预期结果• Expected result:预期结果• Exploratory testing:探测测试领测国际科技(北京)有限公司领测软件测试网 Fail:失败• Failure:失败• Failure mode:失败模式• Failure Mode and Effect Analysis (FMEA) :失败模式和影响分析• Failure rate:失败频率• Fault:缺陷• Fault density:缺陷密度• Fault Detection Percentage (FDP) :缺陷发现率• Fault masking:缺陷屏蔽• Fault tolerance:缺陷容限• Fault tree analysis:缺陷树分析• Feature:特征• Field testing:现场测试• Finite state machine:有限状态机• Finite state testing:有限状态测试• Formal review:正式评审• Frozen test basis:测试基线• Function Point Analysis (FPA) :功能点分析• Functional integration:功能集成• Functional requirement:功能需求• Functional test design technique:功能测试设计技术• Functional testing:功能测试• Functionality:功能性• Functionality testing:功能性测试G• glass box testing:白盒测试H• Heuristic evaluation:启发式评估• High level test case:概要测试用例• Horizontal traceability:水平跟踪领测国际科技(北京)有限公司领测软件测试网 Impact analysis:影响分析• Incremental development model:增量开发模型• Incremental testing:增量测试• Incident:事件• Incident management:事件管理• Incident management tool:事件管理工具• Incident report:事件报告• Independence:独立• Infeasible path:不可行路径• Informal review:非正式评审• Input:输入• Input domain:输入范围• Input value:输入值• Inspection:审查• Inspection leader:审查组织者• Inspector:审查人员• Installability:可安装性• Installability testing:可安装性测试• Installation guide:安装指南• Installation wizard:安装向导• Instrumentation:插装• Instrumenter:插装工具• Intake test:入口测试• Integration:集成• Integration testing:集成测试• Integration testing in the large:大范围集成测试• Integration testing in the small:小范围集成测试• Interface testing:接口测试• Interoperability:互通性• Interoperability testing:互通性测试• Invalid testing:无效性测试• Isolation testing:隔离测试• Item transmittal report:版本发布报告• Iterative development model:迭代开发模型K• Key performance indicator:关键绩效指标领测国际科技(北京)有限公司领测软件测试网 Keyword driven testing:关键字驱动测试L• Learnability:易学性• Level test plan:等级测试计划• Link testing:组件集成测试• Load testing:负载测试• Logic-coverage testing:逻辑覆盖测试• Logic-driven testing:逻辑驱动测试• Logical test case:逻辑测试用例• Low level test case:详细测试用例M• Maintenance:维护• Maintenance testing:维护测试• Maintainability:可维护性• Maintainability testing:可维护性测试• Management review:管理评审• Master test plan:综合测试计划• Maturity:成熟度• Measure:度量• Measurement:度量• Measurement scale:度量粒度• Memory leak:内存泄漏• Metric:度量• Migration testing:移植测试• Milestone:里程碑• Mistake:错误• Moderator:仲裁员• Modified condition decision coverage:改进的条件判定覆盖率• Modified condition decision testing:改进的条件判定测试• Modified multiple condition 
coverage:改进的多重条件判定覆盖率• Modified multiple condition testing:改进的多重条件判定测试• Module:模块• Module testing:模块测试• Monitor:监视器• Multiple condition:多重条件• Multiple condition coverage:多重条件覆盖率领测国际科技(北京)有限公司领测软件测试网 Multiple condition testing:多重条件测试• Mutation analysis:变化分析• Mutation testing:变化测试N• N-switch coverage:N 切换覆盖率• N-switch testing:N 切换测试• Negative testing:负面测试• Non-conformity:不一致• Non-functional requirement:非功能需求• Non-functional testing:非功能测试• Non-functional test design techniques:非功能测试设计技术O• Off-the-shelf software:离岸软件• Operability:可操作性• Operational environment:操作环境• Operational profile testing:运行剖面测试• Operational testing:操作测试• Oracle:标准• Outcome:输出/结果• Output:输出• Output domain:输出范围• Output value:输出值P• Pair programming:结队编程• Pair testing:结队测试• Partition testing:分割测试• Pass:通过• Pass/fail criteria:通过/失败标准• Path:路径• Path coverage:路径覆盖• Path sensitizing:路径敏感性• Path testing:路径测试领测国际科技(北京)有限公司领测软件测试网 Peer review:同行评审• Performance:性能• Performance indicator:绩效指标• Performance testing:性能测试• Performance testing tool:性能测试工具• Phase test plan:阶段测试计划• Portability:可移植性• Portability testing:移植性测试• Postcondition:结果条件• Post-execution comparison:运行后比较• Precondition:初始条件• Predicted outcome:预期结果• Pretest:预测试• Priority:优先级• Probe effect:检测成本• Problem:问题• Problem management:问题管理• Problem report:问题报告• Process:流程• Process cycle test:处理周期测试• Product risk:产品风险• Project:项目• Project risk:项目风险• Program instrumenter:编程工具• Program testing:程序测试• Project test plan:项目测试计划• Pseudo-random:伪随机Q• Quality:质量• Quality assurance:质量保证• Quality attribute:质量属性• Quality characteristic:质量特征• Quality management:质量管理领测国际科技(北京)有限公司领测软件测试网 Random testing:随机测试• Recorder:记录员• Record/playback tool:记录/回放工具• Recoverability:可复原性• Recoverability testing:可复原性测试• Recovery testing:可复原性测试• Regression testing:回归测试• Regulation testing:一致性测试• Release note:版本说明• Reliability:可靠性• Reliability testing:可靠性测试• Replaceability:可替换性• Requirement:需求• Requirements-based testing:基于需求的测试• Requirements management tool:需求管理工具• Requirements phase:需求阶段• Resource utilization:资源利用• Resource utilization testing:资源利用测试• Result:结果• Resumption criteria:继续测试标准• Re-testing:再测试• Review:评审• Reviewer:评审人员• Review tool:评审工具• Risk:风险• Risk analysis:风险分析• Risk-based testing:基于风险的测试• Risk control:风险控制• Risk identification:风险识别• Risk management:风险管理• Risk mitigation:风险消减• Robustness:健壮性• Robustness testing:健壮性测试• Root cause:根本原因S• Safety:安全领测国际科技(北京)有限公司领测软件测试网 Safety testing:安全性测试• Sanity test:健全测试• Scalability:可测量性• Scalability testing:可测量性测试• Scenario testing:情景测试• Scribe:记录员• Scripting language:脚本语言• Security:安全性• Security testing:安全性测试• Serviceability testing:可维护性测试• Severity:严重性• Simulation:仿真• Simulator:仿真程序、仿真器• Site acceptance testing:定点验收测试• Smoke test:冒烟测试• Software:软件• Software feature:软件功能• Software quality:软件质量• Software quality characteristic:软件质量特征• Software test incident:软件测试事件• Software test incident report:软件测试事件报告• Software Usability Measurement Inventory (SUMI) :软件可用性调查问卷• Source statement:源语句• Specification:规格说明• Specification-based testing:基于规格说明的测试• Specification-based test design technique:基于规格说明的测试设计技术• Specified input:特定输入• Stability:稳定性• Standard software:标准软件• Standards testing:标准测试• State diagram:状态图• State table:状态表• State transition:状态迁移• State transition testing:状态迁移测试• Statement:语句• Statement coverage:语句覆盖• Statement testing:语句测试• Static analysis:静态分析• Static analysis tool:静态分析工具• Static analyzer:静态分析工具• Static code analysis:静态代码分析• Static code analyzer:静态代码分析工具• Static testing:静态测试• Statistical testing:统计测试领测国际科技(北京)有限公司领测软件测试网 Status accounting:状态统计• 
Storage:资源利用• Storage testing:资源利用测试• Stress testing:压力测试• Structure-based techniques:基于结构的技术• Structural coverage:结构覆盖• Structural test design technique:结构测试设计技术• Structural testing:基于结构的测试• Structured walkthrough:面向结构的走查• Stub: 桩• Subpath: 子路径• Suitability: 符合性• Suspension criteria: 暂停标准• Syntax testing: 语法测试• System:系统• System integration testing:系统集成测试• System testing:系统测试T• Technical review:技术评审• Test:测试• Test approach:测试方法• Test automation:测试自动化• Test basis:测试基础• Test bed:测试环境• Test case:测试用例• Test case design technique:测试用例设计技术• Test case specification:测试用例规格说明• Test case suite:测试用例套• Test charter:测试宪章• Test closure:测试结束• Test comparator:测试比较工具• Test comparison:测试比较• Test completion criteria:测试比较标准• Test condition:测试条件• Test control:测试控制• Test coverage:测试覆盖率• Test cycle:测试周期• Test data:测试数据• Test data preparation tool:测试数据准备工具领测国际科技(北京)有限公司领测软件测试网 Test design:测试设计• Test design specification:测试设计规格说明• Test design technique:测试设计技术• Test design tool: 测试设计工具• Test driver: 测试驱动程序• Test driven development: 测试驱动开发• Test environment: 测试环境• Test evaluation report: 测试评估报告• Test execution: 测试执行• Test execution automation: 测试执行自动化• Test execution phase: 测试执行阶段• Test execution schedule: 测试执行进度表• Test execution technique: 测试执行技术• Test execution tool: 测试执行工具• Test fail: 测试失败• Test generator: 测试生成工具• Test leader:测试负责人• Test harness:测试组件• Test incident:测试事件• Test incident report:测试事件报告• Test infrastructure:测试基础组织• Test input:测试输入• Test item:测试项• Test item transmittal report:测试项移交报告• Test level:测试等级• Test log:测试日志• Test logging:测试记录• Test manager:测试经理• Test management:测试管理• Test management tool:测试管理工具• Test Maturity Model (TMM) :测试成熟度模型• Test monitoring:测试跟踪• Test object:测试对象• Test objective:测试目的• Test oracle:测试标准• Test outcome:测试结果• Test pass:测试通过• Test performance indicator:测试绩效指标• Test phase:测试阶段• Test plan:测试计划• Test planning:测试计划• Test policy:测试方针• Test Point Analysis (TPA) :测试点分析• Test procedure:测试过程领测国际科技(北京)有限公司领测软件测试网 Test procedure specification:测试过程规格说明• Test process:测试流程• Test Process Improvement (TPI) :测试流程改进• Test record:测试记录• Test recording:测试记录• Test reproduceability:测试可重现性• Test report:测试报告• Test requirement:测试需求• Test run:测试运行• Test run log:测试运行日志• Test result:测试结果• Test scenario:测试场景• Test script:测试脚本• Test set:测试集• Test situation:测试条件• Test specification:测试规格说明• Test specification technique:测试规格说明技术• Test stage:测试阶段• Test strategy:测试策略• Test suite:测试套• Test summary report:测试总结报告• Test target:测试目标• Test tool:测试工具• Test type:测试类型• Testability:可测试性• Testability review:可测试性评审• Testable requirements:需求可测试性• Tester:测试人员• Testing:测试• Testware:测试组件• Thread testing:组件集成测试• Time behavior:性能• Top-down testing:自顶向下的测试• Traceability:可跟踪性U• Understandability:易懂性• Unit:单元• unit testing:单元测试• Unreachable code:执行不到的代码领测国际科技(北京)有限公司领测软件测试网 Usability:易用性• Usability testing:易用性测试• Use case:用户用例• Use case testing:用户用例测试• User acceptance testing:用户验收测试• User scenario testing:用户场景测试• User test:用户测试V• V -model:V 模式• Validation:确认• Variable:变量• Verification:验证• Vertical traceability:垂直可跟踪性• Version control:版本控制• Volume testing:容量测试W• Walkthrough:走查• White-box test design technique:白盒测试设计技术• White-box testing:白盒测试• Wide Band Delphi:Delphi 估计方法。

Silicon Labs携手Edge Impulse加速实现机器学习应用

方法也有所不同。例如本文中所举的两个实例,如果运行在已经定制好内核的OK2410终端板,是不需要进行界面再调试的,PC端的程序直接下载即可。此时,H模型只需要完成上半部分。在嵌入式开发中,还可以采取介于基于控件和基于绘图之间的做法,即把控件随意拖拽至窗体上,而后通过代码调整彼此之间的相对位置,完成界面的初步实现,其后的调试仍旧可按照H模型的方法来完成。经过实践,说明H模型方法具有较好的实用性和可操作性。但是,随着界面复杂性的提高,对于H模型的适应要求则更高,需要对其进行更深入的研究与探索。

Silicon Labs(亦称"芯科科技")宣布与领先的边缘设备机器学习(ML)开发平台Edge Impulse携手合作,实现在Silicon Labs EFR32无线片上系统(SoC)和EFM32微控制器(MCU)上快速开发和部署机器学习应用。Edge Impulse工具可在低功耗且内存受限的远程边缘设备上实现复杂的运动检测(motion detection)、声音识别和图像分类。研究表明,往往由于人工智能(AI)/机器学习方面的挑战,87%的数据科学项目从未实现量产。通过Silicon Labs与Edge Impulse之间的这种新合作,设备开发人员只需轻点按钮,即可直接生成机器学习模型并将其导出至设备或Simplicity Studio(Silicon Labs的集成开发环境),在数分钟内便可实现机器学习功能。

Silicon Labs物联网副总裁Matt Saunders表示:"Silicon Labs相信,我们努力将机器学习融入到边缘设备中,将会使物联网更加智能。Edge Impulse提供安全、私密且容易使用的工具,在实现机器学习时为开发人员节省了时间和资金,并为从预测性维护、资产跟踪到监控和人员检测等实际商业应用带来了令人惊叹的新用户体验。"

通过在Simplicity Studio中集成部署,Edge Impulse可使开发人员免费在各种Silicon Labs产品上快速创建神经网络。通过在EFR32和EFM32器件(例如MG12、MG21和GG11)中嵌入最先进的TinyML模型,该解决方案能够实现以下功能:真实的传感器数据收集和存储、高级信号处理和数据特征提取、机器学习、深度神经网络(DNN)模型训练、优化嵌入式代码部署。Edge Impulse工具还可以利用Edge Impulse的Edge Optimized Neural(EON)技术来优化内存使用和推理时间。

Edge Impulse联合创始人兼首席执行官Zach Shelby表示:"嵌入式机器学习在工业、企业和消费领域的应用是无止境的。将机器学习与Silicon Labs的先进开发工具和多协议解决方案整合在一起,将为客户带来绝佳的无线开发机遇。"
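下面给出一个极简的示意性代码,用来说明上文提到的"传感器数据采集、信号处理/特征提取、神经网络推理"这一嵌入式机器学习流程大致是什么样子。该示例为通用示意,并非Edge Impulse SDK或Silicon Labs官方API;其中的窗口长度、特征、类别和网络结构均为假设值,仅供理解数据流。

```python
import numpy as np

# 假设的采样窗口:模拟一段三轴加速度计数据(实际应用中来自传感器)
WINDOW = 128                                 # 每个推理窗口的采样点数(假设值)
rng = np.random.default_rng(0)
window = rng.normal(size=(WINDOW, 3))        # shape: (采样点, 轴数)

def extract_features(samples):
    """简单的信号特征提取:每个轴的均值、标准差和峰值。
    真实方案(如Edge Impulse的DSP模块)通常使用频谱等更复杂的特征。"""
    return np.concatenate([samples.mean(axis=0),
                           samples.std(axis=0),
                           np.abs(samples).max(axis=0)])

# 一个极小的两层全连接网络(权重随机初始化,仅作占位,未经过训练)
N_FEATURES, N_HIDDEN, N_CLASSES = 9, 8, 3    # 假设3个类别:静止/行走/跑步
W1 = rng.normal(scale=0.1, size=(N_FEATURES, N_HIDDEN))
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_CLASSES))

def classify(features):
    """前向推理:ReLU隐藏层 + softmax输出。"""
    h = np.maximum(features @ W1, 0.0)
    logits = h @ W2
    p = np.exp(logits - logits.max())
    return p / p.sum()

probs = classify(extract_features(window))
print("各类别概率(示意):", np.round(probs, 3))
```

在实际的EFR32/EFM32部署中,模型训练通常在PC或云端完成,推理部分再被导出为经过优化的C/C++代码在MCU上运行;上面的Python片段只用于说明各个处理阶段之间的数据流。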

软件测试中英文对照外文翻译文献

STUDY PAPER ON TEST CASE GENERATION FOR GUI BASED TESTINGABSTRACTWith the advent of WWW and outburst in technology and software development, testing the softwarebecame a major concern. Due to the importance of the testing phase in a software development lifecycle,testing has been divided into graphical user interface (GUI) based testing, logical testing, integrationtesting, etc.GUI Testing has become very important as it provides more sophisticated way to interact withthe software. The complexity of testing GUI increased over time. The testing needs to be performed in away that it provides effectiveness, efficiency, increased fault detection rate and good path coverage. Tocover all use cases and to provide testing for all possible (success/failure) scenarios the length of the testsequence is considered important. Intent of this paper is to study some techniques used for test casegeneration and process for various GUI based software applications.KEYWORDSGUI Testing, Model-Based Testing, Test Case, Automated Testing, Event Testing.1. INTRODUCTIONGraphical User Interface (GUI) is a program interface that takes advantage of the computer'sgraphics capabilities to make the program easier to use. Graphical User Interface (GUI) providesuser an immense way to interact with the software [1]. The most eminent and essential parts ofthe software that is being used today are Graphical User Interfaces (GUIs) [8], [9]. Even thoughGUIs provides user an easy way to use the software, they make the development process of the software tangled [2].Graphical user interface (GUI) testing is the process of testing software's graphical user interfaceto safeguard it meets its written specifications and to detect if application is working functionally correct. GUI testing involves performing some tasks and comparing the result with the expected output. This is performed using test cases. GUI Testing can be performed either manually byhumans or automatically by automated methods.Manual testing is done by humans such as testers or developers itself in some cases and it is oftenerror prone and there are chances of most of the test scenarios left out. It is very time consumingalso. Automated GUI Testing includes automating testing tasks that have been done manually before, using automated techniques and tools. Automated GUI testing is more, efficient, precise, reliable and cost effective.A test case normally consists of an input, output, expected result and the actual result. More thanone test case is required to test the full functionality of the GUI application. A collection of testcases are called test suite. A test suite contains detailed guidelines or objectives for eachcollection of test cases.Model Based Testing (MBT) is a quick and organized method which automates the testing process through automated test suite generation and execution techniques and tools [11]. Model based testing uses the directed graph model of the GUI called event-interaction graph (EIG) [4] and event semantic interaction graph (ESIG). Event interaction graph is a refinement of event flow graph (EFG) [1]. EIG contains events that interact with the business logic of the GUI application. 
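The introduction above describes the GUI graph models (EFG, EIG, ESIG) only in prose. As a concrete illustration, the sketch below shows one possible in-memory representation of an event-interaction graph and how the length-2 (2-way) event sequences that seed most of the surveyed techniques can be enumerated from it. The event names and edges are invented for the example; this is not code from GUITAR or from the cited papers.

```python
# Illustrative event-interaction graph (EIG): an edge (a, b) means that
# event b can be executed right after event a. Events and edges are made up.
eig = {
    "open_file": ["type_text", "save_file", "close_app"],
    "type_text": ["type_text", "save_file"],
    "save_file": ["type_text", "close_app"],
    "close_app": [],
}

def two_way_sequences(graph):
    """All length-2 event sequences permitted by the graph: the 2-way
    interactions that a seed (smoke) test suite typically covers."""
    return [(a, b) for a, succs in graph.items() for b in succs]

def extend_sequences(graph, sequences):
    """Grow each sequence by one event along graph edges, yielding the
    next interaction strength (3-way from 2-way, and so on)."""
    return [seq + (nxt,) for seq in sequences for nxt in graph.get(seq[-1], [])]

if __name__ == "__main__":
    seeds = two_way_sequences(eig)
    print(len(seeds), "two-way sequences, e.g.", seeds[:3])
    print(len(extend_sequences(eig, seeds)), "three-way sequences")
```

Because the number of longer sequences grows combinatorially, the techniques surveyed below use feedback (ESI relationships) or covering arrays to decide which of the extended sequences are actually worth generating and executing.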
Event Semantic Interaction (ESI) is used to identify set of events that need to be tested together in multi-way interactions [3] and it is more useful when partitioning the events according to its functionality.This paper is organized as follow: Section 2 provides some techniques, algorithms used to generate test cases, a method to repair the infeasible test suites are described in section 3, GUI testing on various types of softwares or under different conditions are elaborated in section 4, section 5 describes about testing the GUI application by taking event context into consideration and last section concludes the paper.2. TEST CASE GENERATION2.1. Using GUI Run-Time State as FeedbackXun Yuan and Atif M Memon [3], used GUI run time state as feedback for test case generation and the feedback is obtained from the execution of a seed test suite on an Application Under Test (AUT).This feedback is used to generate additional test cases and test interactions between GUI events in multiple ways. An Event Interaction Graph (EIG) is generated for the application to be tested and seed test suites are generated for two-way interactions of GUI events. Then the test suites are executed and the GUI’s run time state is recorded. This recorded GUI run time state is used to obtain Event Semantic Interaction(ESI) relationship for the application and these ESI are used to obtain the Event Semantic Interaction Graph(ESIG).The test cases are generated and ESIGs is capable of managing test cases for more than two-way interactions and hence forth 2-, 3-,4-,5- way interactions are tested. The newly generated test cases are tested and additional faults are detected. These steps are shown in Figure 1. The fault detection effectiveness is high than the two way interactions and it is because, test cases are generated and executed for combination of events in different execution orders.There also some disadvantages in this feedback mechanism. This method is designed focusing on GUI applications. It will be different for applications that have intricate underlying business logic and a simple GUI. As multi-way interactions test cases are generated, large number of test cases will be generated. This feedback mechanism is not automated.Figure 1. Test Case Generation Using GUI Runtime as Feedback2.2. Using Covering Array TechniqueXun Yuan et al [4], proposed a new automated technique for test case generation using covering arrays (CA) for GUI testing. Usually 2-way covering are used for testing. Because as number of events in a sequence increases, the size of test suite grows large, preventing from using sequences longer than 3 or 4. But certain defects are not detected using this coverage strength. Using this technique long test sequences are generated and it is systematically sampled at particular coverage strength. By using covering arrays t-way coverage strength is being maintained, but any length test sequences can be generated of at least t. A covering array, CA(N; t, k, v), is an N × k array on v symbols with the property that every N × t sub-array contains all ordered subsets of size t of the v symbols at least once.As shown in Figure 2, Initially EIG model is created which is then partitioned into groups of interacting events and then constraints are identified and used to generate abstract model for testing. Long test cases are generated using covering array sampling. Event sequences are generated and executed. 
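Section 2.2 relies on the covering array CA(N; t, k, v) defined above. The following sketch makes that definition operational as a deliberately brute-force checker: for every choice of t columns of an N × k array over v symbols, all v^t ordered t-tuples must appear in some row. The small strength-2 example array is a standard textbook instance, not one of the arrays used in the paper.

```python
from itertools import combinations, product

def is_covering_array(rows, t, v):
    """Check the CA(N; t, k, v) property: for every choice of t columns,
    the rows must realise all v**t possible t-tuples at least once.
    `rows` is a list of equal-length tuples over the symbols 0..v-1."""
    k = len(rows[0])
    all_tuples = set(product(range(v), repeat=t))
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if seen != all_tuples:
            return False
    return True

if __name__ == "__main__":
    # A classic strength-2 covering array on 3 binary factors: CA(4; 2, 3, 2).
    ca = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    print(is_covering_array(ca, t=2, v=2))      # True
    print(is_covering_array(ca[:3], t=2, v=2))  # False: some pairs missing
```

In the GUI-testing setting, the k columns play the role of positions in a long event sequence and the v symbols are the candidate events for each position, so sampling rows of such an array yields long test sequences while still guaranteeing t-way interaction coverage.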
If any event interaction is missed, then regenerate test cases and repeat the steps.The disadvantages are event partition and identifying constraints are done manually.Figure 2. Test Generation Using Covering Array2.3. Dynamic Adaptive Automated test GenerationXun Yuan et al [5], suggested an algorithm to generate test suites with fewer infeasible test cases and higher event interaction coverage. Due to dynamic state based nature of GUIs, it is necessary and important to generate test cases based on the feedback from the execution of tests. The proposed framework uses techniques from combinatorial interaction testing to generate tests and basis for combinatorial interaction testing is a covering array. Initially smoke tests are generated and this is used as a seed to generate Event Semantic Interaction (ESI) relationships. Event Semantic Interaction Graph is generated from ESI. Iterative refinement is done through genetic algorithm. An initial model of the GUI event interactions and an initial set of test sequences based on the model are generated. Then a batch of test cases are generated and executed. Code coverage is determined and unexecutable test cases are identified. Once the infeasible test cases are identified, it is removed and the model is updated and new batch of test cases are generated and the steps are followed till all the uncovered ESI relationships are covered. These automated test case generation process is shown in Figure 3. This automated test generation also provides validation for GUIs.The disadvantages are event contexts are not incorporated and need coverage and test adequacy criteria to check how these impacts fault detection.Figure 3. Automated Test Case Generation3. REPAIRING TEST SUITESSi Huang et al [6], proposed a method to repair GUI test suites using Genetic algorithm. New test cases are generated that are feasible and Genetic algorithm is used to develop test cases that provide additional test suite coverage by removing infeasible test cases and inserting new feasible test cases. A framework is used to automatically repair infeasible test cases. A graph model such as EFG, EIG, ESIG and the ripped GUI structure are used as input. The main controller passesgenerator along with the strength of testing. This covering array generator generates an initial set of event sequences. The covering array information is send to test case assembler and it assembles this into concrete test cases. These are passed back to the controller and test suite repair phase begins. Feasible test cases are returned by the framework once the repair phase is complete. Genetic algorithm is used as a repair algorithm. An initial set of test cases are executed and if there is no infeasible test cases, it exits and is done. If infeasible test cases are present, it then begins the repair phase. A certain number of iterations are set based on an estimate of how large the repaired test suite will be allowed to grow and for each iteration the genetic algorithm is executed. The algorithm adds best test case to the final test suites. Stopping criteria’s are used to stop the iterations.The advantages are it generates smaller test suites with better coverage on the longer test sequences. It provides feasible test cases. But it is not scalable for larger applications as execution time is high. As GUI ripping is used, the programs that contain event dependencies may not be discovered.4. GUI TESTING ON VARIOUS APPLICATIONS4.1. 
Industrial Graphical User Interface SystemsPenelope Brooks et al [7], developed GUI testing methods that are relevant to industry applications that improve the overall quality of GUI testing by characterizing GUI systems using data collected from defects detected to assist testers and researchers in developing more effective test strategies. In this method, defects are classified based on beizer’s defect taxonomy. Eight levels of categories are present each describing specific defects such as functional defects, functionality as implemented, structural defects, data defects, implementation defects, integration defects, system defects and test defects. The categories can be modified and added according to the need. If any failures occur, it is analyzed under which defect category it comes and this classification is used to design better test oracle to detect such failures, better test case algorithm may be designed and better fault seeding models may be designed.Goal Question Metric (GQM) Paradigm is used. It is used to analyze the test cases, defects and source metrics from the tester / researcher point of view in the context of industry-developed GUI software. The limitations are, the GUI systems are characterized based on system events only. User Interactions are not included.4.2. Community-Driven Open Source GUI ApplicationsQing Xie and Atif M. Memon [8], presented a new approach for continuous integration testing of web-based community-driven GUI-based Open Source Software(OSS).As in OSS many developers are involved and make changes to the code through WWW, it is prone to more defects and the changes keep on occurring. Therefore three nested techniques or three concentric loops are used to automate model-based testing of evolving GUI-based OSS. Crash testing is the innermost technique operates on each code check-in of the GUI software and it is executed frequently with an automated GUI testing intervention and performs quickly also. It reports the software crashes back to the developer who checked in the code. Smoke testing is the second technique operates on each day's GUI build and performs functional reference testing of the newly integrated version of the GUI, using the previously tested version as a baseline. Comprehensive Testing is the outermost third technique conducts detailed comprehensive GUI integration testing of a major GUI release and it is executed after a major version of GUI is available. Problems are reported to all the developers who are part of the development of the particular version.flaws that persist across multiple versions GUI-based OSS are detected by this approach fully automatically. It provides feedback. The limitation is that the interactions between the three loops are not defined.4.3. Continuously Evolving GUI-Based Software ApplicationsQing Xie and Atif M. Memon [9], developed a quality assurance mechanism to manage the quality of continuously evolving software by Presenting a new type of GUI testing, called crash testing to help rapidly test the GUI as it evolves. Two levels of crash testing is being described: immediate feedback-based crash testing in which a developer indicates that a GUI bug was fixed in response to a previously reported crash; only the select crash test cases are re run and the developer is notified of the results in a matter of seconds. If any code changes occur, new crash test cases are generated and executed on the GUI. Test cases are generated that can be generated and executed quickly and cover all GUI functionalities. 
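Returning to Sections 2.3 and 3 above, both rely on a genetic algorithm that iteratively replaces infeasible test cases while trying to preserve interaction coverage. The sketch below is a much-simplified, self-contained version of that idea: individuals are event sequences, the fitness function rewards covered event pairs and penalises steps rejected by a stubbed feasibility check. The feasibility rule, event names and parameters are invented for illustration and do not reproduce the framework of Huang et al.

```python
import random

EVENTS = ["open", "edit", "save", "close", "undo"]   # illustrative event set
SEQ_LEN, POP_SIZE, GENERATIONS = 5, 20, 30

def feasible_prefix(seq):
    """Stub for executing a test on the GUI: here, 'save' is only feasible
    after an 'edit' has occurred. Returns the length of the executable prefix."""
    edited = False
    for i, ev in enumerate(seq):
        if ev == "edit":
            edited = True
        elif ev == "save" and not edited:
            return i                    # execution stops: infeasible step
    return len(seq)

def fitness(seq):
    """Reward distinct adjacent event pairs covered by the executable prefix,
    penalise the part of the sequence that could not be executed."""
    n = feasible_prefix(seq)
    covered = {(seq[i], seq[i + 1]) for i in range(n - 1)}
    return len(covered) - 2 * (len(seq) - n)

def mutate(seq):
    i = random.randrange(len(seq))
    return seq[:i] + [random.choice(EVENTS)] + seq[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def repair():
    pop = [[random.choice(EVENTS) for _ in range(SEQ_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    best = max(pop, key=fitness)
    return best, feasible_prefix(best) == len(best)

if __name__ == "__main__":
    random.seed(1)
    seq, ok = repair()
    print("best sequence:", seq, "fully executable:", ok)
```

In the actual framework the feasibility check is replaced by executing the candidate test case on the ripped GUI, and coverage is measured against ESI relationships and covering-array strength rather than simple adjacent event pairs.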
4.3. Continuously Evolving GUI-Based Software Applications

Qing Xie and Atif M. Memon [9] developed a quality assurance mechanism to manage the quality of continuously evolving software by presenting a new type of GUI testing, called crash testing, to help rapidly test the GUI as it evolves. Two levels of crash testing are described. In immediate feedback-based crash testing, a developer indicates that a GUI bug was fixed in response to a previously reported crash; only the selected crash test cases are re-run and the developer is notified of the results in a matter of seconds. In addition, if any code changes occur, new crash test cases are generated and executed on the GUI. The test cases are designed so that they can be generated and executed quickly and cover all GUI functionalities. Once the EIG is obtained, a boolean flag is associated with each edge in the graph. During crash testing, once test cases that cover a particular edge are generated, its flag is set. If any changes occur, the boolean flag for each edge is retained. Test cases are executed, and crashes during test execution are used to identify serious problems in the software. The crash testing process is shown in Figure 4. The effectiveness of a crash test is measured by the total number of test cases needed to detect the maximum number of faults; significantly, test suite size has no impact on the number of bugs revealed. This crash testing technique is used to maintain the quality of the GUI application and also helps in rapidly testing the application. The drawbacks are that the technique is only used for testing GUI applications and cannot be used for web applications, and that fault injection or seeding, which is used to evaluate the efficiency of the method, is not applied here.

Figure 4. Crash Testing Process

4.4. Rapidly Evolving Software

Atif M. Memon et al [10] made several contributions in the area of GUI smoke testing in terms of GUI smoke test suites, their size, their fault detection ability and test oracles. The Daily Automated Regression Tester (DART) framework is used to automate GUI smoke testing. Developers work on the code during the day, and DART automatically launches the Application Under Test (AUT) during the night, builds it and runs GUI smoke tests. Coverage and error reports are mailed to the developers. In DART, the whole process is automated: the AUT's GUI structure is analyzed using a GUI ripper, test cases and test oracles are generated, the test cases are executed, and the results are examined. Fault seeding is used to evaluate the fault detection techniques used; an adequate number of faults of each fault type is seeded fairly. The disadvantages are that some parts of the code are missed by smoke tests, some of the bugs reported by DART are false positives, the overall effectiveness of DART depends on the GUI ripper's capabilities, it is not available for industry-based application testing, and faults that are not manifested on the GUI will go undetected.

5. INCORPORATING EVENT CONTEXT

Xun Yuan et al [1] developed a new criterion for GUI testing using a combinatorial interaction testing technique. The main motivation for using combinatorial interaction testing is to incorporate context; it also considers event combinations and sequence length, and includes all possible events. Graph models are used, and a covering array, which is the basis for combinatorial interaction testing, is used to generate test cases. A tool called GUITAR (GUI Testing Framework) is used for testing; it provides functionality to generate test cases, execute test cases, verify correctness and obtain coverage reports. Initially, using a GUI ripper, a GUI application is converted into an event graph; the events are then grouped depending on functionality and constraints are identified. A covering array is generated and test sequences are produced. Test cases are generated and executed. Finally, coverage is computed and a test adequacy criterion is analyzed. The advantages are that contexts are incorporated and more faults are detected when compared to the previous techniques. The disadvantages are that infeasible test cases make some test cases unexecutable, and that grouping events and identifying constraints are not automated.

Figure 5. Testing Process
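The generation step just described (group events, apply constraints, assemble sequences) can be pictured with a simplified sketch. A real covering array generator, as used with GUITAR, is considerably more involved; the code below is illustrative only and its event names and constraint are invented.

```python
from itertools import product

def generate_sequences(event_groups, constraints, length):
    """event_groups: lists of events grouped by functionality;
    constraints: predicates that reject infeasible combinations;
    length: desired test sequence length."""
    for combo in product(*event_groups[:length]):
        if all(check(combo) for check in constraints):
            yield list(combo)

# Example use with made-up events and a single constraint.
groups = [["open", "save"], ["zoom", "rotate"], ["close", "exit"]]
no_exit_after_open = lambda seq: not ("open" in seq and "exit" in seq)
sequences = list(generate_sequences(groups, [no_exit_after_open], length=3))
```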
6. CONCLUSIONS

In this paper, some of the various test case generation methods and the various types of GUI testing adopted for different GUI applications and techniques are studied. Different approaches are used in different testing environments. This study helps in choosing a test case generation technique based on the requirements of the testing, and it also helps in choosing the type of GUI test to perform based on the application type, such as open source software, industrial software, and software in which changes are checked in rapidly and continuously.

REFERENCES

[1] Xun Yuan, Myra B. Cohen, Atif M. Memon, (2010) "GUI Interaction Testing: Incorporating Event Context", IEEE Transactions on Software Engineering, vol. 99.
[2] A. M. Memon, M. E. Pollack, and M. L. Soffa, (2001) "Hierarchical GUI test case generation using automated planning", IEEE Transactions on Software Engineering, vol. 27, no. 2, pp. 144-155.
[3] X. Yuan and A. M. Memon, (2007) "Using GUI run-time state as feedback to generate test cases", in International Conference on Software Engineering (ICSE), pp. 396-405.
[4] X. Yuan, M. Cohen, and A. M. Memon, (2007) "Covering array sampling of input event sequences for automated GUI testing", in International Conference on Automated Software Engineering (ASE), pp. 405-408.
[5] X. Yuan, M. Cohen, and A. M. Memon, (2009) "Towards dynamic adaptive automated test generation for graphical user interfaces", in First International Workshop on TESTing Techniques & Experimentation Benchmarks for Event-Driven Software (TESTBEDS), pp. 1-4.
[6] Si Huang, Myra Cohen, and Atif M. Memon, (2010) "Repairing GUI Test Suites Using a Genetic Algorithm", in Proceedings of the 3rd IEEE International Conference on Software Testing, Verification and Validation (ICST).
[7] P. Brooks, B. Robinson, and A. M. Memon, (2009) "An initial characterization of industrial graphical user interface systems", in ICST 2009: Proceedings of the 2nd IEEE International Conference on Software Testing, Verification and Validation, Washington, DC, USA: IEEE Computer Society.
[8] Q. Xie and A. M. Memon, (2006) "Model-based testing of community driven open-source GUI applications", in International Conference on Software Maintenance (ICSM), pp. 145-154.
[9] Q. Xie and A. M. Memon, (2005) "Rapid "crash testing" for continuously evolving GUI-based software applications", in International Conference on Software Maintenance (ICSM), pp. 473-482.
[10] A. M. Memon and Q. Xie, (2005) "Studying the fault-detection effectiveness of GUI test cases for rapidly evolving software", IEEE Transactions on Software Engineering, vol. 31, no. 10, pp. 884-896.
[11] U. Farooq, C. P. Lam, and H. Li, (2008) "Towards automated test sequence generation", in Australian Software Engineering Conference, pp. 441-450.

Research on test case generation based on GUI testing. Abstract: With the advent of the WWW and the development of information technology and software development, software testing has become a major issue.


Model-Based Testing Through a GUI

Antti Kervinen 1, Mika Maunumaa 1, Tuula Pääkkönen 2, and Mika Katara 1

1 Tampere University of Technology, Institute of Software Systems, P.O. Box 553, FI-33101 Tampere, FINLAND, {stname}@tut.fi
2 Nokia Technology Platforms, P.O. Box 68, FI-33721 Tampere, FINLAND

In Proceedings of the 5th International Workshop on Formal Approaches to Testing of Software (FATES 2005), Edinburgh, Scotland, UK, July 2005. Number 3997 in Lecture Notes in Computer Science, pages 16-31. Springer 2006. © Springer-Verlag.

Abstract. So far, model-based testing approaches have mostly been used in testing through various kinds of APIs. In practice, however, testing through a GUI is another equally important application area, which introduces new challenges. In this paper, we introduce a new methodology for model-based GUI testing. This includes using Labeled Transition Systems (LTSs) in conjunction with action word and keyword techniques for test modeling. We have also conducted an industrial case study where we tested a mobile device and were able to find previously unreported defects. The test environment included a standard MS Windows GUI testing tool as well as components implementing our approach. Assessment of the results from an industrial point of view suggests directions for future development.

1 Introduction

System testing through a GUI can be considered as one of the most challenging types of testing. It is often done by a separate testing team of domain experts that can validate that the clients' requirements have been fulfilled. However, the domain experts often lack programming skills and require easy-to-use tools to support their work. Compared to application programming interface (API) testing, GUI testing is made more complex by the various user interface issues that need to be dealt with. Such issues include input of user commands and interpretation of the output results, for instance, using text recognition in some cases. Developers are often reluctant to implement system level APIs only for the purposes of testing. Moreover, general-purpose testing tools need to be adapted to use such APIs.

In contrast, a GUI is often available and there are several general-purpose GUI testing tools, which can be easily taken into use. Among the test automation community, however, GUI testing tools are not considered an optimal solution. This is largely due to bad experiences in using so-called capture/replay tools that capture key presses, as well as mouse movement, and replay those in regression tests. The bad experiences are mostly involved with high maintenance costs associated with such a tool [1]. The GUI is often the most volatile part of the system and possible changes to it affect the GUI test automation scripts. In the worst case, the selected capture/replay tool uses bitmap comparisons to verify the results of the test runs. False negative results can then be obtained from minor changes in the look and feel of the system. In practice, such test automation needs maintenance whenever the GUI is changed.

The state of the art in GUI testing is represented by so-called keyword and action word techniques [2, 3]. They help in maintenance problems by providing a clear separation of concerns between business logic and the GUI navigation needed to implement the logic. Keywords correspond to key presses and menu navigation, such as "click the OK button", while action words describe user events at a higher level of abstraction.
For instance, a single action word can be defined to open a selected file whose name can be given as a parameter. The idea is that domain experts can design the test cases easily using action words even before the system implementation has been started. Test automation engineers then define the keywords that implement the action words using the scripting language provided by the GUI automation tool.

Although some tools use smarter comparison techniques than pure bitmaps, and provide advanced test design concepts, such as keywords and action words, the maintenance costs can still be significant. Moreover, such tools seldom find new bugs and return the investment only when the same test suites are run several times, such as in regression testing. The basic problem is in the static and linear nature of the test cases. Even if only 10% of the test cases would need to be updated whenever the system under test changes, this can mean modifying one hundred test cases from the test suite of one thousand regression tests.

Our goal is to improve the status of GUI testing with model-based techniques. Firstly, by using test models to generate test runs, we will not run into difficulties with maintaining large test suites. Secondly, we have better chances of finding previously undetected defects, since we are able to vary the order of events. Towards these ends, we propose a test automation approach based on Labeled Transition Systems (LTSs) as well as action words and keywords. The idea is to describe a test model as an LTS whose transitions correspond to action words. This should be made as easy as possible for also testers with no programming skills. The maintenance effort should localize to a single model or few component models. The action machines we introduce are composed in parallel with refinement machines mapping the action words to sequences of keywords. The resulting composite LTS is then read into a general-purpose GUI testing tool that interprets the keywords and walks through the model using some heuristics. The tool also verifies the test results and handles the reporting.

The contributions of this paper are in formalizing the above scheme, introducing a novel test model architecture and applying the approach in an industrial case study. Finally, we have assessed the results from an industrial point of view.

The rest of the paper is structured as follows. Sections 2 and 3 describe our approach in detail as well as the case study we have conducted. The assessment of the results is given in Section 4.
Related work is discussed in Section 5 and conclusions drawn in Section 6.

2 Building a Test Model Architecture

In the following, we will develop a layered test model architecture for testing several concurrently running applications through a GUI. The basis for layering is in keyword and action word techniques, and therefore we will first introduce how to adapt these concepts to model-based testing. As a running example, we will use testing of Symbian applications. Symbian [4] is an operating system targeted for mobile devices such as smartphones and PDAs. The variety of features available resembles testing of PC applications, but there are also characteristics of embedded systems. For instance, there is no access to the resources of the GUI. In the following, the term system under test (SUT) will be used to refer to a device running Symbian OS.

2.1 Action Words and Keywords

As Buwalda [3] recommends in the description of action-based testing, test designers should focus on high-level concepts in test design. This means modeling business processes and picking interesting sequences of events for discovering possible errors. These high-level events are called action words. The test automation engineer then implements the action words with keywords, which act as a concrete implementation layer of test automation.

An example of a keyword from our Symbian test environment is kwPressKey, modeling a key press. The keyword could be used, for instance, in a sequence that models starting a calculator application. Such a sequence would correspond to a single action word, say awStartCalculator. Thus, action words represent abstract operations like "make a phone call", "open Calculator" etc. Implementation of action words can consist of sequences of keywords with related parameters as test data. However, the difference between keywords and action words is somewhat in the eye of the beholder. The most generic keywords can almost be considered as action words in the sense of functionality; the main difference is in the purpose of use and the level of abstraction.

Our focus is on the state machine side of the action-based testing. We do not consider decision tables, which are recommended as one alternative for handling test combinations [3]. However, there have been some industrial implementations using spreadsheets to describe keyword combinations to run test cases, and they have proven quite useful. Experiences also suggest that the keywords need to be well described and agreed upon jointly, so that the same test cases can be shared throughout an organization.

2.2 Test Model, Action Machines and Refinement Machines

We use the TVT verification toolset [5] to create test models. With the tools, the most natural way to express the behavior of a SUT is an LTS. We use two tools in the toolset: a graphical editor for drawing LTSs, and a parallel composition tool for calculating the behavior of a system where its input LTSs run in parallel. We will compose our test model LTS from small, hand-drawn LTSs with the parallel composition tool. The test model specifies a part of the externally observable behavior of the SUT. At most that part will be tested when the model is used in test runs.

In our test model architecture, hand-drawn LTSs are divided in two classes. Action machines are the model-based equivalent for test cases expressed with action words, whereas refinement machines give executable contents to action words, that is, refinement from action words to keywords. In the following we formalize these concepts.

Fig. 1. Transition splitter and parallel composition
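Figure 1 itself is not reproduced here, but its shape can be guessed from the surrounding text: an action machine A that starts, verifies and quits the Camera application, and a refinement machine R that expands two of those action words into keyword sequences. The Python data below is only an illustrative reading of that figure; the state names are invented, while the labels come from the text.

```python
# Action machine A: start the application, verify it, quit it, start over.
ACTION_MACHINE_A = {
    "initial": "a0",
    "transitions": [
        ("a0", "awStartC", "a1"),
        ("a1", "awVerifyC", "a2"),
        ("a2", "aw_QuitC", "a0"),
    ],
}

# Refinement machine R: keyword sequences between start_/end_ transitions.
REFINEMENT_MACHINE_R = {
    "initial": "r0",
    "transitions": [
        ("r0", "start_awStartC", "r1"),
        ("r1", "kwPressKey<'SoftRight'>", "r3"),      # shortcut key, or ...
        ("r1", "kwPressKey<'AppMenu'>", "r2"),        # ... pick the app from a menu
        ("r2", "kwPressKey<'Center'>", "r3"),
        ("r3", "end_awStartC", "r0"),
        ("r0", "start_awVerifyC", "r4"),
        ("r4", "kwVerifyText<'Camera'>", "r5"),       # text must be on the screen
        ("r5", "end_awVerifyC", "r0"),
    ],
}
```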
Definition 1 (LTS). A labeled transition system, abbreviated LTS, is defined as a quadruple (S, Σ, Δ, ŝ) where S denotes a set of states, Σ is a set of actions (alphabet), Δ ⊆ S × Σ × S is a set of transitions and ŝ ∈ S is an initial state.

Our test model is a deterministic LTS. An LTS (S, Σ, Δ, ŝ) is deterministic if there is no state in which any leaving transitions share the same action name (label). For example, there are four such LTSs in Figure 1, with their initial states marked with filled circles.

Action machines and refinement machines are LTSs whose alphabets include action words and keywords, respectively. In Figure 1, A is an action machine and R is a refinement machine. Action machines describe what should be tested at action word level. In A, the application should be first started, then verified to be running and finally quitted. After quitting, the application should be started again, and so on. Refinement machines specify how action words in action machines can be implemented. Keyword sequences that implement an action word a are written in-between start_a and end_a transitions.

In Figure 1, R refines two action words in A. Firstly, it provides two alternative implementations to action word awStartC. To start an application, a user can either use a short cut key (by pressing "SoftRight") or select the application from a menu. Secondly, verification that the application is running is always carried out by checking that there is text "Camera" on the screen. The action word for quitting the application is not refined by R, but another refinement machine can be used to do that.

During the test execution, we keep track of the current state of the test model, starting from the initial state. One of the transitions leaving the current state is chosen. If the label of the transition is not a keyword, the test execution continues from the destination state of the transition. Otherwise, the action corresponding to the keyword is taken: a key is pressed, a text is searched on the display etc. These actions either succeed or fail. For example, text verification succeeds if and only if the searched text can be found on the display. Because sometimes failing an action is allowed, or even required, we need a way to specify the expected successfulness of actions in the test model. For that, we use the labeling of transitions. There can be two labels (with and without a tilde) for some keywords; kwVerifyText<'Clock alarm'> and ~kwVerifyText<'Clock alarm'>, for instance. The former label states that in the source state of the transition searching text 'Clock alarm' is allowed to succeed and the latter allows the search to fail. If the taken action succeeded (failed), a transition without (with) a tilde is searched in the current state. If there is no such transition, an error has been found (that is, the behavior differs from what is required in the test model). Otherwise, the test execution is continued from the destination state of the transition.

Hence, our testing method resembles "exploration testing" introduced in [6]. However, we do not need separate output actions. This is because the only way we can observe the behavior of the SUT is to examine its GUI corresponding to the latest screen capture. In addition, there are many actions that are neither input (keyword) nor output actions. They can be used in debugging (in the execution log, one can see what action word we tried to execute when an error was detected) and in measuring the coverage (for instance, covered high-level actions can be found out).
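A single execution step of the walk just described could look roughly as follows. This is a sketch, not the implementation used in the case study; model.out(state) and execute_keyword() stand for the model access and the GUI adapter, and the tilde handling follows the rule above (an unexpected outcome with no matching transition is reported as an error).

```python
import random

def step(model, state, execute_keyword):
    transitions = model.out(state)              # [(label, destination), ...]
    label, dest = random.choice(transitions)
    base = label.lstrip("~")
    if not base.startswith("kw"):               # action words, INT, IRET, ...
        return dest                             # bookkeeping only, no SUT access
    succeeded = execute_keyword(base)           # press key, verify text, ...
    expected = base if succeeded else "~" + base
    matching = [d for (l, d) in transitions if l == expected]
    if not matching:
        raise AssertionError(
            f"behavior differs from the test model in state {state}: {expected}")
    return matching[0]
```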
2.3 Composing a Test Model

We use parallel composition for two different purposes. The main purpose is to create test models that enable extensive testing of concurrent systems. This means that we can test many applications simultaneously. It is clearly more efficient than testing only one application at a time, because now interactions between the applications are also tested. The other purpose is to refine the action machines by injecting the keywords of their refinement machines in correct places in them.

Refinement could be carried out to some extent by replacing transitions labeled by action words with the sequences of transitions specified in refinement machines. However, this kind of macro expansion mechanism would expand action words always to the same keywords, which might not be wanted. For example, it is handy to expand action word "show image" to keywords "select the second menu item" and "press show button" when it is executed for the first time. Later on, the second item should be selected by default in the image menu, and therefore the action word should be expanded to keyword "press show button". We avoid the limits of the macro expansion mechanism by using transition splitting on action machines and then letting the parallel composition do the refinement. The transition splitter divides transitions with given labels in two by adding a new state between the original source and destination states. If the label of a split transition is "a" then the new transitions are labeled as "start_a" and "end_a".

Definition 2 (Transition splitter). Let L be an LTS (S, Σ, Δ, ŝ) and A a set of actions. S_new = { s_(s,a,s') | (s, a, s') ∈ Δ ∧ a ∈ A } is a set of new states (S ∩ S_new = ∅). Then A(L) is an LTS (S', Σ', Δ', ŝ') where
– S' = S ∪ S_new
– Σ' = (Σ \ A) ∪ {start_a | a ∈ A} ∪ {end_a | a ∈ A}
– Δ' = {(s, a, s') ∈ Δ | a ∉ A} ∪ {(s, start_a, s_(s,a,s')) | (s, a, s') ∈ Δ ∧ a ∈ A} ∪ {(s_(s,a,s'), end_a, s') | (s, a, s') ∈ Δ ∧ a ∈ A}
– ŝ' = ŝ

In Figure 1, LTS A_s is obtained from A by splitting transitions with labels awStartC and awVerifyC.
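Definition 2 translates almost directly into code. The sketch below is an illustration, not the TVT tool: it represents an LTS as a tuple (states, alphabet, transitions, initial) and splits every transition labelled with an action in A into a start_/end_ pair with a fresh intermediate state.

```python
def split_transitions(lts, split_labels):
    states, alphabet, transitions, initial = lts
    new_states = set(states)
    new_alphabet = (set(alphabet) - set(split_labels)) \
        | {f"start_{a}" for a in split_labels} \
        | {f"end_{a}" for a in split_labels}
    new_transitions = set()
    for (src, a, dst) in transitions:
        if a not in split_labels:
            new_transitions.add((src, a, dst))
            continue
        mid = ("split", src, a, dst)            # the fresh state s_(s,a,s')
        new_states.add(mid)
        new_transitions.add((src, f"start_{a}", mid))
        new_transitions.add((mid, f"end_{a}", dst))
    return new_states, new_alphabet, new_transitions, initial
```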
As already mentioned, we construct the test model with parallel composition. We use a parallel composition that resembles the one given in [7]. Whereas traditional parallel compositions synchronize syntactically the same actions of participating processes, our parallel composition is given explicitly the combinations of actions that should be synchronized and the results of the synchronous executions. This way we can state, for example, that action a in process P_x is synchronized with action b in process P_y and their synchronous execution is observed as action c (the result). The set of combinations and results is called rules. The parallel composition combines component LTSs to a single composite LTS in the following way.

Definition 3 (Parallel composition ∥R). ∥R(L1, ..., Ln) is the parallel composition of n LTSs according to rules R. LTS Li = (Si, Σi, Δi, ŝi). Let ΣR be a set of resulting actions and √ a "pass" symbol such that ∀i: √ ∉ Σi. The rule set R ⊆ (Σ1 ∪ {√}) × ··· × (Σn ∪ {√}) × ΣR. Now ∥R(L1, ..., Ln) = (S, Σ, Δ, ŝ), where
– S = S1 × ··· × Sn
– Σ = {a ∈ ΣR | ∃ a1, ..., an: (a1, ..., an, a) ∈ R}
– ((s1, ..., sn), a, (s'1, ..., s'n)) ∈ Δ if and only if there is (a1, ..., an, a) ∈ R such that for every i (1 ≤ i ≤ n)
  • (si, ai, s'i) ∈ Δi, or
  • ai = √ and s'i = si
– ŝ = ⟨ŝ1, ..., ŝn⟩

A rule in a parallel composition associates an array of actions (or "pass" symbols √) of the input LTSs with an action in the resulting LTS. The action is the result of the synchronous execution of the actions in the array. If there is √ instead of an action, the corresponding LTS will not participate in the synchronous execution described by the rule. In Figure 1, P is the parallel composition of A_s and R with rules

R = { ⟨start_awStartC, start_awStartC, start_awStartC⟩,
      ⟨end_awStartC, end_awStartC, end_awStartC⟩,
      ⟨start_awVerifyC, start_awVerifyC, start_awVerifyC⟩,
      ⟨end_awVerifyC, end_awVerifyC, end_awVerifyC⟩,
      ⟨aw_QuitC, √, aw_QuitC⟩,
      ⟨√, kwPressKey<'AppMenu'>, kwPressKey<'AppMenu'>⟩,
      ⟨√, kwPressKey<'Center'>, kwPressKey<'Center'>⟩,
      ⟨√, kwPressKey<'SoftRight'>, kwPressKey<'SoftRight'>⟩,
      ⟨√, kwVerifyText<'Camera'>, kwVerifyText<'Camera'>⟩ }

Fig. 2. Test model architecture

When using such rules, results of the parallel composition include action words (with start_ and end_ prefixes) and keywords (and possibly some other actions). However, when test models are walked through during a test run, communication with a SUT takes place only when keywords are encountered.
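Definition 3 can likewise be sketched in a few lines. The code below is an illustrative reading of the definition, not the TVT parallel composition tool: PASS plays the role of the √ symbol, each rule is a tuple of component actions followed by the resulting action, and only composite states reachable from the initial one are generated.

```python
from itertools import product

PASS = "PASS"                                   # stands for the "pass" symbol

def parallel_compose(ltss, rules):
    """ltss: list of (states, alphabet, transitions, initial) tuples;
    rules: iterable of tuples (a_1, ..., a_n, result)."""
    initial = tuple(l[3] for l in ltss)
    states, transitions = {initial}, set()
    work = [initial]
    while work:
        src = work.pop()
        for rule in rules:
            *parts, result = rule
            # possible successor states of every component under this rule
            options = []
            for (_, _, delta_i, _), a_i, s_i in zip(ltss, parts, src):
                if a_i == PASS:
                    options.append([s_i])       # this component does not move
                else:
                    options.append([d for (s, a, d) in delta_i
                                    if s == s_i and a == a_i])
            if not all(options):                # some component cannot take part
                continue
            for dst in product(*options):
                transitions.add((src, result, dst))
                if dst not in states:
                    states.add(dst)
                    work.append(dst)
    alphabet = {rule[-1] for rule in rules}
    return states, alphabet, transitions, initial
```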
2.4 Test Model Architecture

In the SUT, several applications can be run simultaneously, but only one can be active at a time. The active application receives all user input except the one that activates a task switcher. The user can activate an already running application with the task switcher. This setting forces us to restrict the concurrency (interleavings of actions) in the test model. Otherwise, the test model would allow executing first one keyword in one application and then another keyword in another application without activating the other application first. This would lead to a situation where the test model assumes that both applications have received one input, but in reality, the first application received two inputs and the other none.

Because the activation itself must be expressed as a sequence of keywords, it is natural to model the task switcher as a special application, a sort of a scheduler. The task switcher starts executing when an active application is interrupted, and stops when it activates another (or the same) application. Although the absence of interleaved actions might make the parallel composition look an unnecessarily complicated tool for building the model, it is not. The composition generates a test model that contains all combinations of states in which the tested application can be inactive. Thus, it enables rigorous testing of every application in every combination of states of the other applications in background.

Technically, we have one action machine for every application to be tested, and one action machine for task switching: action machines G (Gallery application), V (Voice recorder application) and TS (task switcher), for instance. Action machines are synchronized with each other and with their refinement machines, as shown in Figure 2. Before the synchronization, all action words of action machines are split. In the figure, lines that connect action machines to refinement machines represent synchronizing the split action words of the connected processes. For instance, we have a rule for synchronizing A(G) and R_G1 with action start_awVerifyImageList and another rule for A(G) and R_G2 with start_awViewImage. There are also rules that allow execution of every keyword in refinement machines without synchronization.

Synchronizations that take care of task switching are presented with lines that connect G and V to TS in the figure. Both G and V include actions INT and IRET that represent interrupting the application and returning from interrupt. Initially, the Gallery application is active. If G executes INT synchronously with TS, G goes to a state where it waits for IRET. On the other hand, TS executes keywords that activate another (or the same) application in the SUT and then executes IRET synchronously with the corresponding action machine.

Finally, there is a connector labeled FROM V IRST G in Figure 2. It represents the "go to Gallery" function in Voice recorder. In our SUT, the function activates the Gallery application and opens its sound clips menu. Voice recorder is deactivated but left in background. In the test model, action FROM V IRST G is the result of synchronizing actions IGOTO<Gallery> in V and IRST<VoiceRecorder> in G. The first action leads V to an interrupted state from which it can continue only by executing IRET synchronously with TS. The second action lets G continue from an interrupted state, but forces it to a state where the sound clip menu is assumed to be on the screen.

Formally, our test model TM is acquired from the expression:

TM = ∥R(A(G), A(TS), A(V), R_G1, R_G2, R_TS, R_V)

where set A contains all the action words and rule set R is as outlined above.

One advantage of this architecture is that it allows us to reuse the component LTSs with a variety of SUTs. For example, if the GUI of some application changes, all we need to change is the refinement machine of the corresponding action machine. If a feature in an application should not be tested, it is enough to remove the corresponding action words from the application's action machine. If an application should not be tested, we just drop out its LTSs from the parallel composition. Accordingly, if a new application should be tested, we add the action and the refinement machine for it (also TS and R_TS must be changed to be able to activate the new application, but they are simple enough to be generated automatically). Moreover, if we test a new SUT with the same features but with completely different GUI, we redraw the refinement machines for the SUT but use the same action machines.
While refinement machines can be changed without touching their action machines, changing an action machine easily causes changes in its refinement machines. If a new action word is introduced in an action machine, either its refinement machine has to be extended correspondingly or a new refinement machine added to the parallel composition. In addition, changing the order of action words inside an action machine may cause changes in its refinement machine. For example, action word awChooseFirstImageInGallery can be unfolded to different sequences of keywords depending on the state of the SUT in which the action word is executed. In one state, the Gallery application may already show the image list. Then the action can be completed by a keyword that selects the first item in the list. However, in another state, Gallery may show a list of voice samples, and therefore the refinement should first find out the list of images before the first image can be selected. Thus, action words may contain hidden assumptions on the SUT's state where the action takes place. Of course, one can make these assumptions explicit, for example, by extending the action label: awChooseFirstImageInGalleryWhenImageListIsShown.

Fig. 3. Test environment

3 System Testing on Symbian Platform

The above theory was developed in conjunction with an industrial case study. In this section, we will describe the case study including the test environment and setting that we used. Moreover, we outline the implementation of our model-based test engine, and explain the modeling process concerning keyword selection and creation of the test model itself. In addition, we will briefly evaluate our results.

3.1 Test Environment and Setting

The system we tested was a Symbian-based mobile device with Bluetooth capability. The test execution system was installed on a PC, and it consisted of two main components: test automation tool, including our test execution engine, and remote control software for the SUT. The test environment is depicted as a UML deployment diagram in Figure 3. We applied TVT tools for creating the test model.

As a test automation tool we used Mercury's QuickTest Pro (QTP) [8]. QTP is a GUI testing tool for MS Windows capable of capturing information about window resources, such as buttons and text fields, and providing access to those resources through an API. The tool also enables writing and executing test procedures using a scripting language (Visual Basic Script, VBScript in the following) and recording a test log when requested.

The remote control tool we used was m-Test by Intuwave [9]. It provides access to the GUI of the SUT and to some internal information such as a list of running processes.
m-Test makes it possible to remotely navigate through the GUI (see Figure 4, on the left-hand side). GUI resources visible on the display cannot be obtained; only the bitmap of the display is available. m-Test can also recognize the text visible on the display back to characters (see Figure 4, on the right-hand side). m-Test supports various ways to connect to the SUT; in the study we used a Bluetooth radio link.

Moreover, in the beginning of the study, we obtained a VBScript function library. It was originally developed to serve as a library of keyword implementations for conventional test procedures in system testing of the SUT. For example, for pushing a button there was a function called 'Key' etc.

Fig. 4. Inputs and outputs of SUT as seen from m-Test

3.2 Test Engine

The test execution engine, which executes the LTS state machine, consisted of four parts: execution engine TestRunner, state model Model, transition selector Chooser, and keyword proxy SUTAdapter (see Figure 3). TestRunner was responsible for executing transition events selected by Chooser using the keyword function library via SUTAdapter. Based on the result of executing a keyword, TestRunner determines if the test run should continue or not. If the run can continue, the cycle continues until the number of executed transitions exceeds the maximum number of steps. The test designer determines the step limit that is provided as a parameter.

Model was constructed from states (State), transitions (Transition), and their connections to each other. The test model (LTS) is read from a file and translated to a state object model, which provides access to the states and transitions.

Chooser selects a transition to be executed in the current state. The selection method can be random or probabilistic based on weights attached to the transitions. Naturally, a more advanced Chooser could also be based on an operation profile [10] or game theory [11], for instance. Since the schedule of our case study was tight, we chose the random selection algorithm because it was the easiest to implement.

The keyword function library that we obtained served as our initial keyword implementation. However, during the early phases of the study it became apparent that it was not suitable for our purposes. The library had too much built-in control over the test procedure. In contrast, our approach requires that the test verdict must be given by the test engine. The reason is that sometimes a failure of a SUT is actually the desired result. In addition, since the flow control was partly embedded in the library, we did not have any keyword that would report the status of the SUT. For that reason, we created a keyword proxy (SUTAdapter). Its purpose was to hide original function library keywords, use or re-implement them to fit our purpose, and to add some new keywords.
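The structure of the engine can be outlined as follows. This sketch only mirrors the division into Model, Chooser, TestRunner and SUTAdapter described above; the original implementation was written in VBScript on top of QTP, so the Python below, including its weighting scheme, is purely illustrative.

```python
import random

class Chooser:
    """Picks the next transition, uniformly at random or by per-label weights."""
    def __init__(self, weights=None):
        self.weights = weights or {}

    def choose(self, transitions):              # transitions: [(label, dest), ...]
        weights = [self.weights.get(label, 1.0) for label, _ in transitions]
        return random.choices(transitions, weights=weights, k=1)[0]

def run_test(model, chooser, adapter, max_steps):
    """TestRunner's main cycle: walk the model until the step limit is reached."""
    state = model.initial
    for _ in range(max_steps):                  # step limit set by the test designer
        transitions = model.out(state)
        label, dest = chooser.choose(transitions)
        base = label.lstrip("~")
        if base.startswith("kw"):
            ok = adapter.execute(base)          # SUTAdapter drives the keyword library
            wanted = base if ok else "~" + base
            hits = [d for l, d in transitions if l == wanted]
            if not hits:
                return "FAIL", (state, wanted)  # SUT deviates from the model
            dest = hits[0]
        state = dest
    return "PASS", None
```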
3.3 Keyword Categories

We discovered that there must be at least five types of keywords: command, navigate, query, control, and state verification. As an example, some keywords from each category are shown in Table 1. The command type keywords are the most obvious ones: they send input to the SUT, for instance, "press key" or "write text". Navigation keywords, such as "select menu item", are used to navigate in the GUI. Query keywords are used to compare texts or images on the display. Control keywords are used to manage the state of the SUT. These four keyword groups are well suited for most of the common testing situations. However, our approach allowed us to create several situations where also the state verification keywords were needed.

Table 1. Keyword categories

Category             Keyword            Param.      Description
Command              kwPressKey         'keyLeft'
                     kwWriteText        'Hello'
Navigate                                            Select a menu item; activates an application if started
Query                kwVerifyText       'Move'
Control                                             Activates a device to receive subsequent commands; start an application
State verification   kwIsMenuSelected   'Move'

State verification keywords verify that the SUT is in some particular state (for instance, "Is menu text selected") or that some sporadic event, like a phone call, has occurred. These keywords were essential, because the environment did not allow us to capture such information otherwise. The state of the SUT was only available through indirect clues that were extracted from the display bitmap. Because of this, the test model occasionally misinterpreted the state of the SUT or missed an event. This made test modeling somewhat more complicated than we anticipated. The biggest difference between the query and state verification keywords is in the intent of their use. Queries are used to determine the presence of texts etc. on the display, whereas state verification keywords check if the GUI is in a required state. The latter are used to detect if the SUT is in a wrong state, i.e. the failure has occurred.

The missing of an event was the most common error in the model, which occurred often when exact timing was required (like testing an alarm). This problem was probably caused by the slow communication between QTP and m-Test. There were several occasions when some event was missed just because the execution of a keyword was too slow or the execution time varied between runs.
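The role of the keyword proxy can be illustrated with a few methods. The class below is not the actual SUTAdapter: the wrapped library calls (press_key, read_display_text) are assumed names standing in for the original VBScript functions and the m-Test display access, and the state-verification check is a deliberately naive example of inferring state from display clues.

```python
class SUTAdapter:
    """Keyword proxy: wraps the original function library and adds new keywords."""
    def __init__(self, library):
        self.lib = library

    def kw_press_key(self, key):                 # command keyword
        self.lib.press_key(key)
        return True

    def kw_verify_text(self, text):              # query keyword
        return text in self.lib.read_display_text()

    def kw_is_menu_selected(self, item):         # state verification keyword
        # The GUI state is only visible through indirect clues on the display
        # bitmap; here we simply look for the item on the first line of text.
        lines = self.lib.read_display_text().splitlines()
        return bool(lines) and item in lines[0]
```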
