English Phrases for Software Testing
Common software testing vocabulary:
1. Static testing: Non-Execution-Based Testing or Static Testing
Code walkthrough: Walkthrough
Code inspection: Code Inspection
Technical review: Review
2. Dynamic testing: Execution-Based Testing
3. White-Box Testing
4. Black-Box Testing
5. Gray-Box Testing
6. Software Quality Assurance (SQA)
7. Software Development Life Cycle
8. Smoke Test
9. Regression Test
10. Function Testing
11. Performance Testing
12. Stress Testing
13. Load testing: Volume Testing
14. Usability Testing
15. Installation Testing
16. UI Testing
17. Configuration Testing
18. Documentation Testing
19. Compatibility Testing
20. Security Testing
21. Recovery Testing
22. Unit Test
23. Integration Test
24. System Test
25. Acceptance Test
26. A test plan should include:
The Test Objectives
The Test Scope
The Test Strategy
The Test Approach
The Test Procedures
The Test Environment
The Test Completion Criteria
The Test Cases
The Test Schedules
Risks
etc.
27. A Master Test Plan
28. The Requirements Specification
29. The Requirements Phase
30. Interface
31. The End User
32. Formal Test Environment
33. Verifying the Requirements
34. Ambiguous Requirements
35. Operation and Maintenance
36. Reusability
37. Reliability/Availability
38. IEEE: The Institute of Electrical and Electronics Engineers
39. Software should be tested from the following aspects:
Correctness
Utility
Performance
Robustness
Reliability
About Bugzilla (see the code sketch after the lists below):
1. Bugs are classified by severity (Severity):
Blocker, blocks development and/or testing work
Critical, crashes, data loss, memory overflow
Major, a major functional defect
Normal, an ordinary functional defect
Minor, a minor functional defect
Trivial, a cosmetic issue or a small problem that does not affect use, such as spelling or font problems in menus or dialog boxes
Enhancement, a suggestion or request for improvement
2. Bugs are classified by report status (Status):
Awaiting confirmation (Unconfirmed)
Newly submitted (New)
Assigned (Assigned)
Reopened because the problem was not resolved (Reopened)
Resolved, awaiting retest (Resolved)
Awaiting archiving (Verified)
Archived (Closed)
3. Bug resolutions (Resolution):
Fixed (Fixed)
Not a bug (Invalid)
Will not be fixed (Wontfix)
To be resolved in a later version (Later)
Kept open as a reminder (Remind)
Duplicate (Duplicate)
Cannot be reproduced (Worksforme)
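As an illustration only (this is not Bugzilla's actual data model or API), the sketch below represents the three classifications above as enums attached to a simple bug record.

```python
# Illustrative sketch only -- not Bugzilla's real schema or API.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Severity(Enum):
    BLOCKER = "Blocker"; CRITICAL = "Critical"; MAJOR = "Major"
    NORMAL = "Normal"; MINOR = "Minor"; TRIVIAL = "Trivial"
    ENHANCEMENT = "Enhancement"

class Status(Enum):
    UNCONFIRMED = "Unconfirmed"; NEW = "New"; ASSIGNED = "Assigned"
    REOPENED = "Reopened"; RESOLVED = "Resolved"
    VERIFIED = "Verified"; CLOSED = "Closed"

class Resolution(Enum):
    FIXED = "Fixed"; INVALID = "Invalid"; WONTFIX = "Wontfix"
    LATER = "Later"; REMIND = "Remind"; DUPLICATE = "Duplicate"
    WORKSFORME = "Worksforme"

@dataclass
class BugReport:
    summary: str
    severity: Severity
    status: Status = Status.UNCONFIRMED
    resolution: Optional[Resolution] = None

# A critical crash moves from Unconfirmed toward Resolved/Fixed.
bug = BugReport("Crash when saving a file", Severity.CRITICAL)
bug.status, bug.resolution = Status.RESOLVED, Resolution.FIXED
print(bug.status.value, bug.resolution.value)  # Resolved Fixed
```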
Testing activity
Testing is an integral component of the software process.
Testing is a critical element of software quality assurance.
Testing is an activity that must be carried out throughout the software development life cycle.
Software Testing Principles:
All tests should be traceable to customer requirements.
Tests should be planned long before testing begins.
The Pareto principle applies to software testing (the 80/20 rule).
Testing should begin “in the small” and progress toward testing “in the large.”
Exhaustive testing is not possible.
To be most effective, testing should be conducted by an independent third party.
Attributes of a “Good” Test
A good test has a high probability of finding an error.
A good test is not redundant.
A good test should be neither too simple nor too complex.
What Should Be Tested?
Correctness
Utility
Performance
Robustness
Reliability
Correctness
The extent to which a program satisfies its specification and fulfills the customer’s mission objectives.
If input that satisfies the input specifications is provided and the product is given all the resources it needs, then the product is correct if the output satisfies the output specification.
If a product satisfies its specification, then this product is correct.
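To make the input/output-specification view of correctness concrete, here is a minimal sketch; the function under test and its specification are hypothetical, not from the text. The test passes only if the output satisfies the output specification for input that satisfies the input specification.

```python
# Minimal sketch with a hypothetical function and specification:
# correctness = "output satisfies the output spec whenever the input
# satisfies the input spec and the product has the resources it needs".
from collections import Counter

def sort_numbers(values):
    """Function under test: return the input numbers in ascending order."""
    return sorted(values)

def test_output_satisfies_output_specification():
    data = [3, 1, 2, 2]          # input satisfying the (assumed) input spec
    result = sort_numbers(data)
    # Output spec, part 1: the result is in non-decreasing order.
    assert all(a <= b for a, b in zip(result, result[1:]))
    # Output spec, part 2: the result is a permutation of the input.
    assert Counter(result) == Counter(data)
```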
Questions:
Suppose a product has been tested successfully against a broad variety of test data. Does this mean that the product is acceptable?
Utility
Utility is the extent to which a user’s needs are met when a correct product is used under the conditions permitted by its specifications.
It focuses on how easy the product is to use, whether the product performs useful functions, and whether the product is cost-effective compared to competing products.
If the product is not cost-effective, there is no point in buying it.
And unless the product is easy to use, it will not be used at all, or it will be used incorrectly.
Therefore, when considering buying an existing product, the utility of the product should be tested first; if the product fails on that score, testing should stop.
Performance
It is the extent to which the product meets its constraints with regard to response-time or space requirements.
Performance is measured by processing speed, response time, resource consumption, throughput, and efficiency.
For example, a nuclear reactor control system may have to sample the temperature of the core and process the data every tenth of a second. If the system is not fast enough to handle interrupts from the temperature sensor every tenth of a second, then data will be lost and there is no way of ever recovering it; the next time the system receives temperature data, it will be the current temperature, not the reading that was missed. If the reactor is on the point of a meltdown, it is critical that all relevant information be both received and processed as laid down in the specifications.
With all real-time systems, the performance must meet every time constraint listed in the specifications.
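As a rough sketch of checking a timing constraint like the one above (the handler and the 0.1-second limit are assumed for illustration; a real performance test would also repeat the measurement and run it under representative load), a test can time the operation and compare it to the constraint:

```python
# Minimal sketch (assumed handler and limit): check that processing one
# sensor sample stays within the specified response-time constraint.
import time

RESPONSE_TIME_LIMIT_S = 0.1  # constraint from the (assumed) specification

def handle_temperature_sample(reading_celsius: float) -> float:
    """Stand-in for the code under test: process one core-temperature sample."""
    return reading_celsius  # trivial processing for the sketch

def test_sample_processed_within_time_constraint():
    start = time.perf_counter()
    handle_temperature_sample(351.7)
    elapsed = time.perf_counter() - start
    assert elapsed < RESPONSE_TIME_LIMIT_S, f"processing took {elapsed:.4f}s"
```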
Robustness
Robustness is essentially a function of a number of factors, such as the range of operating conditions, the possibility of unacceptable results with valid input, and the acceptability of effects when the product is given invalid input.
A product with a wide range of permissible operating conditions is more robust than a product that is more restrictive.
It is difficult to come up with a precise definition…
A robust product should not yield unacceptable results when the input satisfies its specifications.
For example, if a tester gives a system (or program) invalid data and the system responds with a message such as “Incorrect data, try again”, it is more robust than a system that crashes whenever the data deviate even slightly from what is required.
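A small sketch of that contrast (the parser and its accepted range are hypothetical): the robust version answers invalid input with a clear message rather than crashing on it.

```python
# Minimal sketch (hypothetical parser and range): robust handling of
# invalid input -- reject it with a message instead of crashing.
def read_temperature(raw: str) -> float:
    """Parse one temperature reading; reject malformed or out-of-range input."""
    try:
        value = float(raw)
    except ValueError:
        raise ValueError("Incorrect data, try again") from None
    if not -50.0 <= value <= 2000.0:  # assumed permissible operating range
        raise ValueError("Incorrect data, try again")
    return value

def test_invalid_input_is_rejected_with_a_message():
    try:
        read_temperature("abc")            # invalid data must not crash us
    except ValueError as err:
        assert "try again" in str(err)
    else:
        raise AssertionError("invalid input should have been rejected")
```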
Reliability
If a program repeatedly and frequently fails to perform, it matters little whether other software quality factors are acceptable.
Software reliability is defined in statistical terms as “the probability of failure-free operation of a computer program in a specified environment for a specified time.”
It is necessary to know how often the product fails (MTBF).
When a product fails, an important issue is how long it takes, on average, to repair it (MTTR).
Measure of Reliability: MTBF = MTTF + MTTR
MTBF: mean time between failures
MTTF: mean time to failure
MTTR: mean time to repair
Software availability is the probability that a program is operating according to requirements at a given point in time.
Measure of Availability: Availability = [MTTF / (MTTF + MTTR)] × 100%
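A short worked example of these two formulas (the MTTF and MTTR figures are assumed, purely for illustration):

```python
# Minimal sketch with assumed figures: MTBF and availability from
# mean time to failure (MTTF) and mean time to repair (MTTR).
def mtbf(mttf_hours: float, mttr_hours: float) -> float:
    """MTBF = MTTF + MTTR."""
    return mttf_hours + mttr_hours

def availability_percent(mttf_hours: float, mttr_hours: float) -> float:
    """Availability = MTTF / (MTTF + MTTR) * 100%."""
    return mttf_hours / (mttf_hours + mttr_hours) * 100.0

mttf, mttr = 500.0, 2.0  # assumed: a failure every 500 h, 2 h mean repair
print(f"MTBF = {mtbf(mttf, mttr):.1f} h")                         # 502.0 h
print(f"Availability = {availability_percent(mttf, mttr):.2f}%")  # 99.60%
```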
How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are listed below (a small sketch combining several of them follows the list):
Deadlines (release deadlines, testing dea dlines, etc.)
Test budget depleted
Test cases completed with a certain percentage passed
Coverage of code/functionality/requireme nts reaches a specified point
Bug rate falls below a certain level
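A minimal sketch combining several of these factors into one release-gate check; the thresholds are assumed, purely for illustration, and real criteria come from the test plan:

```python
# Minimal sketch with assumed thresholds: combine common stop-testing
# factors (pass rate, coverage, bug rate, deadline) into one decision.
def can_stop_testing(pass_rate: float, coverage: float,
                     new_bugs_per_week: float, deadline_reached: bool) -> bool:
    """Return True when the (assumed) completion criteria are met."""
    criteria_met = (
        pass_rate >= 0.95           # e.g. 95% of executed test cases pass
        and coverage >= 0.80        # e.g. 80% code/requirements coverage
        and new_bugs_per_week <= 3  # e.g. the bug rate has tailed off
    )
    return criteria_met or deadline_reached

print(can_stop_testing(pass_rate=0.97, coverage=0.85,
                       new_bugs_per_week=2, deadline_reached=False))  # True
```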
What if there isn’t enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. Considerations can include:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety/financial impact?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the code are most complex, and thus most subject to errors?