Software Testing English Phrases


Common software testing terms:
1. Static testing: Non-Execution-Based Testing or Static Testing
Code walkthrough: Walkthrough
Code inspection: Code Inspection
Technical review: Review
2. Dynamic testing: Execution-Based Testing
3. White-Box Testing
4. Black-Box Testing
5. Gray-Box Testing
6. Software Quality Assurance (SQA)
7. Software Development Life Cycle
8. Smoke Test
9. Regression Test
10. Functional testing: Function Testing
11. Performance Testing
12. Stress Testing
13. Load Testing / Volume Testing
14. Usability Testing
15. Installation Testing
16. UI Testing
17. Configuration Testing
18. Documentation Testing
19. Compatibility Testing
20. Security Testing
21. Recovery Testing
22. Unit Test
23. Integration Test
24. System Test
25. Acceptance Test
26. A test plan should include:
The Test Objectives
The Test Scope
The Test Strategy
The Test Approach
The Test Procedures
The Test Environment
The Test Completion Criteria
The Test Cases
The Test Schedules
Risks
Etc.
27. A Master Test Plan
28. The Requirements Specification
29. The Requirements Phase
30. Interface
31. The End User
32. Formal Test Environment
33. Verifying the Requirements
34. Ambiguous Requirements
35. Operation and Maintenance
36. Reusability
37. Reliability / Availability
38. IEEE: The Institute of Electrical and Electronics Engineers
39. Software should be tested from the following aspects:
Correctness
Utility
Performance
Robustness
Reliability
About Bugzilla (a small modeling sketch follows these lists):
1. Bugs are classified by severity (Severity) as:
Blocker: blocks development and/or testing work
Critical: crashes, data loss, memory overflow
Major: a major functional defect
Normal: an ordinary functional defect
Minor: a minor functional defect
Trivial: a cosmetic issue or a small problem that does not affect use, such as spelling or font problems in a menu or dialog box
Enhancement: a suggestion or request for improvement
2. Bugs are classified by report status (Status):
Unconfirmed: awaiting confirmation
New: newly submitted
Assigned: assigned to a developer
Reopened: the problem was not resolved and has been reopened
Resolved: resolved, awaiting retest
Verified: verified, awaiting archiving
Closed: archived
3. Bug resolutions (Resolution):
Fixed: has been fixed
Invalid: not a bug
Wontfix: will not be fixed
Later: to be fixed in a later release
Remind: kept on record for later consideration
Duplicate: a duplicate of another report
Worksforme: cannot be reproduced
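The lists above only name the categories; as a small, hedged illustration (not part of Bugzilla itself), they can be modeled as Python enums so that a report's classification is explicit in test-management scripts:

```python
from enum import Enum

class Severity(Enum):
    BLOCKER = "blocks development and/or testing"
    CRITICAL = "crash, data loss, memory overflow"
    MAJOR = "major functional defect"
    NORMAL = "ordinary functional defect"
    MINOR = "minor functional defect"
    TRIVIAL = "cosmetic issue, e.g. spelling or font problems"
    ENHANCEMENT = "suggestion or request for improvement"

class Status(Enum):
    UNCONFIRMED = "awaiting confirmation"
    NEW = "newly submitted"
    ASSIGNED = "assigned to a developer"
    REOPENED = "not resolved, reopened"
    RESOLVED = "awaiting retest"
    VERIFIED = "awaiting archiving"
    CLOSED = "archived"

class Resolution(Enum):
    FIXED = "has been fixed"
    INVALID = "not a bug"
    WONTFIX = "will not be fixed"
    LATER = "to be fixed in a later release"
    REMIND = "kept on record"
    DUPLICATE = "duplicate of another report"
    WORKSFORME = "cannot be reproduced"

# A newly filed report starts unconfirmed and has no resolution yet.
report = {"summary": "crash on save", "severity": Severity.CRITICAL,
          "status": Status.UNCONFIRMED, "resolution": None}
print(report["severity"].value)  # "crash, data loss, memory overflow"
```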
Testing activity
Testing is an integral component of the software process.
Testing is a critical element of software quality assurance.
Testing is an activity that must be carried out throughout the software development life cycle.
Software Testing Principles:
All tests should be traceable to customer requirements.
Tests should be planned long before testing begins.
The Pareto principle applies to software testing (80/20 rule).
Testing should begin “in the small” and progress toward testing “in the large.”
Exhaustive testing is not possible.
To be most effective, testing should be conducted by an independent third party.
Attributes of a “Good” Test
A good test has a high probability of finding an error.
A good test is not redundant.
A good test should be neither too simple nor too complex.
What else?
What Should Be Tested?
Correctness
Utility
Performance
Robustness
Reliability
Correctness
The extent to which a program satisfies its specification and fulfills the customer’s mission objectives.
If input that satisfies the input specifications is provided and the product is given all the resources it needs, then the product is correct if the output satisfies the output specification.
If a product satisfies its specification, then this product is correct.
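As a small illustrative sketch (the product function and its specification here are assumed, not taken from the original), a correctness test supplies input that satisfies the input specification and asserts that the output satisfies the output specification:

```python
def sort_scores(scores):
    """Hypothetical product function under test: return the scores in ascending order."""
    return sorted(scores)

def test_sort_scores_is_correct():
    # Input specification: a list of numbers.
    valid_input = [73, 12, 98, 45]
    output = sort_scores(valid_input)
    # Output specification: the same elements, arranged in non-decreasing order.
    assert sorted(valid_input) == output
    assert all(a <= b for a, b in zip(output, output[1:]))

test_sort_scores_is_correct()
```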
Questions:
Suppose a product has been tested successfully against a broad variety of test data. Does this mean that the product is acceptable?
Utility
Utility is the extent to which a user’s needs are met when a correct product is used under the conditions permitted by its specifications.
It focuses on how easy the product is to use, whether the product performs useful functions, and whether the product is cost effective compared with competing products.
If the product is not cost effective, there is no point in buying it.
And unless the product is easy to use, it will not be used at all or it will be used incorrectly.
Therefore, when considering buying an existing product, the utility of the product should be tested first; and if the product fails on that score, testing should stop.
Performance
It is the extent to which the product meets its constraints with regard to response time or space requirements.
Performance is measured by processing speed, response time, resource consumption, throughput, and efficiency.
Performance: For example, a nuclear reactor control system may have to sample the temperature of the core and process the data every tenth of a second. If the system is not fast enough to handle interrupts from the temperature sensor every tenth of a second, then data will be lost and there is no way of ever recovering them; the next time the system receives temperature data, they will be the current temperature, not the reading that was missed. If the reactor is on the point of a meltdown, it is critical that all relevant information be both received and processed as laid down in the specifications.
With all real-time systems, the performance must meet every time constraint listed in the specifications.
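A hedged sketch of testing such a time constraint (the 0.1-second budget mirrors the reactor example above; sample_and_process is an assumed stand-in for the real operation):

```python
import time

DEADLINE_SECONDS = 0.1  # specification: one sample must be handled within a tenth of a second

def sample_and_process():
    """Stand-in for reading the temperature sensor and processing the data."""
    time.sleep(0.02)  # simulated work

start = time.perf_counter()
sample_and_process()
elapsed = time.perf_counter() - start

# The performance test fails if the time constraint in the specification is violated.
assert elapsed <= DEADLINE_SECONDS, f"took {elapsed:.3f} s, budget is {DEADLINE_SECONDS} s"
print(f"handled one sample in {elapsed:.3f} s")
```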
Robustness
Robustness essentially is a function of a number of factors, such as the range of operating conditions, the possibility of unacceptable results with valid input, and the acceptability of effects when the product is given invalid input.
A product with a wide range of permissible operating conditions is more robust than a product that is more restrictive.
It is difficult to come up with a precise definition…
A robust product should not yield unacceptable results when the input satisfies its specifications.
For example, if a tester gives a system (or program) invalid data and the system responds with a message such as “Incorrect data, try again”, it is more robust than a system that crashes whenever the data deviate even slightly from what is required.
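A minimal sketch of the behaviour described above (the validation rules and valid range are assumed for illustration): invalid input is answered with a message rather than an unhandled crash.

```python
def read_temperature(raw: str) -> float:
    """Parse one reading, rejecting invalid data with a message instead of crashing."""
    try:
        value = float(raw)
    except ValueError:
        raise ValueError("Incorrect data, try again") from None
    if not -50.0 <= value <= 150.0:  # assumed valid range from the specification
        raise ValueError("Incorrect data, try again")
    return value

# Invalid input produces a controlled message rather than a crash.
for raw in ["21.5", "abc", "9999"]:
    try:
        print(raw, "->", read_temperature(raw))
    except ValueError as err:
        print(raw, "->", err)
```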
Reliability
If a program repeatedly and frequently fails to perform, it matters little whether other software quality factors are acceptable.
Software Reliability is defined in statistical terms as “the probability of failure-free operation of a computer program in a specified environment for a specified time.”
It is necessary to know how often the product fails. (MTBF)
When a product fails, an important issue is how long it takes, on average, to repair it. (MTTR)
Measure of Reliability: MTBF = MTTF + MTTR
MTBF: mean-time-between-failure
MTTF: mean-time-to-failure
MTTR: mean-time-to-repair
Software availability is the probability that a program is operating according to requirements at a given point in time.
Measure of Reliability: MTBF = MTTF + MTTR
Measure of Availability: Availability = [MTTF/(MTTF + MTTR)] * 100%
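A worked example with assumed, purely illustrative figures shows how the two formulas fit together:

```python
mttf = 990.0  # mean-time-to-failure, hours (assumed figure)
mttr = 10.0   # mean-time-to-repair, hours (assumed figure)

mtbf = mttf + mttr                           # MTBF = MTTF + MTTR
availability = mttf / (mttf + mttr) * 100    # Availability = [MTTF/(MTTF + MTTR)] * 100%

print(f"MTBF = {mtbf:.0f} hours")             # 1000 hours
print(f"Availability = {availability:.1f}%")  # 99.0%
```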
How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are listed below (a small sketch of checking such criteria follows the list):
Deadlines (release deadlines, testing deadlines, etc.)
Test budget depleted
Test cases completed with certain percentage passed
Coverage of code/functionality/requirements reaches a specified point
Bug rate falls below a certain level
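As flagged above, a small sketch of checking such exit criteria (the thresholds are assumed; a real project fixes its own in the test plan):

```python
# Assumed thresholds; a real project records its own in the test plan.
TARGET_PASS_RATE = 0.95    # minimum fraction of executed test cases that must pass
TARGET_COVERAGE = 0.80     # minimum code/functionality/requirements coverage
MAX_NEW_BUGS_PER_WEEK = 2  # bug rate below which testing may stop

def may_stop_testing(pass_rate: float, coverage: float, new_bugs_per_week: int) -> bool:
    """Return True when every measured exit criterion is satisfied."""
    return (pass_rate >= TARGET_PASS_RATE
            and coverage >= TARGET_COVERAGE
            and new_bugs_per_week <= MAX_NEW_BUGS_PER_WEEK)

print(may_stop_testing(pass_rate=0.97, coverage=0.85, new_bugs_per_week=1))  # True
```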
What if there isn’t enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. Considerations can include (a prioritization sketch follows the list):
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety/financial impact?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the code are most complex, and thus most subject to errors?
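As noted above, a small prioritization sketch (the features and the 1-to-5 scores are hypothetical): risk exposure is estimated as impact times likelihood, and testing effort is focused on the highest-risk areas first.

```python
# Hypothetical features, each scored 1 (low) to 5 (high) for impact and likelihood of failure.
features = [
    {"name": "checkout/payment", "impact": 5, "likelihood": 4},
    {"name": "search results",   "impact": 3, "likelihood": 3},
    {"name": "profile settings", "impact": 2, "likelihood": 2},
]

# Test the riskiest functionality first: risk exposure = impact * likelihood.
for feature in sorted(features, key=lambda f: f["impact"] * f["likelihood"], reverse=True):
    print(feature["name"], "risk =", feature["impact"] * feature["likelihood"])
```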
