Test Levels

Component Testing (Unit/Module/Program Testing)
Component testing verifies the functioning of software items (e.g. modules, programs, objects, classes) that are separately testable.
Stubs and drivers are used to replace missing software and to simulate the interface between software components.
Stub – A stub is called from the software component to be tested; the stub is the called program.
Driver – A driver calls the component to be tested; the driver is the calling program.
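As a minimal sketch in Python (the order_total function, TaxServiceStub, and the amounts are hypothetical, not from the original text), the driver below calls the component under test, while the stub stands in for a missing collaborator:

import unittest

# Component under test: computes an order total using a tax service.
def order_total(amount, tax_service):
    return amount + tax_service.tax_for(amount)

# Stub: replaces the missing real tax service; it is the CALLED program.
class TaxServiceStub:
    def tax_for(self, amount):
        return 0.10 * amount  # canned answer instead of real tax rules

# Driver: calls the component under test; it is the CALLING program.
class OrderTotalDriver(unittest.TestCase):
    def test_total_includes_tax(self):
        self.assertAlmostEqual(order_total(100.0, TaxServiceStub()), 110.0)

if __name__ == "__main__":
    unittest.main()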
The test-first approach, or Test-Driven Development, is used in component testing: tests are written before the code they verify.
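For illustration (is_leap_year and its rules are hypothetical, not from the original text), a test-first sketch in Python: the test is written first and fails, then just enough code is added to make it pass:

import unittest

def is_leap_year(year):
    # Written after the failing test, with just enough logic to pass it.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # This test existed before is_leap_year; it drove the implementation.
    def test_century_years_must_be_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))
        self.assertFalse(is_leap_year(1900))

if __name__ == "__main__":
    unittest.main()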
Integration Testing
Integration testing tests interfaces between components, and interfaces between systems.
Component Integration Testing tests the interactions between software components and is done after component testing.
System Integration Testing tests the interactions between different systems and may be done after system testing.
Approaches to Integration Testing –
1. Big-bang Integration Testing – All components or systems are integrated simultaneously, and after that everything is tested as a whole.
2. Incremental Integration Testing – All programs are integrated one by one, and a test is carried out after each step.
Integration testing may be carried out by the developers or by a separate team of specialist integration testers.
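As an illustrative sketch of incremental component integration testing (UserService and Repository are hypothetical names, not from the original text), two real components are wired together and their interface is exercised:

import unittest

# Two real components, integrated one by one rather than big-bang.
class Repository:
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows[key]

class UserService:
    def __init__(self, repository):
        self._repository = repository
    def register(self, name):
        self._repository.save(name, {"name": name})
        return self._repository.load(name)

class UserServiceIntegrationTest(unittest.TestCase):
    def test_service_and_repository_work_together(self):
        # No stubs here: the test targets the interface between real parts.
        service = UserService(Repository())
        self.assertEqual(service.register("alice")["name"], "alice")

if __name__ == "__main__":
    unittest.main()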
System Testing
System testing is concerned with the behavior of the whole system/product, as defined by the scope of a development project or product.
System testing requires a controlled test environment that corresponds to the final target or production environment as closely as possible, in order to minimize the risk that environment-specific failures are not found by testing.
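A minimal system-test sketch, assuming the deployed application exposes an HTTP health endpoint in the controlled test environment (the URL and endpoint below are hypothetical, not from the original text):

import unittest
import urllib.request

# Hypothetical base URL of the system deployed in the test environment.
BASE_URL = "http://test-env.example.com"

class SystemSmokeTest(unittest.TestCase):
    def test_deployed_system_responds(self):
        # Exercises the whole deployed stack, not an isolated component.
        with urllib.request.urlopen(BASE_URL + "/health") as response:
            self.assertEqual(response.status, 200)

if __name__ == "__main__":
    unittest.main()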
Acceptance Testing
The goal of acceptance testing is to establish confidence in the system. It is focused on validation: determining whether the system is fit for purpose. Finding defects should not be the main focus of acceptance testing. Executing the acceptance tests requires a test environment that is, in most respects, representative of the production environment. Acceptance testing may occur at more than one level.
User Acceptance Test – It focuses mainly on functionality, thereby validating the fitness-for-use of the system by business users. The user acceptance test is performed by users and application managers.
Operational Acceptance Test (Production Acceptance Test) – It validates whether the system meets the requirements for operation. System administrators perform the operational acceptance test shortly before the system is released. It may include testing of backup/restore, disaster recovery, maintenance tasks, and periodic checks of security vulnerabilities.
Contract Acceptance Testing – It is performed against a contract's acceptance criteria for producing custom-developed software.
Compliance Acceptance Testing – Compliance acceptance testing, or regulation acceptance testing, is performed against the regulations that must be adhered to, such as governmental, legal or safety regulations.
Alpha Testing – It takes place at the developer's site. A cross-section of potential users and members of the developer's organization are invited to use the system. Developers observe the users and note problems. Alpha testing may also be carried out by an independent test team.
Beta Testing – Beta testing, or field testing, sends the system to a cross-section of users who install it and use it under real-world working conditions. The users send records of incidents with the system to the development organization, where the defects are repaired.
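As a sketch of a user acceptance test expressed as an executable check (the invoice-approval workflow, its business rule, and all names are hypothetical, not from the original text), the test mirrors a business scenario rather than a technical one:

import unittest

# Hypothetical business-facing workflow under acceptance.
def approve_invoice(amount, approver_role):
    # Assumed business rule: only managers may approve large invoices.
    return approver_role == "manager" or amount <= 1000

class InvoiceApprovalAcceptanceTest(unittest.TestCase):
    def test_clerk_can_approve_small_invoice(self):
        self.assertTrue(approve_invoice(500, "clerk"))

    def test_only_manager_approves_large_invoice(self):
        self.assertFalse(approve_invoice(5000, "clerk"))
        self.assertTrue(approve_invoice(5000, "manager"))

if __name__ == "__main__":
    unittest.main()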
Testing Glossary – II
Test Policy – A high-level document describing the principles, approach and major objectives of the organization regarding testing.
Test Strategy – A high-level description of the test levels to be performed and the testing within those levels for an organization or program.
Test Approach – The implementation of the
test strategy for a specific project. It typically includes the decisions made
based on the project’s goal and the risk assessment carried out, starting
points regarding the process, the test design techniques to be applied, exit
criteria and test types to be performed.
Coverage (Test Coverage) – The degree, expressed as a
percentage, to which a specified coverage item has been exercised by a test
suite.
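For example (illustrative numbers, not from the original text): if a test suite exercises 45 of the 60 statements in a component, statement coverage is 45 / 60 = 75%.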
Exit Criteria – The set of generic and
specific conditions, agreed upon with stakeholders, for permitting a process to
be officially completed. The purpose of exit criteria is to prevent a task from
being considered completed when there are still outstanding parts of the task
which have not been finished. Exit criteria are used by testing to report
against and to plan when to stop testing.
Test Control – A test management task that
deals with developing and applying a set of corrective actions to get a test
project on track when monitoring shows a deviation from what was planned.
Test Monitoring – A test management task that
deals with the activities related to periodically checking the status of a test
project. Reports are prepared that compare the actual status to that which was
planned.
Test Condition – An item or event of a
component or system that could be verified by one or more test cases, e.g. a
function, transaction, feature, quality attribute, or structural element.
Test Design Specification – A document specifying the test conditions (coverage items) for a test item, the detailed test approach and the associated high-level test cases.
Test Procedure Specification (Test
Script, Manual Test Script)
– A document specifying a sequence of actions for the execution of a test.
Test Suite – A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
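A small sketch using Python's unittest (the Cart class and test names are hypothetical) showing several test cases grouped into one suite for the same component under test:

import unittest

class Cart:
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)

class CartTests(unittest.TestCase):
    def setUp(self):
        self.cart = Cart()
    def test_starts_empty(self):
        self.assertEqual(self.cart.items, [])
    def test_add_item(self):
        self.cart.add("book")
        self.assertIn("book", self.cart.items)

# A test suite groups several test cases for one component under test.
def cart_suite():
    suite = unittest.TestSuite()
    suite.addTest(CartTests("test_starts_empty"))
    suite.addTest(CartTests("test_add_item"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(cart_suite())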
Test Execution – The process of running a test on the component or system under test, producing actual results.
Test Log – A chronological record of
relevant details about the execution of tests.
Incident – Any event occurring that
requires investigation.
Re-testing / Confirmation
Testing – Testing
that runs test cases that failed the last time they were run, in order to
verify the success of corrective actions.
Regression Testing – Testing of a previously
tested program following modification to ensure that defects have not been
introduced or uncovered in unchanged areas of the software as a result of the changes
made.
OR
It is the testing done to ensure that changed functionality does not affect unchanged functionality.
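As an illustrative example (the test identifier is hypothetical): if a fix for a login defect makes the previously failing test TC-17 pass again, re-running TC-17 is confirmation testing; re-running the rest of the login and checkout tests to check that the fix broke nothing else is regression testing.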
Test Summary Report – A document summarizing
testing activities and results. It also contains an evaluation of the
corresponding test items against exit criteria.
Testware – Artifacts produced during the test process that are required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
Independence – Separation of responsibilities,
which encourages the accomplishment of objective testing.