QA–QC Arena – Software Testing Home for beginners and experts

Test Levels


Component Testing (Unit/Module/Program Testing)
Component Testing verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable.

Stubs & Drivers are used to replace the missing software and simulate the interface between the software components.

Stub – A stub is called from the software component to be tested; the stub is the called program.
Driver – A driver calls the component to be tested; the driver is the calling program.
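
To make the distinction concrete, here is a minimal sketch in Python (all names are invented for illustration): the driver is the test code that calls the component under test, and the stub stands in for a dependency that is not available yet.

    # Hypothetical component under test: depends on a price service that is not built yet.
    def calculate_discounted_price(item_id, price_service):
        price = price_service.get_price(item_id)   # call into the (missing) dependency
        return round(price * 0.9, 2)               # apply a 10% discount

    # Stub: stands in for the missing price service; it is the *called* program.
    class PriceServiceStub:
        def get_price(self, item_id):
            return 100.0                           # canned answer, no real lookup

    # Driver: test code that *calls* the component under test.
    def test_discount_is_applied():
        assert calculate_discounted_price("A1", PriceServiceStub()) == 90.0

    test_discount_is_applied()
    print("component test passed")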

A Test–First Approach or Test–Driven Development (writing the tests before the code) may be used in Component Testing.
Component Testing includes
  • Functional Testing (Functionality Testing)
  • Non–Functional Testing (Performance Testing)
  • Structural Testing (Decision Coverage)
  • Regression Testing (testing of changes)
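
The test-first approach mentioned above can be sketched as follows (using Python's standard unittest module; the is_leap_year example is invented): the test is written first, and only then is the component implemented to make it pass.

    import unittest

    # Step 1 (test first): write the test before the component exists.
    class LeapYearTest(unittest.TestCase):
        def test_century_years_divisible_by_400_are_leap(self):
            self.assertTrue(is_leap_year(2000))
            self.assertFalse(is_leap_year(1900))

    # Step 2: implement just enough code to make the test pass.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    if __name__ == "__main__":
        unittest.main()
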
Integration Testing
Integration Testing tests interfaces between components, and interfaces between systems.
Component Integration Testing tests the interactions between software components and is done after component testing.
System Integration Testing tests the interactions between different systems and may be done after system testing.

Approaches to Integration Testing –
1. Big-bang Integration Testing
All components or systems are integrated simultaneously and after that everything is tested as a whole.
  • Advantage – Everything is finished before integration testing starts; no need to simulate parts.
  • Disadvantage – It is time-consuming and difficult to trace the cause of failures with this late integration.
2. Incremental Integration Testing
All programs are integrated one by one, and a test is carried out after each step.
  • Advantage – The defects are found early in a smaller assembly when it is relatively easy to detect the cause.
  • Disadvantage – It can be time-consuming since stubs and drivers have to be developed and used in the test.
Incremental Integration Testing possibilities
  • Top-down – Testing takes place from top to bottom, following the control flow or architectural structure (e.g. starting from the GUI or main menu). Components or systems are substituted by stubs.
  • Bottom-up – Testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers.
  • Functional incremental – Integration and testing takes place on the basis of the functions or functionality, as documented in the functional specification.
Integration testing may be carried out by the developers or by a separate team of specialist integration testers.
Integration Testing includes
  • Functional Testing (Functionality Testing of integration between different components or systems)
  • Non–Functional Testing (Performance Testing)
  • Structural Testing
  • Regression Testing (testing of changes)
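
As a rough sketch of component integration testing (Python, invented names), the test below exercises the interface between two components that have already been component tested individually, checking that one can consume what the other produces:

    # Two separately tested components ...
    def parse_order(raw):
        # component A: turns a raw text line into a structured order
        item, qty = raw.split(",")
        return {"item": item.strip(), "qty": int(qty)}

    def price_order(order, unit_price=5.0):
        # component B: computes the total for a structured order
        return order["qty"] * unit_price

    # Component integration test: checks that B can consume what A produces,
    # i.e. that the interface (dictionary keys and types) between them is correct.
    def test_parse_then_price():
        order = parse_order("pencil, 3")
        assert price_order(order) == 15.0

    test_parse_then_price()
    print("integration test passed")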

System Testing
System Testing is concerned with the behavior of the whole system/product as defined by the scope of a development project or product.

System testing requires a controlled test environment and it should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found by testing.
System Testing includes
  • Functional Testing (Functionality Testing)
  • Non–Functional Testing (Performance & Reliability Testing)
  • Structural Testing (to assess the thoroughness of testing elements such as menu dialog structure or web page navigation)
  • Regression Testing (testing of changes)
Acceptance Testing
The goal of acceptance testing is to establish confidence in the system.
It is focused on a validation type of testing, whereby we are trying to determine whether the system is fit for purpose.
Finding defects should not be the main focus in acceptance testing.
The execution of the acceptance test requires a test environment that is, in most respects, representative of the production environment.
Acceptance testing may occur at more than just a single level.
User Acceptance Test – It focuses mainly on the functionality, thereby validating the fitness-for-use of the system by the business user. The user acceptance test is performed by the users and application managers.
Operational Acceptance Test (Production Acceptance Test) – It validates whether the system meets the requirements for operation.  System administration will perform the operational acceptance test shortly before the system is released. The operational acceptance test may include testing of backup/restore, disaster recovery, maintenance tasks and periodic check of security vulnerabilities.
Contract Acceptance Testing – It is performed against a contract's acceptance criteria for producing custom-developed software.
Compliance Acceptance Testing – Compliance acceptance testing or regulation acceptance testing is performed against the regulations which must be adhered to, such as governmental, legal or safety regulations.
Alpha Testing – It takes place at the developer’s site. A cross-section of potential users and members of the developer's organization are invited to use the system. Developers observe the users and note problems. Alpha testing may also be carried out by an independent test team.
Beta Testing – Beta testing, or field testing, sends the system to a cross-section of users who install it and use it under real-world working conditions. The users send records of incidents with the system to the development organization where the defects are repaired.
Acceptance Testing includes
  • Functional Testing (Functionality Testing)
  • Non–Functional Testing (Performance & Reliability Testing)



Testing Glossary – II

Test Policy – A high level document describing the principles, approach and major objectives of the organization regarding testing.

Test Strategy – A high level description of the test levels to be performed and testing within those levels for an organization or program.

Test Approach – The implementation of the test strategy for a specific project. It typically includes the decisions made based on the project’s goal and the risk assessment carried out, starting points regarding the process, the test design techniques to be applied, exit criteria and test types to be performed.

Coverage (Test Coverage) – The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
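
For illustration, coverage is calculated as a percentage of the coverage items exercised (the figures below are invented):

    coverage = (coverage items exercised / total coverage items) × 100%

E.g. if a test suite exercises 45 of the 60 decision outcomes in a component, decision coverage = 45 / 60 × 100 = 75%.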

Exit Criteria – The set of generic and specific conditions, agreed upon with stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report against and to plan when to stop testing.

Test Control – A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.

Test Monitoring – A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actual status to that which was planned.

Test Condition – An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

Test Design Specification – A document specifying the test conditions (coverage items) for a test item, the detailed test approach and the associated high–level test cases.

Test Procedure Specification (Test Script, Manual Test Script) – A document specifying a sequence of actions for the execution of a test.

Test Suite – A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

Test Execution – The process of running a test on the component or system under test, producing actual results.

Test Log – A chronological record of relevant details about the execution of tests.

Incident – Any event occurring that requires investigation.

Re-testing / Confirmation Testing – Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Regression Testing – Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made.
OR
It is the testing done to ensure that changed functionality is not affecting unchanged functionality.

Test Summary Report – A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.

Testware – Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set–up and clear–up procedures, files, databases, environment, and any additional software or utilities used in testing.

Independence – Separation of responsibilities, which encourages the accomplishment of objective testing.

Testing Glossary – I

Software – Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system.

Risk – A factor that could result in future negative consequences; usually expressed as impact and likelihood.

Error (Mistake) – A human action that produces an incorrect result.

Defect (Bug, Fault) – A flaw in a component or system that can cause the component or system to fail to perform its required function.

Failure – Deviation of the component or system from its expected delivery, service or result.

Quality – The degree to which a component, system or process meets specified requirements and/or user/ customer needs and expectations.

Exhaustive Testing – A test approach in which the test suite comprises all combinations of input values and preconditions.

Testing – The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products, to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

Code – Computer instructions and data definitions expressed in a programming language or in a form output by an assembler, compiler, or other translator.

Test Basis – All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.

Requirement – A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.

Review – An evaluation of product or project status to ascertain discrepancies from planned results and to recommend improvements. E.g. – Management Review, Informal Review, Technical Review, Inspection, and Walkthrough.

Test Case – A set of input values, execution preconditions, expected results and execution post conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
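
A minimal, invented example of a test case for a login function:

    Test Case ID    – TC_LOGIN_01
    Test Condition  – Login with a valid user
    Precondition    – User "demo_user" exists and is not locked
    Input Values    – username = demo_user; password = Demo@123
    Expected Result – The user is logged in and the home page is displayed
    Postcondition   – The login is recorded in the audit log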

Test Objective – A reason or purpose for designing and executing a test.

Debugging – The process of finding, analyzing and removing the causes of failures in software.

Test Plan – A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

Testing Jokes

Conversation between a Software Developer and a Software Tester (image)

Board by a Software Tester (image)

Testing Thoughts

While reading various articles on testing, I came across the thoughts below.
  • Developers write code, testers find defects. It is similar to students writing papers and professors checking them.
  • A picture is worth a thousand test cases.
  • Certification exams do not measure the quality of a tester. Until they do, they merely facilitate discriminatory hiring practices.
  • Software Testers do not make software; they only make them better.
  • To tell somebody that he is wrong is called criticism. To do so officially is called testing.
  • Good programmers write code for humans first and computers next.
  • Just because you’ve counted all the trees doesn’t mean you’ve seen the forest.
  • Software testers succeed where others fail.

Fundamental Test Process

1. Test Planning and Control
Test Planning –
  • Determine the scope and risks and identify the objectives of testing.
  • Determine the test approach (techniques, test items, coverage, identifying teams involved in testing, testware).
  • Implement the test policy and/or the test strategy.
  • Determine the required test resources (e.g. people, test environment, PCs).
  • Schedule test analysis and design tasks, test implementation, execution and evaluation.
  • Determine the exit criteria.
Test Control –
  • Measure and analyze the results of reviews and testing.
  • Monitor and document progress, test coverage and exit criteria.
  • Provide information on testing.
  • Initiate corrective actions.
  • Make decisions.
2. Test Analysis and Design
  • Review the test basis (product risk analysis, requirements, architecture, design specifications, and interfaces).
  • Identify test conditions.
  • Design the tests.
  • Evaluate testability of the requirements and system.
  • Design the test environment set-up and identify any required infrastructure and tools.
3. Test Implementation and Execution
Implementation –
  • Develop and prioritize test cases, create test data for those tests.
  • Create test suites.
  • Implement and verify the environment.
Execution –
  • Execute the test suites and individual test cases (manually or by using test execution tools).
  • Log the outcome of test execution.
  • Compare actual results with expected results.
  • Report discrepancies as incidents (if there are differences between actual & expected results).
  • Repeat test activities (Confirmation Testing & Regression Testing) as a result of action taken for each discrepancy.
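
The execution activities above can be pictured with a small sketch (Python, invented data): run the tests, compare actual with expected results, log the outcomes, and report discrepancies as incidents.

    # Minimal illustration: each entry is (test id, actual result, expected result).
    executed_tests = [
        ("TC_01", "login ok",  "login ok"),
        ("TC_02", "error 500", "order saved"),
    ]

    test_log, incidents = [], []
    for test_id, actual, expected in executed_tests:
        status = "pass" if actual == expected else "fail"
        test_log.append(f"{test_id}: {status}")
        if status == "fail":
            # a discrepancy between actual and expected results is reported as an incident
            incidents.append(f"{test_id}: expected '{expected}', got '{actual}'")

    print("\n".join(test_log))
    print("incidents:", incidents)
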
4. Evaluating Exit Criteria and Reporting
  • Check test logs against the exit criteria specified in test planning.
  • Assess if more tests are needed or if the exit criteria specified should be changed.
  • Write a test summary report for stakeholders.
5. Test Closure Activities
  • Check which planned deliverables have actually been delivered and ensure all incident reports have been resolved through defect repair or deferral.
  • Finalize and archive testware (scripts, test environment).
  • Hand over the testware to the maintenance team.
  • Evaluate how the testing went and analyze lessons learned for future releases and projects.


How much testing is enough?

Testing Principle – Exhaustive Testing is impossible
Testing everything (all combinations of input values and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, we can use risks and priorities to focus testing efforts.
Exhaustive Testing – A test approach in which the test suite comprises all combinations of input values and preconditions.
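
A small back-of-the-envelope illustration (numbers invented) shows why: a screen with just 10 independent input fields, each accepting only 5 valid values, already allows 5^10 = 9,765,625 input combinations, before invalid values, value ordering or differing preconditions are even considered.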

Factors to decide how much testing is enough –
1. Technical & business risks related to the product
2. Project Constraints such as time & budget

Testing & Quality

Quality –
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

Testing –
Testing helps to measure the quality of software in terms of the number of defects found, the tests run, and the amount of the system covered by the tests.
Testing can give confidence in the quality of the software if it finds few or no defects.
Testing helps to find defects and potential failures during software development, maintenance and operations.

Role of testing in Software Development, Maintenance, and Operations –
Rigorous testing is necessary during development and maintenance to identify defects, in order to reduce failures in the operational environment and increase the quality of the operational system.

Cost of Quality & Cost of Defects

Cost of Quality –
1. Prevention Cost – Prevention cost is the cost of modifying the process (establishing methods & procedures, training, acquiring tools) to avoid bugs.
2. Appraisal Cost – Appraisal cost is the cost of activities designed to find quality problems (e.g. any type of testing).
3. Failure Cost – Failure cost is the cost of fixing the bugs.

Cost of Defects
The cost of finding and fixing defects increases over time. If an error is made and the consequent defect is detected at the requirements specification stage, it is relatively cheap to find and fix.

Software Testing Principles

1. Testing shows presence of Defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
2. Exhaustive Testing is impossible
Testing everything (all combinations of input values and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, we can use risks and priorities to focus testing efforts.
Exhaustive Testing – A test approach in which the test suite comprises all combinations of input values and preconditions.
3. Early Testing
Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.
4. Defect Clustering
A small number of modules contain most of the defects discovered during pre-release testing or show the most operational failures.
5. Pesticide Paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this Pesticide Paradox, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
6. Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
7. Absence of errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the user’s needs and expectations.

Error – Defect – Failure

Error (Mistake) – A human action that produces an incorrect result.

Defect (Bug, Fault) – A flaw in a component or system that can cause the component or system to fail to perform its required function.

Failure – Deviation of the component or system from its expected delivery, service or results.
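
A small, invented code illustration of the chain: the programmer's mistake (error) introduces a flaw (defect) into the code, and the flaw shows up as a failure when the code is executed.

    # Error: the programmer meant to multiply but typed '+' (a human mistake).
    def rectangle_area(length, width):
        return length + width        # Defect: the flaw now sits in the code.

    # Failure: when executed, the system deviates from its expected result.
    print(rectangle_area(3, 4))      # prints 7, but the expected result is 12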