Testing Glossary

Plain-English definitions of software testing terms. ISTQB vocabulary, QA jargon, and industry terms — explained clearly without the fluff.

A
Acceptance Testing ISTQB
Testing performed to determine whether a system satisfies its acceptance criteria and to enable stakeholders to decide whether to accept the system. Usually the final testing phase before go-live.
Actual Result
What the system actually did when a test was executed. Compared against the expected result to determine pass or fail. If they differ, you have a defect.
Ad Hoc Testing
Unstructured testing with no formal test cases, plan, or documentation. Different from exploratory testing — exploratory testing is disciplined and charter-driven. Ad hoc is genuinely random.
Agile Testing
Testing practice that follows agile principles — testing happens continuously throughout the sprint, not at the end. Testers collaborate closely with developers and product owners throughout development.
B
Black Box Testing ISTQB
Testing based on analysis of the specification or requirements — no knowledge of the internal code structure required. Techniques include equivalence partitioning, BVA, and decision tables.
Boundary Value Analysis (BVA) ISTQB
Testing the values at and immediately outside the edges of equivalence partitions. Most defects hide at boundaries — off-by-one errors, < vs ≤ mistakes. See full reference.
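As a sketch, consider a hypothetical age check that accepts 18 to 65 inclusive (the function and the range are invented for illustration). BVA tests the values at and just outside each edge:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical rule: accept ages from 18 to 65 inclusive."""
    return 18 <= age <= 65

# Boundary value analysis: test at and immediately outside each boundary.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"failed at {age}"
print("all boundary cases pass")
```

If the developer had written `18 < age` instead of `18 <= age`, only the test at 18 would catch it, which is exactly why boundaries get their own tests.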
Branch Coverage ISTQB
A white box metric measuring whether every branch from every decision point (if/else, switch) has been executed — both the true and false paths. Stronger than statement coverage. See full reference.
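A minimal sketch of why branch coverage is stronger, using an invented discount rule: one test can execute every statement while never taking the false path of the `if`.

```python
def apply_discount(total: float, is_member: bool) -> float:
    """Hypothetical checkout rule: members get 10.00 off."""
    if is_member:
        total -= 10.0
    return total

# This single test executes every statement (100% statement coverage):
assert apply_discount(100.0, True) == 90.0

# ...but the false branch of the `if` was never taken. Branch coverage
# additionally requires a test where the condition is false:
assert apply_discount(100.0, False) == 100.0
```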
Bug / Defect / Fault
An imperfection in a component or system. ISTQB distinguishes: a mistake (human error) produces a fault/defect (in the code), which can cause a failure (observable incorrect behaviour). In practice, "bug" and "defect" are used interchangeably.
C
Cause-Effect Graphing ISTQB
A technique that models the logical relationships between inputs (causes) and outputs (effects) using a graph, then systematically derives test cases. Useful for complex multi-condition business rules.
Checklist-Based Testing ISTQB
Testing using a list of items to check, derived from experience and standards. More flexible than formal test cases — the tester decides how to verify each item. See full reference.
Component Testing (Unit Testing)
Testing of individual software components in isolation. Usually done by developers. The earliest test level — finding defects here is cheapest to fix.
Confirmation Testing (Retest)
Testing a specific defect fix to confirm the defect has been resolved. Not the same as regression testing — confirmation testing targets the specific fix only.
Coverage
The degree to which a specified coverage item has been exercised by a test suite. Expressed as a percentage. Examples: statement coverage, branch coverage, requirements coverage. High coverage doesn't guarantee quality — it means you've exercised what you measured.
CTFL / CTAL ISTQB
CTFL = Certified Tester Foundation Level. The entry-level ISTQB certification. CTAL = Certified Tester Advanced Level, with specialisations including Test Analyst (CTAL-TA) and Test Manager (CTAL-TM).
D
Decision Table Testing ISTQB
Maps every combination of conditions to their expected actions in a table. Each column is a test case. Gold standard for complex business rules. See full reference.
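A toy decision table for a shipping rule (the conditions, actions, and prices are invented). Each column of the table becomes one test case:

```python
# Conditions          | T1   | T2   | T3   | T4
# member?             | Y    | Y    | N    | N
# order >= 50?        | Y    | N    | Y    | N
# Action: shipping    | 0.00 | 0.00 | 0.00 | 4.99
decision_table = {
    (True,  True):  0.00,
    (True,  False): 0.00,
    (False, True):  0.00,
    (False, False): 4.99,
}

def shipping_cost(member: bool, order_total: float) -> float:
    """Look up the action for this combination of conditions."""
    return decision_table[(member, order_total >= 50)]

# One test per column:
assert shipping_cost(True, 60) == 0.00   # T1
assert shipping_cost(True, 20) == 0.00   # T2
assert shipping_cost(False, 60) == 0.00  # T3
assert shipping_cost(False, 20) == 4.99  # T4
```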
Defect Lifecycle
The states a defect moves through: New → Assigned → Open → Fixed → Retest → Closed (or Rejected / Deferred). Understanding this ensures defects are tracked, prioritised, and resolved systematically. See defect management.
Defect Density
The number of confirmed defects divided by the size of the software component (e.g., defects per 1,000 lines of code or per function point). Used to compare quality across releases or components.
DRE (Defect Removal Efficiency)
The percentage of defects found before release vs total defects (pre-release + post-release). DRE = pre-release defects / (pre-release + post-release) × 100. A good benchmark is 95%+. Lower DRE means defects are escaping to customers.
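The formula as a one-liner, with invented defect counts:

```python
def defect_removal_efficiency(pre_release: int, post_release: int) -> float:
    """DRE = pre-release defects / (pre-release + post-release) * 100."""
    return pre_release / (pre_release + post_release) * 100

# 190 defects caught in testing, 10 escaped to production:
assert defect_removal_efficiency(190, 10) == 95.0
```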
E
Entry / Exit Criteria
Conditions that must be met to start (entry) or complete (exit) a test activity. Example entry: test environment ready, build deployed, smoke tests pass. Example exit: all critical defects resolved, regression complete, sign-off received.
Equivalence Partitioning (EP) ISTQB
Divides the input domain into groups (partitions) where all values should behave identically. Test one representative per partition instead of every value. See full reference.
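Using the same hypothetical age rule as an example, EP needs only one representative per partition rather than every possible integer:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical rule: accept ages from 18 to 65 inclusive."""
    return 18 <= age <= 65

# Three partitions, one representative each:
representatives = {
    10: False,  # partition: below 18 (invalid)
    40: True,   # partition: 18 to 65 (valid)
    70: False,  # partition: above 65 (invalid)
}

for age, expected in representatives.items():
    assert is_valid_age(age) == expected, f"failed at {age}"
```

EP pairs naturally with boundary value analysis, which then targets the edges of each partition.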
Error Guessing ISTQB
Using experience, knowledge of past defects, and intuition to design tests that target likely failure points. Formalised as a fault list. See full reference.
Exploratory Testing ISTQB
Simultaneous test design, execution, and learning — guided by a charter and time box. Not ad hoc: it's disciplined investigation. Most effective for complex systems or incomplete specifications. See full reference.
Expected Result
What should happen when a test is executed, as predicted from the specification or requirements. Without an expected result, you cannot determine pass or fail.
F
Failure
Observable incorrect behaviour of a component or system — the deviation between actual and expected result during execution. A failure is caused by a defect; a defect is caused by a mistake (human error).
Flaky Test
A test that passes and fails non-deterministically — same code, same inputs, different results. Caused by race conditions, timing dependencies, shared state, or environment inconsistencies. Flaky tests erode trust in the test suite.
Functional Testing
Testing that evaluates what a system does — its functions and features — against requirements. Contrast with non-functional testing (performance, security, accessibility) which tests how well it does it.
H
Happy Path
The default, error-free flow through a system using valid inputs. The happy path is the most important flow to test first — if it's broken, nothing else matters. But a system that only works on the happy path is not production-ready.
I
Integration Testing
Testing the interfaces and interactions between integrated components or systems. Finds defects in the communication between components — things that work in isolation but break when combined.
ISTQB
International Software Testing Qualifications Board. The global body that defines software testing certifications (CTFL, CTAL-TA, CTAL-TM, etc.) and maintains the Standard Glossary of Terms.
L
Load Testing
Testing a system's behaviour under expected and peak load conditions. Measures response times, throughput, and resource utilisation as concurrent users increase. Part of performance testing.
M
Mutation Testing
A technique that introduces deliberate small faults (mutations) into the source code, then checks whether the test suite detects them. Tests the quality of the tests themselves. A test suite that fails to kill mutations is inadequate.
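A hand-rolled illustration (real tools such as mutmut for Python or PIT for Java automate this): mutate one operator and see whether the suite notices. The functions and tests are invented.

```python
def original(age):
    return 18 <= age <= 65

def mutant(age):
    # The "<=" on the lower bound mutated to "<"
    return 18 < age <= 65

def suite(fn):
    """Run the test suite against implementation fn; True if all pass."""
    return fn(40) is True and fn(70) is False and fn(18) is True

assert suite(original)     # the suite passes on the real code
assert not suite(mutant)   # the suite "kills" the mutant: good tests
```

Drop the `fn(18)` boundary test and the mutant survives, revealing a gap in the suite that line coverage alone would never show.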
P
Pairwise Testing ISTQB
A combinatorial technique that ensures every pair of parameter values is covered at least once. Dramatically reduces the number of test cases needed for multi-variable inputs while maintaining strong coverage. See full reference.
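A small sketch with three invented two-value parameters: the full factorial needs 8 cases, but a hand-picked set of 4 covers every pair, which the code verifies.

```python
from itertools import combinations, product

params = {
    "browser": ["chrome", "firefox"],
    "os": ["windows", "macos"],
    "lang": ["en", "fr"],
}

# Full factorial: 2 * 2 * 2 = 8 cases. This set of 4 covers all pairs:
cases = [
    ("chrome",  "windows", "en"),
    ("chrome",  "macos",   "fr"),
    ("firefox", "windows", "fr"),
    ("firefox", "macos",   "en"),
]

names = list(params)
for (i, a), (j, b) in combinations(enumerate(names), 2):
    required = set(product(params[a], params[b]))
    covered = {(case[i], case[j]) for case in cases}
    assert required == covered, f"missing pairs for {a} x {b}"
print("4 cases cover all pairs (full factorial needs 8)")
```

The saving grows fast: with more parameters and values, pairwise sets stay small while full factorials explode combinatorially. Dedicated tools generate the covering sets automatically.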
Pesticide Paradox ISTQB
One of the 7 testing principles: if the same tests are repeated, they will eventually stop finding new defects. Tests must be regularly reviewed and new tests written to cover different areas.
Priority
How urgently a defect should be fixed, based on business need. Not the same as severity. A typo on the homepage (low severity) may be high priority because it's visible to every visitor.
R
Regression Testing ISTQB
Re-running tests on a modified system to ensure existing functionality hasn't broken. The reason regression matters: fixing one thing often breaks another. See full reference.
Risk-Based Testing ISTQB
Prioritising test effort based on risk exposure (likelihood × impact). High-risk areas get tested first, deepest, and most often. When time runs out, you have evidence of what was and wasn't tested. See full reference.
S
SDLC (Software Development Lifecycle)
The process used to plan, create, test, and deliver software. Models include waterfall, V-model, agile, and DevOps. Testing fits into every phase — not just at the end.
Severity
The degree of impact a defect has on the system. Typical scale: Critical (system crash, data loss) → High → Medium → Low (cosmetic). Not the same as priority.
Smoke Testing ISTQB
A quick, shallow test pass to confirm a build is stable enough for more thorough testing. If smoke fails, you stop and reject the build — it's not worth testing further. See full reference.
State Transition Testing ISTQB
Models a system as a finite set of states and the events that trigger transitions between them. Tests valid transitions and verifies invalid ones are rejected. See full reference.
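A minimal state machine for an invented order workflow, showing both sides of the technique: valid transitions succeed and invalid ones are rejected.

```python
# (state, event) -> next state; anything absent is an invalid transition.
TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
    ("new", "cancel"): "cancelled",
    ("paid", "cancel"): "cancelled",
}

def next_state(state: str, event: str) -> str:
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"invalid transition: {event} from {state}")
    return TRANSITIONS[key]

# Test valid transitions:
assert next_state("new", "pay") == "paid"
assert next_state("paid", "ship") == "shipped"

# Verify an invalid transition is rejected:
try:
    next_state("delivered", "cancel")
    raise AssertionError("should have been rejected")
except ValueError:
    pass
```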
Statement Coverage ISTQB
The percentage of executable statements in the code that have been run at least once. The most basic white box metric — a floor, not a ceiling. 100% statement coverage does not mean 100% branch coverage. See full reference.
Static Testing ISTQB
Testing work products (requirements, code, design documents) without executing the software. Includes reviews, walkthroughs, and static analysis. Finds defects earlier and more cheaply than dynamic testing.
System Testing
Testing the complete, integrated system against its specified requirements. Typically performed by an independent test team. Validates end-to-end behaviour across the whole application.
T
Test Basis
The body of knowledge used to design tests — requirements, specifications, design documents, user stories, code. If the test basis is vague or missing, your tests will be too.
Test Case
A documented set of preconditions, inputs, actions, expected results, and postconditions for a specific test objective. A good test case has a clear expected result so anyone can determine pass or fail.
Test Charter
A short statement defining the target, approach, and information goal of an exploratory testing session. Format: "Explore [target] using [resources] to discover [information]." Gives structure without scripting every step.
Test Estimation ISTQB
Predicting the effort, time, and resources required for testing. Techniques include WBS (Work Breakdown Structure) and three-point PERT estimation. See full reference.
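The three-point PERT formula weights the most likely estimate four times as heavily as the extremes (the numbers below are invented):

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Three-point (PERT) estimate: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Testing a feature: best case 2 days, most likely 4, worst case 12.
assert pert_estimate(2, 4, 12) == 5.0  # (2 + 16 + 12) / 6
```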
Test Metrics ISTQB
Quantitative measures used to track testing progress and quality. Examples: defect density, test execution rate, DRE, pass/fail ratio. Metrics are useful only when they drive decisions. See full reference.
Test Plan ISTQB
A document describing the scope, approach, resources, schedule, entry/exit criteria, and risks for a test project. Created per project. Not the same as a test strategy, which is organisation-wide and more stable.
Test Strategy ISTQB
The organisational approach to testing — how testing is done across projects, tools, environments, and levels. More stable than a test plan. The test strategy is reused; the test plan is created per project.
Traceability Matrix
A document that maps requirements to test cases, ensuring every requirement is covered by at least one test. Bidirectional traceability also maps tests back to requirements, revealing orphan tests (no requirement) and untested requirements.
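A toy bidirectional check (the requirement and test IDs are invented): find requirements with no test and tests pointing at no requirement.

```python
requirements = {"REQ-1", "REQ-2", "REQ-3"}
tests_to_reqs = {
    "TC-01": "REQ-1",
    "TC-02": "REQ-2",
    "TC-03": "REQ-9",  # references a requirement that doesn't exist
}

covered = set(tests_to_reqs.values())
untested = requirements - covered  # requirements with no test
orphans = {t for t, r in tests_to_reqs.items() if r not in requirements}

assert untested == {"REQ-3"}   # REQ-3 has no test case
assert orphans == {"TC-03"}    # TC-03 traces to nothing
```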
U
Unit Testing
See Component Testing. Testing of the smallest testable parts of an application in isolation. Typically written and run by developers.
Use Case Testing ISTQB
Derives test cases from use case flows — the basic (happy path), alternate, and exception flows for a user goal. Tests end-to-end user journeys, not just individual functions. See full reference.
UAT (User Acceptance Testing)
Acceptance testing performed by actual end users or business stakeholders to confirm the system meets their needs. The final gate before production deployment. Also called business acceptance testing.
V
V-Model
A development lifecycle model where each development phase has a corresponding test phase. Requirements → Acceptance testing; System design → System testing; Component design → Integration testing; Coding → Unit testing. Emphasises test planning in parallel with development.
Verification vs Validation
Verification: "Are we building the product right?" — checking that work products conform to specifications. Validation: "Are we building the right product?" — checking that the system meets stakeholder needs. Both are required for quality software.
W
White Box Testing ISTQB
Testing based on analysis of the internal structure of the code. Requires knowledge of implementation. Techniques include statement coverage and branch coverage.
