Test Planning

A test plan is the contract between the test team and the project: what will be tested, how, by whom, and with what success criteria. Without it, testing is improvised; with a bad one, testing is theatre. This page covers what a plan must contain, entry and exit criteria, estimation, and the critical distinction between a test strategy and a test plan.

Audience: Senior Test Lead · Syllabus: ISTQB CTFL 5.1 · CTAL-TM Ch. 2

What it is

A test plan is a document (or set of lightweight artefacts in agile) that defines the scope, approach, resources, schedule, and criteria for a testing effort. Its primary purpose is to make the test team’s intent visible and agreed — by stakeholders, developers, product owners, and management — before testing begins.

Planning is not bureaucracy. Even in a two-person team working in two-week sprints, someone needs to have thought through: what are we testing, when are we done, what happens if we run out of time, and who is responsible for what. A test plan documents those decisions so they are explicit and agreed, not implicit and assumed.

ISTQB definition (CTFL 5.1): “Test planning defines the test objectives and the approach to achieve them within the constraints of the project.” Test planning is a continuous activity — the plan is updated as the project evolves, not written once and filed.

Test strategy vs test plan

These are distinct artefacts that are frequently confused:

  • Test strategy (organisational level) — a high-level document that defines the organisation’s standard approach to testing: which testing levels are performed (unit, integration, system, acceptance), which techniques are preferred, how regression is managed, what automation tooling is standard. It applies across multiple projects. It changes infrequently.
  • Test plan (project level) — a specific document for a specific project or release. It derives from the strategy but specifies scope, schedule, resources, risks, and criteria for this project. It changes as the project changes.

In practice, many agile teams do not maintain a separate strategy document — it lives in the team’s working agreements. The test plan is then the only formal planning artefact, and it should at minimum include the items listed below.

What a test plan must contain

IEEE 829 defined the classic structure (since superseded by ISO/IEC/IEEE 29119-3). Whether you follow the full standard or a lightweight agile variant, these elements must be addressed somewhere:

  1. Scope — what is in scope and what is explicitly out of scope. Both matter. Out-of-scope items prevent scope creep and make explicit what the team is not responsible for testing.
  2. Test approach / strategy — which testing techniques will be applied, at what levels (unit, integration, system, acceptance), manual or automated, and in what sequence.
  3. Entry criteria — conditions that must be met before testing can begin. See the section below.
  4. Exit criteria — conditions that define when testing is complete. See the section below.
  5. Resources — who is doing the testing, what test environments are needed, what data, what tools.
  6. Schedule — timeline for test design, environment setup, execution phases, and reporting milestones.
  7. Risks and contingencies — what could go wrong (environment not ready, requirements incomplete, resource unavailable) and what the response plan is. Derived from a risk register.
  8. Deliverables — what will be produced: test cases, test reports, defect reports, sign-off document.

Entry and exit criteria

These are the most important and most frequently omitted elements of a test plan.

Entry criteria define the pre-conditions for testing to start. Common criteria:

  • Feature code is complete and reviewed (no obvious compile errors, PR merged)
  • Test environment is available and verified as stable
  • Test data is in place (accounts, products, reference data seeded)
  • Unit tests pass (typically ≥ agreed coverage threshold)
  • Requirements/acceptance criteria are documented and signed off

If testing starts before entry criteria are met, the test team wastes time testing against incomplete or broken builds. Define entry criteria and enforce them.
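
Enforcement can be as simple as a scripted checklist gate. A hedged sketch: the criteria names mirror the bullets above, and in practice the boolean values would come from CI and environment checks rather than being hard-coded.

```python
# Entry criteria as an explicit gate: testing starts only when every
# pre-condition holds. The False entry here is illustrative.
entry_criteria = {
    "feature code merged and reviewed": True,
    "test environment stable": True,
    "test data seeded": False,
    "unit tests pass at agreed coverage": True,
    "acceptance criteria signed off": True,
}

blockers = [name for name, met in entry_criteria.items() if not met]
if blockers:
    print("Do not start testing; unmet entry criteria:", ", ".join(blockers))
else:
    print("Entry criteria met; testing may begin.")
```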

Exit criteria define when testing is complete. Common criteria:

  • All P1 defects resolved and verified closed
  • P2 defects resolved or risk-accepted with documented justification
  • Test execution rate: 100% of planned test cases executed
  • Test pass rate: ≥ 95% overall, 100% of critical path tests
  • Requirements coverage: ≥ 98% of acceptance criteria have at least one passing test
  • Regression suite: passing at agreed threshold
  • Sign-off received from product owner or release authority

Without exit criteria, testing can end for one of two bad reasons: you ran out of time, or someone said “I think it’s good enough.” Exit criteria make the decision objective and auditable.
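
Because each criterion is a threshold, the "are we done?" question can be computed rather than argued. A minimal sketch, with illustrative field names and the thresholds from the list above:

```python
from dataclasses import dataclass

@dataclass
class RunMetrics:
    open_p1: int          # open P1 defects
    executed: float       # executed / planned test cases
    pass_rate: float      # passed / executed
    ac_coverage: float    # acceptance criteria with at least one passing test

def exit_criteria_met(m: RunMetrics) -> bool:
    # Mirrors the thresholds above; every criterion must hold, not just most.
    return (m.open_p1 == 0
            and m.executed >= 1.0
            and m.pass_rate >= 0.95
            and m.ac_coverage >= 0.98)

print(exit_criteria_met(RunMetrics(open_p1=0, executed=1.0,
                                   pass_rate=0.97, ac_coverage=0.99)))  # True
print(exit_criteria_met(RunMetrics(open_p1=2, executed=1.0,
                                   pass_rate=0.97, ac_coverage=0.99)))  # False
```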

Suspension criteria are a third type worth defining: they specify when to pause testing due to a blocker (e.g., a P1 defect that makes 30% of test cases impossible to execute). Define them upfront so the team knows when to stop rather than continuing to execute tests that produce invalid results.

Estimation in test planning

The test plan must include a time and effort estimate. Three common approaches:

  • Work Breakdown Structure (WBS) — decompose testing into tasks (test design, review, environment setup, execution, defect retesting, reporting) and estimate each. Bottom-up, most accurate but most effort to produce.
  • Metrics-based — use historical data from previous similar projects. “Last quarter’s release of similar scope took 80 person-hours of testing.” Requires good historical records.
  • Expert judgement — consult experienced team members. Quickest but most subjective. Mitigate subjectivity by using planning poker or three-point estimation (optimistic, most likely, pessimistic) to average multiple expert views.
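
Three-point estimation combines the optimistic, most likely, and pessimistic views into a single figure; the PERT weighted average (O + 4M + P) / 6 is one common formula for doing so. A short sketch with hypothetical hours:

```python
def three_point_estimate(optimistic: float, most_likely: float,
                         pessimistic: float) -> float:
    """PERT (beta-distribution) weighted average of three expert views."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical expert views for a test-execution task, in hours.
print(three_point_estimate(optimistic=4, most_likely=6, pessimistic=11))  # → 6.5
```

The weighting pulls the estimate towards the most likely value while still letting a wide pessimistic tail inflate it, which is exactly the behaviour you want when experts disagree.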

Always include assumptions in your estimate. “This estimate assumes the test environment is available from day 1 of the sprint” is more useful than the number alone, because it signals immediately when an assumption is violated and the estimate needs revision.

Worked example: login feature mini test plan

Login feature — lightweight test plan summary:

  • Scope (in) — Username/password login, “Remember me” toggle, account lockout after 3 failures, password reset flow
  • Scope (out) — SSO / OAuth login (separate sprint), biometric login (separate workstream)
  • Approach — EP + BVA for input fields; decision table for lockout logic; exploratory for session edge cases; Playwright automation for regression
  • Entry criteria — Login feature code merged to staging; test environment stable; test accounts seeded (valid, locked, expired); acceptance criteria signed off
  • Exit criteria — 0 open P1/P2 defects; ≥ 95% pass rate; all AC covered; regression suite green; product owner sign-off
  • Risks — Risk 1: auth library upgrade may change session behaviour; mitigate by running session tests last. Risk 2: test environment uses a shared DB, so data collisions are possible; mitigate with namespaced test accounts.
  • Estimate — Test design: 4h · Review: 1h · Execution: 6h · Defect retest: 2h · Reporting: 1h · Total: 14h (assumes stable environment from day 1)

Agile test planning

In agile teams, the master test plan is often replaced by a combination of:

  • Team working agreements — cover approach, tooling, and definition of done (which includes testing obligations)
  • Sprint planning — covers scope and schedule for each sprint
  • Per-story acceptance criteria — cover scope and exit criteria for each feature
  • Test charters — cover approach for exploratory sessions

The content of a test plan still needs to exist — it is just distributed across these artefacts rather than in a single document. A common mistake on agile teams is to confuse “we don’t write formal test plans” with “we don’t plan testing.” The planning happens; it just takes a different form.

For larger agile releases (PI planning in SAFe, quarterly planning, major version releases), a lightweight release-level test plan is valuable even in highly agile environments. It documents the overall scope, the regression strategy, the automation coverage goals, and the release criteria — content that does not fit neatly into individual sprint planning.

ISTQB mapping

ISTQB reference:

  • CTFL 5.1 — Test planning: purpose, content, and the planning process (Foundation)
  • CTFL 5.1 K2 — Explain the purpose of a test plan and the typical content (Foundation LO)
  • CTAL-TM Ch. 2 — Test management: full test planning framework, risk-based approach, estimation, strategy (Advanced / Lead)
  • CTAL-TM 2.2 K4 — Create a test plan for a given project using a risk-based approach (Advanced LO)

Foundation candidates must be able to explain the purpose of a test plan and recall its typical contents. Advanced (CTAL-TM) candidates must be able to create a risk-based test plan for a given project scenario — this is a K4 (create) learning objective, the highest level in Bloom’s taxonomy.

Common mistakes

  • Skipping exit criteria — without explicit exit criteria, testing ends when time runs out. This is the single most common cause of releasing with known gaps that “we didn’t have time to test.” Define exit criteria first, then plan the effort to achieve them.
  • No risk register — every project has risks (incomplete requirements, environment instability, resource constraints). Document them in the plan with a mitigation action for each. Teams that don’t plan for risks are surprised when they materialise — and then improvise under pressure, which is the worst time to make testing decisions.
  • Plan written once and never updated — a test plan written at sprint planning and not revisited is wrong by day 3. Update scope, risks, and estimates when things change. The plan is a communication tool, not a historical record.
  • Conflating test strategy with test plan — the strategy says “we use risk-based testing for all projects.” The plan says “for this release, the high-risk areas are X, Y, and Z, and they will receive these specific test techniques.” Both are needed; neither substitutes for the other.
  • Ignoring entry criteria — teams frequently define exit criteria but not entry criteria. Without entry criteria, testing starts before it should, wastes time on broken builds, and produces invalid results that distort the metrics.

Test planning is the home for risk-based decisions. Use Risk-Based Testing to populate the risk register section of your plan and to determine which features get the most test depth.

Exit criteria should be expressed as metrics thresholds. Test Metrics & Reporting provides the KPIs to use as the specific numbers in your exit criteria.

The estimation section of a test plan draws on Test Estimation techniques — WBS, three-point estimation, and historical metrics.

Practice this technique: Try Test Lead Practice 01 — Test strategy review.