Test Strategy · Junior through Lead

Regression Testing

Every change to working software is a risk. Regression testing confirms that changes haven’t broken what was already working. It’s the safety net that enables teams to move fast with confidence.


What it is

Regression testing re-executes previously passing tests to verify that new code changes haven’t introduced regressions — bugs in functionality that was already working. It’s run after every change: bug fixes, new features, refactors, infrastructure updates.

The challenge: the regression suite grows with every release, but testing time stays fixed. This forces decisions about which tests to run.

Selection strategies

  • Full regression — run everything. Thorough but slow. Used for major releases or when change scope is large.
  • Risk-based selection — run tests for the changed areas plus any areas they interact with. Balances coverage and speed.
  • Change-impact analysis — trace which code changed and which tests cover that code. Run only those tests. Requires good traceability.
  • Core / smoke regression — run the most critical tests only. Used for quick confidence checks after small changes.
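Change-impact analysis boils down to intersecting a coverage map (which files each test exercises, typically captured by an instrumented run) with the set of changed files. A minimal sketch, with hypothetical test and file names:

```python
def select_tests(coverage_map, changed_files):
    """Return the tests whose covered files intersect the change set.

    coverage_map: {test_name: [covered_file, ...]} from a prior
    instrumented run (structure assumed for illustration).
    """
    changed = set(changed_files)
    return sorted(
        test for test, files in coverage_map.items()
        if changed & set(files)
    )

coverage_map = {
    "test_quote": ["quote.py", "rates.py"],
    "test_payment": ["payment.py"],
    "test_policy": ["policy.py", "rates.py"],
}

# A change to rates.py selects only the tests that touch it.
print(select_tests(coverage_map, ["rates.py"]))
```

In practice the coverage map is the hard part — it must be regenerated as the code evolves, which is why this strategy requires good traceability.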

In practice: most teams run a tiered regression suite — smoke tests on every PR, a targeted suite nightly, and a full regression weekly or before release. The tiers are defined by the test lead based on risk and run time.
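One common way to implement those tiers is with test markers, so each tier can be selected at run time. A sketch using pytest (marker and test names are hypothetical; markers would be registered in pytest.ini to avoid warnings):

```python
import pytest

@pytest.mark.smoke
def test_login_page_loads():
    # Critical path: runs on every PR via `pytest -m smoke`
    assert True

@pytest.mark.nightly
def test_policy_renewal_flow():
    # Targeted suite: runs nightly via `pytest -m "smoke or nightly"`
    assert True

@pytest.mark.full
def test_legacy_report_export():
    # Full regression: weekly or pre-release, run with no marker filter
    assert True
```

The CI pipeline then picks the tier by trigger: `pytest -m smoke` on PRs, the broader marker expression nightly, and an unfiltered run before release.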

Automation and regression

Regression testing is the strongest argument for test automation. Manual regression on a large suite is unsustainable — it’s repetitive, error-prone, and gets skipped under time pressure.

Good automation candidates for regression: stable features, critical paths, high-risk areas, tests that run on every build. Poor candidates: UI tests that change frequently, exploratory or investigation testing.

Maintaining the regression suite

A regression suite that’s never pruned becomes a liability. Signs it needs attention:

  • Flaky tests (intermittent failures) erode confidence in the whole suite
  • Tests for deleted features still run and waste time
  • The suite takes so long to run that teams skip it
  • Duplicate tests cover the same path

Review and prune the regression suite at least quarterly. Delete tests that no longer add value; fix or quarantine flaky tests.
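Quarantining keeps a flaky test visible without letting it fail builds. One way to express this, again with pytest (test name and reason are hypothetical):

```python
import pytest

# Quarantined flaky test: skipped from the main suite so intermittent
# failures can't erode confidence, but kept on the books until fixed.
@pytest.mark.skip(reason="Quarantined: intermittent timeout under load")
def test_dashboard_widget_refresh():
    assert True
```

A marker like this also makes quarantined tests easy to audit at the quarterly review: anything still skipped gets fixed or deleted.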

Regression vs confirmation testing

  • Confirmation testing (retest) — verify a specific defect has been fixed. Directly re-run the test case that failed.
  • Regression testing — verify nothing else broke as a side effect of the fix. Broader in scope.

Every defect fix should trigger both: a retest of the original bug, and a regression of the surrounding area.
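The pairing above can be sketched as a small planning helper: retest the original failure first, then regress its surrounding area (all names hypothetical):

```python
def plan_for_fix(failed_test, area_tests):
    """Confirmation retest first, then regression of the surrounding area."""
    retest = [failed_test]
    regression = sorted(t for t in area_tests if t != failed_test)
    return retest + regression

# The fixed defect's test runs first, then its neighbours.
print(plan_for_fix(
    "test_premium_rounding",
    {"test_premium_rounding", "test_premium_gst", "test_quote_total"},
))
```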

Practice this technique: Try Senior Practice 09 — Cross-level regression.

Try It — Select the right regression strategy

An NZ insurance portal has a regression suite of 800 tests. Four different change scenarios have come in. For each one, choose the most appropriate regression strategy.

Change scenario → Best regression strategy

  • A typo was fixed in the "Thank you" confirmation email template — no code logic changed
  • The premium calculation engine was refactored — no new features, but significant internal changes
  • A new "add vehicle" feature was added to the policy management screen
  • Major release: payment gateway switched from Stripe to POLi + 40 other features shipped simultaneously