Maintenance Testing
Regression, Confirmation, and Sanity Testing — ISTQB CTFL v4.0 Chapter 2
1 The Hook
An Auckland-based insurance company deployed what their developer called a “minor” CSS fix for a button colour on their claims portal. The change touched a shared stylesheet. In Safari on iOS, the fix inadvertently broke the file-upload component on the claims form. Customers in Christchurch and Wellington spent three days unable to submit storm-damage claims. Complaints piled up on social media. The team only noticed when a support agent in Hamilton flagged a pattern.
The root cause? They had no regression test suite. They confirmed the button looked correct, but nobody checked whether the rest of the page still worked. That “minor” CSS fix cost them customer trust, support hours, and a rushed weekend hotfix.
This is why maintenance testing exists. It is not about being paranoid. It is about knowing, with confidence, that a change has not quietly broken something else.
2 The Rule
Maintenance testing is the testing of changes made to a deployed system. It covers corrective, adaptive, perfective, and preventive changes. Whenever a change ships, you need to answer three questions:
- Regression testing: Did we break anything that used to work?
- Confirmation testing / Re-testing: Did the fix or change actually work?
- Sanity / Smoke testing: Is the system stable enough to justify deeper testing?
Regression testing is about unchanged areas. Confirmation testing is about changed areas. Sanity testing is a quick health check to decide whether a full test run is worthwhile.
Under ISTQB CTFL v4.0, maintenance testing is triggered by modification, migration, or retirement of software. Its scope depends on the degree of risk of the change, the size of the existing system, the size of the change itself, and the test assets you already have.
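In practice, many teams make these three categories executable by tagging each test with its type. Below is a minimal sketch using pytest markers; the portal probe functions are hypothetical stand-ins, not a real API:

```python
import pytest

# Hypothetical probes for the system under test; in a real suite these
# would drive HTTP calls or a browser against the deployed application.
def portal_is_up() -> bool:
    return True

def upload_fix_works() -> bool:
    return True

def claim_form_submits() -> bool:
    return True

@pytest.mark.sanity
def test_portal_responds():
    # Quick gate: is the system stable enough to justify deeper testing?
    assert portal_is_up()

@pytest.mark.confirmation
def test_file_upload_fix():
    # Changed area: does the specific fix actually work?
    assert upload_fix_works()

@pytest.mark.regression
def test_claim_form_still_submits():
    # Unchanged area: did the change quietly break a neighbour?
    assert claim_form_submits()
```

Each slice can then be run on its own: `pytest -m sanity` as the gate, then `pytest -m confirmation` and `pytest -m regression` (or both together with `-m "confirmation or regression"`). Registering these markers so typos fail loudly is covered under Common Mistakes below.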
Post-Deployment: Production Verification Testing (PVT)
The tester’s job doesn’t end when the “Deploy” button is clicked. Production Verification Testing (PVT), sometimes called “Smoke Testing in Live”, is the final safety check. Its purpose is to confirm the deployment succeeded and that the production environment is behaving as expected.
- Read-Only Verification: Wherever possible, use read-only tests (e.g., login, viewing an account, searching) to avoid polluting live financial data or triggering real notifications.
- Sanity Check: Verify critical paths. If you’ve deployed a change to an NZ banking portal, check that the login page loads and the “Current Balance” displays correctly for a test account.
- Feature Flags: If using feature flags, verify that the feature is either “Off” (as intended) or “On” (if it was a dark launch) and visible only to the correct user groups.
Note: PVT is a collaborative effort between Testers, DevOps, and Site Reliability Engineers (SREs). The tester provides the functional confidence that the code is “live and well.”
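A PVT pass can often be scripted as a handful of read-only checks. Here is a minimal sketch using Python’s `requests` library; the URLs and expected strings are placeholders for your own critical paths:

```python
import requests

# Hypothetical production endpoints -- replace with your own critical paths.
CHECKS = [
    ("https://portal.example.co.nz/login", "Sign in"),
    ("https://portal.example.co.nz/health", "ok"),
]

def run_pvt() -> bool:
    """Read-only verification: GET requests only, so no live data is written."""
    ok = True
    for url, expected in CHECKS:
        resp = requests.get(url, timeout=10)
        if resp.status_code == 200 and expected in resp.text:
            print(f"PASS {url}")
        else:
            print(f"FAIL {url}: status={resp.status_code}")
            ok = False
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if run_pvt() else 1)
```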
3 The Analogy
Imagine you are renovating a villa in Wellington. You have installed a brand-new kitchen. Before you call the job done, you turn on the oven to check it heats up — that is confirmation testing. But you also walk through every room to make sure the builders did not accidentally cut a wire, crack a pipe, or block a vent while they were working. That is regression testing. And before you even start the full inspection, you flick the main switch to confirm the power is on. That is sanity testing.
Skipping the walk-through because “they only worked in the kitchen” is exactly how you end up with a cold shower and a tripped breaker.
4 Watch Me Do It
An NZ retail site based in Auckland is releasing a new payment method: Paymark (now Worldline NZ) integration alongside existing Stripe and bank-transfer options. The release note says:
- New feature: Paymark checkout flow
- Bug fix: shipping-cost calculation rounding error for rural addresses
- Bug fix: expired session token not redirecting to login
Here is the maintenance testing plan:
| Test | Type | Rationale |
|---|---|---|
| Complete a purchase using Paymark | Confirmation | Verifies the new feature works end-to-end |
| Verify shipping to a rural Otago address rounds correctly | Confirmation | Verifies the specific fix |
| Let session expire, confirm redirect to login | Confirmation | Verifies the specific fix |
| Complete purchase with Stripe and bank transfer | Regression | Unchanged payment methods may be affected by shared checkout code |
| Check shipping costs for urban Auckland and suburban Christchurch | Regression | Shared shipping module was modified |
| Run login flow from multiple entry points | Regression | Session-handling code is shared across the site |
| Homepage loads, cart adds items, checkout initiates | Sanity | Quick health check before deeper regression |
Notice that confirmation tests target the change. Regression tests target the neighbourhood of the change. Sanity tests give you a green light to start.
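That ordering can be wired straight into the pipeline. Here is a sketch of a CI entry point that runs the sanity gate first, assuming the pytest marker scheme from section 2 (in practice this would usually be two separate CI steps):

```python
import pytest

# Run the quick sanity slice first; only start the deeper confirmation
# and regression slices if the health check passes.
if __name__ == "__main__":
    if pytest.main(["-m", "sanity"]) == 0:
        raise SystemExit(pytest.main(["-m", "confirmation or regression"]))
    raise SystemExit(1)  # not stable enough; don't waste the deeper run
```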
5 When / When-not
Run a full regression suite when:
- The change touches core architecture (authentication, payment, database schema)
- You are preparing for a major release or compliance audit
- The system has a history of “surprise” bugs in seemingly unrelated areas
- You have automated regression tests that run quickly
Targeted regression is enough when:
- The change is isolated to a single module with clear boundaries
- Static analysis and code review confirm no cross-module impact
- You are in a continuous-deployment pipeline with strong monitoring
Re-testing alone is sufficient when:
- You are retesting a single bug fix in a tightly scoped microservice
- The fix is a configuration change with no code modification
- Prior releases give you strong evidence that changes to this module have no side effects elsewhere
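If it helps, this decision can be written down as an explicit heuristic rather than left to judgment on release day. A toy sketch; the `Change` fields and branch order are illustrative, not an ISTQB prescription:

```python
from dataclasses import dataclass

@dataclass
class Change:
    touches_core: bool = False      # auth, payment, database schema
    major_release: bool = False     # major release or compliance audit ahead
    surprise_history: bool = False  # known for bugs in unrelated areas
    isolated_module: bool = False   # clear module boundaries
    no_cross_impact: bool = False   # confirmed by review / static analysis
    config_only: bool = False      # configuration change, no code modified
    scoped_single_fix: bool = False # one fix in a tightly scoped service

def regression_scope(c: Change) -> str:
    """Map the criteria above onto a scope; a heuristic, not a rule."""
    if c.touches_core or c.major_release or c.surprise_history:
        return "full"
    if c.config_only or c.scoped_single_fix:
        return "retest-only"
    if c.isolated_module and c.no_cross_impact:
        return "targeted"
    return "full"  # when unsure, err on the side of more regression

print(regression_scope(Change(config_only=True)))   # retest-only
print(regression_scope(Change(touches_core=True)))  # full
```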
6 Common Mistakes
✗ Running the exact same tests every time without selecting based on risk
Fix: Tailor your regression pack to the change. If you touch the payment module, you do not need to retest the FAQ page. Use impact analysis and risk assessment to prioritise.
✗ Skipping regression for “small” changes
Fix: Size of change is a poor predictor of blast radius. A one-line null-check can prevent a crash; a one-line CSS change can break a form. Always ask “what could this affect?” not “how big is this?”
✗ Confusing re-test with regression
Fix: Re-test proves the fix works. Regression proves you did not break anything else. They are complementary, not interchangeable. Do one, then the other.
✗ Not updating the regression suite after changes
Fix: When you add a new feature, add tests for it to the regression suite. When you retire a feature, remove obsolete tests. A stale regression pack wastes time and creates false confidence.
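One cheap guard against a stale suite is to register your test-type markers and run pytest with `--strict-markers`, so a misspelled or retired marker becomes an error instead of silently selecting nothing. A sketch of the `conftest.py` hook:

```python
# conftest.py
def pytest_configure(config):
    # Register the maintenance-testing markers; with `--strict-markers`,
    # any unregistered marker (a typo, or a category you have retired)
    # fails at collection time rather than slipping through.
    for name, description in [
        ("sanity", "quick health check before deeper testing"),
        ("confirmation", "verifies a specific fix or new feature"),
        ("regression", "verifies unchanged areas still work"),
    ]:
        config.addinivalue_line("markers", f"{name}: {description}")
```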
7 Now You Try
Scenario: A Dunedin-based SaaS team releases the following:
- Bug fix: CSV export now wraps cells containing commas
- Bug fix: email notifications no longer duplicate on retry
- New feature: dark-mode toggle for the dashboard
For each of the tests below, classify it as Confirmation, Regression, or Sanity.
- Export a CSV with commas in cell values and open it in Excel
- Trigger an email retry and verify only one email arrives
- Toggle dark mode on the dashboard and check contrast ratios
- Export a PDF report (unchanged feature)
- Log in and load the homepage
- Verify light mode still works after dark-mode toggle is off
Model Answer:
- Confirmation — directly tests the CSV fix
- Confirmation — directly tests the email fix
- Confirmation — directly tests the new feature
- Regression — unrelated export path may share code
- Sanity — quick health check before deeper testing
- Regression — ensures new feature did not break existing default behaviour
Rationale: Confirmation always targets the change. Regression targets unchanged areas that might be affected. Sanity is a lightweight gate.
8 Self-Check
Q1. What is the primary difference between confirmation testing and regression testing?
Confirmation testing verifies that a specific change or fix works as intended. Regression testing verifies that unchanged parts of the system still work correctly after the change.
Q2. When is sanity testing most useful during maintenance testing?
Sanity testing is most useful as an initial gate. It confirms the system is stable enough to warrant the time and effort of a full regression or confirmation test run.
Q3. Why should the regression suite be updated after every release?
Because new features need coverage in future regressions, retired features create obsolete tests, and changed behaviour may invalidate existing test expectations. A stale suite wastes time and breeds false confidence.