Junior ISTQB CTFL v4.0 — Ch. 1

The 7 ISTQB Testing Principles

The foundational rules every tester lives by. Understand these and you’ll make smarter decisions about what to test, when to stop, and how to communicate risk.

1 The Hook

In 2019, a Christchurch-based SaaS startup launched a new online booking platform for holiday parks across the South Island. The team had run their full test suite the night before launch. Every test passed. Green across the board. They popped the champagne and pushed to production.

By 9 a.m., support tickets were flooding in. Customers in Queenstown and Wanaka were being double-charged. Families who had booked a site at a campground near Lake Tekapo were arriving to find no reservation in the system. The payment gateway integration was failing silently under load, and the database race condition only appeared when two users booked the same site within milliseconds of each other.

The team had fallen for the most common trap in testing: they believed that green tests meant no bugs. They had forgotten the first principle — testing shows the presence of defects, not their absence. Those passing tests were real and valuable, but they did not prove the system was bug-free. They only proved that the specific scenarios they had thought to test were working.

That launch cost the company six figures in refunds, reputational damage, and a frantic all-nighter to roll back. The principles we cover here exist to prevent exactly that kind of mistake.

2 The Rule

ISTQB defines seven principles that guide effective testing. These are not optional suggestions — they are fundamental truths about software and human cognition.

  1. Testing shows the presence of defects, not their absence. Testing can prove that bugs exist, but it cannot prove that no bugs exist. Passing tests raise confidence; they do not guarantee perfection.
  2. Exhaustive testing is impossible. Testing every possible input, path, and combination is not feasible except in the simplest systems. Risk analysis and prioritisation must guide test selection.
  3. Early testing saves time and money. Defects found early in the lifecycle are cheaper to fix. A requirement error caught during review costs far less than one found in production.
  4. Defects cluster together. A small number of modules typically contain most of the defects. This is the Pareto principle in action — focus testing effort where bugs live.
  5. Tests wear out. Repeating the same tests loses effectiveness over time as the same paths are exercised. New tests and variation are needed to find new defects.
  6. Testing is context-dependent. Safety-critical medical software demands different testing than a marketing landing page. The approach must fit the domain, risk, and regulatory environment.
  7. Absence-of-errors is a fallacy. Finding and fixing every known defect does not guarantee success: the system may still be unusable, miss user needs, or fail to deliver business value. Testing must validate that the system meets user needs, not just hunt bugs.
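
Principle #2 can be made concrete with quick arithmetic. The sketch below uses hypothetical input dimensions for a small booking form (the names and counts are illustrative, not from any real system):

```python
from math import prod

# Hypothetical input dimensions for a small booking form.
# Real systems have far more (locales, browsers, concurrency, ...).
dimensions = {
    "departure_date": 365,   # any day in the next year
    "vehicle_type": 12,
    "passenger_count": 10,
    "promo_code": 50,
    "payment_method": 4,
}

total = prod(dimensions.values())
print(f"Full input combinations: {total:,}")        # 8,760,000
print(f"Days at 1 test per second: {total / 86400:.1f}")
```

Five modest parameters already yield nearly nine million combinations, over a hundred days of round-the-clock execution at one test per second. That is why risk, not coverage of everything, must drive selection.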

3 The Analogy

Analogy: Building a House Inspection

Imagine you are buying a new build in Auckland. Before settlement, you hire a building inspector to check the property. The inspector spends four hours going through the house with a fine-tooth comb. At the end, they hand you a report: no leaks found, no structural issues, electrical compliance passed.

Does that mean the house is perfect? No. It means the inspector did not find problems in the areas they examined, using the tools and time they had. The weathertightness issue behind the cladding might not show up for two winters. A wiring fault in an uninspected outlet could still exist.

Software testing is the same. The tester is the inspector. The test report is the builder’s report. Both are valuable. Neither is a guarantee. Principle #1, right there.

Now imagine the inspector only checked the ground floor because “the upstairs looks fine.” That violates the spirit of principle #2: limited time means you cannot inspect everything, so risk, not convenience, should decide what gets skipped. If the upstairs has a deck over the master bedroom, that is high risk and must be checked.

4 Watch Me Do It

Let’s apply all seven principles to a real NZ scenario: an online booking system for the Interislander ferry between Wellington and Picton.

How each principle applies to the ferry booking:

  1. Presence, not absence: We test that a booking with a caravan succeeds. That does not prove bookings with motorbikes, oversized vehicles, or pets will also work. We remain humble about what green tests mean.
  2. Exhaustive testing impossible: We cannot test every combination of departure date, vehicle type, passenger count, promo code, and payment method. We use risk to pick the combinations that matter most: peak holiday weekends, common vehicle types, and popular promo codes.
  3. Early testing saves money: If we wait until the payment gateway is integrated to discover that the fare calculation logic is wrong, we have to re-test everything downstream. If we review the fare rules during requirements analysis, the fix is a document edit.
  4. Defects cluster: During pilot testing, 80% of bugs come from the “vehicle dimensions” module. We increase test effort there and consider a focused code review rather than spreading tests evenly across all modules.
  5. Tests wear out: Running the same “book a car, pay with credit card” test every day will eventually stop finding new bugs. We add variation: different browsers, mobile devices, concurrent bookings, network throttling.
  6. Context-dependent: This is a transport booking system handling personal and payment data. It needs PCI compliance, Privacy Act alignment, and high availability. We test differently than we would for an internal staff rostering tool.
  7. Absence-of-errors fallacy: Even if every test passes, the system might still be unusable if the booking flow takes twelve clicks and customers abandon their carts. Usability and business value matter as much as correctness.
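
Principle #4 is easy to demonstrate with a quick tally of a defect log. A minimal sketch (the module names and counts below are made up for illustration):

```python
from collections import Counter

# Hypothetical pilot-test defect log: one entry per bug's module.
defects = (["vehicle_dimensions"] * 40 + ["payments"] * 6 +
           ["timetable"] * 2 + ["accounts"] * 1 + ["ui"] * 1)

counts = Counter(defects)
total = sum(counts.values())
for module, n in counts.most_common():
    print(f"{module:20s} {n:3d}  {n / total:5.1%}")
# vehicle_dimensions alone holds 80% of logged defects:
# prioritise regression and code review there first.
```

When one module dominates the tally like this, spreading test effort evenly across all modules wastes most of it.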

5 When / When-not

When to invoke a principle, and when not to be dogmatic:

  • Invoke principle #1 when a stakeholder says “all tests passed, so we’re done.” But when a critical production fix needs to go out in 30 minutes, you may run a focused regression rather than a full suite. Principles guide; they do not override business urgency.
  • Use principle #2 when planning test coverage, to justify why you cannot test everything. But when the system is genuinely tiny, say a single form with three fields, exhaustive testing is possible, and you should do it.
  • Cite principle #3 when a project manager questions the value of a requirements review. But do not let the “early” activity become months-long analysis paralysis: early testing saves money; premature testing without clear scope wastes it.
  • Let principle #4 point you at the risky modules when deciding where to focus limited test effort. But when a previously stable module has just been refactored, its defect history no longer predicts its current risk.

Before you apply these principles, ask:

  • Are you communicating test results as definitive proof of quality, or as evidence of what was tested?
  • Does your test strategy prioritise based on risk, or do you test everything equally?
  • Are you applying principles dogmatically, or do you balance them against business needs and urgency?

6 Common Mistakes

✗ Thinking tests prove quality

Correction: Tests find defects. Quality is a broader property involving usability, performance, security, and fitness for purpose. A system can pass every test and still be a terrible product. Communicate test results as “no defects found in the areas tested” rather than “the system is high quality.”

✗ Testing everything equally

Correction: This violates principle #2 and #4. Risk-based testing means the payment module gets more attention than the “about us” page. Use failure probability and business impact to prioritise. A defect in the ferry booking engine costs thousands; a typo in the footer costs nothing.
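Risk-based prioritisation is often sketched as a likelihood × impact score per feature, tested in descending order. A minimal illustration (the feature names and 1-5 scores below are hypothetical):

```python
# Hypothetical risk register: likelihood and impact on 1-5 scales.
features = {
    "payment_processing": {"likelihood": 4, "impact": 5},
    "booking_engine":     {"likelihood": 3, "impact": 5},
    "email_receipts":     {"likelihood": 2, "impact": 3},
    "about_us_page":      {"likelihood": 1, "impact": 1},
}

# Rank features by risk score; test the top of the list hardest.
ranked = sorted(features.items(),
                key=lambda kv: kv[1]["likelihood"] * kv[1]["impact"],
                reverse=True)
for name, r in ranked:
    print(f"{name:20s} risk = {r['likelihood'] * r['impact']:2d}")
```

The exact scales matter less than the discipline: every feature gets a defensible score, and the test budget follows the ranking instead of personal habit.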

✗ Assuming developers tested enough

Correction: Developers test their own code, but they bring the same assumptions that wrote the code, which is why a degree of test independence matters. A developer might not think to test with a Māori macron in a name field, or with a rural NZ postcode format. Diverse perspectives find diverse bugs.

When these principles fail

The worst application of principles is dogmatic rigidity. A team that insists on exhaustive testing when a production hotfix is needed, or that rejects a pragmatic early release because “tests wear out,” has lost sight of the goal: deliver value safely. Principles guide; they do not replace judgment. Sometimes the cost of delaying a release to chase certainty exceeds the cost of shipping with a known, managed risk.

7 Now You Try

? Match the Scenario to the Principle

Read each scenario below and identify which ISTQB testing principle applies. Click to reveal the answer.

Scenario A: A team spends three weeks testing every possible date combination for a campground booking system. They run 50,000 test cases and still have not covered every scenario.

Principle #2 — Exhaustive testing is impossible. The team is attempting the impossible. They should switch to risk-based testing: focus on peak holidays, long weekends, and school holidays where bugs cause the most harm.

Scenario B: A Wellington fintech’s mobile app has zero open defects, but app store reviews complain it is “too confusing” and customers keep calling support for help with transfers.

Principle #7 — Absence-of-errors is a fallacy. The software may be technically correct, but it is not delivering value. Testing should include usability and user acceptance dimensions, not just functional correctness.

Scenario C: A tester notices that 12 of the last 15 bugs were in the GST calculation module of an NZ accounting platform. They recommend a focused regression on that module.

Principle #4 — Defects cluster together. When bugs concentrate in one area, that area deserves extra attention. A code review or refactor of the GST module may be warranted before more testing.

8 Self-Check

Q1: If all tests pass, does that mean the software is bug-free?

No. Testing shows the presence of defects, not their absence (Principle #1). Passing tests only demonstrate that the specific scenarios tested did not reveal defects. Unknown bugs, untested paths, and edge cases may still exist.

Q2: Why is early testing cheaper than late testing?

A requirement error caught during analysis can be fixed with a document edit. The same error found in production may require data migration, customer communication, hotfix deployment, and rollback planning. The cost of fixing a defect increases exponentially the later it is discovered (Principle #3).

Q3: A module has been stable for months with no defects. Should you stop testing it?

Not entirely. Principle #5 reminds us that tests wear out, but it does not mean stable modules are immune. If that module changes, or if it integrates with new components, it needs fresh testing. However, you can reduce routine regression effort on unchanged, stable code and reallocate it to higher-risk areas (Principle #4).

9 Interview Prep — Junior Q&A

Kiwi employers often ask junior testers to articulate these principles. Have short explanations ready.

Q. "Why do testing principles matter if they seem obvious?"

Principles are easy to state but hard to follow under pressure. When a manager says "all tests pass, we're done," principle #1 reminds you to correct them: tests show presence, not absence. When you are underfunded and exhausted, principle #3 reminds you that early testing saves money later. They are decision-making tools when your head is spinning.

Q. "What is the most important principle and why?"

I would say principle #3: early testing saves time and money. If you nail it, the other principles become easier to honour. Test during requirements review and defects are caught where they are cheapest to fix; with fewer defects surviving into later phases, principle #2 (exhaustive testing is impossible) becomes manageable because risk-based selection has less ground to cover. Early testing is the multiplier.

Q. "How would you explain principle #7 to a non-technical stakeholder?"

I would say: "A bug-free system that nobody wants to use is worse than a buggy system that customers love. Testing is not just about finding defects; it's about verifying the system delivers real value. A payment system can be technically perfect but so confusing that customers abandon their carts. We test for both correctness and usefulness."