ISTQB · Foundation Level

ISTQB CTFL v4.0
Foundation Level

The first and most important ISTQB certification. CTFL is the industry-standard foundation — required by many employers and a prerequisite for all advanced ISTQB certifications.

CTFL v4.0 — 2023 syllabus

Certification overview

  • Full name: Certified Tester Foundation Level (CTFL)
  • Version: v4.0 (released 2023) — significant update from v3.1
  • Exam: 40 multiple-choice questions, 60 minutes, 65% pass mark (26/40)
  • Prerequisites: None (some boards require ≥6 months work experience)
  • Minimum study: 1,135 minutes (~19 hours) of structured training
  • Validity: Lifetime — no renewal required

Chapter 1 — Fundamentals of Testing

The conceptual foundation. Know these principles cold — they appear in exam questions throughout all chapters.

  • Why testing is necessary — software defects can cause financial loss, time waste, business reputation damage, or harm. Testing reduces risk of failures.
  • Testing vs debugging — testing finds defects; debugging locates and fixes them. Testers test, developers debug.
  • 7 testing principles:
    1. Testing shows the presence of defects — not their absence
    2. Exhaustive testing is impossible
    3. Early testing saves time and money
    4. Defects cluster together (Pareto principle)
    5. Tests wear out (pesticide paradox — vary your tests)
    6. Testing is context-dependent
    7. Absence-of-defects fallacy — defect-free software is not guaranteed to meet users' needs
  • Test process: planning → monitoring & control → analysis → design → implementation → execution → completion
  • Testware: test plans, test cases, test data, test scripts, defect reports, test completion reports
Key terms — Chapter 1

  • Error — a human mistake that causes a fault/defect to be introduced
  • Defect / Bug / Fault — an imperfection in the code or documentation
  • Failure — an event where executing a defect causes the system to deviate from its expected behaviour
  • Root cause — the fundamental reason a defect was introduced
  • Test oracle — the source used to determine expected results (spec, requirements, experience)
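The error → defect → failure chain is the single most-tested terminology point in Chapter 1. A deliberately buggy function (a hypothetical example, not from the syllabus) makes the distinction concrete:

```python
# Error: the programmer mistakenly typed > instead of >= (a human mistake).
# Defect: the faulty comparison now sits in the code, whether or not it runs.
def is_adult(age: int) -> bool:
    """Should return True for ages 18 and over."""
    return age > 18  # defect: wrongly excludes exactly 18

# Failure: only when the defective code is EXECUTED with a triggering input
# does the wrong behaviour become observable.
print(is_adult(18))  # expected True, actual False -> a failure
print(is_adult(30))  # True -> the defect is present but no failure occurs
```

Note the second call: a defect can sit in shipped code indefinitely without causing a failure, which is why "no failures observed" never proves "no defects present".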

Chapter 2 — Testing Throughout the SDLC

  • Test levels: component/unit, component integration, system, system integration, acceptance testing
  • Test types: functional, non-functional (performance, security, usability), black-box vs white-box, and change-related (confirmation and regression testing)
  • SDLC models: how testing fits into Sequential (V-model), Iterative (Agile, Scrum), and Continuous Delivery/DevOps
  • Shift-left: starting testing earlier in the lifecycle — reviews, static analysis, TDD, BDD
  • Test-first: TDD, ATDD, BDD — writing tests before code

Exam tip: know the difference between test levels (unit/integration/system/acceptance — WHERE in the development stack you are testing) and test types (functional/non-functional — WHAT quality you’re testing for). These are independent dimensions: any test type can be performed at any test level.

Chapter 3 — Static Testing

  • Static vs dynamic: static testing doesn’t run the code. Reviews, walkthroughs, inspections — you find defects in documents and code without executing anything.
  • Review types: informal review, walkthrough, technical review, inspection (most formal)
  • Review process: planning → review initiation → individual review → communication & analysis → fixing & reporting
  • Static analysis: automated tools (linters, SAST tools) that check code without running it
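To make "checking code without running it" concrete, here is a minimal sketch of one classic static-analysis check — unreachable code after a `return` — using Python's standard `ast` module. The function name `find_unreachable` is invented for illustration; real linters (e.g. pyflakes, SonarQube) do far more:

```python
import ast

def find_unreachable(source: str) -> list[int]:
    """Report line numbers of statements that follow a return in the same block."""
    unreachable = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        # Pair each statement with the one after it; anything after a
        # return in the same block can never execute.
        for stmt, following in zip(body, body[1:]):
            if isinstance(stmt, ast.Return):
                unreachable.append(following.lineno)
    return unreachable

code = """
def total(items):
    return sum(items)
    print("done")   # never executed
"""
print(find_unreachable(code))  # [4] -- the print() on line 4 is dead code
```

Note that the code under analysis is only parsed, never executed — the defining property of static testing.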

Chapter 4 — Test Techniques

The heaviest exam chapter — K3 learning objectives mean you must apply these, not just recognise them.

Chapter 4 technique summary
  Technique                 | Category   | Key exam point                            | Ref
  Equivalence Partitioning  | Black box  | One test per partition (valid & invalid)  | 4.2.1
  Boundary Value Analysis   | Black box  | 2-value BVA and 3-value BVA               | 4.2.2
  Decision Table Testing    | Black box  | Each column = one test case               | 4.2.3
  State Transition Testing  | Black box  | All-states and all-transitions coverage   | 4.2.4
  Statement Coverage        | White box  | % of executable statements exercised      | 4.3.1
  Branch Coverage           | White box  | Subsumes statement coverage               | 4.3.2
  Error Guessing            | Experience | Fault lists, defect taxonomies            | 4.4.1
  Exploratory Testing       | Experience | Charter, time box, session report         | 4.4.2
  Checklist-Based Testing   | Experience | High-level conditions, flexible           | 4.4.3

(Use case testing, section 4.2.5 in the v3.1 syllabus, was removed from v4.0 and is no longer examinable.)
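The K3 exam questions on EP and BVA are mechanical once you know the rules. For a field accepting a closed integer range, the boundary test values can be derived like this (a worked sketch; `bva_values` is a hypothetical helper, not syllabus terminology):

```python
def bva_values(low: int, high: int, three_value: bool = False) -> set[int]:
    """Boundary value analysis for a closed integer range [low, high].

    2-value BVA: each boundary plus its closest neighbour OUTSIDE the range.
    3-value BVA: each boundary plus its closest neighbours on BOTH sides.
    """
    values = {low - 1, low, high, high + 1}          # 2-value BVA
    if three_value:
        values |= {low + 1, high - 1}                # add the inside neighbours
    return values

# Field accepting 1..100: one valid partition, two invalid (<1 and >100)
print(sorted(bva_values(1, 100)))                    # [0, 1, 100, 101]
print(sorted(bva_values(1, 100, three_value=True)))  # [0, 1, 2, 99, 100, 101]
```

Equivalence partitioning alone would need only one value per partition (say 0, 50, 101); BVA refines the valid/invalid split by targeting the edges, where defects cluster.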

Chapter 5 — Test Management

  • Test planning: scope, approach, resources, schedule, risk, entry/exit criteria
  • Estimation: metrics-based, expert judgement, three-point estimation
  • Risk management: product risk (the software) vs project risk (the process)
  • Test monitoring and control: test progress reports, test metrics, go/no-go decisions
  • Defect management: defect lifecycle, defect reports, root cause analysis
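Three-point estimation from the bullets above is worth knowing numerically: the commonly taught PERT weighting combines optimistic (a), most-likely (m), and pessimistic (b) figures as E = (a + 4m + b) / 6. A quick sketch with made-up example numbers:

```python
def three_point_estimate(optimistic: float, most_likely: float,
                         pessimistic: float) -> float:
    """PERT-weighted three-point estimate: E = (a + 4m + b) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# e.g. 6 days best case, 9 days most likely, 18 days worst case
print(three_point_estimate(6, 9, 18))  # 10.0 days
```

The heavy weight on the most-likely value keeps a single pessimistic outlier from dominating the estimate, while still pulling it above the bare most-likely figure.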

Chapter 6 — Test Tools

  • Test management tools (Jira, TestRail, Zephyr)
  • Static analysis tools (linters, SAST)
  • Test design and execution tools
  • Test automation tools (Selenium, Playwright, Cypress)
  • Performance testing tools (JMeter, k6)
  • Tool selection considerations: team capability, integration, cost, open source vs commercial

Exam format & approach

  • 40 questions • 60 minutes • 65% pass (26/40)
  • Multiple choice, single correct answer
  • Questions are scenario-based — read carefully before answering
  • Common trap: picking the "most accurate" answer when the question asks for the "best" one in context
  • ISTQB provides official sample exams — do them all under timed conditions

Study strategy: (1) read the official syllabus, not just summaries. (2) Do every K3 exercise in the syllabus — apply the techniques, don’t just read about them. (3) Take the official sample exam twice: once open-book to learn, once timed to prepare.

Practice questions

These are CTFL-style questions covering all 6 chapters. Answer them before checking the key. The real exam has 40 questions in 60 minutes — that's 90 seconds per question. Practise under time pressure.

Q1 (Ch1 — principles). Which of the 7 testing principles explains why running the same test suite repeatedly over many releases will find fewer new defects over time?
A) Exhaustive testing is impossible    B) Early testing saves time and money    C) Tests wear out (pesticide paradox)    D) Absence of defects is a fallacy

Q2 (Ch2 — test levels). A developer writes and runs tests for a single function in isolation, checking that it returns the correct output for given inputs. Which test level is this?
A) System testing    B) Acceptance testing    C) Integration testing    D) Component (unit) testing

Q3 (Ch3 — static testing). A team reviews a requirements document before any code is written, marking ambiguities and missing acceptance criteria. Which type of testing is this?
A) Dynamic testing    B) Regression testing    C) Static testing    D) Exploratory testing

Q4 (Ch4 — EP/BVA). A field accepts values from 1 to 100. Using 2-value BVA, which test inputs cover all four boundary points?
A) 1, 50, 100    B) 0, 1, 100, 101    C) 1, 2, 99, 100    D) 0, 50, 100

Q5 (Ch5 — risk). A test manager has limited time and must decide what to test first. She ranks features by likelihood of failure multiplied by business impact. Which approach is this?
A) Exploratory testing    B) Regression testing    C) Risk-based testing    D) Checklist-based testing

Q6 (Ch1 — principles). A stakeholder claims that because no defects were found during testing, the software is guaranteed to be defect-free. Which testing principle directly contradicts this belief?
A) Testing shows the presence of defects, not their absence    B) Exhaustive testing is impossible    C) Early testing saves time and money    D) Testing is context-dependent

Q7 (Ch1 — test process). During which phase of the test process does the team decide what will be tested, how it will be tested, and who will do the testing?
A) Test monitoring and control    B) Test planning    C) Test analysis    D) Test design

Q8 (Ch2 — Agile). In an Agile team, who is responsible for quality?
A) The dedicated QA engineer    B) The test manager    C) The whole team    D) The product owner

Q9 (Ch2 — test types). Testing that verifies the system meets business requirements and is ready for deployment is called:
A) Component testing    B) Integration testing    C) System testing    D) Acceptance testing

Q10 (Ch3 — reviews). A review in which the author leads the reviewers through the document while they ask questions and make notes, with no mandatory individual preparation beforehand. This is best described as:
A) Inspection    B) Walkthrough    C) Technical review    D) Informal review

Q11 (Ch3 — static analysis). Which of the following can static analysis tools detect WITHOUT executing the code?
A) Memory leaks    B) Unreachable code    C) Race conditions    D) Incorrect calculations

Q12 (Ch4 — EP). An age field accepts values from 18 to 65. Using equivalence partitioning, how many valid partitions and invalid partitions exist?
A) 1 valid, 2 invalid    B) 2 valid, 1 invalid    C) 1 valid, 1 invalid    D) 3 valid, 2 invalid

Q13 (Ch4 — decision tables). A system applies a discount based on customer type (member/non-member) and order value (over/under $100). What is the minimum number of test cases needed for full coverage using decision table testing?
A) 2    B) 3    C) 4    D) 6

Q14 (Ch5 — defect management). A defect is reported as "cosmetic typo in footer." It should be fixed before release but does not block any functionality. Which statement best describes its severity and priority?
A) High severity, high priority    B) Low severity, high priority    C) Low severity, low priority    D) High severity, low priority

Q15 (Ch5 — estimation). A test manager estimates testing effort based on the average time taken for similar projects in the past. Which estimation technique is this?
A) Expert-based    B) Metrics-based    C) Risk-based    D) Bottom-up

Q16 (Ch6 — test tools). A tool that records user actions on a web application and replays them automatically is best classified as:
A) A test management tool    B) A static testing tool    C) A test execution automation tool    D) A performance testing tool

Q17 (Ch6 — tool selection). Before adopting a new test automation tool, the team should FIRST:
A) Train all testers on the tool    B) Conduct a proof of concept on a pilot project    C) Purchase a commercial licence    D) Rewrite all existing manual tests

Q18 (Ch1 — principles). Testing a mobile banking app the same way as a hospital patient management system would likely miss critical risks. Which principle explains this?
A) Exhaustive testing is impossible    B) Testing is context-dependent    C) Defects cluster together    D) Absence of defects is a fallacy

Q19 (Ch2 — shift-left). "Shift-left" testing means:
A) Moving testing activities earlier in the software development lifecycle    B) Testing only on the left side of the user interface    C) Transferring test work to developers    D) Testing earlier in the day for better focus

Q20 (Ch4 — experience-based). A senior tester looks at a login form and immediately tests for SQL injection, password enumeration, and brute-force protection because she has seen these fail before. Which technique is she using?
A) Equivalence partitioning    B) Boundary value analysis    C) Error guessing    D) Decision table testing

Answer key:

Q1: C (pesticide paradox — tests wear out; vary them or they stop finding bugs).

Q2: D (component/unit testing tests individual functions in isolation).

Q3: C (static testing examines work products without executing them).

Q4: B (2-value BVA: 0 out, 1 in, 100 in, 101 out).

Q5: C (risk-based testing = likelihood × impact).

Q6: A (testing shows presence of defects, not absence — no testing can prove zero defects).

Q7: B (test planning defines scope, approach, resources, and schedule).

Q8: C (in Agile, quality is the whole team's responsibility).

Q9: D (acceptance testing validates fitness for purpose and readiness for deployment).

Q10: B (walkthroughs are author-led with no formal preparation requirement).

Q11: B (static analysis detects code structure issues like unreachable code without execution; memory leaks and race conditions require runtime analysis).

Q12: A (1 valid partition: 18-65; 2 invalid partitions: <18 and >65).

Q13: C (2 conditions × 2 values each = 4 combinations; minimum 4 test cases for full coverage).
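The four rules behind Q13 can be enumerated mechanically — every combination of condition values becomes one column of the decision table, i.e. one test case:

```python
from itertools import product

# Conditions from Q13: customer type and order value
customer_types = ["member", "non-member"]
order_values = ["over $100", "under $100"]

# Full decision table coverage = the Cartesian product of condition values
rules = list(product(customer_types, order_values))
for i, (ctype, value) in enumerate(rules, start=1):
    print(f"Rule {i}: {ctype}, {value}")
print(len(rules))  # 4 test cases for full coverage
```

With n binary conditions the full table has 2^n rules, which is why the exam likes small, tidy examples like this one.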

Q14: C (low severity: doesn't affect functionality; low priority: fix before release but not blocking).

Q15: B (metrics-based estimation uses historical data from similar projects).

Q16: C (record/replay is test execution automation).

Q17: B (proof of concept validates the tool fits your environment before commitment).

Q18: B (testing is context-dependent — different domains need different approaches).

Q19: A (shift-left means testing earlier in the SDLC).

Q20: C (error guessing uses experience to predict where defects are likely).

Study resources — on this site

Everything you need to study for CTFL is right here. No external sites required.