Black Box · Specification-Based

Cause-Effect Graphing

A visual technique that maps causes (inputs and conditions) to effects (outputs and actions) using logical operators. Cause-effect graphing is the systematic predecessor to decision table testing — it reveals which condition combinations actually matter before you commit them to a table.

Senior Test Lead · ISTQB CTFL 4.4 · CTAL-TA 3.3

What it is

Cause-effect graphing (CEG) was developed in the 1970s as a formal way to model the logical relationships between inputs and outputs in a system specification. Before building a decision table, you draw a graph that makes the logic explicit — causes on the left, effects on the right, connected by logical relationships.

A cause is any condition that can be true or false: a field value, a user permission, a system state, a business rule. An effect is any observable output or action: an error message displayed, a record saved, an email sent, an account locked.

The graph is not the deliverable — the decision table derived from it is. But the graph forces you to think through every logical path before you write a single test case. It prevents the common mistake of building a decision table from intuition and missing interaction paths.

Why senior-level? Cause-effect graphing requires reading a specification carefully enough to extract all causes and effects, then modelling their logical relationships correctly. Getting the operators wrong produces a decision table with gaps or redundant cases. This is a design skill, not a mechanical one.

Logical operators in cause-effect graphs

The graph uses five logical operators to connect causes to effects:

  • AND — all connected causes must be true for the effect to occur. Example: valid username AND valid password AND account not locked → login success.
  • OR — at least one connected cause must be true. Example: invalid username OR invalid password → show generic error.
  • NOT — the effect occurs when the cause is absent (false). Example: NOT account locked → allow login attempt.
  • NAND (NOT AND) — the effect occurs when NOT all causes are true simultaneously. Useful for mutual-exclusion rules.
  • NOR (NOT OR) — the effect occurs only when all causes are false. Example: no errors detected → proceed to next step.
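The five operators are just boolean connectives. A minimal sketch in Python (the function names are illustrative, not part of any standard notation):

```python
# The five CEG operators as plain boolean functions over cause values.
def c_and(*causes):   # effect fires only when every cause is true
    return all(causes)

def c_or(*causes):    # effect fires when at least one cause is true
    return any(causes)

def c_not(cause):     # effect fires when the cause is false
    return not cause

def c_nand(*causes):  # effect fires unless all causes are true together
    return not all(causes)

def c_nor(*causes):   # effect fires only when every cause is false
    return not any(causes)
```

For example, the login rule "valid username AND valid password AND account not locked" evaluates as `c_and(valid_user, valid_pw, not_locked)`.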

Causes and effects are numbered (C1, C2… for causes; E1, E2… for effects) so they can be referenced unambiguously in the derived decision table.

How to apply it

  1. Read the specification — identify every distinct input condition and system state that can be true or false. These become your causes (C1, C2, C3…).
  2. Identify effects — identify every distinct output or action the system can produce. These become your effects (E1, E2, E3…).
  3. Draw the graph — place causes on the left, effects on the right. Connect them with the appropriate logical operators. An intermediate node can be used when a combination of causes first produces an intermediate condition, which then triggers an effect.
  4. Check for constraints — some combinations of causes are impossible (mutually exclusive) or always occur together. Mark these with constraint notations (E = exclusive, I = inclusive, O = one and only one, R = requires).
  5. Convert to a decision table — enumerate the cause combinations that produce each effect. Each column in the decision table becomes one candidate test case.
  6. Derive test cases — one test case per column in the decision table, covering each distinct combination of causes.
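The constraint notations in step 4 can be expressed as predicates that prune impossible combinations before the table is built. A sketch under assumed cause names (C1 "payment by card", C2 "payment by cash" are hypothetical examples, not from the specification above):

```python
from itertools import product

def exclusive(*causes):          # E: at most one cause may be true
    return sum(causes) <= 1

def one_and_only_one(*causes):   # O: exactly one cause must be true
    return sum(causes) == 1

def requires(a, b):              # R: if cause a is true, cause b must be true
    return (not a) or b

# Two mutually exclusive causes (E constraint): a customer cannot pay
# by card and by cash in the same transaction.
combos = [(c1, c2) for c1, c2 in product([True, False], repeat=2)
          if exclusive(c1, c2)]
# The impossible (True, True) combination is pruned; three remain.
```

Filtering before enumeration keeps impossible scenarios out of the decision table entirely, rather than leaving them to be spotted by hand later.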

Do not skip the graph step. Teams that jump straight to a decision table often miss effect-producing paths because they are reasoning informally. The graph makes every path visible before you commit to a table structure.

Worked example: user login

The specification reads: “A user can log in if their username is valid, their password is correct, and their account is not locked. If the username or password is invalid, show a generic error. If the account is locked, show a lock message. After three failed attempts, lock the account.”

First, extract causes and effects:

  • C1: Username is valid
  • C2: Password is correct
  • C3: Account is not locked
  • C4: This is the third consecutive failed attempt
  • E1: Login succeeds (redirect to dashboard)
  • E2: Show generic “invalid username or password” error
  • E3: Show “account locked” message
  • E4: Lock the account

The logical relationships: E1 fires when C1 AND C2 AND C3. E3 fires when NOT C3. E2 fires when (NOT C1 OR NOT C2) AND C3. E4 fires when C4.
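These four relationships transcribe directly into code, which is a convenient way to check the graph's logic before deriving the table (a sketch; the function name is arbitrary):

```python
# The four login rules from the graph, as boolean functions of causes C1..C4.
def effects(c1, c2, c3, c4):
    e1 = c1 and c2 and c3              # E1: login succeeds
    e2 = (not c1 or not c2) and c3     # E2: generic error shown
    e3 = not c3                        # E3: lock message shown
    e4 = c4                            # E4: account gets locked
    return e1, e2, e3, e4
```

Evaluating `effects(True, True, False, False)` — correct credentials against an already-locked account — yields only E3, the easily missed path discussed below.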

Login cause-effect — derived decision table (6 test cases)

  Condition / Effect         TC1   TC2   TC3   TC4   TC5   TC6
  C1 Username valid           T     T     T     F     F     F
  C2 Password correct         T     T     F     T     F     F
  C3 Account not locked       T     F     T     T     T     F
  C4 Third failed attempt     F     F     T     T     T     F
  E1 Login success            X     -     -     -     -     -
  E2 Generic error shown      -     -     X     X     X     -
  E3 Lock message shown       -     X     -     -     -     X
  E4 Account locked           -     -     X     X     X     -

Six test cases cover all meaningful cause-effect combinations. TC2 is particularly easy to miss without the graph: the account is already locked (C3 is false) even when the credentials are correct (C1 and C2 are true). The graph forces you to consider this path explicitly.

TC6 represents someone trying to log in when the account was locked by a previous session — another easily overlooked case. The graph surfaces it because C3 being false independently of the other causes produces E3 regardless of C1, C2, and C4.
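A quick way to sanity-check the table is to transcribe each column's cause vector and evaluate the graph's rules against it (a sketch in Python; the expected effects follow the relationships stated earlier):

```python
# E1 = C1 and C2 and C3, E2 = (not C1 or not C2) and C3, E3 = not C3, E4 = C4
def effects(c1, c2, c3, c4):
    return (c1 and c2 and c3,
            (not c1 or not c2) and c3,
            not c3,
            c4)

# Cause vectors (C1, C2, C3, C4) for the six test cases.
cases = {
    "TC1": (True,  True,  True,  False),  # happy path: E1
    "TC2": (True,  True,  False, False),  # locked despite valid creds: E3
    "TC3": (True,  False, True,  True),   # bad password, third failure: E2, E4
    "TC4": (False, True,  True,  True),   # bad username, third failure: E2, E4
    "TC5": (False, False, True,  True),   # both invalid, third failure: E2, E4
    "TC6": (False, False, False, False),  # already locked, both invalid: E3
}

for name, causes in cases.items():
    print(name, effects(*causes))
```

If any evaluated tuple disagrees with the effect marks in the table, either the table or the graph is wrong — catching that before execution is the point of the exercise.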

From graph to decision table: the key step

Once the graph is drawn, converting to a decision table is mechanical:

  1. List every cause as a row in the conditions section.
  2. Enumerate valid combinations of true/false values. Eliminate impossible combinations (marked by constraint notation on the graph).
  3. For each valid combination, evaluate which effects fire using the logical operators from the graph.
  4. Each column = one test case. Collapse columns where the effects are identical and no intermediate distinction matters.

The number of columns in the table is bounded by the number of possible cause combinations minus the constrained/impossible ones. For n boolean causes, you start with 2ⁿ combinations and prune down. With four causes, you start at 16 — the login example above pruned to 6 meaningful cases because several combinations produce identical effects.
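The pruning can be demonstrated mechanically: enumerate all 2ⁿ combinations, evaluate the effects for each, and group combinations by their effect signature. A sketch using the login rules (before applying constraints such as "a third failed attempt cannot coincide with a success", which would prune further):

```python
from itertools import product

def effects(c1, c2, c3, c4):
    # Same rules as the worked login example.
    return (c1 and c2 and c3,
            (not c1 or not c2) and c3,
            not c3,
            c4)

# Group the 16 raw combinations by which effects they produce.
signatures = {}
for combo in product([True, False], repeat=4):
    signatures.setdefault(effects(*combo), []).append(combo)

print(len(signatures))  # → 6 distinct effect signatures among 16 combinations
```

Each distinct signature is a candidate column; which representative combination you keep per signature (and which constrained combinations you discard) is the test-design judgment the technique demands.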

ISTQB mapping

ISTQB reference

  Syllabus ref      Topic                                                                                Level
  CTFL 4.4          Cause-effect graphing as precursor to decision table design                          Foundation (awareness)
  CTAL-TA 3.3       Cause-effect graphing — formal application, logical operators, constraint notation   Advanced / Senior
  CTAL-TA 3.3 K4    Analyse a specification to create a cause-effect graph and derive a decision table   Advanced LO

At Foundation level you need to know that cause-effect graphing exists and that decision tables can be derived from it. At Advanced (CTAL-TA) level you must be able to construct the graph and derive the table from a real specification — this is a K4 (analyse) learning objective.

Common mistakes

  • Confusing causes with effects — “account locked” can be both a cause (a system state that prevents login) and an effect (the action of locking after three failures). Be precise: is this something that exists before the action, or something that happens as a result of the action? Number them separately.
  • Missing negation paths — every “if X then Y” in a spec implies a “if NOT X” path. Draw it. Teams routinely miss the false branch of a cause because they focus on the happy path.
  • Treating the graph as the deliverable — the graph is a thinking tool. The decision table is the test design output. Always complete the conversion.
  • Ignoring constraint notation — if two causes are mutually exclusive and you test them as both true, you are testing an impossible scenario. Mark exclusivity constraints on the graph before generating the table.
  • Not revisiting the graph when specs change — a change to one condition can ripple through the graph and invalidate several test cases. Treat the graph as a living document alongside the spec.

Cause-effect graphing leads directly to Decision Table Testing — the graph is the analysis step, the table is the design step. If you are already comfortable with decision tables, you are doing informal cause-effect analysis. Formalising it as a graph is worth the effort when the specification is complex or ambiguous.

The causes in a CEG are equivalence partitions. Combining cause-effect graphing with Equivalence Partitioning ensures each cause is itself a well-defined partition (not an ill-defined or overlapping condition).

When causes are not independent — when knowing one cause changes the probability or meaning of another — consider Domain Analysis to understand the inter-variable relationships before building the graph.

Practice this technique: Try Junior Practice 09 — Checkbox & radio logic.