Junior Automation · Core Skill

Basic Assertions

Assertions are the decision points in your tests. A good assertion tells you exactly what broke. A bad assertion hides failures behind noise. Learn the difference.

Junior Automation · ISTQB CTAL-TAE v2.0, Chapter 6 · ~10 min read + exercise

1 The Hook — Why This Matters

Auckland-based SaaS company Vend had a test suite with 400 assertions. When a critical checkout bug reached production, they discovered the test had been passing for three months — because the assertion was assertTrue(true), a placeholder a developer forgot to replace. The test ran green in CI every single day. The bug cost them twelve hours of downtime and damaged customer trust.

An assertion that doesn't assert is worse than no test at all. It gives you false confidence while the application burns.

2 The Rule — The One-Sentence Version

Every test must have at least one assertion that would fail if the bug you're testing for actually exists.

If you can't describe what would make the assertion fail, you don't have a real test. Assertions should validate observable behaviour — not internal state, not implementation details, and definitely not tautologies.
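
To make the rule concrete, here is a minimal sketch (the `apply_discount` function is a hypothetical stand-in): the first test can never fail, so it can never catch anything; the second fails precisely when the behaviour it names is broken.

    # Tautology: can never fail, so it can never catch the bug.
    def test_discount_placeholder():
        assert True  # the "smoke alarm" with no sensor

    # Real assertion: fails exactly when the discount logic is wrong.
    def test_discount_applied():
        total = apply_discount(price=100.00, percent=10)  # hypothetical function
        assert total == 90.00, f"Expected 90.00 after 10% discount, got {total}"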

3 The Analogy — Think Of It Like...

Analogy

A smoke alarm that beeps regardless of whether there's smoke.

You install it, test the battery, and feel safe. But when the kitchen catches fire, the alarm stays silent because the sensor was never connected. A test with a weak assertion is exactly that: infrastructure that looks correct but fails its one job.

4 Watch Me Do It — Step by Step

Here are the three assertion patterns every junior automation engineer must know.

  1. Hard assertion (stops on first failure). Use when the precondition must be true for the rest of the test to make sense.
    def test_login_success():
        response = api.login("user@example.com", "password123")
        assert response.status_code == 200, f"Login failed: {response.text}"
        assert "token" in response.json(), "No auth token returned"
  2. Soft assertion (collects all failures). Use when validating multiple independent fields on a form or API response.
    import pytest_check as check
    
    def test_user_profile_fields():
        profile = api.get_profile(1).json()
        check.equal(profile["name"], "Alice", "Name mismatch")
        check.is_true(profile["is_active"], "User should be active")
        check.greater(profile["login_count"], 0, "Login count should be positive")
  3. Meaningful assertion messages. Always include what you expected and what you got.
    # Bad
    assert user.is_active
    
    # Good
    assert user.is_active, f"User should be active after email verification; status={user.status}"

Common assertion types

| Type       | Python                      | JavaScript                  | When to use           |
|------------|-----------------------------|-----------------------------|-----------------------|
| Equality   | `assert a == b`             | `expect(a).toBe(b)`         | Exact match required  |
| Contains   | `assert "x" in s`           | `expect(s).toContain("x")`  | Partial match OK      |
| True/False | `assert flag`               | `expect(flag).toBeTruthy()` | Boolean condition     |
| Throws     | `pytest.raises(ValueError)` | `expect(fn).toThrow()`      | Error handling paths  |
| Null check | `assert x is not None`      | `expect(x).toBeDefined()`   | Required field exists |
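
The "Throws" row deserves a quick illustration, since it's the one pattern that doesn't use a bare assert. A minimal pytest sketch:

    import pytest

    def divide(a, b):
        return a / b

    def test_divide_by_zero_raises():
        # The test fails if the expected exception is NOT raised.
        with pytest.raises(ZeroDivisionError):
            divide(1, 0)
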
Pro tip: In pytest, the built-in assert statement gets magic introspection. If assert a == b fails, pytest shows you the actual values of a and b automatically. No need for verbose assertion libraries unless you need soft assertions.
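
For example, a bare comparison is all pytest needs (a minimal sketch; the failure output shown in the comment is approximate):

    def test_status_code():
        status = 404  # imagine this came from a real HTTP response
        assert status == 200
        # On failure, pytest's introspection reports roughly:
        #   E   assert 404 == 200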

5 When to Use It / When NOT to Use It

✅ Use hard assertions when...

  • The condition is a prerequisite (login, navigation)
  • A failure here invalidates the entire test
  • You want fail-fast feedback

❌ Avoid hard assertions when...

  • Validating multiple independent UI fields
  • You want to collect all failures at once
  • The fields are not interdependent

Before you apply this technique, ask:

  • Do you need to verify observable behaviour, not internal state?
  • Are you distinguishing between checks that must pass versus those that should ideally pass?
  • Does your assertion have a clear, descriptive message for when it fails?

6 Common Mistakes — Don't Do This

🚫 Asserting implementation details

I used to think: Testing that a private method was called proves the feature works.
Actually: Tests should verify observable behaviour, not internal implementation. If you refactor the code (and you will), implementation-detail assertions break even when the feature still works.
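
A sketch of the difference, assuming a hypothetical cart object with a private `_recalculate` helper (`cart` here would be a pytest fixture providing the object under test):

    from unittest.mock import patch

    # Brittle: verifies HOW the total is computed. Renaming the private
    # helper breaks this test even though the feature still works.
    def test_total_checks_implementation(cart):
        with patch.object(cart, "_recalculate") as recalc:
            cart.add_item("widget", price=5.00)
            recalc.assert_called_once()

    # Robust: verifies WHAT the user observes. Survives refactoring.
    def test_total_checks_behaviour(cart):
        cart.add_item("widget", price=5.00)
        assert cart.total == 5.00, f"Expected total 5.00, got {cart.total}"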

🚫 Multiple unrelated hard assertions

I used to think: More assertions per test means better coverage.
Actually: If assertion #1 fails, you never learn whether assertions #2-5 would have caught additional issues. Use soft assertions for independent checks, or split into separate tests.
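
One way to restructure, reusing the hypothetical `api` client from step 2 above: fetch once in a fixture, then let each independent concern fail on its own.

    import pytest

    @pytest.fixture(scope="module")
    def profile():
        # One network call shared by all three tests below.
        return api.get_profile(1).json()  # hypothetical client from earlier

    def test_profile_name(profile):
        assert profile["name"] == "Alice", f"Name mismatch: {profile['name']}"

    def test_profile_is_active(profile):
        assert profile["is_active"], "User should be active"

    def test_profile_login_count(profile):
        assert profile["login_count"] > 0, "Login count should be positive"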

🚫 Vague assertion messages

I used to think: A bare assert with no message is enough because the stack trace shows the failing line.
Actually: When a test fails in CI at 2 a.m., "AssertionError" with no context means someone has to open the code and guess. Always include expected vs. actual values in the message.

When this technique fails

Hard assertions stop your test immediately; if your first check is flawed, you never learn about later bugs in the same flow. Soft assertions let the test continue, but they rely on a final reporting step: in frameworks where you must explicitly collect results (pytest-check does this for you automatically), a forgotten assert-all call means a green test that is quietly masking failures. Choose hard assertions for must-haves (auth, payments, critical UI) and soft assertions for nice-to-haves (formatting, optional fields).
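
To see how a passing test can mask failures, consider this hand-rolled collector (an illustrative sketch, not a real library; plugins like pytest-check surface failures for you):

    class SoftAssert:
        def __init__(self):
            self._failures = []

        def check(self, condition, message):
            # Record failures instead of raising immediately.
            if not condition:
                self._failures.append(message)

        def assert_all(self):
            # Without this call, recorded failures never surface.
            if self._failures:
                raise AssertionError("; ".join(self._failures))

    def test_profile_fields():
        soft = SoftAssert()
        soft.check(False, "this failure is only recorded, not raised")
        soft.assert_all()  # delete this line and the test passes green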

7 Now You Try — Interview Warm-Up

🎯 Interactive Exercise

Scenario: You are reviewing a teammate's test. It has one assertion: assert login_page.is_logged_in(). The test has been passing for weeks. Yesterday, a production bug allowed users to access the dashboard without authentication. The test still passed.

What is wrong with this assertion, and how would you fix it?

The problem:

The assertion checks a boolean method on a page object, but we don't know what is_logged_in() actually verifies. If it only checks for the presence of a logout button (which might appear even for unauthenticated users due to a UI bug), the assertion is too shallow. Fix: assert on something the unauthenticated user cannot see — like a user-specific element containing the actual username, or verify the URL contains /dashboard and the welcome message shows the correct name.
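
A sketch of the stronger version, assuming a hypothetical page object exposing `current_url` and `welcome_text` accessors:

    def test_login_lands_on_authenticated_dashboard(login_page):
        login_page.login("user@example.com", "password123")

        # Assert on state an unauthenticated visitor cannot reach or see.
        assert "/dashboard" in login_page.current_url, (
            f"Expected dashboard URL, got {login_page.current_url}"
        )
        assert "user@example.com" in login_page.welcome_text, (
            "Welcome message should identify the logged-in user"
        )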

8 Self-Check — Can You Actually Do This?

Click each question to reveal the answer. If you got all three, you're ready to practice.

Q1. What is the difference between a hard and soft assertion?

A hard assertion stops the test immediately on failure. A soft assertion collects all failures and reports them at the end. Use hard assertions for critical preconditions; soft assertions for validating multiple independent fields.

Q2. Why is assertTrue(true) dangerous in a test?

It can never fail. It gives the illusion of coverage while providing zero value. Every assertion must have a condition that would evaluate to false if the bug being tested actually exists.

Q3. A form has five fields to validate. Should you use hard or soft assertions?

Soft assertions. If field 1 fails, you still want to know whether fields 2-5 are also broken. Hard assertions would hide that information by aborting the test early.

Interview prep

Interview questions test whether you know when and how to use hard and soft assertions effectively.

Q1: You're testing a payment flow in Xero. The user's balance must not go negative, but we'd also like to verify the transaction ID exists. Which assertion should be hard and which soft?
The balance check must be hard: if it fails, the system allowed an invalid state and the entire test should fail. The transaction ID check can be soft: a missing ID is a data gap, not proof the payment failed, so record it without aborting the run (see the sketch after these questions).
Q2: A soft assertion in your test fails, but the test status shows 'PASS'. How might this bite you in a CI pipeline?
Your CI system might treat 'PASS' as success and merge the code, even though that soft assertion caught a real bug. The bug makes it to production. Always review soft assertion failures in reports, not just test status.
Q3: When you write a hard assertion, what should always accompany it?
A descriptive message. Note that Jest's `toBe` matcher doesn't accept a message argument, so attach context in a form your framework supports, e.g. pytest's `assert result, "Payment deduction must succeed before returning receipt"`. Future developers, or you under pressure, will understand why this check exists.
Q4: You're testing Trade Me's property search. You assert that at least one property appears, that the price is shown, and that the address is visible. Why is this risky?
You're testing three unrelated concerns in one test. If the test fails, you don't know which feature broke. Split it into three tests: one for search results appearing, one for price display, one for address display. Each test fails for one reason only.
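
To close the loop on Q1, a minimal sketch mixing both assertion kinds (hypothetical `api.charge` client provided as a fixture; pytest-check as in section 4):

    import pytest_check as check

    def test_payment_never_overdraws(api):
        result = api.charge(account_id=42, amount=50.00)  # hypothetical client

        # Hard: a negative balance is an invalid state; abort immediately.
        assert result["balance"] >= 0, f"Balance went negative: {result['balance']}"

        # Soft: a missing transaction ID is a data gap, not a failed payment;
        # record it and let the test finish.
        check.is_not_none(result.get("transaction_id"), "Transaction ID missing")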