Basic Assertions
Assertions are the decision points in your tests. A good assertion tells you exactly what broke. A bad assertion hides failures behind noise. Learn the difference.
1 The Hook — Why This Matters
Auckland-based SaaS company Vend had a test suite with 400 assertions. When a critical checkout bug reached production, they discovered the test had been passing for three months — because the assertion was assertTrue(true), a placeholder a developer forgot to replace. The test ran green in CI every single day. The bug cost them twelve hours of downtime and damaged customer trust.
An assertion that doesn't assert is worse than no test at all. It gives you false confidence while the application burns.
2 The Rule — The One-Sentence Version
Every test must have at least one assertion that would fail if the bug you're testing for actually exists.
If you can't describe what would make the assertion fail, you don't have a real test. Assertions should validate observable behaviour — not internal state, not implementation details, and definitely not tautologies.
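A minimal Python sketch of the rule (parse_age is a hypothetical function invented for illustration): the first test is a tautology that can never fail, while the second would fail the moment the behaviour breaks.

```python
def parse_age(raw):
    # Hypothetical function under test: parses a non-negative integer age.
    value = int(raw)
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

def test_parse_age_tautology():
    parse_age("30")
    assert True  # can never fail: this "test" proves nothing

def test_parse_age_real():
    # Fails if parse_age ever stops returning the parsed integer.
    assert parse_age("30") == 30, "Expected parse_age('30') to return 30"

test_parse_age_tautology()
test_parse_age_real()
```

Apply the litmus test to each: only the second has a condition that evaluates to false when the bug exists.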
3 The Analogy — Think Of It Like...
A smoke alarm that beeps regardless of whether there's smoke.
You install it, test the battery, and feel safe. But when the kitchen catches fire, the alarm stays silent because the sensor was never connected. A test with a weak assertion is exactly that: infrastructure that looks correct but fails its one job.
4 Watch Me Do It — Step by Step
Here are the three assertion patterns every junior automation engineer must know.
- Hard assertion (stops on first failure). Use when the precondition must be true for the rest of the test to make sense.

```python
def test_login_success():
    response = api.login("user@example.com", "password123")
    assert response.status_code == 200, f"Login failed: {response.text}"
    assert "token" in response.json(), "No auth token returned"
```

- Soft assertion (collects all failures). Use when validating multiple independent fields on a form or API response.

```python
import pytest_check as check

def test_user_profile_fields():
    profile = api.get_profile(1).json()
    check.equal(profile["name"], "Alice", "Name mismatch")
    check.is_true(profile["is_active"], "User should be active")
    check.greater(profile["login_count"], 0, "Login count should be positive")
```

- Meaningful assertion messages. Always include what you expected and what you got.

```python
# Bad
assert user.is_active

# Good
assert user.is_active, f"User should be active after email verification; status={user.status}"
```
| Type | Python | JavaScript | When to use |
|---|---|---|---|
| Equality | assert a == b | expect(a).toBe(b) | Exact match required |
| Contains | assert "x" in s | expect(s).toContain("x") | Partial match OK |
| True/False | assert flag | expect(flag).toBeTruthy() | Boolean condition |
| Throws | pytest.raises(ValueError) | expect(fn).toThrow() | Error handling paths |
| Null check | assert x is not None | expect(x).toBeDefined() | Required field exists |
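The Python column above can be exercised as a runnable sketch using plain asserts (pytest.raises is the idiomatic form for the "Throws" row; the try/except below is the dependency-free equivalent):

```python
# Runnable sketch of the Python column in the cheat sheet.
a, b = 5, 5
s = "checkout complete"
flag = True
x = {"id": 1}

assert a == b                  # Equality: exact match required
assert "checkout" in s         # Contains: partial match OK
assert flag                    # True/False: boolean condition
assert x is not None           # Null check: required field exists

# Throws: without pytest.raises, record whether the error occurred.
try:
    int("not a number")
    raised = False
except ValueError:
    raised = True
assert raised, "Expected int('not a number') to raise ValueError"
```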
In pytest, the plain assert statement gets magic introspection: if assert a == b fails, pytest shows you the actual values of a and b automatically. No need for verbose assertion libraries unless you need soft assertions.
5 When to Use It / When NOT to Use It
✅ Use hard assertions when...
- The condition is a prerequisite (login, navigation)
- A failure here invalidates the entire test
- You want fail-fast feedback
❌ Avoid hard assertions when...
- Validating multiple independent UI fields
- You want to collect all failures at once
- The fields are not interdependent
Before you apply this technique, ask:
- Do you need to verify observable behaviour, not internal state?
- Are you distinguishing between checks that must pass versus those that should ideally pass?
- Does your assertion have a clear, descriptive message for when it fails?
6 Common Mistakes — Don't Do This
🚫 Asserting implementation details
I used to think: Testing that a private method was called proves the feature works.
Actually: Tests should verify observable behaviour, not internal implementation. If you refactor the code (and you will), implementation-detail assertions break even when the feature still works.
🚫 Multiple unrelated hard assertions
I used to think: More assertions per test means better coverage.
Actually: If assertion #1 fails, you never learn whether assertions #2-5 would have caught additional issues. Use soft assertions for independent checks, or split into separate tests.
🚫 Vague assertion messages
I used to think: A bare assert with no message is enough because the stack trace shows the failing line.
Actually: When a test fails in CI at 2am, "AssertionError" with no context means someone has to open the code and guess. Always include expected vs. actual values in the message.
When this technique fails
Hard assertions stop your test immediately; if the first check is wrong, you never see the bugs behind it. Soft assertions let the test continue, but piling too many into one test buries the failure that matters under noise. Choose hard assertions for must-haves (auth, payments, critical UI) and soft assertions for nice-to-haves (formatting, optional fields).
7 Now You Try — Interview Warm-Up
Scenario: You are reviewing a teammate's test. It has one assertion: assert login_page.is_logged_in(). The test has been passing for weeks. Yesterday, a production bug allowed users to access the dashboard without authentication. The test still passed.
What is wrong with this assertion, and how would you fix it?
The problem:
The assertion checks a boolean method on a page object, but we don't know what is_logged_in() actually verifies. If it only checks for the presence of a logout button (which might appear even for unauthenticated users due to a UI bug), the assertion is too shallow. Fix: assert on something the unauthenticated user cannot see — like a user-specific element containing the actual username, or verify the URL contains /dashboard and the welcome message shows the correct name.
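A sketch of the fix, using a hypothetical page object (in a real suite these methods would wrap Selenium or Playwright calls rather than stored strings): assert on content an unauthenticated visitor could never see.

```python
class DashboardPage:
    # Hypothetical page object; real implementations would query the
    # live page instead of returning stored strings.
    def __init__(self, url, greeting_text):
        self.url = url
        self.greeting_text = greeting_text

    def current_url(self):
        return self.url

    def greeting(self):
        return self.greeting_text

def test_dashboard_shows_user_specific_content():
    page = DashboardPage("https://app.example.com/dashboard",
                         "Welcome back, Alice")
    # Assert on things an unauthenticated visitor could not see.
    assert "/dashboard" in page.current_url(), f"Unexpected URL: {page.current_url()}"
    assert "Alice" in page.greeting(), f"Greeting missing username: {page.greeting()!r}"

test_dashboard_shows_user_specific_content()
```

Unlike a bare is_logged_in() check, both assertions here fail for an anonymous session, because the URL and the personalised greeting depend on authentication actually succeeding.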
8 Self-Check — Can You Actually Do This?
Click each question to reveal the answer. If you got all three, you're ready to practice.
Q1. What is the difference between a hard and soft assertion?
A hard assertion stops the test immediately on failure. A soft assertion collects all failures and reports them at the end. Use hard assertions for critical preconditions; soft assertions for validating multiple independent fields.
Q2. Why is assertTrue(true) dangerous in a test?
It can never fail. It gives the illusion of coverage while providing zero value. Every assertion must have a condition that would evaluate to false if the bug being tested actually exists.
Q3. A form has five fields to validate. Should you use hard or soft assertions?
Soft assertions. If field 1 fails, you still want to know whether fields 2-5 are also broken. Hard assertions would hide that information by aborting the test early.