API Testing
The UI is a puppet. The API pulls the strings. If the API is wrong, no screen in the world can save you.
1 The Hook — Why This Matters
In 2022, a major New Zealand bank launched a new mobile banking feature that allowed instant transfers between accounts. The front-end looked beautiful. User testing was glowing. On launch day, customers discovered that submitting two transfer requests in rapid succession — before the first response returned — caused both to deduct from the source account. The API had no idempotency key and no server-side deduplication.
Within four hours, the bank's call centre was overwhelmed. Social media exploded. The bank had to freeze the feature, manually reconcile hundreds of duplicate transactions, and issue a public apology. The root cause? The test team had tested the transfer screen exhaustively but had only sent one request at a time via the API. They never tested concurrency.
API testing is not an afterthought. It is where the real business logic lives. When the API fails, the UI cannot compensate.
2 The Rule — The One-Sentence Version
An API contract is only as good as the tests that enforce it.
A specification document that nobody tests against is fiction. Every endpoint, every status code, every field type, and every error message must be verified automatically and repeatedly. The moment you trust the contract without testing it, you are gambling with production data.
3 The Analogy — Think Of It Like...
A restaurant order ticket and the kitchen's output.
The API specification is the order ticket: it says exactly what was requested and what should come back. The API response is the plate that leaves the kitchen. API testing checks that every ticket matches the plate: the right ingredients (fields), the right quantities (values), the right timing (performance), and the right rejection of impossible orders (error handling). A beautiful dining room (the UI) means nothing if the kitchen sends out raw chicken.
4 Watch Me Do It — Step by Step
Here is a real NZ example: testing a bank account API. Follow these steps on every API you test.
- **Obtain and verify the API specification.** Start with OpenAPI/Swagger. Verify that every endpoint, parameter, and response schema is documented. If documentation is missing or outdated, capture real requests with browser dev tools and build your own reference.
- **Design positive test cases per endpoint.** For each endpoint, test the happy path with valid data. Verify that the response status, headers, body schema, and field values match the specification.
- **Design negative and error cases.** Test every way a request can fail: invalid IDs, malformed bodies, missing required fields, and out-of-range values. Verify that error responses match the spec and provide actionable messages.
- **Test parameter combinations and edge cases.** Empty arrays, null values, maximum-length strings, Unicode characters, and very large numeric values. APIs often break at the edges because developers assume "reasonable" input.
- **Test authentication and authorisation.** Send requests with missing tokens, expired tokens, invalid tokens, and tokens belonging to other users. Verify that unauthorised access returns 401 or 403, not 200 with a different user's data.
- **Test integration chains end-to-end.** A single API call is not the whole story. Test sequences: create an account, transfer funds, verify balances, and roll back on failure. Verify that partial failures do not leave data in an inconsistent state.
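The steps above lend themselves to a table-driven design: each case pairs a request with the status code and required response fields you expect back. Here is a minimal sketch in Python; the paths, token values, and field names are hypothetical stand-ins, and the helper only checks status and field presence, standing in for full schema validation.

```python
# Table-driven API test design: each case records a request and the
# expected outcome. Paths, tokens, and field names are illustrative.
CASES = [
    {"name": "valid id",      "path": "/accounts/12345", "token": "user-a",
     "expect_status": 200, "expect_fields": {"id", "balance", "currency"}},
    {"name": "invalid id",    "path": "/accounts/abc",   "token": "user-a",
     "expect_status": 400, "expect_fields": {"error"}},
    {"name": "missing token", "path": "/accounts/12345", "token": None,
     "expect_status": 401, "expect_fields": {"error"}},
]

def check_response(case, actual_status, actual_body):
    """Return a list of failure messages (empty list means the case passed)."""
    failures = []
    if actual_status != case["expect_status"]:
        failures.append(f'{case["name"]}: expected status '
                        f'{case["expect_status"]}, got {actual_status}')
    missing = case["expect_fields"] - set(actual_body)
    if missing:
        failures.append(f'{case["name"]}: missing fields {sorted(missing)}')
    return failures
```

With a real client you would send each case over HTTP (for example with the `requests` library) and feed the live status and parsed body into `check_response`, so adding a new case is one line of data rather than a new test function.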
Example cases for the account lookup endpoint (GET /accounts/{id}):

| Test case | Request | Expected |
|---|---|---|
| Valid ID | GET /accounts/12345 (authorised) | 200 + correct schema |
| Invalid ID format | GET /accounts/abc | 400 Bad Request |
| Non-existent ID | GET /accounts/99999 | 404 Not Found |
| Unauthorized user | GET /accounts/12345 (wrong user token) | 403 Forbidden |
| Missing auth header | GET /accounts/12345 (no token) | 401 Unauthorized |
Example cases for the transfer endpoint:

| Test case | Request body | Expected |
|---|---|---|
| Valid transfer | {"from":"A","to":"B","amount":100} | 201 Created; balances updated |
| Negative amount | {"from":"A","to":"B","amount":-50} | 400 Bad Request |
| Same source and destination | {"from":"A","to":"A","amount":50} | 400 Bad Request |
| Insufficient funds | {"from":"A","to":"B","amount":999999} | 422 Unprocessable Entity; correct error message |
| Concurrent transfers | Two identical requests sent simultaneously | One succeeds; one rejected (no double-deduct) |
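The last row is exactly the scenario from the launch story. A minimal sketch of the defence, assuming an in-memory bank in place of a live API: transfers carry an idempotency key, and the server rejects a duplicate key inside a locked critical section. In a real test you would fire two concurrent HTTP POSTs at the actual endpoint instead.

```python
import threading

# In-memory stand-in for the transfer API, deduplicating by
# idempotency key so two identical requests cannot both deduct.
class Bank:
    def __init__(self, balances):
        self.balances = balances
        self.seen_keys = set()
        self.lock = threading.Lock()

    def transfer(self, key, src, dst, amount):
        with self.lock:                  # serialise the critical section
            if key in self.seen_keys:    # duplicate request: reject it
                return 409
            if self.balances[src] < amount:
                return 422
            self.seen_keys.add(key)
            self.balances[src] -= amount
            self.balances[dst] += amount
            return 201

bank = Bank({"A": 100, "B": 0})
results = []

def fire():
    # Two threads send the *same* request with the same idempotency key.
    results.append(bank.transfer("txn-1", "A", "B", 100))

threads = [threading.Thread(target=fire) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Exactly one request succeeds (201); the other is rejected (409),
# and the source balance is deducted exactly once.
```

The assertion your test suite needs is on the pair of outcomes, not either one alone: one 201, one rejection, and a single deduction.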
5 When to Use It / When NOT to Use It
✅ Use API testing when...
- The system exposes REST, GraphQL, or SOAP endpoints
- Multiple clients (web, mobile, third-party) consume the same API
- You are testing microservices or service-oriented architecture
- The API handles financial transactions, personal data, or authentication
- You need fast, repeatable regression tests that bypass the UI
❌ Don't rely on API testing alone when...
- The user journey and visual rendering are the primary risk
- Accessibility or cross-browser behaviour is the focus
- The API is unstable and changes daily (wait for contract freeze)
- You have no way to mock external dependencies and tests are flaky
- You need to verify that the UI correctly interprets API responses
Before you build API tests, ask:
- Do you have an OpenAPI/Swagger specification, or are you reverse-engineering the API from Postman?
- Can you mock external dependencies (payment gateways, third-party services), or will your tests depend on live systems?
- Are there security boundaries between APIs (auth tokens, rate limits) that need testing, or is the API open and stateless?
- What is the API's stability? If it changes weekly, is the test suite maintainable, or will it thrash?
6 Common Mistakes — Don't Do This
🚫 Only testing the happy path
I used to think: If GET /accounts/{id} returns 200 with valid data, the endpoint works.
Actually: APIs fail at the edges: invalid IDs, missing auth headers, concurrent requests, and malformed JSON. A happy-path test tells you the developer finished the feature. Negative tests tell you whether the feature is safe for production. Every endpoint needs at least as many negative cases as positive ones.
🚫 Not verifying response schemas
I used to think: If the response looks right in Postman, the schema is fine.
Actually: Field types matter. A string that should be a boolean, or a missing nullable field, will crash a mobile client that deserialises strictly. Always validate the response against the OpenAPI schema automatically. Schema drift is one of the most common causes of production incidents in API-driven systems.
🚫 Ignoring rate limiting and auth testing
I used to think: Rate limiting is an infrastructure concern, not a test concern.
Actually: If rate limiting is missing or misconfigured, a single script can exhaust your API quota or trigger a denial-of-service. Auth testing is equally critical: verify that tokens expire, that refresh tokens rotate, and that one user's token cannot access another user's resources. These are not "nice to have." They are production blockers.
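A rate-limit test is mechanical once you treat it as a test concern. This sketch substitutes a fake fixed-window limiter for the live endpoint so the logic is self-contained; the limit of 10 per window is an assumption. Against a real API the loop would issue HTTP requests and assert that the first over-limit call returns 429 (ideally with a Retry-After header).

```python
# Fake endpoint with a fixed-window limit of 10 requests, standing in
# for a live API so the test logic runs without a network.
LIMIT = 10
calls = {"count": 0}

def call_api():
    calls["count"] += 1
    return 429 if calls["count"] > LIMIT else 200

# The test: hammer the endpoint past the limit and record each status.
statuses = [call_api() for _ in range(LIMIT + 5)]
# Expect LIMIT successes followed only by 429s.
```

The same hammer-and-assert shape covers auth checks: replay a request with an expired token and assert 401, then with another user's valid token and assert 403.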
When this technique fails
API testing fails when tests are only written for the happy path. A deployed API with no negative tests will eventually receive unexpected input—malformed JSON, missing auth headers, or concurrent calls—and it will fail in production rather than in testing. Failure also occurs when schema validation is skipped: systems that parse APIs strictly (mobile apps, microservices) crash when the schema drifts. Finally, if security tests (auth, rate limiting, injection) are not part of the plan, the API becomes a liability.
7 Now You Try — Interview Warm-Up
Scenario: An NZ real estate platform has a POST /listings endpoint that creates a property listing. It accepts price (number, required), address (string, required), and agent_id (string, required). A successful response is 201 with the created listing.
Task: Design three negative test cases that a happy-path tester would miss.
Three negative test cases:
- Negative price: POST with `price: -500000`. Expected: 400 Bad Request. Many systems accept negative numbers and create listings that break search sorting.
- Agent ID substitution: POST with an `agent_id` that belongs to a different agency. Expected: 403 Forbidden. Without this test, one agent could create listings under another agency's identity.
- Extreme address length: POST with an address of 10,000 characters. Expected: 400, or 201 with the value truncated to a documented length limit. APIs without input validation can crash databases or break downstream consumers that expect reasonable string lengths.
Tip: In interviews, always mention business logic alongside technical validation. A negative price is a technical bug; an agent ID substitution is a security and business integrity bug.
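The three cases translate directly into code. The validation rules below (positive price required, a 255-character address cap, agency match on `agent_id`) are assumptions for illustration, not the platform's actual rules; the stub stands in for the server so the assertions run without a network.

```python
# Hypothetical server-side validation for POST /listings, used only to
# show how the three negative cases would be asserted.
AGENT_AGENCY = {"agent-1": "agency-a", "agent-2": "agency-b"}
MAX_ADDRESS = 255  # assumed length limit

def create_listing(body, caller_agency):
    """Return the HTTP status a compliant server should send."""
    price = body.get("price")
    address = body.get("address")
    agent_id = body.get("agent_id")
    if price is None or price <= 0:
        return 400   # negative or missing price
    if address is None or len(address) > MAX_ADDRESS:
        return 400   # missing or absurdly long address
    if AGENT_AGENCY.get(agent_id) != caller_agency:
        return 403   # agent belongs to a different agency (or is unknown)
    return 201

valid = {"price": 500000, "address": "1 Queen St", "agent_id": "agent-1"}
```

Each negative case then becomes a one-line assertion against the stub, which is exactly the shape a live-API test would take with real POSTs.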
8 Self-Check — Can You Actually Do This?
Click each question to reveal the answer. If you got all three, you're ready to practise.
Q1. What is contract testing and why does it matter?
Contract testing verifies that an API provider's responses match the agreed specification (e.g., OpenAPI/Swagger) and that consumer expectations align with what the provider delivers. It matters because schema drift — adding a required field or changing a type — breaks consumers silently. Tools like Pact enforce contracts between services so breaking changes are caught in CI/CD before deployment.
Q2. How do you test an API with no documentation?
Use browser developer tools or a proxy (like Charles or Fiddler) to capture real requests and responses. Replay them in Postman or Insomnia. Explore endpoints systematically: change IDs, omit fields, and observe behaviour. Document your findings as you go, building an ad-hoc spec. Finally, ask the development team for documentation — but don't wait for it before starting exploratory API testing.
Q3. Why should API tests run in CI/CD rather than only locally?
Running API tests in CI/CD ensures that every code change is validated against the contract before merge. It catches regressions immediately, provides a shared source of truth for the team, and removes the "works on my machine" problem. Local testing is valuable for development; CI/CD testing is what protects production.
9 Interview Prep — What They'll Ask
These are real questions from Test Lead interviews in the NZ market. Click to reveal a strong answer.
Q1. How do you test an API with no documentation?
I start by capturing real traffic with browser dev tools or a proxy. I replay requests in Postman, systematically varying parameters to discover behaviour. I explore edge cases: empty payloads, large payloads, special characters, and invalid auth. I document my findings in a shared spec as I go. Then I ask the development team for formal documentation, but I don't block testing on it. Exploration reveals more than documentation ever will.
Q2. How do you ensure API tests are maintainable as the API evolves?
I use contract testing with Pact to enforce consumer-provider boundaries. I store test data separately from test logic so schema changes require updates in one place. I version my test collections alongside the API version. I also run a subset of smoke tests against every environment so that environment-specific configuration drift is caught early, not during release week.
Q3. What's your approach to testing API performance?
I define SLIs with the team: p95 response time, error rate, and throughput under expected and peak load. I use k6 or JMeter to simulate realistic traffic patterns, including spike tests that mimic marketing campaigns or end-of-month batch jobs. I test in an environment that mirrors production architecture, and I monitor server-side metrics (CPU, memory, database connections) during the test to identify bottlenecks, not just symptoms.
Q4. How do you handle flaky API tests?
First, I identify the root cause: timing issues, unstable test data, external dependencies, or non-deterministic ordering. I mock external services so the test controls its environment. I use polling or retries only where the spec genuinely requires async behaviour. I isolate tests so one failure does not cascade. If a test is flaky and the root cause is the system (not the test), I treat it as a product defect and raise a bug, not a test issue.