Level 2 · Mid-Level Automation Engineer

Mid-level automation techniques

Write it once, reuse it everywhere. At mid-level you stop copy-pasting scripts and start shaping the framework. You test APIs directly, parameterise your data, and take responsibility for diagnosing why a test is flaky.

Mid-Level ISTQB CT-TAE — Ch. 2 & 3

1. Page Object Model (POM)

Locators belong in one place. A page object wraps the locators and actions for a page behind a clean API — tests call loginPage.loginAs(user), not driver.findElement(...).

  • Tests read like scenarios — no selectors in the test body.
  • UI change → one fix — update the page object, every test benefits.
  • Actions, not just fields — expose business verbs (addItemToCart), not raw setters.
Mini-Hunt: The Right Boundary

Question: Should the page object method return void, or return the next page object?

Example: loginPage.submit() takes you to the dashboard.

Answer: Return the next page object — declare submit() as returning DashboardPage. This makes navigation explicit in the test, enforces synchronisation (you know you’re on the next page), and enables method chaining.
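
A minimal sketch of the pattern in Java with Selenium. The class names, locators, and login flow are assumptions for illustration, not a prescribed implementation:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical page object: locators and actions live here, never in tests.
public class LoginPage {
    private final WebDriver driver;
    private final By username = By.id("username");      // assumed locators
    private final By password = By.id("password");
    private final By submit   = By.id("login-submit");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // A business verb, not raw setters. Returning the next page object makes
    // navigation explicit in the test and lets calls chain.
    public DashboardPage loginAs(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
        return new DashboardPage(driver);
    }
}

// Stub for the sketch; in a real framework this lives in its own file
// and waits for the dashboard to finish loading in its constructor.
class DashboardPage {
    DashboardPage(WebDriver driver) { }
}
```

A test then reads new LoginPage(driver).loginAs("alice", "secret"), and the return type itself documents where the user ends up.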

2. API test automation

Most business rules live in the API, not the UI. API tests are faster, more stable, and more focused. At mid-level you should own the API layer.

  • Tools — REST Assured (Java), requests + pytest (Python), supertest (JS), Postman/Newman for collections.
  • Schema validation — assert the response shape against a JSON schema, not just a single field.
  • Status codes & headers — check the contract, not just the body.
  • Auth — know how to obtain a token, refresh it, and share it across a suite without leaking into test bodies.
  • Negative paths — 400/401/403/404/409/422 deserve tests too. UI rarely exposes all of these clearly.
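
A hedged sketch of a contract-style check using REST Assured with its JSON-schema module, wrapped in JUnit 5. The base URL, the 201-on-create convention, the AuthFixture helper, and the schema path are all assumptions:

```java
import static io.restassured.RestAssured.given;
import static io.restassured.module.jsv.JsonSchemaValidator.matchesJsonSchemaInClasspath;
import static org.hamcrest.Matchers.notNullValue;

import org.junit.jupiter.api.Test;

class OrderContractTest {

    @Test
    void createOrderHonoursTheContract() {
        String token = AuthFixture.bearerToken(); // hypothetical shared auth fixture

        given()
            .baseUri("https://api.example.com")        // assumed base URL
            .auth().oauth2(token)
            .contentType("application/json")
            .body("{\"sku\": \"abc-1\", \"qty\": 1}")
        .when()
            .post("/orders")
        .then()
            .statusCode(201)                           // the contract: status code,
            .header("Location", notNullValue())        // headers,
            .body(matchesJsonSchemaInClasspath("schemas/order.json")); // and shape.
    }
}
```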

Full API testing reference →

Mini-Hunt: What’s missing?

Test: POST /orders returns 200 and {"orderId": "abc"}. The test asserts the status code is 200.

What’s the biggest gap?

Answer: It never verifies the order was actually created. Follow up with a GET /orders/abc (or DB check) to prove persistence — a 200 just means the handler didn’t crash.
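
One way to close that gap, sketched with REST Assured against a hypothetical /orders endpoint (base URL and field names assumed):

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

class OrderPersistenceTest {

    private static final String BASE = "https://api.example.com"; // assumed

    @Test
    void createdOrderCanBeReadBack() {
        // Create, and capture the ID the API hands back.
        String orderId =
            given().baseUri(BASE).contentType("application/json")
                   .body("{\"sku\": \"abc-1\", \"qty\": 1}")
            .when().post("/orders")
            .then().statusCode(200)
                   .extract().path("orderId");

        // Read back: the POST's 200 alone proves only that the handler returned.
        given().baseUri(BASE)
        .when().get("/orders/" + orderId)
        .then().statusCode(200)
               .body("orderId", equalTo(orderId));
    }
}
```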

3. Data-driven testing

One test body, many inputs. Parameterisation turns a single flow into systematic coverage — and turns your EP / BVA / decision tables into tests.

  • JUnit 5 — @ParameterizedTest with @CsvSource, @MethodSource.
  • pytest — @pytest.mark.parametrize.
  • Mocha/Jest — test.each([...]).
  • Sources — inline, CSV, JSON, or a generator function. External files are easier for non-devs to edit but harder to refactor.

Name each parameterised case so failures point straight at the bad input. login[user=alice, expect=success] beats login[0].
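
A JUnit 5 sketch of both points — parameterised inputs and named cases. The validation logic is a stand-in so the example is self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class LoginValidationTest {

    // The name template makes failures point straight at the bad input.
    @ParameterizedTest(name = "login[user={0}, expect={1}]")
    @CsvSource({
        "alice, SUCCESS",
        "bob,   LOCKED",
        "'',    EMPTY_USERNAME"   // boundary: empty input
    })
    void loginOutcome(String user, String expected) {
        assertEquals(expected, validate(user));
    }

    // Stand-in for the real system under test.
    private String validate(String user) {
        if (user == null || user.isEmpty()) return "EMPTY_USERNAME";
        return "bob".equals(user) ? "LOCKED" : "SUCCESS";
    }
}
```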

4. Test isolation & data setup

Flaky suites almost always fail here. Every test must:

  • Create its own data (via API/DB), not rely on what a previous test left behind.
  • Run in any order — enforce this by running with --random / --shuffle.
  • Clean up or use disposable data (unique IDs, per-test users, teardown via API).
  • Not share state through global variables, singletons, or files on disk.

Prefer API/DB setup over UI setup — it’s orders of magnitude faster and less fragile. Use the UI only for the thing the test is actually asserting.
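
A sketch of per-test, disposable data in JUnit 5; TestApi stands in for whatever API client your framework exposes:

```java
import java.util.UUID;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class CheckoutTest {

    private String userId;

    // Fresh data per test, created over the API: order-independent and fast.
    @BeforeEach
    void createUser() {
        String unique = "user-" + UUID.randomUUID();  // unique ID, no collisions
        userId = TestApi.createUser(unique);          // hypothetical API client
    }

    @AfterEach
    void deleteUser() {
        TestApi.deleteUser(userId);                   // teardown via API, not UI
    }

    @Test
    void checkoutEmptiesCart() {
        // Only the behaviour under test touches the UI;
        // all setup already happened over the API.
    }
}
```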

5. Fighting flakiness

A flaky test is worse than no test — it erodes trust. At mid-level you’re expected to diagnose, not just retry.

  • Sync, not sleep — every Thread.sleep/time.sleep in the framework is a bug waiting to happen.
  • Deterministic data — freeze time, fix random seeds, use fixed fixtures.
  • One concept per test — long end-to-end tests that hit every layer flake for unrelated reasons.
  • Quarantine, don’t ignore — move flaky tests to a separate lane, fix them within a sprint.
  • Measure — track pass rate per test over time. The top offenders are where you focus.
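
Two of those fixes sketched in Java. The element id is an assumption; WebDriverWait and Clock.fixed are standard Selenium 4 and java.time APIs:

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneOffset;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class FlakinessFixes {

    // Sync, not sleep: poll for the condition the test actually needs
    // instead of guessing a duration with Thread.sleep(5000).
    void waitForConfirmation(WebDriver driver) {
        new WebDriverWait(driver, Duration.ofSeconds(10))
            .until(ExpectedConditions.visibilityOfElementLocated(
                By.id("order-confirmation")));        // assumed element id
    }

    // Deterministic data: a frozen clock makes "today + 30 days" logic
    // return the same answer on every run.
    Clock frozen = Clock.fixed(Instant.parse("2024-06-01T00:00:00Z"), ZoneOffset.UTC);
}
```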

6. Contributing to the framework

Mid-level engineers start giving back:

  • Add a new page object / API client when you hit the second copy-paste.
  • Extract a reusable helper (date builder, unique-ID generator, auth fixture) when three tests do the same thing.
  • Improve a failure message so the next person doesn’t need to read the stack trace to know what broke.
  • Write a short README when you add a capability — your future team-mate needs it.

Rule of thumb: leave the framework cleaner than you found it, but don’t refactor unrelated code in a test-fix PR.
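
On the failure-message point, a small before/after; the assertion and names are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;

class FailureMessages {

    // Before: assertTrue(orders.contains(orderId)) fails with just
    // "expected true", forcing the reader into the stack trace.

    // After: the failure names what broke and what the system actually returned.
    void assertOrderPresent(List<String> orders, String orderId) {
        assertTrue(orders.contains(orderId),
            () -> "Order " + orderId + " missing from /orders after checkout; got " + orders);
    }
}
```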

7. Choosing the right layer (the test pyramid)

Before writing a UI test, ask: could this be an API test, or a unit test? Each layer up the pyramid is slower, more brittle, and more expensive to maintain.

  • Unit — fast, many. Pure logic, edge cases, error paths.
  • Integration / API — business rules, contracts, persistence.
  • UI / end-to-end — a small number of critical journeys.

A good mid-level heuristic: if a test would work at the API layer, do it there. Save UI tests for things that only matter in the UI (rendering, accessibility, interaction).

Technique mapping

Mid-level automation concepts — canonical references:

  • Test design techniques applied in code — EP, BVA, decision tables, pairwise
  • API testing foundations — API testing reference
  • Regression strategy & suite structure — Regression testing
  • Smoke vs full runs in CI — Smoke testing
  • Tooling at this level — Selenium, Playwright, Cypress, Postman, REST Assured
  • ISTQB alignment — CT-TAE Ch. 2 (architecture), Ch. 3 (implementation)