Test with AI
Generative AI is not going to replace testers. Testers who use AI well are going to replace testers who don’t. Learn to prompt deliberately, validate AI output rigorously, and integrate it safely into your test process.
Five modules mapped to the ISTQB CT-GenAI syllabus — with NZ business context throughout. IRD validation flows, KiwiSaver enrolment forms, ANZ and BNZ banking scenarios, RealMe authentication.
Mapped to the CT-GenAI syllabus
Ch 1 · GenAI Foundations
Symbolic AI, machine learning, deep learning, and generative AI. How large language models work: tokenisation, context windows, and model types. What LLMs can and cannot do for testers.
~35 min read · ~60 min with exercises · CT-GenAI Ch 1
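Module 1's point about context windows can be illustrated with a deliberately naive whitespace "tokeniser". Real LLM tokenisers split text into subword units, so real token counts usually exceed word counts; the window size and prompt below are hypothetical, for illustration only.

```python
# A minimal sketch of the context-window idea, assuming a naive
# whitespace tokeniser. Real tokenisers (e.g. BPE) produce more
# tokens than words, so treat this as an approximation only.

CONTEXT_WINDOW = 8  # hypothetical tiny window, measured in tokens


def fits_in_window(prompt: str, window: int = CONTEXT_WINDOW) -> bool:
    """True if the prompt's (approximate) token count fits the window."""
    return len(prompt.split()) <= window


print(fits_in_window("Generate test cases for the KiwiSaver form"))  # → True
```

Anything that overflows the window is silently truncated or rejected by a real model, which is why long requirements documents need chunking or retrieval (covered in Module 4).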
Ch 2 · Prompt Engineering for Testing
How to structure a prompt: role, context, instruction, constraints, and output format. From zero-shot to few-shot prompting, prompt chaining, and meta-prompting. Apply GenAI to test analysis, design, regression, and monitoring.
~45 min read · ~90 min with exercises · CT-GenAI Ch 2 · Biggest module
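The five-part prompt structure from Module 2 (role, context, instruction, constraints, output format) can be sketched as a simple template. The helper function and all field values below are hypothetical examples, not part of the syllabus.

```python
# A minimal sketch of the role/context/instruction/constraints/output-format
# prompt structure. build_test_prompt and its example values are hypothetical.

def build_test_prompt(role: str, context: str, instruction: str,
                      constraints: list[str], output_format: str) -> str:
    """Assemble a structured prompt from its five parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Instruction: {instruction}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Output format: {output_format}"
    )


prompt = build_test_prompt(
    role="You are a senior software tester.",
    context="A KiwiSaver enrolment form with fields: IRD number, name, "
            "contribution rate.",
    instruction="Generate boundary-value test cases for the contribution "
                "rate field.",
    constraints=["Valid rates are 3%, 4%, 6%, 8%, or 10%",
                 "Include at least one invalid rate"],
    output_format="A markdown table with columns: input, expected result.",
)
print(prompt.splitlines()[0])  # → Role: You are a senior software tester.
```

Keeping the parts separate like this makes prompts reviewable and reusable, and makes it obvious when a constraint is missing from a chained or few-shot prompt.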
Ch 3 · Managing AI Risks
Hallucinations, reasoning errors, and bias — how to spot and mitigate them. Data privacy under the NZ Privacy Act 2020. Non-determinism, energy impact, and AI regulations.
~35 min read · ~80 min with exercises · CT-GenAI Ch 3
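A cheap first defence against the hallucinated test data Module 3 covers is a deterministic validator run over every generated value before it enters the suite. The sketch below checks only the published IRD number format (8 or 9 digits); the full IRD check-digit algorithm is deliberately omitted, and the sample values are hypothetical.

```python
# A minimal sketch of gating AI-generated test data with a deterministic
# format check. This validates only the IRD number *shape* (8 or 9 digits,
# optionally hyphen-separated); it does NOT implement the IRD check-digit
# algorithm, so it catches malformed data, not all invalid data.

import re


def looks_like_ird_number(value: str) -> bool:
    """Format gate: 8 or 9 digits once hyphens are removed."""
    digits = value.replace("-", "")
    return bool(re.fullmatch(r"\d{8,9}", digits))


# Hypothetical AI-generated values, some plausible-looking but malformed.
ai_generated = ["49-091-850", "123456789", "12-345-678-90", "ABC123456"]
accepted = [v for v in ai_generated if looks_like_ird_number(v)]
print(accepted)  # → ['49-091-850', '123456789']
```

Anything the gate rejects never reaches the test suite, which is exactly the "validate AI output rigorously" discipline the course keeps returning to.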
Ch 4 · LLM-Powered Test Infrastructure
Architectural components of an LLM-powered test tool. Retrieval-Augmented Generation (RAG), LLM agents for multi-step test automation, fine-tuning, and LLMOps.
~30 min read · ~60 min with exercises · CT-GenAI Ch 4
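The RAG pattern from Module 4 can be sketched in a few lines: retrieve the most relevant spec snippets, then ground the prompt in them. Production tools use vector embeddings and a vector store; the keyword-overlap scoring and the sample spec snippets below are toy stand-ins to keep the example self-contained.

```python
# A toy sketch of Retrieval-Augmented Generation (RAG): rank documents by
# relevance to the query, then prepend the top hits to the prompt. Real
# systems use embedding similarity; keyword overlap is an assumption made
# here purely for illustration.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().replace(".", "").split())

    def score(doc: str) -> int:
        return len(q_words & set(doc.lower().replace(".", "").split()))

    return sorted(documents, key=score, reverse=True)[:k]


# Hypothetical spec snippets standing in for a real requirements corpus.
specs = [
    "IRD numbers are 8 or 9 digits and must pass a checksum.",
    "KiwiSaver contribution rates are 3, 4, 6, 8 or 10 percent.",
    "RealMe login requires a verified identity for restricted services.",
]

context = retrieve("Generate test cases for IRD checksum validation", specs)
augmented_prompt = ("Use only this context:\n" + "\n".join(context)
                    + "\nGenerate test cases for IRD checksum validation.")
print(context[0])  # → IRD numbers are 8 or 9 digits and must pass a checksum.
```

Grounding the prompt in retrieved spec text is what lets an LLM-powered test tool answer from your requirements rather than from its training data, which directly reduces the hallucination risk covered in Module 3.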
Ch 5 · Adopting GenAI in Your Test Organisation
Shadow AI risks, defining a GenAI test strategy, selecting the right model, cost and quality trade-offs, building AI capability in a test team, and how tester responsibilities shift.
~35 min read · ~60 min with exercises · CT-GenAI Ch 5
The first NZ-specific CT-GenAI study guide
Every test team in Aotearoa is being asked the same question right now: “How are you using AI?” Most testers answer with either fear or bluster. Both answers are wrong.
The honest answer is that generative AI has become a genuine productivity multiplier for testers who learn to prompt deliberately, validate its output rigorously, and integrate it safely into a test process. It can triple your test case coverage on a user story. It can spot defect patterns a human would miss. It can turn a vague bug report into a reproducible scenario in thirty seconds.
It can also hallucinate test data that looks real but violates your spec. It can leak confidential requirements to a third party. It can give you false confidence in a broken test suite. Testers who do not understand the risks will be the testers who get replaced — not by the AI, but by testers who do understand the risks.
This module teaches you to do it properly — using NZ business context throughout so the examples actually resemble the systems you work on.