Intermediate
AI for Developers
Use AI as a real part of your development workflow — pair programming, code review, debugging, testing, and building AI-powered features into your own apps.
01
The Developer's AI Toolkit
A map of the AI tools available to developers today — what each one is, how they differ architecturally, and how to choose the right tool for the task at hand.
02
How AI Coding Tools See Your Code
AI does not read your codebase the way a developer does. Understanding how context windows work — and how to work with them — is the single highest-leverage skill in AI-assisted development.
03
AI as a Junior Developer — The Right Mental Model
How you frame your relationship with AI determines how well you work with it. The developers getting the best results are not treating AI as an oracle — they are treating it as a capable but junior collaborator whose output they own.
04
Spec-Driven Development with AI
The single highest-leverage habit for AI pair programming: writing a structured spec before you prompt. Two minutes of spec writing saves ten minutes of correction downstream.
05
Pair Programming Patterns That Work
Three high-leverage patterns that separate good AI pair programming from frustrating AI pair programming — tests as truth, cross-model review, and managing agentic sessions.
06
AI Code Review — Structured Prompt Patterns
Undirected "review this code" requests produce unfocused feedback. These four structured prompt patterns produce actionable, severity-rated findings every time.
07
Security and Failure Modes in AI-Generated Code
AI-generated code fails in predictable, statistically documented ways. Knowing the five failure modes — and what to do about each — is the defensive skill that makes the rest of your AI workflow safe.
08
Debugging with AI — Systematic Prompt Patterns
Pasting an error message and asking "why does this fail?" is the lowest-quality debugging prompt. These four patterns yield specific, actionable diagnoses instead.
09
AI-Assisted Testing — Doing It Right
AI is excellent at generating test boilerplate but has a critical failure mode — tautological tests that pass without actually verifying correct behaviour. This lesson shows how to get the value without the trap.
10
Calling AI APIs — Core Concepts
The Claude, OpenAI, and Gemini APIs all share the same conceptual model. Learn the five concepts that underpin every production AI API call — and the two decisions that determine most of your output quality.
11
Building AI Features — Common Patterns
Most AI features in production applications are variations of eight recurring patterns. Learn the patterns, their implementation decisions, and the failure modes specific to each — including a full breakdown of RAG.
12
Prompt Injection and Security for AI Features
Prompt injection is OWASP's #1 LLM vulnerability — present in 73% of production AI deployments. This lesson makes the attack concrete and teaches the primary defences.
13
Cost, Latency, and Production Reliability
AI API calls have a different cost, latency, and failure profile than traditional service calls. Four disciplines — output constraints, prompt caching, model routing, and proper error handling — determine whether your AI feature is viable in production.
14
Putting It Together — The AI-Augmented Developer Workflow
The patterns from this course are composable skills, not a prescribed workflow. This lesson maps the full development cycle and shows where each pattern fits — and where your judgment is irreplaceable.