Putting It Together — The AI-Augmented Developer Workflow
In this lesson, you will:
- Map the full development cycle to the specific patterns from this course
- Identify the three mandatory human checkpoints that AI review cannot replace
- Apply the right tool and pattern to each phase of feature development
- Articulate what AI cannot do in the development workflow and where developer judgment remains irreplaceable
A Toolkit, Not a Script
The most important thing to understand about AI-augmented development is that there is no single correct workflow. The patterns in this course are tools — each one applies in specific situations, and the skill is knowing when to reach for which one. This final lesson maps the complete development cycle from requirements to deployment and shows how the patterns fit at each stage. Take what applies to your context, adapt what needs adapting, and leave what does not fit your team or codebase.
Stage 1: Requirements and Spec (Lessons 3 and 4)
Before any code is written, the AI-augmented developer uses the spec-driven pattern from Lesson 4. This is not "prompt engineering" — it is requirements engineering. Writing the spec forces you to define inputs, outputs, constraints, and scope before asking anyone (human or AI) to implement anything.
At this stage, the relevant tools are a text editor and your own thinking. AI can help pressure-test a spec — "What edge cases does this spec not address?" — but the requirements themselves are a human responsibility. You cannot outsource the judgment about what the right thing to build is.
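One way to make the spec concrete before any AI involvement is to hold it as structured data and render the pressure-test question from it. A minimal sketch; the dataclass fields and the prompt wording are our own illustration, not a prescribed format:

```python
# A minimal sketch of the spec-driven pattern. The field names and the
# pressure-test wording are illustrative, not a fixed template.
from dataclasses import dataclass, field

@dataclass
class FeatureSpec:
    name: str
    inputs: list[str]          # what the feature receives
    outputs: list[str]         # what it must produce
    constraints: list[str]     # performance, compliance, style rules
    out_of_scope: list[str] = field(default_factory=list)

    def pressure_test_prompt(self) -> str:
        """Render the spec into a prompt that asks an AI to find gaps."""
        return (
            f"Here is a feature spec for '{self.name}'.\n"
            f"Inputs: {', '.join(self.inputs)}\n"
            f"Outputs: {', '.join(self.outputs)}\n"
            f"Constraints: {', '.join(self.constraints)}\n"
            f"Out of scope: {', '.join(self.out_of_scope) or 'none listed'}\n\n"
            "What edge cases does this spec not address?"
        )
```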
Stage 2: Implementation (Lessons 1–5)
With a clear spec, you use your IDE tool of choice — Cursor for deep multi-file work, Copilot for moment-to-moment editing — to generate the implementation. Apply the pair programming patterns from Lesson 5:
- Write test descriptions before generating code (tests-as-truth; see the sketch at the end of this stage)
- Break the feature into AI-sized tasks, one per session
- Commit after each verified step during agentic sessions — do not let the agent run unchecked for long stretches
The Lesson 2 context disciplines apply throughout: start fresh sessions for new tasks, provide curated context rather than dumping the whole codebase, include the relevant file and one or two analogous examples from your codebase.
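As a concrete illustration of the tests-as-truth step above, the test descriptions can be committed as skipped pytest stubs before any implementation exists. A minimal sketch; the rate-limiter feature and every name in it are hypothetical:

```python
# Tests-as-truth: descriptions are written first, from the spec, and the
# bodies are generated later. The rate-limiter feature is hypothetical.
import pytest

@pytest.mark.skip(reason="description only - body not yet generated")
def test_allows_requests_under_the_limit():
    """A client making fewer than max_requests calls per window succeeds."""

@pytest.mark.skip(reason="description only - body not yet generated")
def test_rejects_requests_over_the_limit():
    """The request after the limit is hit is rejected with a clear error."""

@pytest.mark.skip(reason="description only - body not yet generated")
def test_window_resets_after_expiry():
    """Once the window elapses, a previously limited client succeeds again."""
```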
Stage 3: Review and Testing (Lessons 6, 7, 9)
After generating code, apply the code review patterns from Lesson 6 before the code touches a pull request. Use the attribution prompt since the code is AI-generated. For any code that handles user input, authentication, or data persistence, run the adversarial security pass from Lesson 7.
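Both reviews boil down to prompts with a specific framing. The wording below is our own illustration, not a canonical formula from the lessons:

```python
# Hedged examples of review prompts. `code` is the generated diff or file.
def attribution_review_prompt(code: str) -> str:
    # Telling the reviewer the code is AI-generated changes what it looks
    # for: plausible-but-wrong logic, invented APIs, missing error handling.
    return (
        "The following code was generated by an AI assistant. Review it as "
        "a sceptical senior engineer: flag invented APIs, unhandled errors, "
        f"and logic that looks plausible but is wrong.\n\n{code}"
    )

def security_pass_prompt(code: str) -> str:
    # The adversarial pass asks for attacks, not a generic review.
    return (
        "Act as an attacker. This code handles user input. List concrete "
        f"inputs or request sequences that could break or exploit it.\n\n{code}"
    )
```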
For tests: the TDD inversion from Lesson 9 means your test descriptions already exist from Stage 2. Generate the test code from those descriptions (not from the implementation), run the tests, and verify they pass. Consider mutation testing on critical paths.
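Continuing the hypothetical rate-limiter sketch, a body generated from the Stage 2 description might look like the test below; the point is that the prompt contained the docstring, not the implementation. The in-file RateLimiter is only a stand-in so the example runs:

```python
# A test body generated from the Stage 2 description, not from the code
# under test. The RateLimiter here is a stand-in so the sketch runs.
import time

class RateLimiter:  # hypothetical unit under test
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._calls: dict[str, list[float]] = {}

    def allow(self, client: str) -> bool:
        now = time.monotonic()
        window = [t for t in self._calls.get(client, [])
                  if now - t < self.window_seconds]
        if len(window) >= self.max_requests:
            self._calls[client] = window
            return False
        window.append(now)
        self._calls[client] = window
        return True

def test_rejects_requests_over_the_limit():
    """The request after the limit is hit is rejected with a clear error."""
    limiter = RateLimiter(max_requests=3, window_seconds=60)
    for _ in range(3):
        assert limiter.allow("client-a")
    # The fourth request inside the same window must be rejected.
    assert not limiter.allow("client-a")
```

Mutation testing tools (mutmut is one common choice in Python) then perturb the implementation and re-run the suite, checking that tests like this actually fail when the logic changes.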
Cross-model review (Lesson 5) is your final check: copy the generated code — not the context — into a second AI and ask it to critique the implementation cold.
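A sketch of the cold critique; the important part is what the prompt omits: no spec, no chat history, no statement of intent. The wording is our own:

```python
def cross_model_review_prompt(code: str) -> str:
    # Deliberately no spec, no conversation, no hints about intent:
    # a cold read surfaces assumptions the first model baked in.
    return (
        "Critique the following code cold. You have no other context. "
        "Point out bugs, unclear intent, and risky assumptions.\n\n"
        f"{code}"
    )
```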
Stage 4: Debugging (Lessons 2, 8)
When something goes wrong, apply the debugging patterns from Lesson 8 in order of escalating effort:
- Start with the error-message-explained pattern for unfamiliar errors
- Move to the full-context debug prompt for bugs with stack traces
- Use the systematic narrowing pattern for elusive bugs without clear error messages
- Apply the rubber duck setup when your own framing might be the problem
Always start a fresh debug session (Lesson 2 context discipline) with exactly the code and context relevant to the bug — not a session that has accumulated unrelated prior discussion.
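One way to enforce both disciplines is to assemble the debug prompt from scratch each time and paste it into a brand-new session. A sketch; the layout is one reasonable choice, not a fixed format:

```python
def debug_prompt(error: str, stack_trace: str, code: str,
                 already_tried: list[str]) -> str:
    # Exactly the material relevant to the bug, and nothing else --
    # paste the result into a fresh session rather than an old one.
    tried = "\n".join(f"- {step}" for step in already_tried) or "- nothing yet"
    return (
        f"Error message:\n{error}\n\n"
        f"Stack trace:\n{stack_trace}\n\n"
        f"Relevant code:\n{code}\n\n"
        f"What I have already ruled out:\n{tried}\n\n"
        "Explain the most likely cause, then suggest the single next "
        "diagnostic step."
    )
```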
Stage 5: Building AI Features (Lessons 10–13)
When the feature you are building includes AI-powered capabilities — a chatbot, a summariser, a classifier, a RAG pipeline — apply the API and feature lessons in sequence. Lesson 10 gives you the API structure. Lesson 11 maps the feature to one of eight patterns. Lesson 12 designs the security model around prompt injection. Lesson 13 plans for production cost and reliability.
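On the production side (Lesson 13), reliability and cost planning turn into very ordinary code: bounded retries with backoff and a hard stop, since every retry is billed. A minimal standard-library sketch; call_model is a hypothetical stand-in for whichever provider SDK you use:

```python
import random
import time

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real provider SDK call. Real code would
    # also pass a request timeout to the SDK.
    raise NotImplementedError

def call_with_retries(prompt: str, max_attempts: int = 3) -> str:
    # Bounded retries with exponential backoff and jitter. Retrying
    # forever is both a reliability and a cost bug.
    for attempt in range(max_attempts):
        try:
            return call_model(prompt)
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep((2 ** attempt) + random.random())
    raise RuntimeError("unreachable")
```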
The Three Non-Negotiable Human Checkpoints
Across the entire workflow, there are three places where human review is not optional — where the cost of getting it wrong is high enough that AI review alone is insufficient:
- Spec review before implementation: Do the requirements actually describe what should be built? No AI can answer this — it requires your judgment about the product.
- Security-sensitive code before merge: Any code that handles authentication, authorisation, user input, or data persistence needs human security review, not just the AI security pass. AI review plus human review is the minimum for this category.
- Test descriptions before test generation: Do the test descriptions match the actual requirements — not just what the code does? This is the tautological test trap from Lesson 9, illustrated below. A human must verify this before the tests are generated.
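The trap is easiest to see side by side. In the hypothetical example below, the requirement says a 10% discount but the code applies 1%; a test generated from the code passes anyway, while a test written from the requirement catches the bug:

```python
# Hypothetical buggy implementation: the requirement says a 10% discount,
# but the code takes 1% off. Prices are in integer cents.
def apply_discount(price_cents: int) -> int:
    return price_cents - price_cents // 100  # bug: 1% off, not 10%

# Tautological test: derived from the code, so it enshrines the bug.
def test_apply_discount_tautological():
    assert apply_discount(10_000) == 9_900  # passes, proves nothing

# Requirements-driven test: written from the spec, so it fails loudly.
def test_apply_discount_from_requirement():
    assert apply_discount(10_000) == 9_000  # fails until the bug is fixed
```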
What AI Cannot Replace in Your Workflow
To close the course honestly: there are things in this workflow that AI cannot do, and probably will not be able to do for some time. Knowing what they are is the final piece of the junior developer mental model from Lesson 3.
- Architectural judgment: Deciding between competing approaches based on your team's capacity, your deployment environment, and lessons from your specific system's history.
- Requirements judgment: Understanding what the right thing to build actually is, based on user needs and product context that no model was trained on.
- Security auditing for novel attack patterns: AI is good at known vulnerability patterns; novel or context-specific attack vectors require human security expertise.
- Knowing when not to use AI: The judgment to recognise that a given feature does not need AI at all — that a database query or an algorithm is faster, cheaper, and more reliable — requires understanding the full picture.
The developers who get the most from AI over the long term are not those who delegate the most. They are the ones who maintain their own judgment, delegate wisely, verify thoroughly, and use AI to execute faster on decisions that are clearly theirs to make.
Key Takeaways
- AI pair programming, review, debugging, testing, and feature building are composable — the workflow is yours to assemble
- Spec review, security-sensitive code review, and test description verification are the three non-negotiable human checkpoints
- Start a fresh context session for each stage (spec, implementation, debugging) to keep responses focused
- Architectural judgment, requirements judgment, and knowing when not to use AI remain irreplaceable human skills
- The developers who get the most from AI are not those who delegate the most — they delegate wisely, review critically, and keep their own judgment current