
AI Code Review — Structured Prompt Patterns

Intermediate 🕐 22 min Lesson 6 of 14
What you'll learn
  • Write structured code review prompts that produce severity-rated, actionable findings
  • Apply the critical-issues-only filter to focus reviews on what matters
  • Use constraint-aware refactor prompts that preserve required behaviour
  • Understand what AI code review reliably catches versus where human review remains essential

The Problem with "Review This Code"

Ask an AI to "review this code" without any structure and you will get something that looks thorough but usually is not. You will get a paragraph praising the overall structure, a handful of style suggestions, maybe a note about variable naming, and buried somewhere in the middle — if you are lucky — one actual bug. The signal-to-noise ratio is low because the AI has no guidance on what matters.

The fix is structure. A structured code review prompt tells the AI: what role to take, what context it needs, what categories to check, and what format the output should be in. The same code, reviewed with a structured prompt, produces severity-rated, actionable findings that you can actually work through.

Pattern 1: The Structured Review

This is the workhorse pattern for pull request review. Structure it like a ticket:

You are a senior [language] engineer reviewing a pull request. Context: [framework, any relevant architecture notes]. Style guide: [key constraints — "no raw SQL, always use the ORM"]. Review the following code and identify issues in these categories only: 1) Logic bugs or incorrect behaviour, 2) Missing edge cases or error handling, 3) Performance or resource inefficiency, 4) Security or input validation concerns, 5) Style inconsistencies with the codebase. For each issue: explain why it matters, rate severity (Critical / High / Medium / Low), and suggest a fix with a code snippet. [paste code]

The numbered categories prevent the AI from going off on tangents. Requiring severity prevents it from burying a critical bug alongside a whitespace comment. Asking for a code snippet in each fix means you get something you can act on immediately, not just a description of the problem.
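
If you drive reviews from a script or a CI job rather than a chat window, the pattern translates directly into a template. The sketch below is a minimal Python example; the function name and parameters are illustrative, not part of any SDK. Nothing in it calls a model; the returned string is the prompt you send.

def build_structured_review_prompt(language, context, style_guide, code):
    # Pure string assembly: every placeholder from the Pattern 1 template
    # becomes a parameter, and the caller supplies the code under review.
    return (
        f"You are a senior {language} engineer reviewing a pull request.\n"
        f"Context: {context}.\n"
        f"Style guide: {style_guide}.\n"
        "Review the following code and identify issues in these categories only:\n"
        "1) Logic bugs or incorrect behaviour,\n"
        "2) Missing edge cases or error handling,\n"
        "3) Performance or resource inefficiency,\n"
        "4) Security or input validation concerns,\n"
        "5) Style inconsistencies with the codebase.\n"
        "For each issue: explain why it matters, rate severity "
        "(Critical / High / Medium / Low), and suggest a fix with a code snippet.\n\n"
        f"{code}\n"
    )

# Example call with placeholder values; the context, style guide, and file
# path are whatever applies to your codebase.
prompt = build_structured_review_prompt(
    language="Python",
    context="Flask service using the repository pattern",
    style_guide="no raw SQL, always use the ORM",
    code=open("orders/service.py").read(),
)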

Pattern 2: Critical Issues Only

When you want a quick sanity check before committing — not a full review — the structured pattern is overkill. Use this instead:

Review this [language] code and focus only on critical issues: potential bugs, security vulnerabilities, correctness failures, and serious performance problems. If you find critical issues, list them as short bullet points. If nothing critical, say "looks clean" and stop. Do not comment on style, naming, or minor improvements. [paste code]

The explicit instruction not to comment on style matters: without it, the AI defaults to verbose output. "Looks clean" is a valid, useful response; you are giving the model permission to be concise.
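
Wired into a pre-commit check, this pattern needs only the staged diff and a model call. A minimal sketch, assuming Python as the target language; ask_model stands in for whatever client you actually use to call a model and is not a real library function.

import subprocess

CRITICAL_ONLY_PREFIX = (
    "Review this Python code and focus only on critical issues: potential bugs, "
    "security vulnerabilities, correctness failures, and serious performance problems. "
    "If you find critical issues, list them as short bullet points. "
    'If nothing critical, say "looks clean" and stop. '
    "Do not comment on style, naming, or minor improvements.\n\n"
)

def staged_diff():
    # Review only what is about to be committed, not the whole working tree.
    result = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    )
    return result.stdout

def precommit_check(ask_model):
    # ask_model: any callable that takes a prompt string and returns the
    # model's reply as a string -- a stand-in for your actual client code.
    diff = staged_diff()
    if not diff.strip():
        return "nothing staged"
    return ask_model(CRITICAL_ONLY_PREFIX + diff)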

Pattern 3: The Constraint-Aware Refactor

When you want to improve code but have real-world constraints, state the constraints explicitly or the AI will ignore them:

Current code: [paste code]. Issues I want to address: [list specific issues]. Constraints that must be preserved: [e.g., "must remain backward compatible with v2 API callers", "no new dependencies", "must be a pure function"]. Suggest refactored code with explanations for each change, including trade-offs and edge cases to watch for.

Constraints are the most commonly omitted element in beginner refactoring prompts. Without them, the AI will cheerfully introduce a dependency you cannot add, break an interface that other code depends on, or rewrite a function in a way that is cleaner in isolation but breaks the calling code.
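
A hypothetical before-and-after shows the failure mode. The constraints mirror two from the template above: backward compatibility with existing callers and purity.

# Original: existing callers pass both arguments positionally, and tests
# rely on the function being pure (no side effects).
def apply_discount(price, discount):
    return round(price * (1 - discount), 2)

# An unconstrained refactor a model might suggest: keyword-only arguments
# and a logging side effect. Cleaner in isolation, but existing positional
# callers now raise TypeError and the function is no longer pure -- exactly
# the breakage the stated constraints were meant to rule out.
#
# def apply_discount(*, price, discount, logger=None):
#     if logger:
#         logger.info("applying discount %s", discount)
#     return round(price * (1 - discount), 2)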

Pattern 4: The Attribution Prompt

When you are reviewing code that was AI-generated, say so in the prompt. This is increasingly a recognised best practice in engineering teams that work with AI:

This code was AI-generated. Review it with extra scrutiny for: logic that looks correct but makes unstated assumptions, API calls or method names that may not exist in the current version of the library, patterns that may not match this codebase's conventions, and edge cases the generation may have missed. [paste code]

Why does framing matter? The same AI model that tends to generate certain types of errors also tends to miss them on review, because the review draws on the same training patterns. Explicitly flagging AI origin and asking for heightened scrutiny has been shown to improve the quality of the critique. It also establishes a useful team habit: AI-generated code is reviewed differently from human-written code, not less rigorously.
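
In a script, the attribution framing is just a prefix chosen by provenance. The sketch below reuses the build_structured_review_prompt helper sketched under Pattern 1; the ai_generated flag would come from wherever your team records provenance (a PR label, a commit trailer), which is an assumption, not a standard.

AI_ATTRIBUTION_PREFIX = (
    "This code was AI-generated. Review it with extra scrutiny for: "
    "logic that looks correct but makes unstated assumptions, "
    "API calls or method names that may not exist in the current version "
    "of the library, patterns that may not match this codebase's conventions, "
    "and edge cases the generation may have missed.\n\n"
)

def review_prompt_for(code, ai_generated):
    # Same structured review either way; AI-generated code just gets the
    # attribution framing prepended so the model applies extra scrutiny.
    base = build_structured_review_prompt(
        language="Python",
        context="internal service",
        style_guide="follow existing module conventions",
        code=code,
    )
    return AI_ATTRIBUTION_PREFIX + base if ai_generated else base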

What AI Does Well and Poorly in Code Review

AI is reliable at spotting obvious logic gaps, naming and style inconsistencies, common security vulnerabilities such as SQL injection, and missing boilerplate error handling. It is less reliable at catching subtle business logic errors that require domain knowledge, judging whether the code correctly implements its intended requirements, and assessing architectural trade-offs specific to your system. Know the difference: use AI review for the first category, and reserve human review for the second.
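
Two hypothetical functions make the split concrete: the first contains the kind of gap AI review reliably flags, the second a policy error it has no way to see.

def average(values):
    # Obvious logic gap: an AI reviewer will reliably flag the unhandled
    # empty list (ZeroDivisionError) and suggest a guard clause.
    return sum(values) / len(values)

def net_price(order):
    # Business logic error: tax is applied before the discount, but company
    # policy is discount first, then tax. The code is clean and plausible;
    # a model cannot know it is wrong unless the requirement is stated in
    # the prompt, while a reviewer who knows the pricing policy catches it.
    taxed = order.subtotal * 1.20
    return taxed - order.discount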

Key takeaways
  • Structure your review prompt with role, context, numbered categories, and output format — the same way you would structure a ticket
  • The critical-issues-only pattern filters stylistic noise and focuses the model on bugs, security, and correctness
  • Always specify constraints in refactor prompts — the AI does not know what the calling code depends on
  • When reviewing AI-generated code, say so in the prompt; it measurably increases the rigour of the critique
  • AI review reliably catches obvious logic gaps and common security patterns; it struggles with business logic requiring domain knowledge