
Debugging with AI — Systematic Prompt Patterns

Intermediate · 24 min · Lesson 8 of 14
What you'll learn
  • Construct a full-context debug prompt with all information the AI needs to be useful
  • Apply the rubber duck pattern with the critical assumption-challenging instruction
  • Use systematic narrowing to generate a ranked hypothesis list for complex bugs
  • Apply preemptive debugging to surface edge cases before they reach production

The Wrong Way to Debug with AI

The most common debugging prompt looks like this: paste an error message, ask "why is this happening?" and wait. The response you get will be a list of generic causes for that type of error, drawn from patterns in training data, with no relationship to your actual code. It is only marginally more useful than a web search.

The reason AI debugging falls flat without structure is the same reason any diagnosis falls flat without information: the model is working with insufficient context. It does not know what the code does, what you expected to happen, what you have already tried, or what the surrounding system looks like. Structured debugging prompts fix this by providing everything the model needs to actually help.

Pattern 1: The Full-Context Debug Prompt

This is the baseline. Use it as your starting point for any bug where you have an error message and a stack trace:

  Language and runtime: [e.g., Python 3.11 / Django 4.2]
  What I expected: [describe expected behaviour]
  What actually happened: [describe actual behaviour]
  Error message: [paste exact error]
  Stack trace: [paste full stack trace]
  Relevant code: [paste the function or block where the error originates]
  What I have already tried: [list attempts]

Research on LLM debugging effectiveness confirms that providing the actual source code alongside the error message produces significantly better explanations than providing the error message alone. The "what I have already tried" field is also critical — without it, the AI will suggest the same things you have already ruled out, wasting a round trip.
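To make the template concrete, here is a minimal filled-in example. The function, the error, and every field value are invented for illustration; only the shape of the prompt matters:

  Language and runtime: Python 3.11, no framework.
  What I expected: total_price returns the numeric total of a shopping cart.
  What actually happened: it crashes on some carts.
  Error message: KeyError: 'price'
  Stack trace: [full trace, ending at the line marked below]
  Relevant code:
      def total_price(items):
          total = 0.0
          for item in items:
              total += item["price"]  # KeyError here when "price" is missing
          return total
  What I have already tried: confirmed items is a list of dicts; printing each
  item shows one entry without a "price" key.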

Pattern 2: The Rubber Duck Setup

Classic rubber duck debugging works because explaining a problem out loud forces you to articulate your assumptions — and the act of articulation often reveals where the assumption is wrong. AI can serve the same function, but with one important modification: you must explicitly instruct the AI not to just agree with you.

Without this instruction, AI defaults to being agreeable. It will find ways to validate your framing even when your framing is the problem. With this instruction:

  You are my rubber duck. Help me debug by asking me guiding questions and challenging my assumptions — do not just give me the answer.
  I am trying to: [describe goal]
  My current approach: [describe what the code does]
  Where I am stuck: [describe where intuition breaks down]

The explicit "challenge my assumptions" instruction transforms the interaction from validation to interrogation. It is particularly useful when you have a strong intuition about what is wrong and you want to pressure-test that intuition before going down a rabbit hole.
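The same instruction works outside a chat window. If you script your debugging sessions, it belongs in the system message so it persists across every turn. A minimal sketch using the OpenAI Python SDK; the model name and the example bug are placeholders, not part of the pattern:

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  RUBBER_DUCK = (
      "You are my rubber duck. Help me debug by asking me guiding questions "
      "and challenging my assumptions. Do not just give me the answer."
  )

  response = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name
      messages=[
          {"role": "system", "content": RUBBER_DUCK},
          {
              "role": "user",
              "content": "I am trying to: deduplicate user records. "
                         "My current approach: hashing each normalised row. "
                         "Where I am stuck: duplicates survive the hash.",
          },
      ],
  )
  print(response.choices[0].message.content)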

Pattern 3: Systematic Narrowing

For complex bugs where you have narrowed the problem to a module but not a specific function, use this pattern to generate a ranked hypothesis list:

  I have a bug I have narrowed to this module but not the specific function. Help me write a debugging checklist to systematically isolate it.
  Module: [describe what it does]
  Symptoms: [describe what goes wrong]
  Inputs that cause the bug: [list them]
  Inputs that work fine: [list them]
  What should I check first? Give me a prioritised list of hypotheses from most to least likely, and what test I would run to confirm or rule out each one.

The prioritised hypothesis list converts a vague "something is wrong in here" into a structured investigation. You work through the list in order, confirming or ruling out each hypothesis, until you isolate the root cause. This is more efficient than random code changes, and it builds a record of what you eliminated — useful if you need to hand the investigation to someone else.
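A useful companion to the checklist is to pin down the known-good and known-failing inputs as a parametrised test before you start, so each hypothesis is confirmed or ruled out by a test run rather than by eyeballing. A sketch with pytest; the module, function, and inputs are invented for illustration:

  import pytest

  from phone_utils import normalise_phone  # hypothetical function under suspicion

  # Inputs taken straight from the bug report.
  KNOWN_GOOD = ["+44 20 7946 0958", "020 7946 0958"]
  KNOWN_BAD = ["(020) 7946-0958", "00442079460958"]

  @pytest.mark.parametrize("raw", KNOWN_GOOD)
  def test_good_inputs_still_normalise(raw):
      assert normalise_phone(raw).startswith("+44")

  @pytest.mark.parametrize("raw", KNOWN_BAD)
  @pytest.mark.xfail(reason="bug under investigation", strict=True)
  def test_bad_inputs_reproduce_the_bug(raw):
      # strict=True makes the run fail loudly the moment a fix lands,
      # so the xfail marker cannot silently go stale.
      assert normalise_phone(raw).startswith("+44")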

Pattern 4: Preemptive Debugging

This pattern has the highest return on investment of any debugging approach because it operates before bugs surface rather than after. Use it on code you have just written that has not failed yet but might in production:

  This function works in my tests, but I am worried about production edge cases. What inputs or conditions could make this fail silently or produce incorrect output?
  Focus on: null or undefined inputs, boundary values (zero, negative, very large), concurrent calls, unexpected types, very long strings, and missing keys in objects.
  [paste function]

AI is often better at generating failure scenarios than a developer who has just written the happy path and is still anchored to it. The preemptive debug prompt surfaces edge cases you are likely to miss precisely because you wrote the code with the happy path in mind. Catching these before deployment is orders of magnitude cheaper than catching them in production.
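To see what this surfaces in practice, consider a function that passes its happy-path tests. The function and the flagged failures below are invented for illustration, but they are typical of what a preemptive pass returns:

  def average_order_value(orders):
      """Return total revenue divided by the number of orders."""
      total = sum(order["amount"] for order in orders)
      return total / len(orders)

  # Edge cases a preemptive debug prompt would typically flag:
  # - orders == []               -> ZeroDivisionError
  # - an order missing "amount"  -> KeyError
  # - amount stored as a string  -> TypeError inside sum()
  # - amount is None             -> TypeError
  # - orders is a generator      -> len() raises TypeError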

One More Pattern: Error Message Explained

When you hit an error type you have not seen before — especially from a library or framework you are less familiar with — this pattern is faster than documentation:

  Explain this error message to an intermediate developer who has not seen it before.
  Error: [paste exact error]
  Explain: (1) what it literally means, (2) the three most common causes in [language / framework], and (3) what to look for in my code to identify which cause applies.

The three-cause structure is the key element. It prevents you from fixating on the first explanation and missing the more likely one for your specific context.
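For example, Python's "TypeError: 'NoneType' object is not subscriptable" has several common causes, and the first one that comes to mind is not always the right one. A sketch of how the three-cause structure maps onto it; the snippet and the cause list are illustrative, not exhaustive:

  import re

  match = re.search(r"order #(\d+)", "no order id in this text")
  order_id = match[1]  # TypeError: 'NoneType' object is not subscriptable

  # (1) Literal meaning: you applied [] to a value that is None.
  # (2) Three common causes in Python:
  #     - a lookup like re.search or dict.get returned None on a miss
  #     - a function returned nothing (implicit None) and you indexed the result
  #     - a variable was initialised to None and never reassigned
  # (3) What to look for: trace the subscripted value back to the call that
  #     produced it and check that call's behaviour when nothing is found.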

Key takeaways
  • Pasting an error message without code context produces generic advice — the full-context prompt with code, expected/actual behaviour, and what you have already tried produces specific diagnoses
  • Explicitly instruct the AI to challenge your assumptions in rubber duck debugging — without this, it defaults to validating your framing
  • Systematic narrowing converts 'something is wrong in this module' into an ordered, testable hypothesis list
  • Preemptive debugging is the highest-ROI debugging pattern — it catches edge cases before they reach production
  • The error-message-explained pattern (literal meaning + 3 common causes + what to look for) is faster than documentation for unfamiliar errors