Debugging with AI is different from debugging alone. The mistake most developers make is treating Claude Code like a search engine — paste the error, get the fix. That works on simple errors. On complex bugs, it produces plausible-looking fixes that don't address the root cause, and you end up in an expensive loop of applying patches to symptoms.
The workflow below is built on a different mental model: Claude Code as a thought partner that helps you reason through what's happening, not just what to try next.
Why context quality determines debug quality
The AI's ability to help you debug is directly proportional to the context you give it. "My code is broken" produces a generic response. "This function returns NaN when items is an empty array, but only in the production build — here's the stack trace, here's the function, and here's what I've already tried" produces useful debugging.
Three things to always provide:
- The exact error — full stack trace, not paraphrased
- The relevant code — the function, component, or module where the error originates, plus any code that calls it
- What you've already tried — this prevents the AI from suggesting the same things you've ruled out
Use Claude Code's file reading capabilities to pull in relevant files automatically rather than copying and pasting. Claude Code can read the full codebase context and often catches bugs in calling code that you didn't realize was relevant.
The 4-step debugging workflow
Step 1: Give the full error + context
Don't summarize the error — show it. The exact error message, the full stack trace, the file and line number.
I'm getting this error when running the payment processing flow:
Error: Cannot read properties of undefined (reading 'customerId')
at processPayment (src/lib/payments.ts:47:35)
at checkout (src/app/checkout/route.ts:23:18)
at async POST (src/app/checkout/route.ts:15:5)
Here's the processPayment function (payments.ts:40-60):
[file contents]
Here's the checkout route that calls it (route.ts):
[file contents]
This error only appears when the user hasn't completed their profile before checkout.
That last line — the circumstance under which the error appears — is often the key to diagnosing it.
Step 2: Ask for root cause analysis, not just a fix
This is the step most developers skip, and it's the most important one. Before asking "what should I change?", ask "why is this happening?"
Before suggesting a fix, can you explain what's causing this error?
Walk me through what the code is doing step by step, and where the assumption
breaks down.
Root cause analysis often reveals that the obvious fix addresses a symptom while the real problem is elsewhere. A Cannot read properties of undefined error might look like a missing null check, but the actual bug might be that an async operation isn't being awaited properly upstream.
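As a minimal Python sketch of that distinction (the names `fetch_user` and `checkout` are illustrative, not from the codebase above): the obvious fix here would be a null check in `checkout`, but the root cause is a missing `await` upstream.

```python
import asyncio

async def fetch_user(user_id):
    # Simulated async database lookup (illustrative, not a real API)
    await asyncio.sleep(0)
    return {"customerId": "cus_123"}

async def checkout_buggy(user_id):
    user = fetch_user(user_id)        # BUG: missing await, so user is a coroutine
    return user["customerId"]         # TypeError here, which looks like "bad data"

async def checkout_fixed(user_id):
    user = await fetch_user(user_id)  # root-cause fix: await the lookup
    return user["customerId"]
```

Adding a `None` check to `checkout_buggy` would silence the crash without fixing anything; the lookup result would still never arrive.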
Step 3: Request a hypothesis or minimal reproduction
For non-obvious bugs, ask Claude Code to form a testable hypothesis:
What's your hypothesis for why this only fails when the user hasn't completed their profile?
Can you write a minimal test case that would reproduce this specific scenario?
The test case serves two purposes: it forces the AI to think through the bug concretely, and it gives you something to run that confirms whether the proposed fix actually works.
# Example minimal reproduction the AI might suggest
import pytest

def test_process_payment_without_profile():
    # Arrange: user without completed profile
    user = User(id="123", email="test@example.com", profile=None)
    cart = Cart(items=[CartItem(product_id="prod_1", quantity=1)])

    # Act & Assert: should raise a specific error, not crash with AttributeError
    with pytest.raises(PaymentValidationError, match="Profile must be completed"):
        process_payment(user=user, cart=cart)
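A fix that makes a test like this pass would validate the profile at the boundary rather than patching the crash site. This is a hypothetical sketch; `User`, `Cart`, `Profile`, and `PaymentValidationError` are stand-ins for whatever the real codebase defines:

```python
from dataclasses import dataclass
from typing import Optional

class PaymentValidationError(Exception):
    pass

@dataclass
class Profile:
    address: str

@dataclass
class CartItem:
    product_id: str
    quantity: int

@dataclass
class Cart:
    items: list

@dataclass
class User:
    id: str
    email: str
    profile: Optional[Profile] = None

def process_payment(user, cart):
    # Root-cause fix: fail with a domain-specific error at the entry point,
    # instead of crashing later when the missing profile is dereferenced.
    if user.profile is None:
        raise PaymentValidationError("Profile must be completed before checkout")
    # ... charge the card using user.profile and cart.items ...
    return {"status": "charged", "customer": user.id}
```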
Step 4: Verify the fix doesn't introduce new issues
After Claude Code proposes a fix, don't just apply it. Ask it to review the fix for side effects:
Before I apply this fix, can you check:
1. Does this change affect any other callers of processPayment?
2. Could this cause issues in edge cases I haven't mentioned?
3. Is there anything in the existing tests I should update?
This step catches the most common failure mode of AI debugging: fixes that solve the immediate error while introducing a subtler bug elsewhere.
Prompts for specific bug types
Runtime errors (null reference, key errors, type errors)
I'm getting: [EXACT ERROR]
Relevant code:
[CODE]
This happens when: [CIRCUMSTANCES]
Can you explain why this is happening and what assumption in the code is wrong?
Don't suggest a fix yet — I want to understand the root cause first.
Logic bugs (wrong output, off-by-one errors)
Logic bugs are harder because there's no error — the code runs but produces wrong results. Give Claude Code both the expected and actual behavior:
This function is supposed to [WHAT IT SHOULD DO], but it's returning [WHAT IT ACTUALLY RETURNS].
Expected: [EXACT EXPECTED OUTPUT for a specific input]
Actual: [EXACT ACTUAL OUTPUT for the same input]
Here's the function:
[CODE]
Can you trace through the logic with the input [SPECIFIC INPUT] and show me where it diverges from what I expect?
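For instance, here's the kind of off-by-one this prompt is good at surfacing (a made-up `moving_average` helper, not from any real codebase). Tracing `[1, 2, 3, 4]` with `window=2` shows the divergence: expected `[1.5, 2.5, 3.5]`, actual `[1.5, 2.5]`.

```python
def moving_average_buggy(values, window):
    # BUG: range stops one position early, dropping the final window
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window)]

def moving_average_fixed(values, window):
    # There are len(values) - window + 1 complete windows
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```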
Performance issues (slow queries, memory leaks)
This endpoint takes 8-12 seconds to respond when [CONDITION]. It should be under 500ms.
Here's the query/code: [CODE]
The table has approximately [RECORD COUNT] rows. Current indexes: [LIST INDEXES].
Can you identify potential performance bottlenecks and suggest what to investigate first?
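One of the most common culprits behind that symptom is an N+1 query pattern. Here's a self-contained sketch using SQLite (the `orders` table and its columns are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, user_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 100) for i in range(10_000)])

def order_counts_n_plus_one(user_ids):
    # One round-trip per user: fine for 3 users, painful for 3,000
    return {uid: conn.execute(
                "SELECT COUNT(*) FROM orders WHERE user_id = ?",
                (uid,)).fetchone()[0]
            for uid in user_ids}

def order_counts_batched(user_ids):
    # One query, grouped in the database
    placeholders = ",".join("?" * len(user_ids))
    rows = conn.execute(
        f"SELECT user_id, COUNT(*) FROM orders "
        f"WHERE user_id IN ({placeholders}) GROUP BY user_id",
        list(user_ids)).fetchall()
    return dict(rows)
```

Both return the same counts; the batched version does one round-trip instead of N, which is usually the difference between 8 seconds and 80 milliseconds on a remote database.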
Async/concurrency bugs
These are the hardest to debug alone. Give Claude Code the full async context:
I'm seeing a race condition / inconsistent state / deadlock in this async code.
It happens intermittently — roughly 1 in 20 runs under load.
Here's the async code: [CODE]
Here's the calling code: [CODE]
Can you identify the potential race conditions and explain what sequence of events
would produce the behavior I'm seeing?
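As a concrete illustration of the sequence of events you want the AI to walk through, here's a minimal check-then-act race in Python's asyncio (deliberately simplified; real races usually hide behind I/O rather than an explicit `sleep`):

```python
import asyncio

class Counter:
    def __init__(self):
        self.value = 0
        self.lock = asyncio.Lock()

    async def increment_racy(self):
        current = self.value
        await asyncio.sleep(0)    # yield point: other tasks read the stale value
        self.value = current + 1  # lost update: overwrites concurrent increments

    async def increment_safe(self):
        async with self.lock:     # the read-modify-write becomes atomic
            current = self.value
            await asyncio.sleep(0)
            self.value = current + 1

async def run(n, method):
    counter = Counter()
    await asyncio.gather(*(method(counter) for _ in range(n)))
    return counter.value
```

Running 100 racy increments concurrently loses most of the updates, while the locked version reliably reaches 100. The diagnostic question is always the same: where between the read and the write can another task run?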
Using git context to help Claude Code
When a bug was introduced by a recent change, show Claude Code what changed:
# In the terminal
git diff HEAD~1 -- src/lib/payments.ts
Then in your Claude Code session:
Here's what changed in the last commit in the file where the bug appears:
[PASTE GIT DIFF]
The bug started appearing after this commit. Can you identify what in this diff
might have introduced the issue?
This narrows the search space dramatically and often leads to faster diagnosis than examining the full current state of the file.
When to start a fresh session
Context bloat is real. After 20-30 turns of debugging, the session's context window contains all your failed attempts, the AI's previous wrong guesses, and accumulated noise. The AI starts anchoring on that noise when forming new hypotheses.
Signs it's time for a fresh session:
- The AI is suggesting things it already suggested 10 turns ago
- Responses are getting longer but less specific
- You're no longer making progress
When starting fresh, write a clean summary prompt that includes: the bug description, what you've ruled out, and what the leading hypothesis is. Don't carry the failed attempts forward.
The CLAUDE.md role in debugging
Good CLAUDE.md files include project-specific gotchas that come up repeatedly in debugging:
## Common debugging gotchas
- Prisma Client is a singleton — if you see "PrismaClientInitializationError",
it's usually because PrismaClient is being instantiated multiple times in dev.
Check that all imports come from `src/server/db/client.ts`.
- NextAuth v5: session is null in server components if the page isn't under
the auth() wrapper. Always check auth() is called in the layout, not just the page.
- tRPC errors: TRPC_CLIENT_ERROR in the browser console means the server threw a
TRPCError. Check server logs for the actual error — the client error message is
often stripped in production.
Each time you solve a tricky bug, add it to CLAUDE.md. Future debugging sessions — and future team members — will benefit from it.
The verification step most developers skip
After applying a fix, run the tests. Specifically, run the tests that cover the code path you just changed:
I've applied the fix. Before we close this out, can you suggest:
1. What existing tests I should run to verify the fix?
2. What test case I should add to prevent this regression?
3. Any related code paths I should manually test?
A bug that's fixed without a new test is a bug that will probably come back.
The most valuable thing Claude Code does in debugging isn't generating fixes — it's externalizing your reasoning. Explaining the bug to Claude Code often forces you to articulate something you'd been vague about, which leads to the insight you needed. That's not the AI being magic. That's the rubber duck effect, with a rubber duck that can read your code.