Vibe coding — using AI to write code from natural language descriptions — went from novelty to normal practice surprisingly fast. The workflow is genuinely useful: describe what you want, get working code, iterate. But it hits a wall predictably, and knowing where the wall is saves a lot of time.
What Vibe Coding Is Actually Good At
Prototypes and proof-of-concept work. When you want to quickly see if an idea is viable, vibe coding lets you spin up a working version in minutes instead of hours. Perfect for validating assumptions before investing in a real implementation.
Boilerplate elimination. Setting up a new project, writing CRUD operations, wiring up standard patterns — things that are tedious and repetitive. AI handles these well because they follow conventions it's seen thousands of times.
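The CRUD boilerplate in question is usually a variation on the same pattern. A minimal in-memory sketch (a real version would target your database, but the shape is the same):

```typescript
// Minimal in-memory CRUD store — the kind of conventional boilerplate
// AI reproduces reliably because it has seen it thousands of times.
class Store<T> {
  private items = new Map<string, T>();
  private nextId = 1;

  create(item: T): string {
    const id = String(this.nextId++);
    this.items.set(id, item);
    return id;
  }

  read(id: string): T | undefined {
    return this.items.get(id);
  }

  update(id: string, item: T): boolean {
    if (!this.items.has(id)) return false;
    this.items.set(id, item);
    return true;
  }

  delete(id: string): boolean {
    return this.items.delete(id);
  }
}
```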
Filling in specific functions. "Write a function that takes a list of user objects and returns them sorted by last_login, with users who've never logged in at the end" — well-bounded, specific, testable. Works reliably.
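That prompt is specific enough that the output is easy to check. A sketch of what it should produce (the `User` shape here is a hypothetical example; sort direction is assumed most-recent-first):

```typescript
interface User {
  name: string;
  last_login: Date | null; // null = never logged in
}

// Sort by last_login (most recent first, an assumption — the prompt
// doesn't specify direction), with never-logged-in users at the end.
function sortByLastLogin(users: User[]): User[] {
  return [...users].sort((a, b) => {
    if (a.last_login === null && b.last_login === null) return 0;
    if (a.last_login === null) return 1;  // a sinks to the end
    if (b.last_login === null) return -1; // b sinks to the end
    return b.last_login.getTime() - a.last_login.getTime();
  });
}
```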
First drafts of anything unfamiliar. Working in a language or framework you don't know well? Vibe coding gets you to a working first version you can then understand and modify.
Where It Consistently Fails
Large existing codebases. Once a codebase has significant existing structure, the AI doesn't know about it. You'll get code that ignores your existing patterns, duplicates functionality, or conflicts with how things already work. Without reading the relevant files first, the AI is building blindly.
Debugging subtle issues. AI is good at fixing obvious bugs but poor at diagnosing subtle ones — especially those involving state, timing, or complex interactions between components. You'll often get suggested fixes that look plausible but don't address the root cause.
Cross-cutting concerns. "Make this work" is fine for small things. "Make this work in a way that's consistent with our authentication pattern, our error handling conventions, our logging approach, and our database layer" requires the AI to know things it doesn't know unless you tell it.
Complex architecture decisions. When the problem requires choosing between approaches with non-obvious trade-offs, AI tends toward the conventional answer, not necessarily the right one for your specific constraints.
Prompting Patterns That Work
Start With Context Injection
Before asking for any code, give the AI the relevant context it can't see:
I'm building a Next.js 15 app using App Router. We use Prisma with PostgreSQL.
Auth is handled by NextAuth.js v5. Our convention is to put database queries
in lib/db/ files and API logic in server actions in app/actions/.
Current task: [specific task]
This one habit dramatically improves output quality for anything beyond trivial tasks.
Describe the Constraint, Not Just the Goal
Weak prompt: "Write a function to fetch user data"
Better prompt: "Write a function to fetch user data from our Prisma database. It should:
- Accept a userId parameter (string)
- Return null if not found rather than throwing
- Include the user's profile and the last 10 activity logs (using select to avoid overfetching)
- Be a server action ("use server" directive at the top)"
The constraint specification is where most of the value is. The AI can write "fetch user data" a hundred ways — you want it to write it your way.
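Under those constraints the output is largely pinned down. A framework-free sketch of the expected shape — the `Db` interface here is a hypothetical stand-in for a Prisma client, so the behavior can be checked without a database:

```typescript
interface ActivityLog { action: string; at: Date }

interface UserWithActivity {
  id: string;
  name: string;
  profile: { bio: string } | null;
  activityLogs: ActivityLog[];
}

// Hypothetical data-access interface standing in for a Prisma client.
interface Db {
  findUser(userId: string): UserWithActivity | undefined;
}

// Returns null if not found rather than throwing; caps activity logs
// at the last 10 (a real Prisma version would do this in the query
// with `select` and `take` to avoid overfetching).
function fetchUserData(db: Db, userId: string): UserWithActivity | null {
  const user = db.findUser(userId);
  if (!user) return null;
  return { ...user, activityLogs: user.activityLogs.slice(0, 10) };
}
```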
Use Iteration Rather Than Mega-Prompts
Instead of writing one enormous prompt, start with something that works and iterate:
Turn 1: Write a basic React component for a user profile card
showing name, avatar, and bio.
Turn 2: Add a follow button that calls the /api/follow endpoint.
Use optimistic updates — the button should change immediately
even before the server responds.
Turn 3: Handle the error case — if the follow fails, revert the
optimistic update and show a toast notification.
Each turn is specific and testable. You can verify each piece before adding the next.
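The optimistic-update logic from Turns 2 and 3 can itself be sketched framework-free (the names here are illustrative, not from React or any library): apply the new state immediately, remember the previous state, and revert with a toast if the server call fails.

```typescript
interface FollowState {
  following: boolean;
  toast: string | null;
}

// Turn 2: apply the change immediately, before the server responds,
// and keep the previous state so it can be reverted.
function optimisticFollow(state: FollowState): { next: FollowState; prev: FollowState } {
  return { next: { following: true, toast: null }, prev: state };
}

// Turn 3: commit on success; on failure, revert the optimistic
// update and surface an error toast.
function settleFollow(
  pending: { next: FollowState; prev: FollowState },
  ok: boolean
): FollowState {
  return ok ? pending.next : { ...pending.prev, toast: "Follow failed" };
}
```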
Ask for Code, Then Ask for What You Can't See
After getting code, ask:
What edge cases does this not handle?
What error states could occur that aren't caught?
Is there anything in this implementation I should be aware of
for production use?
This prompts the AI to surface the things it optimistically skipped. You'll often get useful warnings about what the happy-path implementation is missing.
Give It Tests to Work Against
If you have specific behavior you need:
Here are the test cases I need this function to pass:
input: [] → output: []
input: [null, 1, 2] → output: [1, 2] (nulls filtered)
input: [3, 1, 2] → output: [1, 2, 3] (sorted ascending)
input: [1, 1, 1] → output: [1, 1, 1] (handles duplicates)
Write a function that passes all of these.
Test-driven prompting is reliable because you've specified exactly what success looks like.
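Because the cases above fully specify the behavior, the resulting function is short and unambiguous — filter out the nulls, then sort ascending:

```typescript
// Passes all four cases: [] → [], nulls filtered, sorted ascending,
// duplicates preserved. Returns a new array; the input is not mutated.
function cleanAndSort(values: (number | null)[]): number[] {
  return values
    .filter((v): v is number => v !== null)
    .sort((a, b) => a - b);
}
```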
When to Stop Vibe Coding
Vibe coding has a breaking point. Signs it's time to stop and shift to careful manual implementation:
- When you've spent more time fixing AI-generated bugs than you would have spent writing it yourself
- When the codebase grows large enough that the AI doesn't have context for how the pieces connect
- When the task involves subtle correctness requirements (cryptography, financial calculations, distributed systems)
- When you need to own and understand the code for the long term — not just make it work
The honest version of vibe coding is knowing when it's the right tool. It's excellent for prototyping, miserable for production-critical components you'll be maintaining for years.
The Practical Stack
For web prototypes: Cursor or Claude with the full project in context. Describe the feature, iterate on the output, test manually as you go.
For specific functions: Copilot or direct API calls. Smaller context, faster iteration, easier to verify correctness.
For debugging: Give the AI the error, the stack trace, the code that triggers it, and the context about what you expected. Don't just paste the error — that's usually not enough.
For architecture: Don't vibe code architecture decisions. Think them through, document them, then vibe code the implementation.

