The first time I shipped a Cursor-assisted app to real users, it lasted about four hours before a SQL injection vulnerability got flagged in a security review. The code looked fine. It ran fine locally. The AI had generated clean-looking query logic. But it was concatenating user input directly into a query string, and I'd accepted the output without reading it carefully.
That's the thing about vibe coding — the velocity is real, but so are the failure modes. The AI generates plausible-looking code fast enough that your review instincts don't fire. Here are the 10 patterns I've seen cause the most damage in production.
1. Accepting first output without reading it
This is the meta-anti-pattern that enables all the others. You describe a feature, the AI generates 80 lines, it looks roughly right, you tab-complete and move on.
What breaks: everything. The AI might have imported a deprecated library, hardcoded a URL, added a console.log with your API key, or written logic that works for the happy path but crashes on null inputs.
The fix: read every line of generated code before accepting it. Not skimming — reading. This sounds obvious but the whole point of vibe coding is to stay in flow, and reading carefully breaks flow. Do it anyway for anything touching data, auth, or external services.
2. No types on AI-generated interfaces
AI-generated TypeScript frequently uses any as the path of least resistance. Sometimes it's explicit (data: any), sometimes it's implicit (missing return types, untyped function parameters). The code compiles. Tests pass if you have them. Then a shape change in an API response silently breaks your app at runtime.
Real example: an AI-generated fetch wrapper that typed the response as any, passed it directly into a rendering function, and crashed 6 weeks later when the API added a pagination wrapper and the field moved from data.results to data.page.results.
The fix: never accept any in interfaces that cross system boundaries. Add explicit types for API responses, database query results, and function parameters. Run tsc --strict and fix what it finds.
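The fixed version of that fetch wrapper looks roughly like this. A sketch, not a drop-in: User, the results field, and the /api/users endpoint are placeholders for whatever your API actually returns.
// Sketch only: the shape and endpoint below are hypothetical
interface User {
  id: string;
  email: string;
}

interface UsersResponse {
  results: User[];
}

async function fetchUsers(): Promise<UsersResponse> {
  const res = await fetch("/api/users");
  if (!res.ok) throw new Error(`Request failed with ${res.status}`);
  // One cast at the boundary beats any spread through the codebase;
  // a runtime schema check (e.g. zod) is stronger still
  return (await res.json()) as UsersResponse;
}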
3. Skipping env var validation at startup
AI-generated apps almost never validate environment variables at startup. They reference process.env.DATABASE_URL inline, deep inside a function, and fail at runtime with a cryptic error that has nothing to do with the missing variable.
I've watched a team lose 20 minutes to a broken staging deploy because someone added a new required env var, the AI generated the code to use it, nobody added it to the deployment config, and the error surfaced as a TypeError three layers deep in the stack.
The fix: validate all required env vars at startup with a schema. Use something like zod to parse process.env once at boot and throw a clear error immediately if anything's missing. The AI won't do this for you — add it as a standard project template.
import { z } from "zod";

const env = z.object({
  DATABASE_URL: z.string().url(),
  API_SECRET: z.string().min(32),
  STRIPE_KEY: z.string().startsWith("sk_"),
}).parse(process.env);
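Every later reference then goes through env.DATABASE_URL instead of process.env, and a missing or malformed variable fails at boot with an error that names the exact field, not three layers deep at request time.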
4. Letting AI write auth logic
Auth is the one area where "it looks right" isn't good enough. JWT verification, session management, OAuth flows, role checks — these need to be correct, not approximately correct.
AI-generated auth code tends to have subtle bugs: forgetting to verify the JWT signature algorithm (accepting alg: none), checking authorization after loading the resource instead of before, using == instead of === for token comparison, or storing sensitive tokens in localStorage instead of httpOnly cookies.
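To make the first of those concrete, here's what correct verification looks like when you pin the algorithm. A minimal sketch using the jsonwebtoken package; the secret variable and the error handling are assumptions, not a recommendation to roll your own:
import jwt, { type JwtPayload } from "jsonwebtoken";

// Sketch only. Pinning algorithms rejects alg: none and
// algorithm-confusion tokens outright.
function verifyToken(token: string): JwtPayload {
  const payload = jwt.verify(token, process.env.API_SECRET!, {
    algorithms: ["HS256"], // never let the token header choose
  });
  if (typeof payload === "string") {
    throw new Error("Unexpected token payload");
  }
  return payload;
}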
The fix: use a battle-tested auth library (Auth.js (formerly NextAuth), Clerk, Supabase Auth) and let the AI wire it up, not design it. If you must write custom auth, have a human who knows auth review every line. This is not a place for speed.
5. No rate limiting on AI-generated API routes
AI generates API routes that respond to every request without question. No rate limiting, no request size limits, no protection against enumeration attacks. On localhost this is fine. On a public endpoint it's an open invitation.
A Next.js API route that calls OpenAI for every request, with no rate limiting, will drain your OpenAI credits in minutes if someone finds it and loops against it. I've seen this happen — a public demo app got scraped and generated $800 in API costs overnight.
The fix: add rate limiting to every public API route that touches external services or does expensive computation. @upstash/ratelimit backed by Upstash Redis is the easiest pattern for Next.js. Add it as a middleware wrapper. The AI should be prompted explicitly to include it.
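A sketch of that pattern for a Next.js route handler, assuming Upstash Redis credentials are already in your env; the window and limit numbers are placeholders to tune:
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Placeholder policy: 10 requests per 10 seconds per IP
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"),
});

export async function POST(req: Request) {
  const ip = req.headers.get("x-forwarded-for") ?? "anonymous";
  const { success } = await ratelimit.limit(ip);
  if (!success) {
    return new Response("Too many requests", { status: 429 });
  }
  // ...the expensive OpenAI call goes here
  return new Response("ok");
}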
6. AI-generated SQL without parameterization
This is the one that got me. AI-generated database queries frequently use string interpolation when the developer's prompt is casual: "write a function that finds users by email." The AI writes:
// Don't do this
const query = `SELECT * FROM users WHERE email = '${email}'`;
This is a textbook SQL injection vulnerability. It's not theoretical — it's exploitable by anyone who can reach your endpoint.
The fix: always use parameterized queries or a query builder. If you're using raw SQL, the query string should never contain variables — only $1, ?, or named placeholders. Review every database query the AI generates and reject any that interpolate user input directly.
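The parameterized version of the query above, sketched with node-postgres; the pool configuration is assumed to come from env:
import { Pool } from "pg";

const pool = new Pool(); // reads PG* env vars by default

async function findUserByEmail(email: string) {
  // The driver sends the SQL and the value separately;
  // email never touches the query string
  const { rows } = await pool.query(
    "SELECT * FROM users WHERE email = $1",
    [email]
  );
  return rows[0] ?? null;
}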
7. Untested AI-generated regex
Regular expressions generated by AI look authoritative. They're dense enough that most developers don't try to parse them manually. And they're often wrong in edge cases that only appear in production data.
I've seen AI-generated email validation regex reject valid addresses with subdomains. I've seen URL parsing regex miss protocol-relative URLs. I've seen phone number validation that passed for US numbers but broke when an international number showed up.
The fix: test every regex the AI generates against a realistic dataset before shipping. For anything that touches user input validation, run at least 20 real-world examples through it. Use a tool like regex101.com to step through the logic. Never ship untested regex on a path that validates or rejects user data.
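The cheapest durable version of that check is a table-driven test in your suite. A sketch using vitest, with a deliberately simple placeholder pattern standing in for whatever the AI generated:
import { describe, expect, it } from "vitest";

// Placeholder pattern: substitute the regex under review
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

const shouldPass = ["a@b.co", "user@mail.example.com", "x+tag@sub.domain.io"];
const shouldFail = ["no-at-sign", "two@@signs.com", "trailing@dot."];

describe("email regex", () => {
  it.each(shouldPass)("accepts %s", (input) => {
    expect(EMAIL_RE.test(input)).toBe(true);
  });
  it.each(shouldFail)("rejects %s", (input) => {
    expect(EMAIL_RE.test(input)).toBe(false);
  });
});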
8. Letting AI manage DB migrations blindly
AI-generated migrations are dangerous because they look like infrastructure-as-code but operate on persistent state. An AI that generates a migration to "rename the users table to accounts" doesn't know you have foreign key constraints, running queries, or three weeks of data you'd like to keep.
The worst pattern: asking the AI to "fix the schema" and letting it generate a migration that drops and recreates a table. That looks like a valid migration. In development it works fine. In production it destroys data.
The fix: never let the AI generate migrations without you reading them character by character. For anything destructive (drop, rename, type change), run it against a production backup first. Use --dry-run flags where available. Add a review step in your CI pipeline that flags migrations containing DROP or TRUNCATE.
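That CI step can be a dozen lines. A sketch in TypeScript that fails the build when a migration contains a destructive statement; the migrations directory and the keyword list are assumptions to adapt:
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const MIGRATIONS_DIR = "migrations"; // placeholder: adjust to your layout
const DESTRUCTIVE = /\b(DROP|TRUNCATE)\b/i;

const flagged = readdirSync(MIGRATIONS_DIR)
  .filter((file) => file.endsWith(".sql"))
  .filter((file) =>
    DESTRUCTIVE.test(readFileSync(join(MIGRATIONS_DIR, file), "utf8"))
  );

if (flagged.length > 0) {
  console.error(`Needs manual sign-off: ${flagged.join(", ")}`);
  process.exit(1);
}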
9. Accepting AI error diagnosis without checking the stack trace
When something breaks, the AI is eager to explain why and suggest a fix. The problem is it's guessing based on the error message you pasted. It doesn't have your actual stack trace, your environment, or your data.
I've watched developers spend two hours implementing an AI-suggested fix for a database connection error that was actually a misconfigured connection pool — not a missing index, not a query timeout, not any of the three things the AI confidently suggested first.
The fix: before asking the AI to diagnose an error, give it the full stack trace, the relevant code, and the input that triggered it. Be specific about your environment. "I'm getting an error" gives the AI enough room to hallucinate confidently. "Here's the full stack trace, here's the function, here's what the input looks like" gets you closer to the real answer.
10. Shipping without a single end-to-end test
Vibe coding produces working demos. The happy path works because that's what you tested while building. Edge cases, error states, and the specific combination of actions that real users take — those aren't covered.
The failure mode isn't dramatic. It's a steady stream of user reports that something "doesn't work" in a way you can't reproduce, because you've never actually walked through the full user flow with realistic data.
The fix: before shipping anything to real users, write one end-to-end test that covers the critical path. Not unit tests — a full flow test using Playwright or Cypress that signs in, does the thing, and verifies the outcome. One test is infinitely better than zero. It also forces you to think through the actual user experience, which often reveals the problems the AI introduced.
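For reference, the skeleton of that one test, sketched with Playwright; every route, label, and button name here is a placeholder for your actual flow:
import { expect, test } from "@playwright/test";

// Assumes baseURL is set in playwright.config; all selectors are hypothetical
test("critical path: sign in, create an item, see it listed", async ({ page }) => {
  await page.goto("/login");
  await page.getByLabel("Email").fill("test@example.com");
  await page.getByLabel("Password").fill("a-real-test-password");
  await page.getByRole("button", { name: "Sign in" }).click();

  // The thing your app actually does
  await page.goto("/items/new");
  await page.getByLabel("Title").fill("First item");
  await page.getByRole("button", { name: "Create" }).click();

  // Verify the outcome, not just the absence of errors
  await expect(page.getByText("First item")).toBeVisible();
});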
None of this means vibe coding is bad. The velocity is genuinely useful, especially for prototyping and solo projects. But the gap between "demo that works" and "production app that doesn't lose data, leak credentials, or get exploited" is where these anti-patterns live. The AI doesn't know the difference between a throwaway prototype and a system handling real user data. You do. That's still your job.
The best vibe coding workflow I've found: generate fast, review hard, test the boundaries. Use the AI for speed on the 80% that's straightforward, and slow down for anything touching auth, data persistence, or external services.